Conversation Token Buffer Memory: ConversationTokenBufferMemory
ConversationTokenBufferMemory keeps a buffer of the most recent conversation interactions in memory, and uses token length rather than the number of interactions to determine when to flush older ones.
from langchain.memory import ConversationTokenBufferMemory
from langchain.llms import OpenAI
llm = OpenAI()
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})
Output:
{'history': 'Human: not much you\nAI: not much'}
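Internally, the class trims the oldest turns once the buffer's token count exceeds max_token_limit, which is why only the last exchange survives above. The pruning logic can be illustrated with a minimal, self-contained sketch (using whitespace word counts as a hypothetical stand-in for real LLM token counting; `ToyTokenBufferMemory` is an illustration, not the actual LangChain implementation):

```python
# Sketch of token-based buffer pruning, assuming a whitespace "tokenizer".

def count_tokens(text: str) -> int:
    # Stand-in for llm.get_num_tokens(); counts whitespace-separated words.
    return len(text.split())

class ToyTokenBufferMemory:
    def __init__(self, max_token_limit: int):
        self.max_token_limit = max_token_limit
        self.buffer = []  # list of (speaker, text) pairs, oldest first

    def save_context(self, inputs: dict, outputs: dict) -> None:
        self.buffer.append(("Human", inputs["input"]))
        self.buffer.append(("AI", outputs["output"]))
        # Drop the oldest messages until the buffer fits the token budget.
        while self._total_tokens() > self.max_token_limit:
            self.buffer.pop(0)

    def _total_tokens(self) -> int:
        return sum(count_tokens(text) for _, text in self.buffer)

    def load_memory_variables(self, _: dict) -> dict:
        history = "\n".join(f"{who}: {text}" for who, text in self.buffer)
        return {"history": history}

memory = ToyTokenBufferMemory(max_token_limit=5)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
print(memory.load_memory_variables({})["history"])
# prints:
# Human: not much you
# AI: not much
```

With a budget of 5 "tokens", saving the second exchange pushes the total to 8, so the first exchange is evicted message by message until the buffer fits.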
We can also get the history as a list of messages, which is useful when working with a chat model:
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10, return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
Using it in a chain
Let's walk through an example of using it in a chain, again setting verbose=True so that we can see the prompt.
from langchain.chains import ConversationChain
conversation_with_summary = ConversationChain(
    llm=llm,
    # We set a very low max_token_limit for the purposes of testing.
    memory=ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60),
    verbose=True,
)
conversation_with_summary.predict(input="Hi, what's up?")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:

> Finished chain.
Output:
" Hi there! I'm doing great, just enjoying the day. How about you?"
Input:
conversation_with_summary.predict(input="Just working on writing some documentation!")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI: Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI:

> Finished chain.
Output:
' Sounds like a productive day! What kind of documentation are you writing?'
Input:
conversation_with_summary.predict(input="For LangChain! Have you heard of it?")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI: Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI: Sounds like a productive day! What kind of documentation are you writing?
Human: For LangChain! Have you heard of it?
AI:

> Finished chain.
Output:
" Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?"
Input:
# We can see here that the buffer has been updated
conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: For LangChain! Have you heard of it?
AI: Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?
Human: Haha nope, although a lot of people confuse it for that
AI:

> Finished chain.
Output:
" Oh, I see. Is there another language learning platform you're referring to?"
Vector-Store-Backed Memory: VectorStoreRetrieverMemory
VectorStoreRetrieverMemory stores memories in a vector database and queries the top-K most relevant documents on every call. Unlike most other Memory classes, it does not explicitly track the order of interactions. In this case, the "documents" are snippets of previous conversation, which makes it useful for recalling relevant information the AI learned earlier in the conversation.
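The idea can be illustrated with a minimal, self-contained sketch before we set up the real components. Here a hypothetical bag-of-words embedding and a simplified query interface stand in for OpenAIEmbeddings and the actual retriever API; `ToyVectorMemory` is an illustration, not LangChain code:

```python
# Sketch of vector-store-backed memory: each saved exchange is embedded,
# and queries return the top-K most similar snippets (order is not tracked).
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class ToyVectorMemory:
    def __init__(self, k: int = 1):
        self.k = k
        self.docs = []  # (embedding, snippet) pairs; insertion order is irrelevant

    def save_context(self, inputs: dict, outputs: dict) -> None:
        snippet = f"input: {inputs['input']}\noutput: {outputs['output']}"
        self.docs.append((embed(snippet), snippet))

    def load_memory_variables(self, query: str) -> str:
        # Rank stored snippets by similarity to the query and keep the top K.
        ranked = sorted(self.docs, key=lambda d: cosine(embed(query), d[0]), reverse=True)
        return "\n".join(snippet for _, snippet in ranked[: self.k])

memory = ToyVectorMemory(k=1)
memory.save_context({"input": "My favorite food is pizza"}, {"output": "thats good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
print(memory.load_memory_variables("what sport should i watch?"))
# prints:
# input: My favorite sport is soccer
# output: ...
```

A question about sports retrieves the soccer snippet rather than the most recent one, which is the key difference from the buffer-based memories above.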
from datetime import datetime
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import VectorStoreRetrieverMemory
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
Initializing the VectorStore
This step varies depending on the store you choose; consult the documentation of the relevant VectorStore for details.
import faiss

from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS

embedding_size = 1536  # Dimensions of the OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
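IndexFlatL2 is FAISS's exact (brute-force) index over squared Euclidean distance: a lookup compares the query against every stored vector. Conceptually, it does something like this sketch (a pure-Python illustration, not the FAISS implementation):

```python
# Exact L2 nearest-neighbor search, as IndexFlatL2 performs it conceptually.

def l2_search(index, query, k=1):
    """Return (position, squared L2 distance) for the k closest stored vectors."""
    dists = [
        (i, sum((q - x) ** 2 for q, x in zip(query, vec)))
        for i, vec in enumerate(index)
    ]
    return sorted(dists, key=lambda d: d[1])[:k]

stored = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(l2_search(stored, [0.9, 1.2], k=1))  # nearest stored vector is at position 1
```

Exact search is fine at this scale; for large collections FAISS also offers approximate indexes that trade a little recall for speed.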
Creating the VectorStoreRetrieverMemory
The memory object is instantiated from a VectorStoreRetriever.
# In actual usage, you would set `k` to be a higher value, but we use k=1 to show that the vector lookup still returns the semantically relevant information
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)

# When added to an agent, the memory object can save pertinent information from conversations or used tools
memory.save_context({"input": "My favorite food is pizza"}, {"output": "thats good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
memory.save_context({"input": "I don't like the Celtics"}, {"output": "ok"})
# Notice the result returned is the memory about soccer, which the language
# model deems more semantically relevant to a question about sports than the
# other saved snippets.
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])
Output:
input: My favorite sport is soccer
output: ...
Using it in a conversation chain
Let's walk through an example, again setting verbose=True so that we can see the prompt.
llm = OpenAI(temperature=0)  # Can be any valid LLM
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
{history}

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: {input}
AI:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)
conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    memory=memory,
    verbose=True,
)
conversation_with_summary.predict(input="Hi, my name is Perry, what's up?")
conversation_with_summary.predict(input="Hi, my name is Perry, what's up?")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Hi, my name is Perry, what's up?
AI:

> Finished chain.
Output:
" Hi Perry, I'm doing well. How about you?"
Input:
# Here, the sports-related content is surfaced
conversation_with_summary.predict(input="what's my favorite sport?")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite sport is soccer
output: ...

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: what's my favorite sport?
AI:

> Finished chain.
Output:
' You told me earlier that your favorite sport is soccer.'
Input:
# Even though the language model is stateless, since relevant memory is fetched, it can "reason" about the time.
# Timestamping memories and data is useful in general to let the agent determine temporal relevance
conversation_with_summary.predict(input="Whats my favorite food")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Whats my favorite food
AI:

> Finished chain.
Output:
' You said your favorite food is pizza.'
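The comment above notes that timestamping memories helps the agent judge temporal relevance (presumably why datetime was imported at the start of this section). One hypothetical way to do this is to prefix each snippet before saving it; `with_timestamp` below is an illustration, not a LangChain helper:

```python
# Prefix a memory snippet with a timestamp so retrieved memories carry
# temporal context the LLM can reason about.
from datetime import datetime

def with_timestamp(text, now=None):
    stamp = (now or datetime.now()).strftime("%Y-%m-%d %H:%M")
    return f"[{stamp}] {text}"

# e.g. memory.save_context({"input": with_timestamp("My favorite food is pizza")},
#                          {"output": "thats good to know"})
print(with_timestamp("My favorite food is pizza", datetime(2023, 6, 1, 9, 30)))
# prints: [2023-06-01 09:30] My favorite food is pizza
```

Since the stamp becomes part of the stored text, it is embedded along with the snippet and appears verbatim in the prompt when that snippet is retrieved.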
Input:
# The memories from the conversation are automatically stored,
# since this query best matches the introduction chat above,
# the agent is able to 'remember' the user's name.
conversation_with_summary.predict(input="What's my name?")
Log output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: Hi, my name is Perry, what's up?
response: Hi Perry, I'm doing well. How about you?

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: What's my name?
AI:

> Finished chain.
Output:
' Your name is Perry.'