Natural Language Processing from Beginner to Application - LangChain: Memory [Memory Types III]



Conversation token buffer memory: ConversationTokenBufferMemory

ConversationTokenBufferMemory keeps a buffer of recent conversation interactions in memory, and uses token length rather than the number of interactions to decide when to flush older ones.

from langchain.memory import ConversationTokenBufferMemory
from langchain.llms import OpenAI
llm = OpenAI()
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})

Output:

{'history': 'Human: not much you\nAI: not much'}
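Only the most recent exchange fits under the 10-token cap, so the earlier one is dropped. As a rough way to see why, here is a minimal sketch reusing the `llm` object above (exact counts depend on the model's tokenizer, so treat the numbers as illustrative):

# Minimal sketch reusing the `llm` object above; get_num_tokens counts tokens locally
# with the model's tokenizer, which is the same kind of measure the buffer uses for pruning.
for exchange in ["Human: hi\nAI: whats up", "Human: not much you\nAI: not much"]:
    print(repr(exchange), "->", llm.get_num_tokens(exchange), "tokens")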

We can also get the history as a list of messages, which is useful when working with a chat model:

memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10, return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
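To read the buffer back in this form, here is a minimal sketch reusing the `memory` object just defined; with return_messages=True the "history" value is a list of message objects rather than a single string:

# Minimal sketch reusing the `memory` object above.
variables = memory.load_memory_variables({})
for message in variables["history"]:
    # Each entry is a message object (e.g. HumanMessage or AIMessage) with a .content field.
    print(type(message).__name__ + ":", message.content)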
Using it in a chain

Let's walk through an example of using it in a chain, again setting verbose=True so that we can see the prompt.

from langchain.chains import ConversationChain

conversation_with_summary = ConversationChain(
    llm=llm,
    # We set a very low max_token_limit for the purposes of testing.
    memory=ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60),
    verbose=True,
)
conversation_with_summary.predict(input="Hi, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:

> Finished chain.

Output:

" Hi there! I'm doing great, just enjoying the day. How about you?"

Input:

conversation_with_summary.predict(input="Just working on writing some documentation!")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:  Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI:

> Finished chain.

Output:

    ' Sounds like a productive day! What kind of documentation are you writing?'

Input:

conversation_with_summary.predict(input="For LangChain! Have you heard of it?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, what's up?
AI:  Hi there! I'm doing great, just enjoying the day. How about you?
Human: Just working on writing some documentation!
AI:  Sounds like a productive day! What kind of documentation are you writing?
Human: For LangChain! Have you heard of it?
AI:

> Finished chain.

Output:

    " Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?"

Input:

# We can see that the buffer has been updated here
conversation_with_summary.predict(input="Haha nope, although a lot of people confuse it for that")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: For LangChain! Have you heard of it?
AI:  Yes, I have heard of LangChain! It is a decentralized language-learning platform that connects native speakers and learners in real time. Is that the documentation you're writing about?
Human: Haha nope, although a lot of people confuse it for that
AI:

> Finished chain.

Output:

" Oh, I see. Is there another language learning platform you're referring to?"

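To confirm that the earlier turns really were dropped from the buffer, here is a minimal sketch of inspecting the chain's memory directly, reusing the `conversation_with_summary` chain built above:

# Minimal sketch reusing the chain built above; this prints the current, already-pruned
# buffer that would be injected into the next prompt.
print(conversation_with_summary.memory.load_memory_variables({})["history"])
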
Vector-store-backed memory: VectorStoreRetrieverMemory

VectorStoreRetrieverMemory stores memories in a vector database and queries the top K most relevant documents every time it is called. Unlike most other Memory classes, it does not explicitly track the order of interactions. In this case, the "documents" are snippets of previous conversation, which is useful for referring back to relevant information the AI was told earlier in the conversation.

from datetime import datetime
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.memory import VectorStoreRetrieverMemory
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
Initialize your VectorStore

Depending on the store you choose, this step may differ; consult the documentation for the relevant VectorStore for more details.

import faiss

from langchain.docstore import InMemoryDocstore
from langchain.vectorstores import FAISS

embedding_size = 1536 # Dimensions of the OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings().embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
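As noted above, this initialization step depends on the store you choose. For comparison, here is a hedged sketch of the same setup using Chroma instead of FAISS; Chroma and its chromadb dependency are an assumption here, not part of the original walkthrough:

# Hypothetical alternative (assumes the chromadb package is installed): an in-memory
# Chroma collection wired to the same OpenAI embeddings.
from langchain.vectorstores import Chroma

chroma_store = Chroma(embedding_function=OpenAIEmbeddings())
chroma_retriever = chroma_store.as_retriever(search_kwargs=dict(k=1))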
Create your VectorStoreRetrieverMemory

The memory object is instantiated from a VectorStoreRetriever.

# In actual usage, you would set `k` to be a higher value, but we use k=1 to show that
# the vector lookup still returns the semantically relevant information
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)

# When added to an agent, the memory object can save pertinent information from conversations or used tools
memory.save_context({"input": "My favorite food is pizza"}, {"output": "thats good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
memory.save_context({"input": "I don't like the Celtics"}, {"output": "ok"})

# Notice that the result returned is the memory about soccer, which the vector lookup
# deems the most semantically relevant to a question about sports.
print(memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"])

Output:

input: My favorite sport is soccer
output: ...
Using it in a ConversationChain

Let's walk through an example, again setting verbose=True so that we can see the prompt.

llm = OpenAI(temperature=0) # Can be any valid LLM
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
{history}

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: {input}
AI:"""
PROMPT = PromptTemplate(
    input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
)
conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    memory=memory,
    verbose=True,
)
conversation_with_summary.predict(input="Hi, my name is Perry, what's up?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Hi, my name is Perry, what's up?
AI:

> Finished chain.

Output:

" Hi Perry, I'm doing well. How about you?"

Input:

# Here, the sport-related content is surfaced
conversation_with_summary.predict(input="what's my favorite sport?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite sport is soccer
output: ...

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: what's my favorite sport?
AI:

> Finished chain.

Output:

  ' You told me earlier that your favorite sport is soccer.'

Input:

# Even though the language model is stateless, since relevant memory is fetched, it can "reason" about the time.
# Timestamping memories and data is useful in general to let the agent determine temporal relevance
conversation_with_summary.predict(input="Whats my favorite food")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: My favorite food is pizza
output: thats good to know

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: Whats my favorite food
AI:

> Finished chain.

Output:

  ' You said your favorite food is pizza.'

Input:

# The memories from the conversation are automatically stored,
# since this query best matches the introduction chat above,
# the agent is able to 'remember' the user's name.
conversation_with_summary.predict(input="What's my name?")

Log output:

> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of previous conversation:
input: Hi, my name is Perry, what's up?
response:  Hi Perry, I'm doing well. How about you?

(You do not need to use these pieces of information if not relevant)

Current conversation:
Human: What's my name?
AI:

> Finished chain.

Output:

' Your name is Perry.'
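One of the comments above mentions that timestamping memories helps the agent judge temporal relevance, and the walkthrough imports datetime without using it. Here is a hedged sketch of one way to do that; the example content is hypothetical and not part of the original tutorial:

# Hypothetical sketch: prefix each saved exchange with a timestamp so retrieved
# snippets carry temporal context. Reuses the `memory` object and the datetime import above.
now = datetime.now().isoformat(timespec="seconds")
memory.save_context(
    {"input": f"[{now}] My favorite band is Radiohead"},  # hypothetical content
    {"output": "noted"},
)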

