What this article covers
- How to use LangChain, a framework that wraps tooling around LLM capabilities
- How to build a non-trivial AI application in just a few lines of code
- Process abstractions for LLM-oriented workflow development
Table of Contents
- What this article covers
- Preface
- LangChain's core components
- Documentation (using the Python version)
- 1. Model I/O wrappers
  - 1.1 Model APIs: LLM vs. ChatModel
    - 1.1.1 OpenAI model wrapper
    - 1.1.2 Multi-turn conversation (session) wrapper
  - 1.2 Model input and output
    - 1.2.1 Prompt template wrappers
    - 1.2.2 Loading prompt templates from a file
  - 1.3 Structured output
    - 1.3.1 Returning Pydantic objects directly
    - 1.3.2 Emitting JSON in a specified schema
    - 1.3.3 Using an OutputParser
  - 1.4 Function Calling
  - 1.5 Recap
- 2. Data connection wrappers
  - 2.1 Document Loaders
  - 2.2 Document transformers
    - 2.2.1 TextSplitter
  - 2.3 Vector stores and vector retrieval
  - 2.4 Recap
- 3. Managing conversation history
  - 3.1 Trimming history
  - 3.2 Filtering tagged history
- 4. Chains and the LangChain Expression Language (LCEL)
  - 4.1 Pipeline-style composition of PromptTemplate, LLM, and OutputParser
  - 4.2 Implementing RAG with LCEL
  - 4.3 Implementing the factory pattern with LCEL
  - 4.4 Storing and managing conversation history
- 5. LangServe
  - 5.1 Server side
  - 5.2 Client side
- LangChain and LlamaIndex: competing in different niches
- Conclusion
Preface
Official site: https://www.langchain.com/
LangChain is a development framework (SDK) layered on top of large language models.
LangChain is an exploration of, and a prototype for, software engineering in the AGI era.
When learning LangChain, keep an eye on interface changes across versions.
LangChain's core components
- Model I/O wrappers
  - LLMs: large language models
  - Chat Models: usually built on LLMs, but re-wrapped around a conversational structure
  - PromptTemplate: prompt templates
  - OutputParser: parses model output
- Data connection wrappers
  - Document Loaders: loaders for files in various formats
  - Document Transformers: common document operations, e.g. split, filter, translate, extract metadata, etc.
  - Text Embedding Models: vector representations of text, used for retrieval and similar operations (not sure what that means? Don't worry, it's covered in detail later)
  - Vectorstores: storage for (retrieval-oriented) vectors
  - Retrievers: vector retrieval
- Conversation history management
  - Storing, loading, and trimming conversation history
- Architectural wrappers
  - Chain: implements a single capability or a sequential composition of capabilities
  - Agent: plans execution steps from user input, automatically picks the tool each step needs, and completes the user's task
  - Tools: functions that invoke external capabilities, e.g. Google search, file I/O, a Linux shell, etc.
  - Toolkits: a set of tools for operating one piece of software, e.g. a database, Gmail, etc.
- Callbacks
Documentation (using the Python version)
- Feature guides: https://python.langchain.com/docs/get_started/introduction
- API reference: https://api.python.langchain.com/en/latest/langchain_api_reference.html
- Third-party integrations: https://python.langchain.com/docs/integrations/platforms/
- Official use cases: https://python.langchain.com/docs/use_cases
- Debugging and deployment guides: https://python.langchain.com/docs/guides/debugging
Key point: create a fresh conda environment, langchain-learn, before working through the rest of this article!
conda create -n langchain-learn python=3.10
1. Model I/O Wrappers
Wrap different models behind a single unified interface, so you can swap models without refactoring your code.
1.1 Model APIs: LLM vs. ChatModel
pip install --upgrade langchain
pip install --upgrade langchain-openai
pip install --upgrade langchain-community
1.1.1 OpenAI model wrapper
from langchain_openai import ChatOpenAI

# Make sure OPENAI_API_KEY and OPENAI_BASE_URL are set in your OS environment variables
llm = ChatOpenAI(model="gpt-4o-mini")  # defaults to gpt-3.5-turbo if not specified
response = llm.invoke("你是誰")
print(response.content)
我是一個人工智能助手,旨在回答問題和提供信息。如果你有任何問題或需要幫助的地方,隨時可以問我!
1.1.2 Multi-turn conversation (session) wrapper
from langchain.schema import (
    AIMessage,      # equivalent to the assistant role in the OpenAI API
    HumanMessage,   # equivalent to the user role in the OpenAI API
    SystemMessage,  # equivalent to the system role in the OpenAI API
)

messages = [
    SystemMessage(content="你是聚客AI研究院的課程助理。"),
    HumanMessage(content="我是學員,我叫大拿。"),
    AIMessage(content="歡迎!"),
    HumanMessage(content="我是誰"),
]

ret = llm.invoke(messages)

print(ret.content)
你是大拿,一位學員。有什么我可以幫助你的嗎?
Key point: model wrappers give different models one uniform calling interface.
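To make that concrete, here is a minimal sketch of a provider swap (it assumes the langchain-anthropic package is installed and ANTHROPIC_API_KEY is set; the model name is illustrative). Only the constructor changes; the calling code stays identical:

from langchain_anthropic import ChatAnthropic

# Same .invoke() interface as ChatOpenAI above; only the constructor differs
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
print(llm.invoke("你是誰").content)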
1.2 Model input and output
1.2.1 Prompt template wrappers
- PromptTemplate lets you define custom variables inside a template
from langchain.prompts import PromptTemplate

template = PromptTemplate.from_template("給我講個關于{subject}的笑話")
print("===Template===")
print(template)
print("===Prompt===")
print(template.format(subject='小明'))
===Template===
input_variables=['subject'] input_types={} partial_variables={} template='給我講個關于{subject}的笑話'
===Prompt===
給我講個關于小明的笑話
from langchain_openai import ChatOpenAI

# Define the LLM
llm = ChatOpenAI(model="gpt-4o-mini")
# Call the LLM with the rendered prompt
ret = llm.invoke(template.format(subject='小明'))
# Print the output
print(ret.content)
小明有一天去參加一個學校的科學展覽。他看到有個同學在展示一臺可以自動寫字的機器人。小明覺得很神奇,就問同學:“這個機器人怎么能寫字的?”
同學得意地回答:“因為它有一個超級智能的程序!”
小明想了想,搖了搖頭說:“那我也要給我的機器人裝一個超級智能的程序!”
同學好奇地問:“你打算怎么做?”
小明認真地說:“我打算給它裝上‘懶’這個程序,這樣它就可以幫我寫作業了!”
同學忍不住笑了:“你這是在找借口嘛!”
小明得意地聳聳肩:“反正我只要告訴老師,是機器人寫的!”
- ChatPromptTemplate represents a conversation context as a template
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_openai import ChatOpenAI

template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template("你是{product}的客服助手。你的名字叫{name}"),
        HumanMessagePromptTemplate.from_template("{query}"),
    ]
)

llm = ChatOpenAI(model="gpt-4o-mini")

prompt = template.format_messages(
    product="聚客AI研究院",
    name="大吉",
    query="你是誰"
)

print(prompt)

ret = llm.invoke(prompt)
print(ret.content)
[SystemMessage(content='你是聚客AI研究院的客服助手。你的名字叫大吉', additional_kwargs={}, response_metadata={}), HumanMessage(content='你是誰', additional_kwargs={}, response_metadata={})]
我是大吉,聚客AI研究院的客服助手。很高興為您提供幫助!請問有什么我可以為您做的呢?
- MessagesPlaceholder turns multi-turn history into a template slot
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)

human_prompt = "Translate your answer to {language}."
human_message_template = HumanMessagePromptTemplate.from_template(human_prompt)

chat_prompt = ChatPromptTemplate.from_messages(
    # variable_name is the placeholder's variable name in the template,
    # used when assigning values to it
    [MessagesPlaceholder("history"), human_message_template]
)

from langchain_core.messages import AIMessage, HumanMessage

human_message = HumanMessage(content="Who is Elon Musk?")
ai_message = AIMessage(
    content="Elon Musk is a billionaire entrepreneur, inventor, and industrial designer"
)

messages = chat_prompt.format_prompt(
    # assign values to "history" and "language"
    history=[human_message, ai_message], language="中文"
)

print(messages.to_messages())
[HumanMessage(content='Who is Elon Musk?', additional_kwargs={}, response_metadata={}), AIMessage(content='Elon Musk is a billionaire entrepreneur, inventor, and industrial designer', additional_kwargs={}, response_metadata={}), HumanMessage(content='Translate your answer to 中文.', additional_kwargs={}, response_metadata={})]
result = llm.invoke(messages)
print(result.content)
埃隆·馬斯克(Elon Musk)是一位億萬富翁企業家、發明家和工業設計師。
Key point: think of a prompt template as a function with parameters.
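Pushing that analogy a bit further, a template can also be partially applied, much like binding some arguments of a function early. A minimal sketch (the template text here is illustrative; partial is a standard PromptTemplate method):

from langchain.prompts import PromptTemplate

template = PromptTemplate.from_template("用{language}給我講個關于{subject}的笑話")

# Bind one parameter now, fill in the rest later
zh_template = template.partial(language="中文")
print(zh_template.format(subject="小明"))  # 用中文給我講個關于小明的笑話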
1.2.2 Loading prompt templates from a file
from langchain.prompts import PromptTemplate

template = PromptTemplate.from_file("example_prompt_template.txt")
print("===Template===")
print(template)
print("===Prompt===")
print(template.format(topic='黑色幽默'))
===Template===
input_variables=['topic'] input_types={} partial_variables={} template='舉一個關于{topic}的例子'
===Prompt===
舉一個關于黑色幽默的例子
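For reference, the printed template lets you infer the contents of example_prompt_template.txt; the file is just this single line:

舉一個關于{topic}的例子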
1.3 Structured output
1.3.1 Returning Pydantic objects directly
from pydantic import BaseModel, Field

# Define your output object
class Date(BaseModel):
    year: int = Field(description="Year")
    month: int = Field(description="Month")
    day: int = Field(description="Day")
    era: str = Field(description="BC or AD")

from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import PydanticOutputParser

model_name = 'gpt-4o-mini'
temperature = 0
llm = ChatOpenAI(model_name=model_name, temperature=temperature)

# Bind the structured-output model
structured_llm = llm.with_structured_output(Date)

template = """提取用戶輸入中的日期。
用戶輸入:
{query}"""

prompt = PromptTemplate(
    template=template,
)

query = "2024年十二月23日天氣晴..."
input_prompt = prompt.format_prompt(query=query)
structured_llm.invoke(input_prompt)
Date(year=2024, month=12, day=23, era='AD')
1.3.2 Emitting JSON in a specified schema
json_schema = {
    "title": "Date",
    "description": "Formatted date expression",
    "type": "object",
    "properties": {
        "year": {
            "type": "integer",
            "description": "year, YYYY",
        },
        "month": {
            "type": "integer",
            "description": "month, MM",
        },
        "day": {
            "type": "integer",
            "description": "day, DD",
        },
        "era": {
            "type": "string",
            "description": "BC or AD",
        },
    },
}

structured_llm = llm.with_structured_output(json_schema)
structured_llm.invoke(input_prompt)
{'day': 23, 'month': 12, 'year': 2024}
1.3.3 Using an OutputParser
An OutputParser parses the model's output into a specified format.
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser(pydantic_object=Date)

prompt = PromptTemplate(
    template="提取用戶輸入中的日期。\n用戶輸入:{query}\n{format_instructions}",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

input_prompt = prompt.format_prompt(query=query)
output = llm.invoke(input_prompt)
print("原始輸出:\n" + output.content)
print("\n解析后:")
parser.invoke(output)
原始輸出:
{"year": 2024, "month": 12, "day": 23, "era": "AD"}
解析后:
{'year': 2024, 'month': 12, 'day': 23, 'era': 'AD'}
You can also use PydanticOutputParser:
from langchain_core.output_parsers import PydanticOutputParser

parser = PydanticOutputParser(pydantic_object=Date)

input_prompt = prompt.format_prompt(query=query)
output = llm.invoke(input_prompt)
print("原始輸出:\n" + output.content)
print("\n解析后:")
parser.invoke(output)
原始輸出:
{"year": 2024,"month": 12,"day": 23,"era": "AD"
}
解析后:
Date(year=2024, month=12, day=23, era='AD')
OutputFixingParser uses an LLM to automatically fix malformed output:
from langchain.output_parsers import OutputFixingParser

new_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(model="gpt-4o"))

# Corrupt the output on purpose: replace "4" with the Chinese numeral "四"
bad_output = output.content.replace("4", "四")

print("PydanticOutputParser:")
try:
    parser.invoke(bad_output)
except Exception as e:
    print(e)

print("OutputFixingParser:")
new_parser.invoke(bad_output)
PydanticOutputParser:
Invalid json output: ```json
{
"year": 202四,
"month": 12,
"day": 23,
"era": "AD"
}
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/OUTPUT_PARSING_FAILURE
OutputFixingParser:
Date(year=2024, month=12, day=23, era='AD')
1.4 Function Calling
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b

import json

llm_with_tools = llm.bind_tools([add, multiply])

query = "3的4倍是多少?"
messages = [HumanMessage(query)]

output = llm_with_tools.invoke(messages)

print(json.dumps(output.tool_calls, indent=4))
[
    {
        "name": "multiply",
        "args": {
            "a": 3,
            "b": 4
        },
        "id": "call_V6lxABmx12g6lIgT8WMUtexM",
        "type": "tool_call"
    }
]
Passing the Function Call result back:
messages.append(output)

available_tools = {"add": add, "multiply": multiply}

for tool_call in output.tool_calls:
    selected_tool = available_tools[tool_call["name"].lower()]
    tool_msg = selected_tool.invoke(tool_call)
    messages.append(tool_msg)

new_output = llm_with_tools.invoke(messages)
for message in messages:
    print(json.dumps(message.dict(), indent=4, ensure_ascii=False))
print(new_output.content)
{
    "content": "3的4倍是多少?",
    "additional_kwargs": {},
    "response_metadata": {},
    "type": "human",
    "name": null,
    "id": null,
    "example": false
}
{
    "content": "",
    "additional_kwargs": {
        "tool_calls": [
            {
                "id": "call_V6lxABmx12g6lIgT8WMUtexM",
                "function": {
                    "arguments": "{\"a\":3,\"b\":4}",
                    "name": "multiply"
                },
                "type": "function"
            }
        ],
        "refusal": null
    },
    "response_metadata": {
        "token_usage": {
            "completion_tokens": 18,
            "prompt_tokens": 97,
            "total_tokens": 115,
            "completion_tokens_details": {
                "accepted_prediction_tokens": 0,
                "audio_tokens": 0,
                "reasoning_tokens": 0,
                "rejected_prediction_tokens": 0
            },
            "prompt_tokens_details": {
                "audio_tokens": 0,
                "cached_tokens": 0
            }
        },
        "model_name": "gpt-4o-mini-2024-07-18",
        "system_fingerprint": "fp_0aa8d3e20b",
        "finish_reason": "tool_calls",
        "logprobs": null
    },
    "type": "ai",
    "name": null,
    "id": "run-d25ca9ee-50b1-4848-a79e-42e58803fc7a-0",
    "example": false,
    "tool_calls": [
        {
            "name": "multiply",
            "args": {
                "a": 3,
                "b": 4
            },
            "id": "call_V6lxABmx12g6lIgT8WMUtexM",
            "type": "tool_call"
        }
    ],
    "invalid_tool_calls": [],
    "usage_metadata": {
        "input_tokens": 97,
        "output_tokens": 18,
        "total_tokens": 115,
        "input_token_details": {
            "audio": 0,
            "cache_read": 0
        },
        "output_token_details": {
            "audio": 0,
            "reasoning": 0
        }
    }
}
{
    "content": "12",
    "additional_kwargs": {},
    "response_metadata": {},
    "type": "tool",
    "name": "multiply",
    "id": null,
    "tool_call_id": "call_V6lxABmx12g6lIgT8WMUtexM",
    "artifact": null,
    "status": "success"
}
3的4倍是12。
C:\Users\Administrator\AppData\Local\Temp\ipykernel_16608\2298449061.py:12: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/
  print(json.dumps(message.dict(), indent=4, ensure_ascii=False))
1.5 Recap
- LangChain provides a uniform calling interface over many models, both completion-style and chat-style
- LangChain's PromptTemplate class lets you define templates with custom variables
- LangChain ships a series of output parsers that turn raw model output into structured objects
- LangChain wraps Function Calling
- The model I/O layer described above is among the more practical parts of LangChain
2. Data Connection Wrappers
2.1 Document Loaders
pip install pymupdf
from langchain_community.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("llama2.pdf")
pages = loader.load_and_split()

print(pages[0].page_content)
Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron∗
Louis Martin†
Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller
Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev
Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang
Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang
Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic
Sergey Edunov
Thomas Scialom∗
GenAI, Meta
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
our human evaluations for helpfulness and safety, may be a suitable substitute for closed-
source models. We provide a detailed description of our approach to fine-tuning and safety
improvements of Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
Contributions for all the authors can be found in Section A.1.
arXiv:2307.09288v2 [cs.CL] 19 Jul 2023
2.2 Document transformers
2.2.1 TextSplitter
pip install --upgrade langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Simple character-based text splitting
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,
    chunk_overlap=100,
    length_function=len,
    add_start_index=True,
)

paragraphs = text_splitter.create_documents([pages[0].page_content])
for para in paragraphs:
    print(para.page_content)
    print('-------')
Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron∗
Louis Martin†
Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller
Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller
Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou
Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev
Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev
Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi
Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang
Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang
Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang
Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang
Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic
Sergey Edunov
Thomas Scialom∗
Sergey Edunov
Thomas Scialom∗
GenAI, Meta
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
Abstract
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our
Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
models outperform open-source chat models on most benchmarks we tested, and based on
our human evaluations for helpfulness and safety, may be a suitable substitute for closed-
our human evaluations for helpfulness and safety, may be a suitable substitute for closed-
source models. We provide a detailed description of our approach to fine-tuning and safety
source models. We provide a detailed description of our approach to fine-tuning and safety
improvements of Llama 2-Chat in order to enable the community to build on our work and
improvements of Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
contribute to the responsible development of LLMs.
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com
†Second author
Contributions for all the authors can be found in Section A.1.
arXiv:2307.09288v2 [cs.CL] 19 Jul 2023
Note that the repeated lines above are not a bug: they are the 100-character chunk_overlap at work. Like LlamaIndex, LangChain provides a rich set of Document Loaders and Text Splitters.
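As one more loader example, a small sketch (the filename notes.txt is hypothetical; TextLoader is one of the many loaders in langchain_community). Every loader returns the same Document objects, so downstream splitting code does not change:

from langchain_community.document_loaders import TextLoader

# Plain-text files load through the same Document interface as PDFs
docs = TextLoader("notes.txt", encoding="utf-8").load()
print(docs[0].page_content[:100])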
2.3 Vector stores and vector retrieval
conda install -c pytorch faiss-cpu
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI
from langchain_community.document_loaders import PyMuPDFLoader

# Load the document
loader = PyMuPDFLoader("llama2.pdf")
pages = loader.load_and_split()

# Split the document
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=100,
    length_function=len,
    add_start_index=True,
)

texts = text_splitter.create_documents(
    [page.page_content for page in pages[:4]]
)

# Index into the vector store
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
db = FAISS.from_documents(texts, embeddings)

# Retrieve top-3 results
retriever = db.as_retriever(search_kwargs={"k": 3})

docs = retriever.invoke("llama2有多少參數")

for doc in docs:
    print(doc.page_content)
    print("----")
but are not releasing.§
2. Llama 2-Chat, a fine-tuned version of Llama 2 that is optimized for dialogue use cases. We release
variants of this model with 7B, 13B, and 70B parameters as well.
We believe that the open release of LLMs, when done safely, will be a net benefit to society. Like all LLMs,
Llama 2-Chat, at scales up to 70B parameters. On the series of helpfulness and safety benchmarks we tested,
Llama 2-Chat models generally perform better than existing open-source models. They also appear to
large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.
Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our
models outperform open-source chat models on most benchmarks we tested, and based on
More third-party retrieval components: https://python.langchain.com/v0.3/docs/integrations/vectorstores/
2.4 Recap
- Test the document-processing components carefully on your own data before relying on them in production
- The vector-store layer is essentially interface wrapping; you still have to choose a vector database yourself (see the sketch below for how little code a swap takes)
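As an illustration of how thin that wrapping is, here is a sketch swapping FAISS for Chroma (it assumes the chromadb package is installed; texts and embeddings are the variables from the example above):

from langchain_community.vectorstores import Chroma

# Same VectorStore interface as FAISS; only the backend changes
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 3})
print(retriever.invoke("llama2有多少參數")[0].page_content)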
3. Managing Conversation History
3.1 Trimming history
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_openai import ChatOpenAI

messages = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("i wonder why it's called langchain"),
    AIMessage('Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage("Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"),
    HumanMessage("what do you call a speechless parrot"),
]

trim_messages(
    messages,
    max_tokens=45,
    strategy="last",
    token_counter=ChatOpenAI(model="gpt-4o-mini"),
)
[AIMessage(content="Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!", additional_kwargs={}, response_metadata={}),
HumanMessage(content='what do you call a speechless parrot', additional_kwargs={}, response_metadata={})]
# Keep the system prompt
trim_messages(
    messages,
    max_tokens=45,
    strategy="last",
    token_counter=ChatOpenAI(model="gpt-4o-mini"),
    include_system=True,
    allow_partial=True,
)
[SystemMessage(content="you're a good assistant, you always respond with a joke.", additional_kwargs={}, response_metadata={}),
HumanMessage(content='what do you call a speechless parrot', additional_kwargs={}, response_metadata={})]
3.2 Filtering tagged history
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    filter_messages,
)

messages = [
    SystemMessage("you are a good assistant", id="1"),
    HumanMessage("example input", id="2", name="example_user"),
    AIMessage("example output", id="3", name="example_assistant"),
    HumanMessage("real input", id="4", name="bob"),
    AIMessage("real output", id="5", name="alice"),
]

filter_messages(messages, include_types="human")
[HumanMessage(content='example input', additional_kwargs={}, response_metadata={}, name='example_user', id='2'),
HumanMessage(content='real input', additional_kwargs={}, response_metadata={}, name='bob', id='4')]
filter_messages(messages, exclude_names=["example_user", "example_assistant"])
[SystemMessage(content='you are a good assistant', additional_kwargs={}, response_metadata={}, id='1'),
HumanMessage(content='real input', additional_kwargs={}, response_metadata={}, name='bob', id='4'),
AIMessage(content='real output', additional_kwargs={}, response_metadata={}, name='alice', id='5')]
filter_messages(messages, include_types=[HumanMessage, AIMessage], exclude_ids=["3"])
[HumanMessage(content='example input', additional_kwargs={}, response_metadata={}, name='example_user', id='2'),
HumanMessage(content='real input', additional_kwargs={}, response_metadata={}, name='bob', id='4'),
AIMessage(content='real output', additional_kwargs={}, response_metadata={}, name='alice', id='5')]
4. Chains and the LangChain Expression Language (LCEL)
The LangChain Expression Language (LCEL) is a declarative language for composing different call sequences into a Chain. From its inception, LCEL was designed to carry prototypes into production without code changes, from the simplest "prompt + LLM" chain to the most complex ones (users have successfully run LCEL chains with hundreds of steps in production).
Some highlights of LCEL:
- Streaming support: chains built with LCEL give you the best possible time-to-first-token (the time until the first chunk of output appears). For some chains, this means tokens stream straight from the LLM into a streaming output parser, so you get parsed, incremental output at the same rate the provider emits raw tokens.
- Async support: any chain built with LCEL can be called through both the synchronous API (e.g. while prototyping in a Jupyter notebook) and the asynchronous API (e.g. in a LangServe server). The same code serves prototyping and production, performs well, and handles many concurrent requests on the same server.
- Optimized parallel execution: whenever an LCEL chain has steps that can run in parallel (e.g. fetching documents from several retrievers), they are executed in parallel automatically, in both the sync and async interfaces, for minimal latency.
- Retries and fallbacks: configure retries and fallbacks for any part of an LCEL chain, a great way to make chains more reliable at scale (see the sketch after this list). Streaming support for retries/fallbacks is in the works, so you will get the added reliability without any latency cost.
- Access to intermediate results: for more complex chains, it is often useful to see the results of intermediate steps before the final output is produced, whether to show end users that something is happening or simply to debug the chain. Intermediate results can be streamed, and they are available on every LangServe server.
- Input and output schemas: every LCEL chain gets Pydantic and JSONSchema schemas inferred from its structure. These can be used to validate inputs and outputs and are an integral part of LangServe.
- Seamless LangSmith tracing integration: as chains grow more complex, it becomes ever more important to understand what happens at every step. With LCEL, all steps are automatically logged to LangSmith for maximum observability and debuggability.
- Seamless LangServe deployment integration: any chain created with LCEL can easily be deployed with LangServe.
Source: https://python.langchain.com/docs/expression_language/
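Here is the retries-and-fallbacks sketch promised above. It is a minimal illustration under stated assumptions, not the canonical recipe: the model names are arbitrary, and with_retry / with_fallbacks are standard methods available on any Runnable:

from langchain_openai import ChatOpenAI

primary = ChatOpenAI(model="gpt-4o-mini")
backup = ChatOpenAI(model="gpt-4o")

# Retry transient failures up to 3 times, then fall back to the backup model
robust_llm = primary.with_retry(stop_after_attempt=3).with_fallbacks([backup])

print(robust_llm.invoke("ping").content)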
4.1 Pipeline-style composition of PromptTemplate, LLM, and OutputParser
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from pydantic import BaseModel, Field
from typing import List, Dict, Optional
from enum import Enum
import json
# Output structure
class SortEnum(str, Enum):
    data = 'data'
    price = 'price'

class OrderingEnum(str, Enum):
    ascend = 'ascend'
    descend = 'descend'

class Semantics(BaseModel):
    name: Optional[str] = Field(description="流量包名稱", default=None)
    price_lower: Optional[int] = Field(description="價格下限", default=None)
    price_upper: Optional[int] = Field(description="價格上限", default=None)
    data_lower: Optional[int] = Field(description="流量下限", default=None)
    data_upper: Optional[int] = Field(description="流量上限", default=None)
    sort_by: Optional[SortEnum] = Field(description="按價格或流量排序", default=None)
    ordering: Optional[OrderingEnum] = Field(description="升序或降序排列", default=None)

# Prompt template
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "你是一個語義解析器。你的任務是將用戶的輸入解析成JSON表示。不要回答用戶的問題。"),
        ("human", "{text}"),
    ]
)

# Model
llm = ChatOpenAI(model="gpt-4o", temperature=0)
structured_llm = llm.with_structured_output(Semantics)

# LCEL expression
runnable = (
    {"text": RunnablePassthrough()} | prompt | structured_llm
)

# Run it directly
ret = runnable.invoke("不超過100元的流量大的套餐有哪些")
print(
    json.dumps(
        ret.dict(),
        indent=4,
        ensure_ascii=False,
    )
)
{
    "name": null,
    "price_lower": null,
    "price_upper": 100,
    "data_lower": null,
    "data_upper": null,
    "sort_by": "data",
    "ordering": "descend"
}
C:\Users\Administrator\AppData\Local\Temp\ipykernel_16608\4198727415.py:44: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.10/migration/
  ret.dict(),
Streaming output:
prompt = PromptTemplate.from_template("講個關于{topic}的笑話")runnable = ({"topic": RunnablePassthrough()} | prompt | llm | StrOutputParser()
)# 流式輸出
for s in runnable.stream("小明"):print(s, end="", flush=True)
好的,那我就給你講個關于小明的笑話吧!
一天,老師問小明:“如果地球是方的,那會怎么樣?”
小明思考了一會兒,非常認真地回答:“那我小時候玩捉迷藏就不用被人發現了,因為我可以直接躲在地球的拐角處!”
老師差點笑到掉書!
Note: in the current docs, the object an LCEL expression produces is called a runnable or a chain, and the two names are often used interchangeably. In essence it is just a custom call pipeline.
The value of using LCEL is the core value of LangChain itself.
The official docs motivate it from several angles: https://python.langchain.com/v0.1/docs/expression_language/why/
4.2 Implementing RAG with LCEL
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_community.document_loaders import PyMuPDFLoader

# Load the document
loader = PyMuPDFLoader("llama2.pdf")
pages = loader.load_and_split()

# Split the document
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=100,
    length_function=len,
    add_start_index=True,
)

texts = text_splitter.create_documents(
    [page.page_content for page in pages[:4]]
)

# Index into the vector store
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
db = FAISS.from_documents(texts, embeddings)

# Retrieve top-2 results
retriever = db.as_retriever(search_kwargs={"k": 2})

from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

# Prompt template
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# Chain (llm is the ChatOpenAI instance defined in section 4.1)
rag_chain = (
    {"question": RunnablePassthrough(), "context": retriever}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("Llama 2有多少參數")
'根據提供的上下文,Llama 2 有 7B(70億)、13B(130億)和 70B(700億)參數的不同版本。'
4.3 Implementing the factory pattern with LCEL
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI
from langchain_community.chat_models import QianfanChatEndpoint
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
import os

# Model 1
ernie_model = QianfanChatEndpoint(
    qianfan_ak=os.getenv('ERNIE_CLIENT_ID'),
    qianfan_sk=os.getenv('ERNIE_CLIENT_SECRET')
)

# Model 2
gpt_model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# configurable_alternatives lets a config field select the model
model = gpt_model.configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="gpt",
    ernie=ernie_model,
    # claude=claude_model,
)

# Prompt template
prompt = ChatPromptTemplate.from_messages(
    [
        HumanMessagePromptTemplate.from_template("{query}"),
    ]
)

# LCEL
chain = (
    {"query": RunnablePassthrough()} | prompt | model | StrOutputParser()
)

# Pick the model at run time: "gpt" or "ernie"
ret = chain.with_config(configurable={"llm": "gpt"}).invoke("請自我介紹")

print(ret)
Further reading: what the factory pattern is; an overview of design patterns.
Food for thought: from the angle of decoupling modules from one another, what does LCEL buy you?
4.4 Storing and managing conversation history
from langchain_community.chat_message_histories import SQLChatMessageHistory

def get_session_history(session_id):
    # Distinguish conversations by session_id; store them in a SQLite database
    return SQLChatMessageHistory(session_id, "sqlite:///memory.db")

from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
from langchain.schema.output_parser import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

runnable = model | StrOutputParser()

runnable_with_history = RunnableWithMessageHistory(
    runnable,             # the runnable to wrap
    get_session_history,  # our custom history accessor
)

runnable_with_history.invoke(
    [HumanMessage(content="你好,我叫大拿")],
    config={"configurable": {"session_id": "dana"}},
)
d:\envs\langchain-learn\lib\site-packages\langchain_core\runnables\history.py:608: LangChainDeprecationWarning: `connection_string` was deprecated in LangChain 0.2.2 and will be removed in 1.0. Use connection instead.
  message_history = self.get_session_history(
'你好,大拿!很高興再次見到你。有任何問題或想聊的話題嗎?'
runnable_with_history.invoke([HumanMessage(content="你知道我叫什么名字")],config={"configurable": {"session_id": "dana"}},
)
‘是的,你叫大拿。有什么我可以為你做的呢?’
runnable_with_history.invoke([HumanMessage(content="你知道我叫什么名字")],config={"configurable": {"session_id": "test"}},
)
‘抱歉,我不知您的名字。如果您愿意,可以告訴我您的名字。’
With LCEL you can also implement:
- Runtime configuration: https://python.langchain.com/v0.3/docs/how_to/configure/
- Fallbacks: https://python.langchain.com/v0.3/docs/how_to/fallbacks
- Parallel calls (see the sketch after this list): https://python.langchain.com/v0.3/docs/how_to/parallel/
- Branching logic: https://python.langchain.com/v0.3/docs/how_to/routing/
- Dynamically created chains: https://python.langchain.com/v0.3/docs/how_to/dynamic_chain/
More examples: https://python.langchain.com/v0.3/docs/how_to/lcel_cheatsheet/
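To give a flavor of the parallel-call item above, here is a minimal sketch using RunnableParallel (toy lambdas instead of LLM calls, so it runs offline):

from langchain_core.runnables import RunnableLambda, RunnableParallel

# Both branches receive the same input and run concurrently
parallel = RunnableParallel(
    upper=RunnableLambda(lambda s: s.upper()),
    length=RunnableLambda(len),
)

print(parallel.invoke("langchain"))  # {'upper': 'LANGCHAIN', 'length': 9}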
5. LangServe
LangServe deploys a Chain or any Runnable as a REST API service.
# Install LangServe
# pip install --upgrade "langserve[all]"

# Or install only one side:
# pip install "langserve[client]"
# pip install "langserve[server]"
5.1 Server side
#!/usr/bin/env python
from fastapi import FastAPI
from langchain.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langserve import add_routes
import uvicorn

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple api server using Langchain's Runnable interfaces",
)

model = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("講一個關于{topic}的笑話")

add_routes(
    app,
    prompt | model,
    path="/joke",
)

if __name__ == "__main__":
    uvicorn.run(app, host="localhost", port=9999)
5.2 Client side
import requests

response = requests.post(
    "http://localhost:9999/joke/invoke",
    json={'input': {'topic': '小明'}}
)
print(response.json())
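LangServe also ships a typed client. A minimal sketch calling the same route through RemoteRunnable (same local URL as the server above); the remote chain behaves like any other Runnable, so invoke/stream/batch all work:

from langserve import RemoteRunnable

joke_chain = RemoteRunnable("http://localhost:9999/joke/")
print(joke_chain.invoke({"topic": "小明"}))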
LangChain and LlamaIndex: competing in different niches
LangChain focuses on wrapping the interaction with the LLM itself:
- rich tooling around Prompt, LLM, Message, OutputParser, etc.
- comparatively rough tooling for data processing and RAG
- LCEL pipeline composition as its flagship feature
- Agent, LangGraph, and other agent/workflow tools alongside
- plus LangServe for deployment and LangSmith for monitoring and debugging
LlamaIndex focuses on wrapping the interaction with data:
- rich tooling for data loading, splitting, indexing, retrieval, reranking, etc.
- comparatively thin low-level wrappers for Prompt, LLM, etc.
- a complete set of RAG-oriented tools
- Agent tooling exists but is not its strong suit
LlamaIndex provides integrations with LangChain:
- calling LLM interfaces wrapped by LangChain from within LlamaIndex: https://docs.llamaindex.ai/en/stable/api_reference/llms/langchain/
- using a LlamaIndex Query Engine as a tool for a LangChain Agent: https://docs.llamaindex.ai/en/v0.10.17/community/integrations/using_with_langchain.html
LangChain also used to integrate LlamaIndex, and the interface still exists: https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.llama_index.LlamaIndexRetriever.html
Conclusion
LangChain's usability has improved noticeably as its versions have iterated.
When using LangChain, maintain your own prompts, and keep them decoupled from your code logic as much as possible.
As for its built-in basic tools: test them thoroughly on your use case before deciding whether to rely on them.