Going beyond the traditional customer-service chatbot: an agent can query a knowledge base in depth, call order-system APIs, and even adapt to the customer's sentiment while handling complex flows such as returns, refunds, and complaint escalation.

Example:
Customer: "The shoes I bought last week are the wrong size. I'd like an exchange, but I can't find the order page."
Agent actions: ① verify the user's identity; ② call the order-query API to locate the order; ③ check the exchange policy; ④ generate an exchange link and guide the user through the next steps; ⑤ if the flow is interrupted, proactively follow up later by email.
1. Core Features
- Multi-turn dialogue and context understanding: understands and remembers conversational context.
- Tool use: calls external APIs to fetch real data (e.g., orders, user profiles).
- Knowledge-base retrieval (RAG): retrieves precise answers from enterprise knowledge bases (manuals, FAQs).
- Security and access control: verifies user identity and scopes data access based on it.
- Human-in-the-loop: hands off seamlessly to a human agent when the bot cannot help or the user asks for one.
2. Tech Stack (Enterprise-Grade)
- LLM (the "brain"): OpenAI GPT-4 Turbo (good balance of cost and capability)
- Agent framework: LangChain / LangGraph (for building complex, stateful agent workflows)
- Tool calling: LangChain Tools & custom functions
- Knowledge base: Chroma (a lightweight, efficient vector database) + OpenAI Embeddings
- Backend API: FastAPI (high performance, strong async support, suitable for production)
- Authentication: JWT tokens
- Persistence: SQL database (PostgreSQL) for conversation logging
- Deployment: Docker & Kubernetes (containerization for easy scaling and management)
Architecture Design
Step 1: Environment and Dependencies

# Create the project directory and initialize a virtual environment
mkdir customer-support-agent
cd customer-support-agent
python -m venv venv
source venv/bin/activate   # Linux/Mac
# venv\Scripts\activate    # Windows

# Install core dependencies
pip install openai langchain langgraph chromadb langchain-openai fastapi uvicorn "python-jose[cryptography]" passlib sqlalchemy psycopg2-binary pydantic
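The Dockerfile in the deployment step installs from a requirements.txt that is never shown; a minimal version simply mirrors the pip install line above (version pins are omitted here, but should be added for reproducible builds):

```
openai
langchain
langgraph
chromadb
langchain-openai
fastapi
uvicorn
python-jose[cryptography]
passlib
sqlalchemy
psycopg2-binary
pydantic
```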
Step 2: Data Models
models.py
- UserIdentity: user identity, tracking who the user is and their session.
- AgentState: the agent's complete conversational state; the core state object in the LangGraph framework.
- ConversationType: an enum classifying conversation intent; constraining the possible types simplifies later analytics and branching logic.
- ConversationLog: a conversation-log record persisted to the database.
from pydantic import BaseModel, Field   # Pydantic: modern data validation and settings management
from typing import Optional, List, Dict, Any  # type annotations for readability and type safety
from enum import Enum                   # enums constrain the set of possible values


class UserIdentity(BaseModel):
    """User identity."""
    user_id: Optional[str] = None   # optional; set for logged-in users
    session_id: str                 # required; uniquely identifies a conversation session
    is_authenticated: bool = False  # whether the user has passed authentication


class AgentState(BaseModel):
    """State of the LangGraph agent."""
    # Field(...) marks a field as required: it cannot be omitted
    messages: List[Dict[str, Any]] = Field(..., description="Conversation message history")
    user_identity: UserIdentity = Field(..., description="User identity info")
    current_step: str = Field("greeting", description="Current conversation step")  # "greeting" is the default
    # default_factory=dict calls dict() on each instantiation, so every state gets its own empty dict
    extracted_info: Dict[str, Any] = Field(default_factory=dict, description="Structured info extracted from the conversation")


class ConversationType(Enum):
    QUERY_ORDER = "query_order"            # order lookup
    RETURN_EXCHANGE = "return_exchange"    # returns and exchanges
    GENERAL_QUESTION = "general_question"  # general questions


class ConversationLog(BaseModel):
    """Conversation log record stored in the database."""
    session_id: str            # session identifier
    user_id: Optional[str]     # optional user ID
    message: str               # the user's message
    agent_response: str        # the agent's reply
    intent: Optional[str]      # detected intent
    timestamp: str             # timestamp
    success: bool              # whether handling succeeded
Aside: creating and inspecting the state

# Create an instance
state = AgentState(
    messages=[{"role": "user", "content": "Hello"}],
    user_identity=UserIdentity(...),  # a UserIdentity instance must be provided here
    # current_step falls back to its default, "greeting"
    # extracted_info falls back to a fresh empty dict
)

# Access attributes
print(state.current_step)    # prints: "greeting"
print(state.extracted_info)  # prints: {}
Step 3: Core Tools
tools.py
- Order lookup → the AI calls query_order_tool
- Return/exchange policy lookup → the AI calls query_return_policy_tool
- Problem too complex to resolve → the AI calls create_support_ticket_tool
import os

import requests
from langchain.tools import tool  # LangChain decorator that turns a plain function into a tool the LLM can call
from pydantic import BaseModel, Field


# 1. Order lookup tool
class OrderQueryInput(BaseModel):
    order_id: str = Field(..., description="The order ID to query")


# A decorator adds behavior to a function without modifying its body. @tool wraps
# a plain Python function into a Tool the LLM can discover and invoke. The
# args_schema argument describes the tool's input names, types, and descriptions
# so the model knows how to call the function correctly.
@tool(args_schema=OrderQueryInput)
def query_order_tool(order_id: str) -> str:  # "-> str" annotates the return type: a string
    """Look up a customer's order. The user must be authenticated first."""
    # Simulates a call to the internal order-system API.
    # In production this would be requests.get(f"{ORDER_API_URL}/{order_id}", headers=...)
    print(f"🔍 [Tool Call] Querying order: {order_id}")
    # Mock response: a dict describing the order (product, size, dates, status)
    mock_order_data = {
        "order_id": order_id,
        "product": "Running Shoes (Model X)",
        "size": "42",
        "order_date": "2024-09-15",
        "status": "Delivered",
        "customer_id": "cust_12345",
    }
    return f"Order Details: {str(mock_order_data)}"


# 2. Return/exchange policy tool (RAG)
@tool  # the bare @tool decorator registers the function as a tool, inferring the schema from its signature
def query_return_policy_tool(product_category: str) -> str:
    """Look up the return/exchange policy for a product category. Falls back to the general policy."""
    # In production this step would retrieve from the vector database
    print(f"🔍 [Tool Call] Querying return policy for: {product_category}")
    policies = {
        "general": "You can return most items within 30 days of delivery. Items must be unworn and in original packaging.",
        "shoes": "Shoes can be exchanged for a different size within 45 days. Must have original box and no signs of wear.",
        "electronics": "Electronics can be returned within 14 days. Must be factory reset and all accessories included.",
    }
    policy = policies.get(product_category.lower(), policies["general"])
    return f"Our return policy for {product_category}: {policy}"


# 3. Support-ticket tool
class CreateTicketInput(BaseModel):
    issue_summary: str = Field(..., description="A summary of the customer's issue")
    priority: str = Field("medium", description="Priority of the ticket: low, medium, high")


@tool(args_schema=CreateTicketInput)
def create_support_ticket_tool(issue_summary: str, priority: str = "medium") -> str:
    """Create a support ticket in a third-party system (e.g., Zendesk, Jira) when the agent cannot resolve the issue."""
    print(f"🔍 [Tool Call] Creating Support Ticket. Priority: {priority}. Issue: {issue_summary}")
    # Simulated ticket-creation API call
    ticket_id = "TICKET-0987"
    return f"Successfully created a support ticket for you. Your ticket ID is {ticket_id}. A human agent will contact you shortly."
Production hardening: the real query_order_tool

def query_order_tool(order_id: str) -> str:
    try:
        response = requests.get(
            f"{os.getenv('ORDER_API_URL')}/{order_id}",  # f-string (not a raw string) so the values interpolate
            headers={"Authorization": f"Bearer {os.getenv('API_TOKEN')}"},
            timeout=10,
        )
        return f"Order Details: {response.json()}"
    except requests.RequestException as e:
        return f"Sorry, I couldn't retrieve your order details. Error: {str(e)}"
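The policy tool above stubs out its retrieval step. A minimal sketch of what that vector lookup does, with tiny hand-made vectors standing in for OpenAI Embeddings and a plain list standing in for Chroma (all names and vectors here are illustrative, not part of either library's API):

```python
import math

# Toy "embeddings": in the real system, OpenAI Embeddings would produce these
# vectors and Chroma would store and search them.
POLICY_DOCS = [
    ([1.0, 0.0, 0.2], "Shoes can be exchanged for a different size within 45 days."),
    ([0.0, 1.0, 0.1], "Electronics can be returned within 14 days."),
    ([0.3, 0.3, 1.0], "Most items can be returned within 30 days of delivery."),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_policy(query_vec):
    """Return the policy text whose embedding is closest to the query embedding."""
    return max(POLICY_DOCS, key=lambda doc: cosine(query_vec, doc[0]))[1]

# A query vector close to the "shoes" embedding retrieves the shoes policy.
print(retrieve_policy([0.9, 0.1, 0.1]))
```

A vector database does the same nearest-neighbor search, just over millions of documents with approximate indexing instead of a linear scan.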
Step 4: Building the Agent Workflow

                +-------------+
                |    agent    |   # decision node
                +-------------+
                       |
                       v  (conditional routing)
            +----------+----------+
            |          |          |
            v          v          v
       +-------+      END   +-------------+
       | tools |            | human_agent |
       +-------+            +-------------+
            |                      |
            v                      v
       (loops back                END
        to agent)
agent.py
- Agent decision node: takes the current conversation state, calls the LLM (which decides on its own whether a tool is needed), and appends the AI message to the state.
- Routing function: the core decision logic that picks the next node.
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage
from langchain_openai import ChatOpenAI
from models import AgentState
from tools import query_order_tool, query_return_policy_tool, create_support_ticket_tool
from typing import Literal, Dict, Any

# Initialize the LLM. gpt-4-turbo provides strong reasoning; temperature=0
# gives deterministic output, which suits a customer-support setting.
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)

# bind_tools() tells the LLM which tools are available and what they do
llm_with_tools = llm.bind_tools([query_order_tool, query_return_policy_tool, create_support_ticket_tool])

# ToolNode executes whatever tool calls the LLM requests
tool_node = ToolNode(tools=[query_order_tool, query_return_policy_tool, create_support_ticket_tool])


def agent_step(state: AgentState):
    """Agent decision node: receives the current conversation state."""
    print(f"🤖 [Agent Step] Current Step: {state['current_step']}")
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)  # the LLM decides on its own whether to call a tool
    return {"messages": [response]}  # append the AI message to the state


def route_to_tools(state: AgentState) -> Literal["tools", "end", "human_agent"]:
    """Routing function: decide whether to call tools, end, or escalate to a human."""
    ai_msg = state["messages"][-1]
    # If the AI message contains tool_calls, go to the "tools" node
    if not hasattr(ai_msg, 'tool_calls') or len(ai_msg.tool_calls) == 0:
        # No tool calls: decide between ending and escalating based on the last user message
        if "thank you" in state["messages"][-2].content.lower():  # user said "thank you" → end the conversation
            return "end"
        if "human" in state["messages"][-2].content.lower():
            return "human_agent"
        return "end"
    return "tools"


def call_human_agent(state: AgentState):
    """Create a support ticket and end the conversation."""
    issue_summary = f"Customer requested human agent. Conversation history: {state['messages']}"
    tool_input = {"issue_summary": issue_summary, "priority": "medium"}
    result = create_support_ticket_tool.invoke(tool_input)
    # Return a friendly hand-off message
    return {"messages": [AIMessage(content=f"I've escalated your issue to our human team. {result}")]}
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_step)
workflow.add_node("tools", tool_node)
workflow.add_node("human_agent", call_human_agent)

workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent",
    route_to_tools,
    {"tools": "tools", "end": END, "human_agent": "human_agent"},
)
workflow.add_edge("tools", "agent")
workflow.add_edge("human_agent", END)

# Compile the graph
app = workflow.compile()
Example conversation flows:

User: "I'd like to look up order 12345"
AI: (decides to call query_order_tool) → tools node → result returned

User: "Thanks for your help"
AI: (detects the thanks) → conversation ends

User: "I want to talk to a human"
AI: (detects "human") → human_agent node → ticket created → conversation ends
Step 5: FastAPI Backend and Security Middleware
from fastapi import FastAPI, HTTPException, Depends, Header
from fastapi.middleware.cors import CORSMiddleware
from models import UserIdentity, AgentState, ConversationLog
from agent import app as agent_app
from typing import Annotated, Optional
import uuid
import datetime
import json

from langchain_core.messages import HumanMessage  # used below; missing from the original imports
from pydantic import BaseModel

# Initialize the FastAPI app
app = FastAPI(title="Customer Support Agent API")

# CORS middleware (lets the frontend call the API)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # in production, list specific origins
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)


# Simulated authentication (production should verify a real JWT)
async def verify_token(authorization: Annotated[Optional[str], Header()] = None) -> UserIdentity:
    session_id = str(uuid.uuid4())
    if authorization and authorization.startswith("Bearer "):
        token = authorization[7:]
        # A real implementation would validate the JWT here and extract user_id.
        # For simplicity, we pretend the token belongs to a known customer.
        user_identity = UserIdentity(user_id="cust_12345", session_id=session_id, is_authenticated=True)
    else:
        # Unauthenticated users get only a session_id
        user_identity = UserIdentity(session_id=session_id, is_authenticated=False)
    return user_identity


class ChatRequest(BaseModel):
    """Request body for /chat. Declaring a model makes FastAPI read `message`
    from the JSON body instead of a query parameter, matching the curl example."""
    message: str


# API routes
@app.post("/chat")
async def chat_endpoint(
    request: ChatRequest,
    user_identity: UserIdentity = Depends(verify_token),
):
    """Main chat endpoint."""
    message = request.message
    try:
        # 1. Initialize the conversation state. (Production should persist state,
        #    e.g. in Redis; for simplicity we create a fresh state per request.)
        initial_state = AgentState(
            messages=[HumanMessage(content=message)],
            user_identity=user_identity,
            current_step="greeting",
        )
        # 2. Run the agent graph
        final_state = agent_app.invoke(initial_state)
        # 3. Extract the final response
        agent_response = final_state["messages"][-1].content
        # 4. Log the exchange (should be written to the database asynchronously)
        log_entry = ConversationLog(
            session_id=user_identity.session_id,
            user_id=user_identity.user_id,
            message=message,
            agent_response=agent_response,
            intent=final_state.get("current_step"),
            timestamp=datetime.datetime.utcnow().isoformat(),
            success=True,
        )
        print(f"📝 Logging conversation: {log_entry.json()}")
        # 5. Return the response
        return {
            "response": agent_response,
            "session_id": user_identity.session_id,
            "user_id": user_identity.user_id,
        }
    except Exception as e:
        print(f"❌ Error in chat endpoint: {e}")
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/health")
async def health_check():
    return {"status": "OK"}


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
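verify_token above only pretends to check the bearer token. The dependency list already includes python-jose for real JWT handling; as an illustration of what HS256 validation actually involves, here is a hand-rolled, standard-library-only sketch (SECRET_KEY and the helper names are illustrative, not part of any library):

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"change-me"  # illustrative; load from configuration in practice

def _b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload: dict) -> str:
    """Build a signed HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64url(hmac.new(SECRET_KEY, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def decode_jwt(token: str) -> dict:
    """Verify the signature and return the payload; raise on tampering."""
    header, body, sig = token.encode().split(b".")
    expected = _b64url(hmac.new(SECRET_KEY, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("Invalid token signature")
    padded = body + b"=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = make_jwt({"user_id": "cust_12345"})
print(decode_jwt(token)["user_id"])  # → cust_12345
```

In production, prefer python-jose (or an equivalent vetted library), which also handles expiry (`exp`) and algorithm checks; rolling your own crypto handling is shown here only to make the mechanism concrete.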
Step 6: Testing the Agent

from langchain_core.messages import HumanMessage
from models import AgentState, UserIdentity
from agent import app

# Simulated test user
test_user = UserIdentity(user_id="cust_12345", session_id="test_session_001", is_authenticated=True)


def test_conversation():
    # Simulated user message: an order-status query
    test_messages = ["Hi, I want to check the status of my order ORD-67890"]
    state = AgentState(messages=[], user_identity=test_user, current_step="start")
    for msg in test_messages:
        state["messages"].append(HumanMessage(content=msg))
        print(f"User: {msg}")
        # Invoke the agent
        state = app.invoke(state)
        agent_msg = state["messages"][-1]
        print(f"Agent: {agent_msg.content}")
        if hasattr(agent_msg, 'tool_calls') and agent_msg.tool_calls:
            print(f"Agent called tools: {agent_msg.tool_calls}")
        print("---")


if __name__ == "__main__":
    test_conversation()
Step 7: Containerized Deployment with Docker

Dockerfile

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

# --reload is convenient during development; drop it for production images
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
docker-compose.yml
version: '3.8'
services:
  support-agent:
    build: .
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    # Production should also declare these dependencies:
    # depends_on:
    #   - postgres
    #   - redis

  # postgres:
  #   image: postgres:13
  #   environment:
  #     POSTGRES_DB: agent_db
  #     POSTGRES_USER: agent
  #     POSTGRES_PASSWORD: password

  # redis:
  #   image: redis:7-alpine
How to Run

Set the environment variable:
export OPENAI_API_KEY='your-openai-api-key'
Start the service:
uvicorn main:app --reload
Test the API:
curl -X 'POST' \
  'http://localhost:8000/chat' \
  -H 'Authorization: Bearer fake_jwt_token_for_testing' \
  -H 'Content-Type: application/json' \
  -d '{"message": "I need to return my shoes, what is your policy?"}'
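Assuming the agent handles the request successfully, the endpoint returns JSON matching the dict built in chat_endpoint (the values shown here are illustrative):

```
{
  "response": "Our return policy for shoes: Shoes can be exchanged for a different size within 45 days. ...",
  "session_id": "3f2b9c1e-...",
  "user_id": "cust_12345"
}
```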