Table of Contents
- Code
- Code Explanation
- 1. System Architecture
- 2. Core Components in Detail
- 2.1 The LLM Call Function
- 2.2 UserInputNode (user input)
- 2.3 GuardrailNode (safety guard)
- 2.4 LLMNode (LLM processing)
- 3. Flow Control Mechanism
- Example Run
Code
from pocketflow import Node, Flow
from openai import OpenAI
import yaml

def call_llm(messages):
    client = OpenAI(
        api_key="your api key",
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
    )
    response = client.chat.completions.create(
        model="qwen-turbo",
        messages=messages,
        temperature=0.7
    )
    return response.choices[0].message.content

class UserInputNode(Node):
    def prep(self, shared):
        # Initialize messages if this is the first run
        if "messages" not in shared:
            shared["messages"] = []
            print("Welcome to the Travel Advisor Chat! Type 'exit' to end the conversation.")
        return None

    def exec(self, _):
        # Get user input
        user_input = input("\nYou: ")
        return user_input

    def post(self, shared, prep_res, exec_res):
        user_input = exec_res

        # Check if user wants to exit
        if user_input and user_input.lower() == 'exit':
            print("\nGoodbye! Safe travels!")
            return None  # End the conversation

        # Store user input in shared
        shared["user_input"] = user_input

        # Move to guardrail validation
        return "validate"

class GuardrailNode(Node):
    def prep(self, shared):
        # Get the user input from shared data
        user_input = shared.get("user_input", "")
        return user_input

    def exec(self, user_input):
        # Basic validation checks
        if not user_input or user_input.strip() == "":
            return False, "Your query is empty. Please provide a travel-related question."
        if len(user_input.strip()) < 3:
            return False, "Your query is too short. Please provide more details about your travel question."

        # LLM-based validation for travel topics
        prompt = f"""
Evaluate if the following user query is related to travel advice, destinations, planning, or other travel topics.
The chat should ONLY answer travel-related questions and reject any off-topic, harmful, or inappropriate queries.
User query: {user_input}
Return your evaluation in YAML format:
```yaml
valid: true/false
reason: [Explain why the query is valid or invalid]
```"""

        # Call LLM with the validation prompt
        messages = [{"role": "user", "content": prompt}]
        response = call_llm(messages)

        # Extract YAML content
        yaml_content = response.split("```yaml")[1].split("```")[0].strip() if "```yaml" in response else response

        result = yaml.safe_load(yaml_content)
        assert result is not None, "Error: Invalid YAML format"
        assert "valid" in result and "reason" in result, "Error: Invalid YAML format"

        is_valid = result.get("valid", False)
        reason = result.get("reason", "Missing reason in YAML response")
        return is_valid, reason

    def post(self, shared, prep_res, exec_res):
        is_valid, message = exec_res

        if not is_valid:
            # Display error message to user
            print(f"\nTravel Advisor: {message}")
            # Skip LLM call and go back to user input
            return "retry"

        # Valid input, add to message history
        shared["messages"].append({"role": "user", "content": shared["user_input"]})

        # Proceed to LLM processing
        return "process"

class LLMNode(Node):
    def prep(self, shared):
        # Add system message if not present
        if not any(msg.get("role") == "system" for msg in shared["messages"]):
            shared["messages"].insert(0, {"role": "system", "content": "You are a helpful travel advisor that provides information about destinations, travel planning, accommodations, transportation, activities, and other travel-related topics. Only respond to travel-related queries and keep responses informative and friendly. Your responses are concise, within 100 words."})
        # Return all messages for the LLM
        return shared["messages"]

    def exec(self, messages):
        # Call LLM with the entire conversation history
        response = call_llm(messages)
        return response

    def post(self, shared, prep_res, exec_res):
        # Print the assistant's response
        print(f"\nTravel Advisor: {exec_res}")

        # Add assistant message to history
        shared["messages"].append({"role": "assistant", "content": exec_res})

        # Loop back to continue the conversation
        return "continue"

# Create the flow with nodes and connections
user_input_node = UserInputNode()
guardrail_node = GuardrailNode()
llm_node = LLMNode()

# Create flow connections
user_input_node - "validate" >> guardrail_node
guardrail_node - "retry" >> user_input_node  # Loop back if input is invalid
guardrail_node - "process" >> llm_node
llm_node - "continue" >> user_input_node  # Continue conversation

flow = Flow(start=user_input_node)
shared = {}
flow.run(shared)
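To run the script you need the three libraries it imports, plus a real DashScope API key in place of the "your api key" placeholder. Assuming the usual PyPI package names, installation would look like:

pip install pocketflow openai pyyaml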
Code Explanation
This example shows how to use the PocketFlow framework to build a travel-advisor chatbot with a guardrail (input-safety) mechanism. The system consists of three core nodes, and conditional flow control routes the conversation between them.
1. System Architecture
User Input → Guardrail Check → LLM Processing → User Input
     ↑              │
     └──────────────┘
 (retry when validation fails)
2. Core Components in Detail
2.1 The LLM Call Function
def call_llm(messages):
    client = OpenAI(
        api_key="your api key",
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
    )
- Uses Alibaba Cloud DashScope's OpenAI-compatible API endpoint
- Uses the qwen-turbo model with temperature=0.7, which allows some variation and creativity in the answers (a quick usage sketch follows below)
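As a sanity check, the function can be exercised on its own. A minimal sketch, assuming a valid key has replaced the placeholder above:

# Minimal single-turn sanity check for call_llm.
# Assumes a real DashScope API key has been filled in.
reply = call_llm([
    {"role": "user", "content": "Suggest three things to do in Kyoto."}
])
print(reply)  # plain text content of the model's reply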
2.2 UserInputNode (user input)
class UserInputNode(Node):
    def prep(self, shared):
        if "messages" not in shared:
            shared["messages"] = []
Functions:
- prep: initializes the conversation history and prints a welcome message on the first run
- exec: reads the user's input
- post: checks the exit condition, stores the input in the shared state, and returns "validate" to move into validation (a standalone run of this node is sketched below)
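Each node can also be run outside the flow, which makes the lifecycle easy to observe. A sketch, assuming PocketFlow's node.run(shared) executes prep → exec → post and returns the action string from post:

# Hedged sketch: run UserInputNode in isolation to watch its lifecycle.
shared = {}
node = UserInputNode()
action = node.run(shared)        # prompts on stdin via exec()
print(action)                    # "validate" for normal input, None for 'exit'
print(shared.get("user_input"))  # the stored input, if any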
2.3 GuardrailNode (safety guard)
This is the system's core safety component, and it applies several layers of validation:
Basic validation:
if not user_input or user_input.strip() == "":
    return False, "Your query is empty..."
if len(user_input.strip()) < 3:
    return False, "Your query is too short..."
LLM-based validation:
prompt = f"""
Evaluate if the following user query is related to travel advice...
Return your evaluation in YAML format:
```yaml
valid: true/false
reason: [Explain why the query is valid or invalid]
```"""
Validation flow:
- Basic format checks (empty input, minimum length)
- Uses the LLM to judge whether the query is travel-related
- Parses the YAML-formatted validation result (a more defensive parsing sketch follows this list)
- Routes the flow based on the outcome:
  - Failure: returns "retry" so the user is asked for input again
  - Success: returns "process" to hand off to LLM processing
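One fragile spot: the two assert statements abort the whole program if the model returns malformed YAML. A more defensive variant (a sketch, not part of the original code) could turn any parse failure into an ordinary rejection, so the flow simply retries:

import yaml

def parse_validation(response):
    # Defensive replacement for the assert-based YAML extraction:
    # any parse failure becomes (False, <message>) instead of a crash.
    yaml_content = (response.split("```yaml")[1].split("```")[0].strip()
                    if "```yaml" in response else response)
    try:
        result = yaml.safe_load(yaml_content)
    except yaml.YAMLError:
        result = None
    if not isinstance(result, dict) or "valid" not in result:
        return False, "Sorry, I could not evaluate that query. Please rephrase it."
    return bool(result["valid"]), result.get("reason", "No reason provided")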
2.4 LLMNode (LLM processing)
def prep(self, shared):
    if not any(msg.get("role") == "system" for msg in shared["messages"]):
        shared["messages"].insert(0, {"role": "system", "content": "You are a helpful travel advisor..."})
Functions:
- prep: ensures the system prompt is present, defining the assistant's role
- exec: calls the LLM to generate a reply
- post: prints the reply, appends it to the conversation history, and returns "continue" to keep the conversation going (an illustrative snapshot of that history follows below)
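After one successful turn, shared["messages"] has the standard OpenAI chat-message shape. An illustrative snapshot (the assistant text is a placeholder, not real model output):

# Illustrative shape of the shared store after one valid exchange.
shared = {
    "user_input": "Best time to visit Kyoto?",
    "messages": [
        {"role": "system", "content": "You are a helpful travel advisor..."},
        {"role": "user", "content": "Best time to visit Kyoto?"},
        {"role": "assistant", "content": "<model reply>"},  # placeholder
    ],
}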
3. Flow Control Mechanism
user_input_node - "validate" >> guardrail_node
guardrail_node - "retry" >> user_input_node
guardrail_node - "process" >> llm_node
llm_node - "continue" >> user_input_node
How the flow works (a conceptual routing sketch follows this list):
- Normal path: user input → validation passes → LLM processing → conversation continues
- Safety interception: user input → validation fails → user is asked for input again
- Exit: the user types "exit" → the program ends
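Conceptually, the - / >> wiring builds a routing table keyed by (node, action). The sketch below re-expresses the same transitions as an explicit loop; it only illustrates the routing idea and is not how PocketFlow is implemented internally (it again assumes node.run(shared) returns the action string from post):

# Conceptual re-implementation of the flow's routing, for illustration only.
transitions = {
    (user_input_node, "validate"): guardrail_node,
    (guardrail_node, "retry"): user_input_node,
    (guardrail_node, "process"): llm_node,
    (llm_node, "continue"): user_input_node,
}

node, shared = user_input_node, {}
while node is not None:
    action = node.run(shared)               # e.g. "validate", "retry", ...
    node = transitions.get((node, action))  # None (e.g. on 'exit') ends the loop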