[Study Notes] LangChain Basics (II)

Previous post: [Study Notes] LangChain Basics


Table of Contents

  • 8 [LangGraph] Implementing Building Effective Agents: the various workflows and the Agent
    • Augmented LLM
    • Prompt Chaining
    • Parallelization
    • Routing
    • Orchestrator-Worker
    • Evaluator-Optimizer (Actor-Critic)
    • Agent


8 [LangGraph] Implementing Building Effective Agents: the various workflows and the Agent

  • video: https://www.bilibili.com/video/BV1ZsM2zcEFa

  • code: https://github.com/chunhuizhang/llm_aigc/blob/main/tutorials/agents/langchain/advanced/building_effective_agents.ipynb

  • https://www.anthropic.com/engineering/building-effective-agents

    • https://mirror-feeling-d80.notion.site/Workflow-And-Agents-17e808527b1780d792a0d934ce62bee6
      • https://langchain-ai.github.io/langgraph/tutorials/workflows/
      • https://www.youtube.com/watch?v=aHCDrAbH_go
  • AlphaEvolve: coding agent

    • the LLM serves as the core operator of a genetic algorithm;
    • suited to environments that can be evaluated automatically, plus a large amount of scaffolding-style work;
  • https://github.com/google-gemini/gemini-fullstack-langgraph-quickstart/tree/main

    • implements deep research by orchestrating the workflow with LangGraph
      • different nodes use different Gemini models
      • pro for depth, flash for breadth
  • SOTA LLMs (Gemini 2.5 Pro, OpenAI o3) plus a carefully designed workflow (scaffolding) can solve a great many highly complex problems;

    • foundation models really are getting more and more powerful

[figure: workflows vs. agents]

  • workflows
    • Create a scaffolding of predefined code paths around LLM calls
    • the LLM directs control flow through predefined code paths
  • Agent: remove this scaffolding (the LLM directs its own actions and responds to feedback)
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

# Read OPENAI_API_KEY (and friends) from a local .env file
assert load_dotenv()

llm = ChatOpenAI(model='gpt-4o-mini')

Augmented LLM

  • Augment the LLM with schema for structured output
  • Augment the LLM with tools

[figure: the augmented LLM]

# Schema for structured output
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    search_query: str = Field(None, description="Query that is optimized for web search.")
    justification: str = Field(None, description="Why this query is relevant to the user's request.")

# Augment the LLM with schema for structured output
structured_llm = llm.with_structured_output(SearchQuery)

# Invoke the augmented LLM
output = structured_llm.invoke("How does Calcium CT score relate to high cholesterol?")
# SearchQuery(search_query='Calcium CT score high cholesterol relationship',
#             justification='This query targets the relationship between calcium CT scores and cholesterol levels, which may help in understanding cardiovascular risk assessment.')

Running structured_llm.invoke("今年高考新聞") ("this year's gaokao news") returns SearchQuery(search_query='2023年高考 新聞 相關報道', justification='搜索2023年高考的相關新聞,以獲取該年度高考的最新動態、政策變化及新聞事件等信息,符合用戶對今年高考的關注。'). Note that the structured fields come back in the language of the query.

import numpy as np

# Define tools (plain Python functions; bind_tools infers their schemas)
def multiply(a: float, b: float) -> float:
    return a * b

def sigmoid(a: float) -> float:
    return 1. / (1 + np.exp(-a))

# Augment the LLM with tools
llm_with_tools = llm.bind_tools([multiply, sigmoid])

# Invoke the LLM with input that triggers the tool call
msg = llm_with_tools.invoke("What is derivative of sigmoid(5)")
msg.tool_calls
"""
[{'name': 'sigmoid', 'args': {'a': 5}, 'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y', 'type': 'tool_call'}]
"""

The full msg is:

AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y', 'function': {'arguments': '{"a":5}', 'name': 'sigmoid'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 194, 'total_tokens': 208, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_34a54ae93c', 'id': 'chatcmpl-BiJpUWnwANEG7bDWV9FR6mXp5vHj1', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--ac1635ef-b4cd-4526-af25-f1ba23e22ed7-0', tool_calls=[{'name': 'sigmoid', 'args': {'a': 5}, 'id': 'call_PBKMYMxZjU0x8IP9TxMHuE8Y', 'type': 'tool_call'}], usage_metadata={'input_tokens': 194, 'output_tokens': 14, 'total_tokens': 208, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})
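To close the loop by hand (the Agent section below automates this), here is a minimal sketch, not part of the original notebook, that executes the returned tool call and feeds the observation back:

from langchain_core.messages import HumanMessage, ToolMessage

# Look up and run the plain function the model asked for
tool_call = msg.tool_calls[0]
fn = {"multiply": multiply, "sigmoid": sigmoid}[tool_call["name"]]
observation = fn(**tool_call["args"])  # sigmoid(5) -> 0.9933...

# Send the observation back so the model can continue
followup = llm_with_tools.invoke([
    HumanMessage(content="What is derivative of sigmoid(5)"),
    msg,
    ToolMessage(content=str(observation), tool_call_id=tool_call["id"]),
])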



Prompt Chaining

[figure: the prompt chaining workflow]

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

# Nodes
def generate_joke(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

def check_punchline(state: State):
    """Gate function to check if the joke has a punchline"""
    # Simple check - does the joke contain "?" or "!"
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

def improve_joke(state: State):
    """Second LLM call to improve the joke"""
    # wordplay: puns / double meanings
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}

def polish_joke(state: State):
    """Third LLM call for final polish"""
    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}

Then build the workflow:

# Build workflow
workflow = StateGraph(State)

# Add nodes
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

# Add edges to connect nodes
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)

# Compile
chain = workflow.compile()

Visualizing with Image(chain.get_graph().draw_mermaid_png()):

[figure: the compiled prompt-chaining graph]

state = chain.invoke({"topic": "cats"})
"""
{'topic': 'cats',
 'joke': 'Why did the cat sit on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}
"""
# The joke contains "!", so check_punchline returned "Pass" and the
# improve/polish nodes were skipped; only 'topic' and 'joke' are set.

for step in chain.stream({"topic": "dogs"}):
    print(step)
# {'generate_joke': {'joke': "Why did the dog sit in the shade? \n\nBecause he didn't want to become a hot dog!"}}

Parallelization

In LangGraph, parts of a workflow can run concurrently.

  • LLMs can sometimes work simultaneously on a task and have their outputs aggregated programmatically. This workflow, parallelization, manifests in two key variations:
    • Sectioning: breaking a task into independent subtasks run in parallel (implemented below).
    • Voting: running the same task multiple times to get diverse outputs (see the sketch after this list).
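The code in this section implements sectioning. For the voting variant, a minimal sketch (a hypothetical node, not from the notebook, reusing llm and the State defined below) would run the same prompt several times and aggregate the answers programmatically:

from collections import Counter

# Hypothetical voting node: ask the same question three times and majority-vote.
def vote_on_joke(state: State):
    votes = [
        llm.invoke(f"Is this joke funny? Answer only yes or no: {state['joke']}").content.strip().lower()
        for _ in range(3)
    ]
    return {"combined_output": Counter(votes).most_common(1)[0][0]}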

[figure: the parallelization workflow]

Define the graph:

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str

# Nodes
def call_llm_1(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def call_llm_2(state: State):
    """Second LLM call to generate story"""
    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}

def call_llm_3(state: State):
    """Third LLM call to generate poem"""
    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}

def aggregator(state: State):
    """Combine the joke, story, and poem into a single output"""
    combined = f"Here's a story, joke, and poem about {state['topic']}!\n\n"
    combined += f"STORY:\n{state['story']}\n\n"
    combined += f"JOKE:\n{state['joke']}\n\n"
    combined += f"POEM:\n{state['poem']}"
    return {"combined_output": combined}

Build the workflow:

# Build workflow
parallel_builder = StateGraph(State)

# Add nodes
parallel_builder.add_node("call_llm_1", call_llm_1)
parallel_builder.add_node("call_llm_2", call_llm_2)
parallel_builder.add_node("call_llm_3", call_llm_3)
parallel_builder.add_node("aggregator", aggregator)

# Add edges to connect nodes
parallel_builder.add_edge(START, "call_llm_1")
parallel_builder.add_edge(START, "call_llm_2")
parallel_builder.add_edge(START, "call_llm_3")
parallel_builder.add_edge("call_llm_1", "aggregator")
parallel_builder.add_edge("call_llm_2", "aggregator")
parallel_builder.add_edge("call_llm_3", "aggregator")
parallel_builder.add_edge("aggregator", END)

parallel_workflow = parallel_builder.compile()

Visualize: display(Image(parallel_workflow.get_graph().draw_mermaid_png()))

[figure: the compiled parallel graph]
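The notebook does not invoke this graph; following the pattern of the other sections, a usage sketch:

state = parallel_workflow.invoke({"topic": "cats"})
print(state["combined_output"])  # story, joke, and poem assembled by the aggregator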


Routing

When a single model isn't strong enough, routing can dispatch different inputs to different LLMs.

[figure: the routing workflow]

  • Routing classifies an input and directs it to a specialized followup task. This workflow allows for separation of concerns, and building more specialized prompts. Without this workflow, optimizing for one kind of input can hurt performance on other inputs.
  • When to use this workflow: Routing works well for complex tasks where there are distinct categories that are better handled separately, and where classification can be handled accurately, either by an LLM or a more traditional classification model/algorithm.
    • distinct models (a fast non-reasoning model vs. a powerful reasoning model; see the sketch after the node definitions below)
from typing_extensions import Literal
from langchain_core.messages import HumanMessage, SystemMessage

# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(
        None, description="The next step in the routing process"
    )

# Augment the LLM with schema for structured output
router = llm.with_structured_output(Route)

# State
class State(TypedDict):
    input: str
    decision: str
    output: str

Define the nodes:

# Nodes
def llm_call_1(state: State):
    """Write a story"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_2(state: State):
    """Write a joke"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_3(state: State):
    """Write a poem"""
    result = llm.invoke(state["input"])
    return {"output": result.content}
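All three nodes call the same llm. To realize the "distinct models" idea from the bullet list above, each node could bind its own model; a sketch, where the model names are assumptions rather than anything from the notebook:

# Sketch: send easy requests to a fast model, hard ones to a reasoning model.
fast_llm = ChatOpenAI(model="gpt-4o-mini")
strong_llm = ChatOpenAI(model="o3-mini")

def llm_call_1(state: State):
    """Write a story with the stronger model"""
    return {"output": strong_llm.invoke(state["input"]).content}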

Define the router:

def llm_call_router(state: State):
    """Route the input to the appropriate node"""
    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke([
        SystemMessage(content="Route the input to story, joke, or poem based on the user's request."),
        HumanMessage(content=state["input"]),
    ])
    return {"decision": decision.step}

# Conditional edge function to route to the appropriate node
def route_decision(state: State):
    # Return the name of the node to visit next
    if state["decision"] == "story":
        return "llm_call_1"
    elif state["decision"] == "joke":
        return "llm_call_2"
    elif state["decision"] == "poem":
        return "llm_call_3"

Build the workflow:

# Build workflow
router_builder = StateGraph(State)

# Add nodes
router_builder.add_node("llm_call_1", llm_call_1)
router_builder.add_node("llm_call_2", llm_call_2)
router_builder.add_node("llm_call_3", llm_call_3)
router_builder.add_node("llm_call_router", llm_call_router)

# Add edges to connect nodes
router_builder.add_edge(START, "llm_call_router")
router_builder.add_conditional_edges(
    "llm_call_router",
    route_decision,
    {  # Name returned by route_decision : Name of next node to visit
        "llm_call_1": "llm_call_1",
        "llm_call_2": "llm_call_2",
        "llm_call_3": "llm_call_3",
    },
)
router_builder.add_edge("llm_call_1", END)
router_builder.add_edge("llm_call_2", END)
router_builder.add_edge("llm_call_3", END)

# Compile workflow
router_workflow = router_builder.compile()

Visualize: display(Image(router_workflow.get_graph().draw_mermaid_png()))

[figure: the compiled routing graph]

Then test a case:

state = router_workflow.invoke({"input": "Write me a joke about cats"})

for step in router_workflow.stream({"input": "Write me a joke about cats"}):
    print(step)

Output:

{'llm_call_router': {'decision': 'joke'}}
{'llm_call_2': {'output': 'Why was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}}

Another case:

for step in router_workflow.stream({"input": "Write me a poem about cats"}):
    print(step)

Output:

{'llm_call_router': {'decision': 'poem'}}
{'llm_call_3': {'output': 'In sunlit corners, shadows play,  \nWhere whispers of the feline sway,  \nWith graceful poise and silent tread,  \nThe world’s a kingdom, theirs to thread.  \n\nA tapestry of fur like night,  \nWith emerald eyes that pierce the light,  \nThey leap as if on dreams they dance,  \nIn elegant arcs, a fleeting glance.  \n\nA soft purr hums, a gentle song,  \nA lullaby where hearts belong,  \nWith velvet paws on wooden floors,  \nThey weave their magic, open doors.  \n\nEach flick of tail, a tale to tell,  \nOf mischief, grace, and worlds that dwell  \nIn boxes, sunbeams, every fold,  \nAdventures vast, and secrets bold.  \n\nThey curl like commas, snug and warm,  \nIn every lap, their soft charm forms,  \nA soothing presence, quiet, wise,  \nWith knowing hearts and ageless sighs.  \n\nOh, creatures of the night and day,  \nIn your soft wisdom, we find our way,  \nWith tender gazes, you understand,  \nThe joys and sorrows of this land.  \n\nSo here’s to cats, our quaintest friends,  \nWith every whisker, affection lends,  \nIn their elusive, gentle grace,  \nWe find a home, a purr-fect place.'}}

Orchestrator-Worker

[figure: the orchestrator-workers workflow]

  • An orchestrator breaks down a task and delegates each sub-task to workers.
    • In the orchestrator-workers workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.
    • When to use this workflow: This workflow is well-suited for complex tasks where you can't predict the subtasks needed (in coding, for example, the number of files that need to be changed and the nature of the change in each file likely depend on the task). While it's topographically similar to parallelization, the key difference is its flexibility—subtasks aren't pre-defined, but determined by the orchestrator based on the specific input.
from typing import Annotated, List
import operator

# Schema for structured output to use in planning
class Section(BaseModel):
    name: str = Field(description="Name for this section of the report.")
    description: str = Field(description="Brief overview of the main topics and concepts to be covered in this section.")

class Sections(BaseModel):
    sections: List[Section] = Field(description="Sections of the report.")

# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)

from langgraph.constants import Send

# Graph state
class State(TypedDict):
    topic: str  # Report topic
    sections: list[Section]  # List of report sections
    completed_sections: Annotated[list, operator.add]  # All workers write to this key in parallel
    final_report: str  # Final report

# Worker state
class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]

# Nodes
def orchestrator(state: State):
    """Orchestrator that generates a plan for the report"""
    report_sections = planner.invoke([
        SystemMessage(content="Generate a plan for the report."),
        HumanMessage(content=f"Here is the report topic: {state['topic']}"),
    ])
    return {"sections": report_sections.sections}

def llm_call(state: WorkerState):
    """Worker writes a section of the report"""
    section = llm.invoke([
        SystemMessage(content="Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting."),
        HumanMessage(content=f"Here is the section name: {state['section'].name} and description: {state['section'].description}"),
    ])
    # Write the updated section to completed sections
    return {"completed_sections": [section.content]}

def synthesizer(state: State):
    """Synthesize full report from sections"""
    completed_sections = state["completed_sections"]
    # Format completed sections into a string to use as the final report
    completed_report_sections = "\n\n---\n\n".join(completed_sections)
    return {"final_report": completed_report_sections}

# Conditional edge function to create llm_call workers that each write a section of the report
def assign_workers(state: State):
    """Assign a worker to each section in the plan"""
    # Kick off section writing in parallel via Send() API
    return [Send("llm_call", {"section": s}) for s in state["sections"]]
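The fan-out relies on LangGraph's Send API: a conditional edge that returns a list of Send objects spawns one llm_call worker per section, each with its own WorkerState, and the workers' completed_sections lists are merged by the operator.add reducer. A minimal illustration with hypothetical section contents:

# Each Send targets the "llm_call" node with a private payload.
example_sections = [
    Section(name="Introduction", description="What LLM scaling laws are"),
    Section(name="Evidence", description="Key empirical results"),
]
sends = [Send("llm_call", {"section": s}) for s in example_sections]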

Finally, build the workflow:

# Build workflow
orchestrator_worker_builder = StateGraph(State)

# Add the nodes
orchestrator_worker_builder.add_node("orchestrator", orchestrator)
orchestrator_worker_builder.add_node("llm_call", llm_call)
orchestrator_worker_builder.add_node("synthesizer", synthesizer)

# Add edges to connect nodes
orchestrator_worker_builder.add_edge(START, "orchestrator")
orchestrator_worker_builder.add_conditional_edges(
    "orchestrator", assign_workers, ["llm_call"]
)
orchestrator_worker_builder.add_edge("llm_call", "synthesizer")
orchestrator_worker_builder.add_edge("synthesizer", END)

# Compile the workflow
orchestrator_worker = orchestrator_worker_builder.compile()

# Show the workflow
display(Image(orchestrator_worker.get_graph().draw_mermaid_png()))

[figure: the compiled orchestrator-worker graph]

As before, two examples (the streaming one is left commented out):

state = orchestrator_worker.invoke({"topic": "Create a report on LLM scaling laws"})

from IPython.display import Markdown
# Markdown(state["final_report"])

# for step in orchestrator_worker.stream({"topic": "Create a report on LLM scaling laws"}):
#     print(step)

Evaluator-Optimizer (Actor-Critic)

[figure: the evaluator-optimizer workflow]

  • In the evaluator-optimizer workflow, one LLM call generates a response while another provides evaluation and feedback in a loop.
  • When to use this workflow: This workflow is particularly effective when we have clear evaluation criteria, and when iterative refinement provides measurable value. The two signs of good fit are, first, that LLM responses can be demonstrably improved when a human articulates their feedback; and second, that the LLM can provide such feedback. This is analogous to the iterative writing process a human writer might go through when producing a polished document.

Define the graph:

# Graph state
class State(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str

# Schema for structured output to use in evaluation
class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(
        description="Decide if the joke is funny or not.",
    )
    feedback: str = Field(
        description="If the joke is not funny, provide feedback on how to improve it.",
    )

# Augment the LLM with schema for structured output
evaluator = llm.with_structured_output(Feedback)

# Nodes
def llm_call_generator(state: State):
    """LLM generates a joke"""
    if state.get("feedback"):
        msg = llm.invoke(f"Write a joke about {state['topic']} but take into account the feedback: {state['feedback']}")
    else:
        msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def llm_call_evaluator(state: State):
    """LLM evaluates the joke"""
    grade = evaluator.invoke(f"Grade the joke {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}

# Conditional edge function to route back to the joke generator or end, based on the evaluator's feedback
def route_joke(state: State):
    """Route back to joke generator or end based upon feedback from the evaluator"""
    if state["funny_or_not"] == "funny":
        return "Accepted"
    elif state["funny_or_not"] == "not funny":
        return "Rejected + Feedback"
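One practical caveat: if the evaluator never returns "funny", this loop runs forever. A sketch of a guard (an assumption, not in the notebook) that tracks attempts in the state and accepts after three rounds:

# Sketch: add an attempt counter to the state and accept after 3 rounds.
class GuardedState(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str
    attempts: int

def route_joke_guarded(state: GuardedState):
    if state["funny_or_not"] == "funny" or state.get("attempts", 0) >= 3:
        return "Accepted"
    return "Rejected + Feedback"
# (the generator node would also return {"attempts": state.get("attempts", 0) + 1})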

Build the workflow:

# Build workflow
optimizer_builder = StateGraph(State)

# Add the nodes
optimizer_builder.add_node("llm_call_generator", llm_call_generator)
optimizer_builder.add_node("llm_call_evaluator", llm_call_evaluator)

# Add edges to connect nodes
optimizer_builder.add_edge(START, "llm_call_generator")
optimizer_builder.add_edge("llm_call_generator", "llm_call_evaluator")
optimizer_builder.add_conditional_edges(
    "llm_call_evaluator",
    route_joke,
    {  # Name returned by route_joke : Name of next node to visit
        "Accepted": END,
        "Rejected + Feedback": "llm_call_generator",
    },
)

# Compile the workflow
optimizer_workflow = optimizer_builder.compile()

display(Image(optimizer_workflow.get_graph().draw_mermaid_png()))

[figure: the compiled evaluator-optimizer graph]

An example:

for step in optimizer_workflow.stream({"topic": "Cats"}):
    print(step)
# {'llm_call_generator': {'joke': 'Why was the cat sitting on the computer?\n\nBecause it wanted to keep an eye on the mouse!'}}
# {'llm_call_evaluator': {'funny_or_not': 'funny', 'feedback': ''}}

Agent

  • The Environment receives an Action and returns feedback

[figure: the agent loop: the LLM takes actions, the environment returns feedback]

import numpy as np
from langchain_core.tools import tool

# Define tools
@tool
def multiply(a: float, b: float) -> float:
    """Multiply a and b.

    Args:
        a: first float
        b: second float
    """
    return a * b

@tool
def add(a: float, b: float) -> float:
    """Adds a and b.

    Args:
        a: first float
        b: second float
    """
    return a + b

@tool
def subtract(a: float, b: float) -> float:
    """Subtract b from a.

    Args:
        a: first float
        b: second float
    """
    return a - b

@tool
def divide(a: float, b: float) -> float:
    """Divide a by b.

    Args:
        a: first float
        b: second float
    """
    return a / b

@tool
def sigmoid(a: float) -> float:
    """sigmoid(a)

    Args:
        a: first float
    """
    return 1. / (1 + np.exp(-a))

# Augment the LLM with tools
tools = [add, subtract, multiply, divide, sigmoid]
tools_by_name = {tool.name: tool for tool in tools}
llm_with_tools = llm.bind_tools(tools)

from langgraph.graph import MessagesState
from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage

# Nodes
def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            llm_with_tools.invoke(
                [SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs.")]
                + state["messages"]
            )
        ]
    }

def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

# Conditional edge function to route to the tool node or end, based on whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["environment", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "Action"
    # Otherwise, we stop (reply to the user)
    return END

# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("environment", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    {
        # Name returned by should_continue : Name of next node to visit
        "Action": "environment",
        END: END,
    },
)
agent_builder.add_edge("environment", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

[figure: the compiled agent graph]
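LangGraph also ships this tool-calling loop prebuilt. A sketch using create_react_agent with the same tools list (equivalent in spirit; exact behavior may differ by version):

from langgraph.prebuilt import create_react_agent

prebuilt = create_react_agent(llm, tools)
result = prebuilt.invoke({"messages": [HumanMessage(content="calculate derivative of sigmoid(5)")]})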

An example:

# Invoke
messages = [HumanMessage(content="calculate derivative of sigmoid(5)")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()

The full output:

================================ Human Message =================================

calculate derivative of sigmoid(5)
================================== Ai Message ==================================
Tool Calls:
  sigmoid (call_yhe0L1h0iirYx86Oc4tGbHHm)
 Call ID: call_yhe0L1h0iirYx86Oc4tGbHHm
  Args:
    a: 5
================================= Tool Message =================================

0.9933071490757153
================================== Ai Message ==================================
Tool Calls:
  multiply (call_eepwJz1ggN5uMOU3hCNOir1V)
 Call ID: call_eepwJz1ggN5uMOU3hCNOir1V
  Args:
    a: 0.9933071490757153
    b: 0.006692850924284857
================================= Tool Message =================================

0.006648056670790157
================================== Ai Message ==================================

The derivative of the sigmoid function at \( \text{sigmoid}(5) \) is approximately \( 0.00665 \).
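As a sanity check on the agent's plan: the sigmoid derivative is s'(x) = s(x)(1 - s(x)), which is exactly the multiply call above.

import numpy as np

s = 1. / (1 + np.exp(-5))  # sigmoid(5) = 0.9933071490757153
print(s * (1 - s))         # ~0.006648, matching the agent's result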
