1 Create a Spring Boot Project
1.1 Add Dependencies
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.4.3</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <groupId>org.example</groupId>
    <artifactId>langchain4jSpringbootpro</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>langchain4jSpringbootpro</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <langchain4j.version>1.0.0-beta1</langchain4j.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-community-dashscope-spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>dev.langchain4j</groupId>
                <artifactId>langchain4j-community-bom</artifactId>
                <version>${langchain4j.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
1.2 Configuration
Inspect the auto-configuration file of langchain4j-community-dashscope-spring-boot-starter among the loaded dependencies.
There you can see the conditions it requires before it activates: an API key must be configured.
The configurable model properties can also be viewed in the source file:
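To make that requirement concrete, here is a simplified sketch of the kind of conditional bean wiring such a starter performs. The class below is illustrative, not the starter's actual source; only the property names match the real configuration:

package org.example.sketch;

import dev.langchain4j.community.model.dashscope.QwenChatModel;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative sketch: the chat model bean is only created
// when the api-key property is present.
@Configuration
public class DashScopeAutoConfigSketch {

    @Bean
    @ConditionalOnProperty(prefix = "langchain4j.community.dashscope.chat-model", name = "api-key")
    QwenChatModel qwenChatModel(
            @Value("${langchain4j.community.dashscope.chat-model.api-key}") String apiKey,
            @Value("${langchain4j.community.dashscope.chat-model.model-name:qwen-plus}") String modelName) {
        return QwenChatModel.builder()
                .apiKey(apiKey)
                .modelName(modelName)
                .build();
    }
}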
Create the configuration file application.properties:
server.port=8080
langchain4j.community.dashscope.chat-model.api-key = your-api-key
langchain4j.community.dashscope.chat-model.model-name = qwen-plus
1.3 Code Implementation
package org.example.controller;

import dev.langchain4j.community.model.dashscope.QwenChatModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/ai")
public class AiController {

    @Autowired
    QwenChatModel qwenChatModel;

    @RequestMapping("/chat")
    public String test(@RequestParam(defaultValue = "你是誰") String message) {
        return qwenChatModel.chat(message);
    }
}
1.4 Access the Result
After startup, visit http://localhost:8080/ai/chat.
2 Integrating DeepSeek
To integrate DeepSeek you only need to change the configuration file, as follows:
langchain4j.community.dashscope.chat-model.api-key = your-api-key
langchain4j.community.dashscope.chat-model.model-name = deepseek-r1
After startup, visit http://localhost:8080/ai/chat.
3 Integrating Ollama
Install Ollama and download the model you want.
Add the dependency:
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama-spring-boot-starter</artifactId>
    <version>${langchain4j.version}</version>
</dependency>
Add to the configuration file:
langchain4j.ollama.chat-model.base-url = http://localhost:11434
langchain4j.ollama.chat-model.model-name = deepseek-r1:1.5b
The code is as follows:
package org.example.controller;

import dev.langchain4j.model.chat.ChatLanguageModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/ai")
public class AiController {

    // with the Ollama starter on the classpath, the auto-configured
    // OllamaChatModel is injected through the ChatLanguageModel interface
    @Autowired
    ChatLanguageModel chatLanguageModel;

    @RequestMapping("/ollamachat")
    public String ollamachatfuc(@RequestParam(defaultValue = "你是誰") String message) {
        return chatLanguageModel.chat(message);
    }
}
The result is as follows:
4 Streaming Output
The earlier dependencies stay unchanged, but because LangChain4j is not part of the Spring family, we have to bring WebFlux into our web application ourselves.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
Stream the response through a Flux:
@RequestMapping(value = "/streamchat",produces = "text/stream;charset=UTF-8")public Flux<String> streamchatfuc(@RequestParam(defaultValue="你是誰") String message){Flux<String> flux = Flux.create(fluxSink -> {streamingChatModel.chat(message, new StreamingChatResponseHandler() {@Overridepublic void onPartialResponse(String partialResponse) {fluxSink.next(partialResponse);}@Overridepublic void onCompleteResponse(ChatResponse completeResponse) {fluxSink.complete();}@Overridepublic void onError(Throwable error) {fluxSink.error(error);}});});return flux;}
Add to the configuration file:
langchain4j.community.dashscope.streaming-chat-model.api-key = your-api-key
langchain4j.community.dashscope.streaming-chat-model.model-name = deepseek-r1
After startup, visit http://localhost:8080/ai/streamchat; you will see the reply arrive as a stream.
5 Conversation Memory
5.1 ChatMemory
The model does not store each of our conversations on the server side, so it cannot remember what we said, as the following code shows:
@Test
public void test_bad() {
    ChatLanguageModel model = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-4o-mini")
            .build();

    System.out.println(model.chat("你好,我是徐庶老師"));
    System.out.println("----");
    // the model has no memory of the previous call
    System.out.println(model.chat("我叫什么"));
}
The result of running it is as follows:
So on every turn we have to send all of the previous conversation records to the model; only then does it know what we said before:
@Test
public void test03() {
    ChatLanguageModel model = OpenAiChatModel.builder()
            .apiKey("demo")
            .modelName("gpt-4o-mini")
            .build();

    UserMessage userMessage1 = UserMessage.userMessage("你好,我是徐庶");
    ChatResponse response1 = model.chat(userMessage1);
    AiMessage aiMessage1 = response1.aiMessage(); // the model's first response
    System.out.println(aiMessage1.text());
    System.out.println("----");

    // The next line is the key point: resend the earlier messages along with the new one
    ChatResponse response2 = model.chat(userMessage1, aiMessage1, UserMessage.userMessage("我叫什么?"));
    AiMessage aiMessage2 = response2.aiMessage(); // the model's second response
    System.out.println(aiMessage2.text());
}
The returned result is as follows:
But having to maintain all the earlier records ourselves on every call is far too much hassle, so LangChain4j provides ChatMemory. That said, this ChatMemory is nowhere near as pleasant or easy to use as Spring AI's; it is genuinely cumbersome! So if anyone tells me LangChain4j is better than Spring AI, we are going to have words!
package org.example.config;

import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.TokenStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class Aiconfig {

    public interface Assistant {
        String chat(String message);

        // streaming response
        TokenStream stream(String message);
    }

    @Bean
    public Assistant assistant(ChatLanguageModel qwenchatModel,
                               StreamingChatLanguageModel qwenstreamingchatModel) {
        // cap how many messages the memory keeps
        ChatMemory chatMemory = MessageWindowChatMemory.withMaxMessages(10);
        return AiServices.builder(Assistant.class)
                .chatLanguageModel(qwenchatModel)
                .streamingChatLanguageModel(qwenstreamingchatModel)
                .chatMemory(chatMemory)
                .build();
    }
}
How it works (see the sketch after this list):
- The proxy object created by AiServices (AiServices.builder(Assistant.class)) intercepts the call to the chat method (Assistant.chat).
- The proxy fetches the earlier conversation records from ChatMemory (retrieving the memory).
- It merges those records into the current request (with the earlier chat history included, the model now has "memory").
- Finally, it stores the current exchange back into ChatMemory (saving the memory).
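A minimal sketch of those four steps, assuming a ChatLanguageModel is available; the class and method body are illustrative, not LangChain4j's actual proxy implementation:

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.memory.ChatMemory;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.response.ChatResponse;

// Illustrative sketch of what the generated proxy does on each chat() call.
public class MemoryChatSketch {

    private final ChatLanguageModel model;
    private final ChatMemory memory = MessageWindowChatMemory.withMaxMessages(10);

    public MemoryChatSketch(ChatLanguageModel model) {
        this.model = model;
    }

    public String chat(String userText) {
        memory.add(UserMessage.userMessage(userText));          // store the new user message
        ChatResponse response = model.chat(memory.messages());  // send the whole history, i.e. the "memory"
        memory.add(response.aiMessage());                       // store the model's reply for next time
        return response.aiMessage().text();                     // return only the latest answer
    }
}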
The code is as follows:
package org.example.controller;

import dev.langchain4j.service.TokenStream;
import jakarta.servlet.http.HttpServletResponse;
import org.example.config.Aiconfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
@RequestMapping("/ai_other")
public class OtherAiController {

    @Autowired
    Aiconfig.Assistant assistant;

    // tell the model my name is 諸葛懿
    @RequestMapping(value = "/memory_chat")
    public String memorychat(@RequestParam(defaultValue = "我叫諸葛懿") String message) {
        return assistant.chat(message);
    }

    // streaming response
    @RequestMapping(value = "/memory_stream_chat", produces = "text/stream;charset=UTF-8")
    public Flux<String> memoryStreamChat(@RequestParam(defaultValue = "我是誰") String message,
                                         HttpServletResponse response) {
        TokenStream stream = assistant.stream(message);
        return Flux.create(sink -> {
            stream.onPartialResponse(sink::next)
                  .onCompleteResponse(c -> sink.complete())
                  .onError(sink::error)
                  .start();
        });
    }
}
First visit http://localhost:8080/ai_other/memory_chat to tell the model your name, then visit http://localhost:8080/ai_other/memory_stream_chat to check; the result is as follows:
5.2 Memory Isolation
Now consider another situation: different users or different conversations obviously cannot share the same memory, otherwise the dialogues would get mixed up, so we need to keep them apart.
This can be done with a memoryId, as the sketch below shows.
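A sketch of what that looks like with an AI Service, assuming the same ChatLanguageModel bean as before; the interface name and ids below are illustrative:

import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.MemoryId;
import dev.langchain4j.service.UserMessage;

public class MemoryIsolationSketch {

    // each distinct @MemoryId value gets its own ChatMemory
    public interface SeparatedAssistant {
        String chat(@MemoryId String conversationId, @UserMessage String message);
    }

    public static SeparatedAssistant build(ChatLanguageModel model) {
        return AiServices.builder(SeparatedAssistant.class)
                .chatLanguageModel(model)
                // invoked once per new memoryId, so every conversation keeps a separate history
                .chatMemoryProvider(memoryId -> MessageWindowChatMemory.withMaxMessages(10))
                .build();
    }
}

With this in place, assistant.chat("user-1", "我叫諸葛懿") and assistant.chat("user-2", "我叫什么") draw on two independent histories, so user-2 gets no answer about 諸葛懿.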
5.3 Persistent Conversations
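By default the ChatMemory above lives in the JVM and is lost on restart. LangChain4j's hook for persisting it is the ChatMemoryStore interface; below is a minimal map-backed sketch (a real implementation would write to Redis or a database; the class name is illustrative):

import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.store.memory.chat.ChatMemoryStore;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a custom ChatMemoryStore; swap the map for Redis/JDBC
// so conversations survive restarts.
public class MapChatMemoryStoreSketch implements ChatMemoryStore {

    private final Map<Object, List<ChatMessage>> db = new ConcurrentHashMap<>();

    @Override
    public List<ChatMessage> getMessages(Object memoryId) {
        return db.getOrDefault(memoryId, new ArrayList<>());
    }

    @Override
    public void updateMessages(Object memoryId, List<ChatMessage> messages) {
        db.put(memoryId, new ArrayList<>(messages)); // called after every exchange
    }

    @Override
    public void deleteMessages(Object memoryId) {
        db.remove(memoryId);
    }
}

It is wired in through the memory builder, e.g. MessageWindowChatMemory.builder().id("user-1").maxMessages(10).chatMemoryStore(new MapChatMemoryStoreSketch()).build().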