1. Software Introduction
Program and source code downloads are provided at the end of this article.
AgenticSeek is an open-source, fully local Manus AI. No APIs required: it is an autonomous agent that thinks, browses the web, and codes for the sole cost of electricity. This voice-enabled AI assistant is a 100% local alternative to Manus AI that autonomously browses the web, writes code, and plans tasks while keeping all data on your device. It is tailored for local reasoning models and runs entirely on your hardware, ensuring complete privacy and zero cloud dependency.
2. Why Choose AgenticSeek?
- 🔒 Fully Local & Private - Everything runs on your machine — no cloud, no data sharing. Your files, conversations, and searches stay private.
- 🌐 Smart Web Browsing - AgenticSeek can browse the internet by itself — search, read, extract info, fill web forms — all hands-free.
- 💻 Autonomous Coding Assistant - Need code? It can write, debug, and run programs in Python, C, Go, Java, and more — all without supervision.
- 🧠 Smart Agent Selection - You ask, it figures out the best agent for the job automatically. Like having a team of experts ready to help.
- 📋 Plans & Executes Complex Tasks - From trip planning to complex projects — it can split big tasks into steps and get things done using multiple AI agents.
- 🎙️ Voice-Enabled - Clean, fast, futuristic voice and speech-to-text, allowing you to talk to it like it's your personal AI from a sci-fi movie.
3. Installation
Make sure you have ChromeDriver, Docker, and Python 3.10 installed.
We highly advise using exactly Python 3.10 for the setup; dependency errors may occur otherwise.
For issues related to ChromeDriver, see the Chromedriver section.
1️⃣ Clone the repository and setup
git clone https://github.com/Fosowl/agenticSeek.git
cd agenticSeek
mv .env.example .env
2️⃣ Create a virtual env
python3 -m venv agentic_seek_env
source agentic_seek_env/bin/activate
# On Windows: agentic_seek_env\Scripts\activate
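Before installing dependencies, it can be worth a quick check (not part of the official steps) that the activated environment really uses Python 3.10:
python --version   # should report 3.10.x inside the venv
which python       # Linux/macOS: should point into agentic_seek_env/ (on Windows use: where python)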
3️⃣ Install package
Ensure Python, Docker and Docker Compose, and Google Chrome are installed.
We recommend Python 3.10.0.
Automatic Installation (Recommended):
For Linux/macOS:
./install.sh
For Windows:
./install.bat
Manually:
Note: For any OS, ensure the ChromeDriver you install matches your installed Chrome version. Run google-chrome --version. See Known issues if you have Chrome > 135.
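A quick way to compare the two versions, assuming both binaries are on your PATH; the major version numbers should match:
google-chrome --version    # e.g. Google Chrome 134.x
chromedriver --version     # the major version should match your Chrome major version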
- Linux:
Update Package List: sudo apt update
Install Dependencies: sudo apt install -y alsa-utils portaudio19-dev python3-pyaudio libgtk-3-dev libnotify-dev libgconf-2-4 libnss3 libxss1
Install ChromeDriver matching your Chrome browser version: sudo apt install -y chromium-chromedriver
Install requirements: pip3 install -r requirements.txt
- macOS:
Update brew: brew update
Install chromedriver: brew install --cask chromedriver
Install portaudio: brew install portaudio
Upgrade pip: python3 -m pip install --upgrade pip
Upgrade wheel: pip3 install --upgrade setuptools wheel
Install requirements: pip3 install -r requirements.txt
- Windows:
Install pyreadline3: pip install pyreadline3
Install portaudio manually (e.g., via vcpkg or prebuilt binaries) and then run: pip install pyaudio
Download and install chromedriver manually from: https://sites.google.com/chromium.org/driver/getting-started
Place chromedriver in a directory included in your PATH.
Install requirements: pip3 install -r requirements.txt
4. Setup to Run the LLM Locally on Your Computer
Hardware Requirements:
To run LLMs locally, you'll need sufficient hardware. At a minimum, a GPU capable of running Qwen/Deepseek 14B is required. See the FAQ for detailed model/performance recommendations.
Setup your local provider
Start your local provider, for example with ollama:
ollama serve
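If the model you plan to use has not been downloaded yet, pull it first. A minimal sketch, assuming the deepseek-r1:14b tag used in the example config below:
ollama pull deepseek-r1:14b   # one-time download; ollama serve will then load it on demand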
See below for a list of supported local providers.
Update the config.ini
Change the config.ini file to set provider_name to a supported provider and provider_model to an LLM supported by your provider. We recommend a reasoning model such as Qwen or Deepseek.
See the FAQ at the end of the README for required hardware.
[MAIN]
is_local = True # whether you are running locally or with a remote provider
provider_name = ollama # or lm-studio, openai, etc.
provider_model = deepseek-r1:14b # choose a model that fits your hardware
provider_server_address = 127.0.0.1:11434
agent_name = Jarvis # name of your AI
recover_last_session = True # whether to recover the previous session
save_session = True # whether to remember the current session
speak = True # text to speech
listen = False # speech to text, only for CLI
work_dir = /Users/mlg/Documents/workspace # the workspace for AgenticSeek
jarvis_personality = False # whether to use a more "Jarvis"-like personality (experimental)
languages = en zh # the list of languages; text to speech will default to the first language in the list
[BROWSER]
headless_browser = True # whether to use a headless browser; recommended only if you use the web interface
stealth_mode = True # use undetected selenium to reduce browser detection
Warning: Do NOT set provider_name to openai if using LM-studio for running LLMs. Set it to lm-studio.
Note: Some providers (e.g. lm-studio) require http:// in front of the IP address. For example http://127.0.0.1:1234.
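Optionally, before launching AgenticSeek you can confirm that the local provider is actually reachable. A sketch for ollama's default port (adjust the port, e.g. 1234, for lm-studio):
curl http://127.0.0.1:11434/api/tags   # ollama: returns the list of locally available models if the server is up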
List of local providers
Provider | Local? | Description |
---|---|---|
ollama | Yes | Run LLMs locally with ease using ollama as a LLM provider |
lm-studio | Yes | Run LLM locally with LM studio (set provider_name to lm-studio) |
openai | Yes | Use openai compatible API (eg: llama.cpp server) |
Next step: Start services and run AgenticSeek
See the Known issues section if you are having issues.
See the Run with an API section if your hardware can't run deepseek locally.
See the Config section for a detailed config file explanation.
5. Setup to Run with an API
Set the desired provider in the config.ini. See below for a list of API providers.
[MAIN]
is_local = False
provider_name = google
provider_model = gemini-2.0-flash
provider_server_address = 127.0.0.1:5000 # doesn't matter
Warning: Make sure there is no trailing space in the config.
Export your API key: export <<PROVIDER>>_API_KEY="xxx"
Example: export TOGETHER_API_KEY="xxxxx"
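The export above only lasts for the current shell session. If you want the key to persist, one option (a bash sketch; the key value is a placeholder) is to append it to your shell profile:
echo 'export TOGETHER_API_KEY="xxxxx"' >> ~/.bashrc   # then open a new terminal, or run: source ~/.bashrc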
List of API providers
Provider | Local? | Description |
---|---|---|
openai | Depends | Use ChatGPT API |
deepseek | No | Deepseek API (non-private) |
huggingface | No | Hugging-Face API (non-private) |
togetherAI | No | Use together AI API (non-private) |
google | No | Use google gemini API (non-private) |
We advise against using gpt-4o or other closedAI models; performance is poor for web browsing and task planning.
Please also note that coding/bash tasks might fail with gemini; it seems to ignore our formatting prompts, which are optimized for deepseek r1.
Next step: Start services and run AgenticSeek
See the Known issues section if you are having issues.
See the Config section for a detailed config file explanation.
Start services and Run
Activate your python env if needed.
source agentic_seek_env/bin/activate
Start required services. This will start all services from the docker-compose.yml, including: searxng, redis (required by searxng), and the frontend.
sudo ./start_services.sh # macOS
start ./start_services.cmd # Windows
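Optionally, confirm that the containers defined in docker-compose.yml actually came up before moving on; a quick check:
docker ps   # the searxng, redis and frontend containers should be listed as Up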
Option 1: Run with the CLI interface.
python3 cli.py
We advise you to set headless_browser to False in the config.ini for CLI mode.
Option 2: Run with the Web interface.
Start the backend.
python3 api.py
Go to http://localhost:3000/ and you should see the web interface.
Usage
Make sure the services are up and running with ./start_services.sh, then run AgenticSeek with python3 cli.py for CLI mode, or with python3 api.py and go to localhost:3000 for the web interface.
You can also use speech to text by setting listen = True in the config. Only for CLI mode.
To exit, simply say/type goodbye.
Here are some usage examples:
Make a snake game in python!
Search the web for top cafes in Rennes, France, and save a list of three with their addresses in rennes_cafes.txt.
Write a Go program to calculate the factorial of a number, save it as factorial.go in your workspace.
Search my summer_pictures folder for all JPG files, rename them with today's date, and save a list of renamed files in photos_list.txt.
Search online for popular sci-fi movies from 2024 and pick three to watch tonight. Save the list in movie_night.txt.
Search the web for the latest AI news articles from 2025, select three, and write a Python script to scrape their titles and summaries. Save the script as news_scraper.py and the summaries in ai_news.txt in /home/projects.
Friday, search the web for a free stock price API, register with supersuper7434567@gmail.com, then write a Python script to fetch daily prices for Tesla using the API, and save the results in stock_prices.csv.
Note that form filling capabilities are still experimental and might fail.
After you type your query, AgenticSeek will allocate the best agent for the task.
Because this is an early prototype, the agent routing system might not always allocate the right agent based on your query.
Therefore, you should be very explicit about what you want and how the AI should proceed. For example, if you want it to conduct a web search, do not say:
Do you know some good countries for solo-travel?
Instead, ask:
Do a web search and find out which are the best countries for solo-travel
Setup to run the LLM on your own server
If you have a powerful computer or a server that you can use, but you want to use it from your laptop, you have the option of running the LLM on a remote server using our custom LLM server.
On your "server" that will run the AI model, get the IP address:
ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1 # local ip
curl https://ipinfo.io/ip # public ip
Note: For Windows or macOS, use ipconfig or ifconfig respectively to find the IP address.
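For example (a sketch, not from the project docs; interface names vary):
ipconfig getifaddr en0   # macOS: prints the IPv4 address of en0, if that is your active interface
ipconfig                 # Windows: look for the "IPv4 Address" line of your active adapter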
Clone the repository and enter the server/ folder.
git clone --depth 1 https://github.com/Fosowl/agenticSeek.git
cd agenticSeek/llm_server/
Install server-specific requirements:
pip3 install -r requirements.txt
Run the server script.
python3 app.py --provider ollama --port 3333
You have the choice between using ollama and llamacpp as the LLM service.
Now on your personal computer:
Change the config.ini file to set provider_name to server and provider_model to deepseek-r1:xxb. Set the provider_server_address to the IP address of the machine that will run the model.
[MAIN]
is_local = False
provider_name = server
provider_model = deepseek-r1:70b
provider_server_address = x.x.x.x:3333
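Before starting AgenticSeek, it can help to check from your personal computer that the server's port is reachable. A sketch using netcat, assuming the example port 3333:
nc -zv x.x.x.x 3333   # replace x.x.x.x with the server IP; a "succeeded"/"open" result means the LLM server port is reachable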
Next step: Start services and run AgenticSeek
Speech to Text
Please note that currently speech to text only works in English.
The speech-to-text functionality is disabled by default. To enable it, set the listen option to True in the config.ini file:
listen = True
When enabled, the speech-to-text feature listens for a trigger keyword, which is the agent's name, before it begins processing your input. You can customize the agent's name by updating the agent_name value in the config.ini file:
agent_name = Friday
For optimal recognition, we recommend using a common English name like "John" or "Emma" as the agent name.
Once you see the transcript start to appear, say the agent's name aloud to wake it up (e.g., "Friday").
Speak your query clearly.
End your request with a confirmation phrase to signal the system to proceed. Examples of confirmation phrases include:
"do it", "go ahead", "execute", "run", "start", "thanks", "would ya", "please", "okay?", "proceed", "continue", "go on", "do that", "go it", "do you understand?"
Config
Example config:
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:11434
agent_name = Friday
recover_last_session = False
save_session = False
speak = False
listen = False
work_dir = /Users/mlg/Documents/ai_folder
jarvis_personality = False
languages = en zh
[BROWSER]
headless_browser = False
stealth_mode = False
Explanation:
- is_local -> Runs the agent locally (True) or on a remote server (False).
- provider_name -> The provider to use (one of: ollama, server, lm-studio, deepseek-api).
- provider_model -> The model used, e.g., deepseek-r1:32b.
- provider_server_address -> Server address, e.g., 127.0.0.1:11434 for local. Set to anything for a non-local API.
- agent_name -> Name of the agent, e.g., Friday. Used as a trigger word for TTS.
- recover_last_session -> Restarts from the last session (True) or not (False).
- save_session -> Saves session data (True) or not (False).
- speak -> Enables voice output (True) or not (False).
- listen -> Listens to voice input (True) or not (False).
- work_dir -> Folder the AI will have access to, e.g., /Users/user/Documents/.
- jarvis_personality -> Uses a JARVIS-like personality (True) or not (False). This simply changes the prompt file.
- headless_browser -> Runs the browser without a visible window (True) or not (False).
- stealth_mode -> Makes bot detection harder. The only downside is that you have to manually install the anticaptcha extension.
- languages -> The list of supported languages, needed for the agent routing system to work properly; avoid listing too many or overly similar languages. The longer the language list, the more models will be downloaded.
Providers
The table below shows the available providers:
Provider | Local? | Description |
---|---|---|
ollama | Yes | Run LLMs locally with ease using ollama as a LLM provider |
server | Yes | Host the model on another machine, run from your local machine |
lm-studio | Yes | Run LLM locally with LM studio (lm-studio) |
openai | Depends | Use ChatGPT API (non-private) or an openai-compatible API |
deepseek-api | No | Deepseek API (non-private) |
huggingface | No | Hugging-Face API (non-private) |
togetherAI | No | Use together AI API (non-private) |
google | No | Use google gemini API (non-private) |
To select a provider, change the config.ini:
is_local = True
provider_name = ollama
provider_model = deepseek-r1:32b
provider_server_address = 127.0.0.1:5000
is_local: should be True for any locally running LLM, otherwise False.
provider_name: Select the provider to use by its name; see the provider list above.
provider_model: Set the model to be used by the agent.
provider_server_address: can be set to anything if you are not using the server provider.
Known issues
Chromedriver Issues
Known error #1: chromedriver mismatch
Exception: Failed to initialize browser: Message: session not created: This version of ChromeDriver only supports Chrome version 113 Current browser version is 134.0.6998.89 with binary path
This happens if there is a mismatch between your browser and chromedriver version.
You need to download the latest version from:
https://developer.chrome.com/docs/chromedriver/downloads
If you're using Chrome version 115 or newer, go to:
Chrome for Testing availability
And download the chromedriver version matching your OS.
6. Connection Adapter Issues
Exception: Provider lm-studio failed: HTTP request failed: No connection adapters were found for '127.0.0.1:11434/v1/chat/completions'
Make sure you have http:// in front of the provider IP address:
provider_server_address = http://127.0.0.1:11434
SearxNG base URL must be provided
raise ValueError("SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.")
ValueError: SearxNG base URL must be provided either as an argument or via the SEARXNG_BASE_URL environment variable.
Maybe you didn't move .env.example to .env? You can also export SEARXNG_BASE_URL:
export SEARXNG_BASE_URL="http://127.0.0.1:8080"
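To verify that SearxNG itself is up at that URL (assuming the default port 8080 used above):
curl -I http://127.0.0.1:8080   # a successful HTTP response means the SearxNG container is reachable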
FAQ
Q: What hardware do I need?
Model Size | GPU | Comment |
---|---|---|
7B | 8GB VRAM | ⚠️ Not recommended. Performance is poor, frequent hallucinations, and planner agents will likely fail. |
14B | 12 GB VRAM (e.g. RTX 3060) | ✅ Usable for simple tasks. May struggle with web browsing and planning tasks. |
32B | 24+ GB VRAM (e.g. RTX 4090) | 🚀 Success with most tasks, might still struggle with task planning |
70B+ | 48+ GB VRAM (e.g. Mac Studio) | 💪 Excellent. Recommended for advanced use cases. |
Q: Why Deepseek R1 over other models?
Deepseek R1 excels at reasoning and tool use for its size. We think it's a solid fit for our needs; other models work fine, but Deepseek is our primary pick.
Q: I get an error running cli.py. What do I do?
Ensure your local provider is running (ollama serve), your config.ini matches your provider, and dependencies are installed. If none of that works, feel free to raise an issue.
Q: Can it really run 100% locally?
Yes. With the Ollama, lm-studio, or server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI or other APIs) are optional.
7. Software Download
Quark Drive (夸克網盤) share
Source: the author's GitHub repository: GitHub - Fosowl/agenticSeek: Fully Local Manus AI. No APIs, No $200 monthly bills. Enjoy an autonomous agent that thinks, browses the web, and code for the sole cost of electricity.