1. Software Introduction
Download links for the program and source code are provided at the end of this article.
OramaCore is the AI runtime you need for your projects, answer engines, copilots, and search.
It includes a fully-fledged full-text search engine, a vector database, an LLM interface with action planning and reasoning, a JavaScript runtime for writing and running your own custom agents on your data, and many more utilities.
2. Getting Started
The simplest way to get started is to follow the docker-compose.yml file you can find in this repository.
You can either clone the entire repo, or set oramasearch/oramacore:latest as the image of the oramacore service in your docker-compose.yml file.
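For orientation only, here is a minimal sketch of what that service entry could look like. This is not the repository's actual docker-compose.yml, which also wires up the AI server and mounts the configuration file; the port mapping is an assumption based on the default http://localhost:8080 URL used in the SDK examples later in this article.

```yaml
# Illustrative sketch only — the real docker-compose.yml in the repository
# also defines the AI server service and configuration mounts.
services:
  oramacore:
    image: oramasearch/oramacore:latest
    ports:
      - "8080:8080" # assumption: the HTTP API port used by the SDK examples below
```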
Then compile your configuration file and run it:
docker compose up
This will create the following architecture, allowing you to perform high-performance RAG with little to zero configuration.
An NVIDIA GPU is highly recommended for running the application. For production usage, we recommend at least one NVIDIA A100; an optimal configuration would include four NVIDIA H100s.
3. Available Dockerfiles
Depending on your machine, you may want to use different Docker images.
| Application | CPU/GPU | Docker image |
|---|---|---|
| OramaCore | X86_64 | oramasearch/oramacore |
| OramaCore | ARM64 (Mac M series, for example) | oramasearch/oramacore-arm64 |
| AI Server | Any CPU architecture, no CUDA access | oramasearch/oramacore-ai-server |
| AI Server | Any CPU architecture, CUDA available | coming soon |
Using the JavaScript SDK
You can install the official JavaScript SDK with npm:
npm i @orama/core
Then, you can start by creating a collection (a database index) with all of the data you want to perform AI search & experiences on:
```js
import { OramaCoreManager } from "@orama/core";

const orama = new OramaCoreManager({
  url: "http://localhost:8080",
  masterAPIKey: "<master-api-key>", // The master API key set in your config file
});

const newCollection = await orama.createCollection({
  id: "products",
  writeAPIKey: "my-write-api-key", // A custom API key to perform write operations on your collection
  readAPIKey: "my-read-api-key", // A custom API key to perform read operations on your collection
});
```
Then, insert some data:
```js
import { CollectionManager } from "@orama/core";

const collection = new CollectionManager({
  url: "http://localhost:8080",
  collectionID: "<COLLECTION_ID>",
  writeAPIKey: "<write_api_key>",
});

// You can insert a single document
await collection.insert({
  title: "My first document",
  content: "This is the content of my first document.",
});

// Or you can insert multiple documents by passing an array of objects
await collection.insert([
  {
    title: "My first document",
    content: "This is the content of my first document.",
  },
  {
    title: "My second document",
    content: "This is the content of my second document.",
  },
]);
```
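The batch form is convenient for importing your own data. As a minimal sketch, assuming a hypothetical local products.json file containing an array of { title, content } objects, a bulk import could look like this; it reuses only the CollectionManager setup and collection.insert call shown above, plus Node's built-in fs/promises module.

```js
import { readFile } from "node:fs/promises";
import { CollectionManager } from "@orama/core";

const collection = new CollectionManager({
  url: "http://localhost:8080",
  collectionID: "<COLLECTION_ID>",
  writeAPIKey: "<write_api_key>",
});

// products.json is a hypothetical local file holding an array of
// { title, content } objects to be indexed.
const documents = JSON.parse(await readFile("products.json", "utf8"));

// Batch insert, exactly as in the array example above.
await collection.insert(documents);
```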
OramaCore will automatically generate highly optimized embeddings for you and will store them inside its built-in vector database.
Now you can perform vector, hybrid, or full-text search, or let OramaCore decide which one is best for your specific query:
```js
import { CollectionManager } from "@orama/core";

const collection = new CollectionManager({
  url: "http://localhost:8080",
  collectionID: "<COLLECTION_ID>",
  readAPIKey: "<read_api_key>",
});

const results = await collection.search({
  term: "The quick brown fox",
  mode: "auto", // can be "fulltext", "vector", "hybrid", or "auto"
});
```
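What you do with the results depends on the shape returned by your SDK version. As a rough sketch, assuming an Orama-style result object with a hits array whose entries expose score and document (these field names are assumptions, so check the SDK reference):

```js
// Hypothetical result handling: `hits`, `score`, and `document` are assumed,
// Orama-style field names and may differ in your SDK version.
for (const hit of results.hits ?? []) {
  console.log(hit.score, hit.document?.title);
}
```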
You can also run Answer Sessions, as you would on Perplexity or SearchGPT, but on your own data!
```js
import { CollectionManager } from "@orama/core";

const collection = new CollectionManager({
  url: "http://localhost:8080",
  collectionID: "<COLLECTION_ID>",
  readAPIKey: "<read_api_key>",
});

const answerSession = collection.createAnswerSession({
  initialMessages: [
    {
      role: "user",
      content: "How do I install OramaCore?",
    },
    {
      role: "assistant",
      content: "You can install OramaCore by pulling the oramasearch/oramacore:latest Docker image",
    },
  ],
  events: {
    onStateChange(state) {
      console.log("State changed:", state);
    },
  },
});
```
Software Download
Shared via Quark Netdisk (夸克網盤).
The information in this article comes from the author's GitHub repository: GitHub - oramasearch/oramacore: OramaCore is the AI runtime you need for your AI projects, answer engines, copilots, and search. It includes a fully-fledged full-text search engine, vector database, LLM interface, and many more utilities.