# embedchainjs
[![Discord](https://dcbadge.vercel.app/api/server/CUU9FPhRNt?style=flat)](https://discord.gg/CUU9FPhRNt)
[![Twitter](https://img.shields.io/twitter/follow/embedchain)](https://twitter.com/embedchain)
[![Substack](https://img.shields.io/badge/Substack-%23006f5c.svg?logo=substack)](https://embedchain.substack.com/)
embedchain is a framework to easily create LLM-powered bots over any dataset. embedchainjs is the JavaScript version of embedchain. If you want a Python version, check out [embedchain-python](https://github.com/embedchain/embedchain).
# 🤝 Let's Talk Embedchain!
Schedule a [Feedback Session](https://cal.com/taranjeetio/ec) with Taranjeet, the founder, to discuss any issues, provide feedback, or explore improvements.
# How it works
It abstracts the entire process of loading a dataset, chunking it, creating embeddings, and storing them in a vector database.
You can add one or more datasets using the `.add` and `.addLocal` functions, then use the `.query` function to find an answer from the added datasets.
If you want to create a Naval Ravikant bot that has two of his blog posts, as well as a question-and-answer pair you supply, all you need to do is add the links to the blog posts and the QnA pair, and embedchain will create a bot for you.
```javascript
const dotenv = require("dotenv");
dotenv.config();
const { App } = require("embedchain");
// Run the app commands inside an async function only
async function testApp() {
  const navalChatBot = await App();

  // Embed Online Resources
  await navalChatBot.add("web_page", "https://nav.al/feedback");
  await navalChatBot.add("web_page", "https://nav.al/agi");
  await navalChatBot.add(
    "pdf_file",
    "https://navalmanack.s3.amazonaws.com/Eric-Jorgenson_The-Almanack-of-Naval-Ravikant_Final.pdf"
  );

  // Embed Local Resources
  await navalChatBot.addLocal("qna_pair", [
    "Who is Naval Ravikant?",
    "Naval Ravikant is an Indian-American entrepreneur and investor.",
  ]);

  const result = await navalChatBot.query(
    "What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?"
  );
  console.log(result);
  // answer: Naval argues that humans possess the unique capacity to understand explanations or concepts to the maximum extent possible in this physical reality.
}
testApp();
```
# Getting Started
## Installation
- First, make sure that you have the package installed. If not, install it using `npm`:
```bash
npm install embedchain && npm install -S openai@^3.3.0
```
- Currently, embedchainjs is only compatible with openai 3.x, not the latest 4.x. Make sure to use the right version; otherwise you will see the `ChromaDB` error `TypeError: OpenAIApi.Configuration is not a constructor`.
- Make sure that the dotenv package is installed and that your `OPENAI_API_KEY` is set in a file called `.env` in the root folder. You can install dotenv with
```bash
npm install dotenv
```
- Download and install Docker on your device by visiting [this link](https://www.docker.com/). You will need this to run Chroma vector database on your machine.
- Run the following commands to set up the Chroma container in Docker:
```bash
git clone https://github.com/chroma-core/chroma.git
cd chroma
docker-compose up -d --build
```
- Once the Chroma container has been set up, make sure it is running inside Docker.
## Usage
- We use OpenAI's embedding model to create embeddings for chunks and the ChatGPT API as the LLM to answer queries given the relevant docs. Make sure that you have an OpenAI account and an API key. If you don't have an API key, you can create one by visiting [this link](https://platform.openai.com/account/api-keys).
- Once you have the API key, set it in an environment variable called `OPENAI_API_KEY`:
```bash
# Set this inside your .env file
OPENAI_API_KEY="sk-xxxx"
```
- Load the environment variables inside your .js file using the following code:
```js
const dotenv = require("dotenv");
dotenv.config();
```
- Next, import the `App` class from embedchain and use the `.add` function to add any dataset.
- Once your app is created, you can use the `.query` function to get the answer for any query.
```js
const dotenv = require("dotenv");
dotenv.config();
const { App } = require("embedchain");
async function testApp() {
  const navalChatBot = await App();

  // Embed Online Resources
  await navalChatBot.add("web_page", "https://nav.al/feedback");
  await navalChatBot.add("web_page", "https://nav.al/agi");
  await navalChatBot.add(
    "pdf_file",
    "https://navalmanack.s3.amazonaws.com/Eric-Jorgenson_The-Almanack-of-Naval-Ravikant_Final.pdf"
  );

  // Embed Local Resources
  await navalChatBot.addLocal("qna_pair", [
    "Who is Naval Ravikant?",
    "Naval Ravikant is an Indian-American entrepreneur and investor.",
  ]);

  const result = await navalChatBot.query(
    "What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?"
  );
  console.log(result);
  // answer: Naval argues that humans possess the unique capacity to understand explanations or concepts to the maximum extent possible in this physical reality.
}
testApp();
```
- If there is already another `App` instance in your script or app, you can alias the import:
```javascript
const { App: EmbedChainApp } = require("embedchain");
// or
const { App: ECApp } = require("embedchain");
```
## Format supported
We support the following formats:
### PDF File
To add any PDF file, use the data_type `pdf_file`. Eg:
```javascript
await app.add("pdf_file", "a_valid_url_where_pdf_file_can_be_accessed");
```
### Web Page
To add any web page, use the data_type `web_page`. Eg:
```javascript
await app.add("web_page", "a_valid_web_page_url");
```
### QnA Pair
To supply your own QnA pair, use the data_type `qna_pair` and pass a two-element array. Eg:
```javascript
await app.addLocal("qna_pair", ["Question", "Answer"]);
```
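Internally, a QnA pair is typically flattened into a single text document before it is chunked and embedded like any other source. A minimal sketch of that idea (the `formatQnaPair` helper is hypothetical, not embedchain's actual code):

```javascript
// Hypothetical helper: flatten a [question, answer] pair into one
// document string so it can be chunked and embedded like any other text.
function formatQnaPair([question, answer]) {
  return `Q: ${question}\nA: ${answer}`;
}

console.log(
  formatQnaPair([
    "Who is Naval Ravikant?",
    "Naval Ravikant is an Indian-American entrepreneur and investor.",
  ])
);
```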
### More Formats coming soon
- If you want to add any other format, please create an [issue](https://github.com/embedchain/embedchainjs/issues) and we will add it to the list of supported formats.
## Testing
Before you consume valuable tokens, you should make sure that the embedding works and that it retrieves the correct document from the database.
For this, you can use the `dryRun` method.
Following the example above, add this to your script:
```js
let result = await navalChatBot.dryRun(
  "What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?"
);
console.log(result);
/*
Use the following pieces of context to answer the query at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

terms of the unseen. And I think that's critical. That is what humans do uniquely that no other creature, no other computer, no other intelligence, biological or artificial, that we have ever encountered does. And not only do we do it uniquely, but if we were to meet an alien species that also had the power to generate these good explanations, there is no explanation that they could generate that we could not understand. We are maximally capable of understanding. There is no concept out there that is possible in this physical reality that a human being, given sufficient time and resources and

Query: What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?
Helpful Answer:
*/
```
_The embedding is confirmed to work as expected. It returns the right document, even if the question is asked slightly differently. No prompt tokens have been consumed._
**The dry run will still consume tokens to embed your query, but it is only ~1/15 of the prompt.**
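The cost difference can be illustrated with a simple character-based heuristic (roughly 4 characters per token; this is an assumption for illustration, not OpenAI's actual tokenizer):

```javascript
// Rough token estimate: ~4 characters per token. This is a heuristic
// for illustration only, not OpenAI's real tokenizer.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const query =
  "What unique capacity does Naval argue humans possess when it comes to understanding explanations or concepts?";

// A dry run only embeds the query...
const queryTokens = estimateTokens(query);

// ...while a real call also sends the retrieved context in the prompt.
// The context here is a placeholder standing in for retrieved documents.
const retrievedContext = "x".repeat(1500);
const promptTokens = estimateTokens(query + retrievedContext);

console.log(queryTokens, promptTokens); // the query is a small fraction of the full prompt
```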
# How does it work?
Creating a chat bot over any dataset requires the following steps to happen:
- load the data
- create meaningful chunks
- create embeddings for each chunk
- store the chunks in a vector database

Whenever a user asks a query, the following process happens to find the answer:
- create an embedding for the query
- find similar documents for the query in the vector database
- pass the similar documents as context to the LLM to get the answer
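The query flow can be sketched with a toy in-memory version (the helper names are made up, and the word-overlap score stands in for real embedding similarity; a real app uses OpenAI embeddings and Chroma):

```javascript
// Toy sketch of the query flow: score the query against stored chunks,
// pick the most similar one, and build a prompt for the LLM.

function chunk(text, size = 80) {
  // Split a document into fixed-size character chunks.
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

function overlapScore(a, b) {
  // Count shared lowercase words between two strings
  // (a stand-in for cosine similarity of embeddings).
  const wordsA = new Set(a.toLowerCase().split(/\W+/));
  const wordsB = new Set(b.toLowerCase().split(/\W+/));
  let shared = 0;
  for (const w of wordsA) if (w && wordsB.has(w)) shared += 1;
  return shared;
}

const store = []; // stands in for the vector database

function addDocument(text) {
  for (const c of chunk(text)) store.push(c);
}

function query(question) {
  // Retrieve the best-matching chunk and build the LLM prompt.
  const best = store.reduce((top, c) =>
    overlapScore(question, c) > overlapScore(question, top) ? c : top
  );
  return `Use the following context to answer.\nContext: ${best}\nQuery: ${question}\nHelpful Answer:`;
}

addDocument("Naval Ravikant is an entrepreneur and investor who writes about wealth and happiness.");
addDocument("Good explanations let humans understand any concept possible in this physical reality.");

console.log(query("What can humans understand?"));
```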