# LocalGPT: Secure, Local Conversations with Your Documents 🌐
🚨🚨 You can run localGPT on a pre-configured [Virtual Machine](https://bit.ly/localGPT). Make sure to use the code: PromptEngineering to get 50% off. I will get a small commission!
**LocalGPT** is an open-source initiative that allows you to converse with your documents without compromising your privacy. With everything running locally, you can be assured that no data ever leaves your computer. Dive into the world of secure, local document interactions with LocalGPT.
## Features 🌟
- **Utmost Privacy**: Your data remains on your computer, ensuring 100% security.
- **Versatile Model Support**: Seamlessly integrate a variety of open-source models, including HF, GPTQ, GGML, and GGUF.
- **Diverse Embeddings**: Choose from a range of open-source embeddings.
- **Reuse Your LLM**: Once downloaded, reuse your LLM without the need for repeated downloads.
- **Chat History**: Remembers your previous conversations (in a session).
- **API**: LocalGPT has an API that you can use for building RAG applications.
- **Graphical Interface**: LocalGPT comes with two GUIs; one uses the API and the other is standalone (based on Streamlit). See the sketch after this list for one way to launch them.
- **GPU, CPU & MPS Support**: Supports multiple platforms out of the box; chat with your data using `CUDA`, `CPU`, `MPS`, and more!
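For reference, the repo ships an API server (`run_localGPT_API.py`) and a browser UI (`localGPTUI/localGPTUI.py`). A likely way to launch them, sketched here as an assumption rather than documented usage (the script names come from the repo layout; the startup order and working directory are guesses):
```shell
# Terminal 1: start the API server (assumed entry point)
python run_localGPT_API.py

# Terminal 2: start the web UI, which talks to the API (assumed entry point)
cd localGPTUI
python localGPTUI.py
```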
## Dive Deeper with Our Videos 🎥
- [Detailed code-walkthrough](https://youtu.be/MlyoObdIHyo)
- [Llama-2 with LocalGPT](https://youtu.be/lbFmceo4D5E)
- [Adding Chat History](https://youtu.be/d7otIM_MCZs)
- [LocalGPT - Updated (09/17/2023)](https://youtu.be/G_prHSKX9d4)
## Technical Details 🛠️
By selecting the right local models and leveraging the power of `LangChain`, you can run the entire RAG pipeline locally, without any data leaving your environment, and with reasonable performance.
- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `InstructorEmbeddings`. It then stores the result in a local vector database using the `Chroma` vector store.
- `run_localGPT.py` uses a local LLM to understand questions and create answers (see the sketch after this list). The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
- You can replace this local LLM with any other LLM from HuggingFace. Make sure the LLM you select is in HF format.
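As a rough illustration of the retrieval step in `run_localGPT.py`, here is a minimal sketch using classic `LangChain` APIs. The persist directory (`DB`) and embedding model name are assumptions for illustration, not necessarily this repo's exact constants:
```python
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

# Load the same embedding model that was used at ingestion time (assumed default).
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")

# Open the persisted Chroma store created by ingest.py ("DB" folder is assumed).
db = Chroma(persist_directory="DB", embedding_function=embeddings)

# A similarity search returns the chunks most relevant to the question;
# these chunks become the context the local LLM answers from.
question = "What does the document say about term limits?"
for doc in db.similarity_search(question, k=4):
    print(doc.page_content[:200])
```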
This project was inspired by the original [privateGPT](https://github.com/imartinez/privateGPT).
## Built Using 🧩
- [LangChain](https://github.com/hwchase17/langchain)
- [HuggingFace LLMs](https://huggingface.co/models)
- [InstructorEmbeddings](https://instructor-embedding.github.io/)
- [LLAMACPP](https://github.com/abetlen/llama-cpp-python)
- [ChromaDB](https://www.trychroma.com/)
- [Streamlit](https://streamlit.io/)
# Environment Setup 🌍
1. 📥 Clone the repo using git:
```shell
git clone https://github.com/PromtEngineer/localGPT.git
```
2. 🐍 Install [conda](https://www.anaconda.com/download) for virtual environment management. Create and activate a new virtual environment.
```shell
conda create -n localGPT python=3.10.0
conda activate localGPT
```
3. 🛠️ Install the dependencies using pip
To set up your environment to run the code, first install all requirements:
```shell
pip install -r requirements.txt
```
***Installing LLAMA-CPP:***
LocalGPT uses [LlamaCpp-Python](https://github.com/abetlen/llama-cpp-python) for GGML (you will need llama-cpp-python <=0.1.76) and GGUF (llama-cpp-python >=0.1.83) models.
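For example, to pin a GGML-compatible build (the version bound is taken from the constraint above):
```shell
pip install llama-cpp-python==0.1.76 --no-cache-dir
```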
If you want to use BLAS or Metal with [llama-cpp](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal), you can set the appropriate flags:
For `NVIDIA` GPU support, use `cuBLAS`:
```shell
# Example: cuBLAS
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
```
For Apple Metal (`M1/M2`) support, use:
```shell
# Example: METAL
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
```
For more details, please refer to [llama-cpp](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal).
## Docker 🐳
Installing the required packages for GPU inference on NVIDIA GPUs, such as gcc 11 and CUDA 11, may cause conflicts with other packages on your system.
As an alternative to Conda, you can use Docker with the provided Dockerfile.
The image includes CUDA; your system only needs Docker, BuildKit, your NVIDIA GPU driver, and the NVIDIA container toolkit.
Build the image with `docker build -t localgpt .` (this requires BuildKit).
Note that Docker BuildKit does not currently support GPU access during *docker build*, only during *docker run*.
Run the container with `docker run -it --mount src="$HOME/.cache",target=/root/.cache,type=bind --gpus=all localgpt`.
## Test dataset
For testing, this repository comes with the [Constitution of the USA](https://constitutioncenter.org/media/files/constitution.pdf) as an example file to use.
## Ingesting your OWN Data
Put your files in the `SOURCE_DOCUMENTS` folder. You can put multiple folders within the `SOURCE_DOCUMENTS` folder and the code will recursively read your files.
### Supported file formats:
LocalGPT currently supports the following file formats. LocalGPT uses `LangChain` for loading these file formats. The code in `constants.py` uses a `DOCUMENT_MAP` dictionary to map a file format to the corresponding loader. To add support for another file format, simply extend this dictionary with the file format and the corresponding loader from [LangChain](https://python.langchain.com/docs/modules/data_connection/document_loaders/); an example follows the dictionary below.
```python
DOCUMENT_MAP = {
".txt": TextLoader,
".md": TextLoader,
".py": TextLoader,
".pdf": PDFMinerLoader,
".csv": CSVLoader,
".xls": UnstructuredExcelLoader,
".xlsx": UnstructuredExcelLoader,
".docx": Docx2txtLoader,
".doc": Docx2txtLoader,
}
```
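For example, to add HTML support you could map the `.html` extension to LangChain's `UnstructuredHTMLLoader`. This is a hypothetical addition, not something shipped in `constants.py`:
```python
from langchain.document_loaders import UnstructuredHTMLLoader

# Map the new extension to its loader; ingest.py will pick it up automatically.
DOCUMENT_MAP[".html"] = UnstructuredHTMLLoader
```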
### Ingest
Run the following command to ingest all the data (this assumes you have `cuda` set up on your system; see below for other devices):
```shell
python ingest.py
```
You will see an output like this:
<img width="1110" alt="Screenshot 2023-09-14 at 3 36 27 PM" src="https://github.com/PromtEngineer/localGPT/assets/134474669/c9274e9a-842c-49b9-8d95-606c3d80011f">
Use the `--device_type` argument to specify a given device.
To run on `cpu`:
```sh
python ingest.py --device_type cpu
```
To run on `M1/M2`:
```sh
python ingest.py --device_type mps
```
Use `--help` for a full list of supported devices:
```sh
python ingest.py --help
```
This will create a new folder called `DB` and use it for the newly created vector store. You can ingest as many documents as you want, and all will be accumulated in the local embeddings database.
If you want to start from an empty database, delete the `DB` folder and re-ingest your documents.
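For example, on Linux/macOS (assuming the default `DB` folder at the repo root):
```shell
rm -rf DB
python ingest.py
```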
Note: When you run this for the first time, it will need internet access to download the embedding model (default: `Instructor Embedding`). In subsequent runs, no data will leave your local environment, and you can ingest data without an internet connection.
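If you want a different embedding model, it is configured in `constants.py`. A minimal sketch, where the variable name `EMBEDDING_MODEL_NAME` and the alternative model are assumptions for illustration:
```python
# constants.py (sketch) - pick any open-source embedding model from HuggingFace.
EMBEDDING_MODEL_NAME = "hkunlp/instructor-large"  # default Instructor embedding
# EMBEDDING_MODEL_NAME = "intfloat/e5-base-v2"    # hypothetical lighter alternative
```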
## Ask questions to your documents, locally!
In order to chat with your documents, run the following command (by default, it will run on `cuda`).
```shell
python run_localGPT.py
```
You can also specify the device type, just like with `ingest.py`:
```shell
python run_localGPT.py --device_type mps # to run on Apple silicon
```
This will load the ingested vector store and embedding model. You will be presented with a prompt:
```shell
> Enter a query:
```
After typing your question, hit Enter. LocalGPT will take some time, depending on your hardware. You will get a response like the one below.
<img width="1312" alt="Screenshot 2023-09-14 at 3 33 19 PM" src="https://github.com/PromtEngineer/localGPT/assets/134474669/a7268de9-ade0-420b-a00b-ed12207dbe41">
Once the answer is generated, you can ask another question without re-running the script; just wait for the prompt to appear again.
***Note:*** When you run this for the first time, it will need internet access to download the LLM (the default model). After that, no data leaves your local environment, and you can run inference without an internet connection.