# 🔎 GPT Researcher
[![Official Website](https://img.shields.io/badge/Official%20Website-tavily.com-blue?style=for-the-badge&logo=world&logoColor=white)](https://tavily.com)
[![Discord Follow](https://dcbadge.vercel.app/api/server/2pFkc83fRq?style=for-the-badge)](https://discord.com/invite/2pFkc83fRq)
[![GitHub Repo stars](https://img.shields.io/github/stars/assafelovic/gpt-researcher?style=social)](https://github.com/assafelovic/gpt-researcher)
[![Twitter Follow](https://img.shields.io/twitter/follow/tavilyai?style=social)](https://twitter.com/tavilyai)
[![PyPI version](https://badge.fury.io/py/gpt-researcher.svg)](https://badge.fury.io/py/gpt-researcher)
- [English](README.md)
- [中文](README-zh_CN.md)
**GPT Researcher is an autonomous agent designed for comprehensive online research on a variety of tasks.**
The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources, outlines, and lessons. Inspired by the recent [Plan-and-Solve](https://arxiv.org/abs/2305.04091) and [RAG](https://arxiv.org/abs/2005.11401) papers, GPT Researcher addresses issues of speed, determinism and reliability, offering a more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.
**Our mission is to empower individuals and organizations with accurate, unbiased, and factual information by leveraging the power of AI.**
## Why GPT Researcher?
- Forming objective conclusions through manual research can take time, sometimes weeks, to find the right resources and information.
- Current LLMs are trained on past, often outdated information and carry a heavy risk of hallucination, making them almost irrelevant for research tasks.
- Solutions that enable web search (such as ChatGPT + Web Plugin) only consider a limited set of resources and content, which in some cases results in superficial conclusions or biased answers.
- Relying on only a narrow selection of resources can bias the conclusions drawn for research questions or tasks.
## Architecture
The main idea is to run "planner" and "execution" agents, where the planner generates questions to research and the execution agents seek the most relevant information for each generated research question. Finally, the planner filters and aggregates all related information and creates a research report. <br /> <br />
The agents leverage both gpt-3.5-turbo and gpt-4-turbo (128K context) to complete a research task, and we optimize for cost by using each model only where necessary. **The average research task takes around 3 minutes to complete and costs ~$0.1.**
<div align="center">
<img align="center" height="500" src="https://cowriter-images.s3.amazonaws.com/architecture.png">
</div>
More specifically, the flow is as follows (a rough sketch in code follows the list):
* Create a domain-specific agent based on the research query or task.
* Generate a set of research questions that together form an objective opinion on any given task.
* For each research question, trigger a crawler agent that scrapes online resources for information relevant to the given task.
* For each scraped resource, summarize it based on the relevant information and keep track of its source.
* Finally, filter and aggregate all summarized sources and generate a final research report.
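The following is a minimal sketch of this planner/executor pattern in plain Python, not the project's actual code: the helper names (`plan_research_questions`, `search_and_summarize`, `write_report`) are hypothetical stand-ins for the planner LLM calls, the crawler agents, and report writing, and `asyncio.gather` illustrates the parallelized execution step.

```python
# Illustrative sketch only -- these helpers do not exist in the repository
# under these names; they stand in for the planner and execution agents.
import asyncio

async def plan_research_questions(task: str) -> list[str]:
    # Planner agent: ask the LLM for a set of research questions for the task.
    return [f"{task} (question {i})" for i in range(1, 4)]  # placeholder output

async def search_and_summarize(question: str) -> dict:
    # Execution agent: search the web, scrape the results, summarize them,
    # and keep track of the source URLs for later citation.
    return {"question": question, "summary": "...", "sources": ["https://example.com"]}

async def write_report(task: str, findings: list[dict]) -> str:
    # Planner agent again: filter and aggregate all summaries into a report.
    return f"# Research report: {task}\n\n" + "\n".join(f["summary"] for f in findings)

async def research(task: str) -> str:
    questions = await plan_research_questions(task)
    # The execution agents run in parallel rather than synchronously,
    # which is where the speed gain mentioned above comes from.
    findings = await asyncio.gather(*(search_and_summarize(q) for q in questions))
    return await write_report(task, list(findings))

if __name__ == "__main__":
    print(asyncio.run(research("Should I invest in solar energy?")))
```

The key design point is that each execution agent is independent of the others, so they can be awaited concurrently instead of running one after another.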
## Demo
https://github.com/assafelovic/gpt-researcher/assets/13554167/a00c89a6-a295-4dd0-b58d-098a31c40fda
## Tutorials
- [How it Works](https://docs.tavily.com/blog/building-gpt-researcher)
- [How to Install](https://www.loom.com/share/04ebffb6ed2a4520a27c3e3addcdde20?sid=da1848e8-b1f1-42d1-93c3-5b0b9c3b24ea)
- [Live Demo](https://www.loom.com/share/6a3385db4e8747a1913dd85a7834846f?sid=a740fd5b-2aa3-457e-8fb7-86976f59f9b8)
## Features
- Generates research, outlines, resources and lessons reports
- Aggregates over 20 web sources per research task to form objective and factual conclusions
- Includes an easy-to-use web interface (HTML/CSS/JS)
- Scrapes web sources with JavaScript support
- Keeps track of visited and used web sources and their context
- Exports research reports to PDF and more...
## Documentation
Please see [here](https://docs.tavily.com/docs/gpt-researcher/getting-started) for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, docker support)
- Reference (full API docs)
- Tavily API integration (high-level explanation of core concepts)
## Quickstart
> **Step 0** - Install Python 3.11 or later. [See here](https://www.tutorialsteacher.com/python/install-python) for a step-by-step guide.
<br />
> **Step 1** - Download the project
```bash
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
```
<br />
> **Step 2** - Install dependencies
```bash
pip install -r requirements.txt
```
<br />
> **Step 3** - Create a .env file with your OpenAI API key and Tavily API key, or simply export them
```bash
export OPENAI_API_KEY={Your OpenAI API Key here}
```
```bash
export TAVILY_API_KEY={Your Tavily API Key here}
```
- **For the LLM, we recommend [OpenAI GPT](https://platform.openai.com/docs/guides/gpt)**, but you can use any other LLM (including open-source models) supported by the [Langchain Adapter](https://python.langchain.com/docs/guides/adapters/openai); simply change the LLM model and provider in config/config.py. Follow [this guide](https://python.langchain.com/docs/integrations/llms/) to learn how to integrate LLMs with Langchain.
- **For the search engine, we recommend [Tavily Search API](https://app.tavily.com) (optimized for LLMs)**, but you can also switch to another search engine of your choice by changing the search provider in config/config.py to `"duckduckgo"`, `"googleAPI"`, `"googleSerp"`, or `"searx"`, and then adding the corresponding API key env variable as shown in the config.py file (see the sketch after this list).
- **We highly recommend using [OpenAI GPT](https://platform.openai.com/docs/guides/gpt) models and [Tavily Search API](https://app.tavily.com) for optimal performance.**
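As a rough illustration of how such settings are usually wired up, the hypothetical sketch below reads configuration from environment variables. Only `OPENAI_API_KEY` and `TAVILY_API_KEY` come from Step 3 above; every other name (`SEARCH_PROVIDER`, `LLM_PROVIDER`, the class name) is an assumption for illustration, so check config/config.py for the real attribute and variable names.

```python
# Hypothetical sketch in the spirit of config/config.py -- not the real config.
# Only OPENAI_API_KEY and TAVILY_API_KEY are documented above; the remaining
# names are illustrative guesses.
import os

class ResearchConfig:
    def __init__(self) -> None:
        self.openai_api_key = os.environ["OPENAI_API_KEY"]              # required (Step 3)
        self.tavily_api_key = os.getenv("TAVILY_API_KEY")                # required for the default provider
        self.search_provider = os.getenv("SEARCH_PROVIDER", "tavily")    # or "duckduckgo", "googleAPI", "googleSerp", "searx"
        self.llm_provider = os.getenv("LLM_PROVIDER", "openai")          # any Langchain-supported provider

if __name__ == "__main__":
    config = ResearchConfig()
    print(config.search_provider)
```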
<br />
> **Step 4** - Run the agent with FastAPI
```bash
uvicorn main:app --reload
```
<br />
> **Step 5** - Go to http://localhost:8000 on any browser and enjoy researching!
To learn how to get started with Docker, or to learn more about the features and services, check out the [documentation](https://docs.tavily.com) page.
## Contributing
We highly welcome contributions! Please check out [contributing](CONTRIBUTING.md) if you're interested.
Please check out our [roadmap](https://trello.com/b/3O7KBePw/gpt-researcher-roadmap) page and reach out to us via our [Discord community](https://discord.gg/2pFkc83fRq) if you're interested in joining our mission.
## Disclaimer
This project, GPT Researcher, is an experimental application and is provided "as-is" without any warranty, express or implied. We are sharing the code for academic purposes under the MIT license. Nothing herein is academic advice, nor a recommendation to use it in academic or research papers.
Our view on unbiased research claims:
1. The whole point of our scraping system is to reduce the risk of incorrect facts. How? The more sites we scrape, the lower the chance of incorrect data. We scrape around 20 sources per research task, so the chance that all of them are wrong is extremely low.
2. We do not aim to eliminate bias; we aim to reduce it as much as possible. **We are here as a community to figure out the most effective human/LLM interactions.**
3. In research, humans also tend toward bias, as most already hold opinions on the topics they research. This tool scrapes many opinions and will evenly present diverse views that a biased researcher might never have read.
**Please note that the use of the GPT-4 language model can be expensive due to its token usage.** By using this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs.