# FinGPT: Open-Source Financial Large Language Models
[![Downloads](https://static.pepy.tech/badge/fingpt)](https://pepy.tech/project/fingpt)
[![Downloads](https://static.pepy.tech/badge/fingpt/week)](https://pepy.tech/project/fingpt)
[![Python 3.6](https://img.shields.io/badge/python-3.6-blue.svg)](https://www.python.org/downloads/release/python-360/)
[![PyPI](https://img.shields.io/pypi/v/fingpt.svg)](https://pypi.org/project/fingpt/)
![License](https://img.shields.io/github/license/AI4Finance-Foundation/fingpt.svg?color=brightgreen)
<div align="center">
<img align="center" src=figs/logo_transparent_background.png width="40%"/>
</div>
Let us not expect Wall Street to open-source LLMs or open up APIs, due to FinTech institutions' internal regulations and policies.
[Blueprint of FinGPT](https://arxiv.org/abs/2306.06031)
<https://huggingface.co/FinGPT>
[![](https://dcbadge.vercel.app/api/server/trsr8SXpW5)](https://discord.gg/trsr8SXpW5)
## What's New:
- [Model Release] Nov, 2023: We release [FinGPT-Forecaster](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Forecaster)! 🔥 [Demo](https://huggingface.co/spaces/FinGPT/FinGPT-Forecaster), [Medium Blog](https://medium.datadriveninvestor.com/introducing-fingpt-forecaster-the-future-of-robo-advisory-services-50add34e3d3c) & [Model](https://huggingface.co/FinGPT/fingpt-forecaster_dow30_llama2-7b_lora) are available on Hugging Face 🤗!
- [Paper Acceptance] Oct, 2023: ["FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets"](https://arxiv.org/abs/2310.04793) is accepted 🎉 by the [Instruction Workshop](https://an-instructive-workshop.github.io/) @ NeurIPS 2023
- [Paper Acceptance] Oct, 2023: ["FinGPT: Democratizing Internet-scale Data for Financial Large Language Models"](https://arxiv.org/abs/2307.10485) is accepted 🎉 by the [Instruction Workshop](https://an-instructive-workshop.github.io/) @ NeurIPS 2023
- [Model Release] Oct, 2023: We release the [financial multi-task LLMs](https://huggingface.co/FinGPT) 🔥 produced when evaluating base LLMs on the [FinGPT-Benchmark](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Benchmark)
- [Paper Acceptance] Sep, 2023: ["Enhancing Financial Sentiment Analysis via Retrieval Augmented Large Language Models"](https://arxiv.org/abs/2310.04027) is accepted 🎉 by the [ACM International Conference on AI in Finance (ICAIF-23)](https://ai-finance.org/icaif-23-accepted-papers/)
- [Model Release] Aug, 2023: We release the [financial sentiment analysis model](https://huggingface.co/FinGPT/fingpt-sentiment_llama2-13b_lora) 🔥
- [Paper Acceptance] Jul, 2023: ["Instruct-FinGPT: Financial Sentiment Analysis by Instruction Tuning of General-Purpose Large Language Models"](https://arxiv.org/abs/2306.12659) is accepted 🎉 by [FinLLM 2023](https://finllm.github.io/workshop/#/fcb) @ IJCAI 2023
- [Paper Acceptance] Jul, 2023: ["FinGPT: Open-Source Financial Large Language Models"](https://arxiv.org/abs/2306.06031) is accepted 🎉 by [FinLLM 2023](https://finllm.github.io/workshop/#/fcb) @ IJCAI 2023
- [Medium Blog] Jun 2023: [FinGPT: Powering the Future of Finance with 20 Cutting-Edge Applications](https://medium.datadriveninvestor.com/fingpt-powering-the-future-of-finance-with-20-cutting-edge-applications-7c4d082ad3d8)
## Why FinGPT?
1). Finance is highly dynamic. [BloombergGPT](https://arxiv.org/abs/2303.17564) trained an LLM on a mixture of finance data and general-purpose data, which took about 53 days at a cost of around **$3M**. Retraining an LLM like BloombergGPT every month or every week is prohibitively expensive, so lightweight adaptation is highly favorable. FinGPT can be fine-tuned swiftly to incorporate new data (the cost falls significantly, to less than **$300 per fine-tuning**).
2). Democratizing Internet-scale financial data is critical, e.g., allowing timely model updates (monthly or weekly) via an automatic data curation pipeline. BloombergGPT has privileged data access and APIs, while FinGPT presents a more accessible alternative. It prioritizes lightweight adaptation, leveraging the best available open-source LLMs.
3). The key technology is RLHF (Reinforcement Learning from Human Feedback), which is missing in BloombergGPT. RLHF enables an LLM to learn individual preferences (risk-aversion level, investing habits, personalized robo-advising, etc.), which is the "secret" ingredient of ChatGPT and GPT-4.
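To see why lightweight adaptation is so much cheaper than full retraining, consider the parameter counts under LoRA-style fine-tuning, where the base model stays frozen and only small low-rank adapter matrices are trained. A minimal sketch (the rank, hidden size, and layer counts below are illustrative assumptions for a Llama2-7B-sized model, not FinGPT's exact configuration):

```python
# Rough illustration of why LoRA adaptation is cheap: instead of updating a
# full d_out x d_in weight matrix, LoRA trains two low-rank factors
# B (d_out x r) and A (r x d_in), adding only r * (d_in + d_out) parameters
# per adapted matrix while the base weights stay frozen.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one weight matrix."""
    return rank * (d_in + d_out)

# Illustrative numbers loosely based on a Llama2-7B-sized model:
# 32 layers, hidden size 4096, adapting the 4 attention projections.
hidden, layers, rank, matrices_per_layer = 4096, 32, 8, 4
trainable = layers * matrices_per_layer * lora_params(hidden, hidden, rank)
total = 7_000_000_000  # ~7B frozen base parameters

print(f"trainable LoRA params: {trainable:,}")             # ~8.4M
print(f"fraction of base model: {trainable / total:.4%}")  # ~0.12%
```

Training roughly 0.1% of the weights is what keeps each fine-tuning run in the hundreds-of-dollars range rather than the millions.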
### Milestone of AI Robo-Advisor: FinGPT-Forecaster
Try the latest released FinGPT-Forecaster demo at our [HuggingFace Space](https://huggingface.co/spaces/FinGPT/FinGPT-Forecaster)
![demo_interface](fingpt/FinGPT_Forecaster/figs/interface.png)
Enter the following inputs:
1) ticker symbol (e.g. AAPL, MSFT, NVDA)
2) the date from which you want the prediction to start (yyyy-mm-dd)
3) the number of past weeks from which market news is retrieved
4) whether to include the latest basic financials as additional information
Click Submit! You'll receive a well-rounded analysis of the company and a prediction for next week's stock price movement.
For a detailed and more customizable implementation, please refer to [FinGPT-Forecaster](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Forecaster)
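Programmatically, the four demo inputs map naturally onto a query for the model. A minimal sketch of assembling such a query (the function name and prompt wording are illustrative assumptions, not FinGPT-Forecaster's actual template; see the linked repo for the real implementation):

```python
# Hypothetical helper mirroring the four FinGPT-Forecaster demo inputs.
# The prompt template is an illustrative assumption, NOT the exact
# template used by FinGPT-Forecaster.
def build_forecaster_query(ticker: str, date: str, n_weeks: int,
                           include_financials: bool) -> str:
    parts = [
        f"Analyze company {ticker} as of {date}.",
        f"Consider market news from the past {n_weeks} week(s).",
    ]
    if include_financials:
        parts.append("Also take the latest basic financials into account.")
    parts.append("Then predict next week's stock price movement.")
    return " ".join(parts)

query = build_forecaster_query("AAPL", "2023-11-10", 3, True)
print(query)
```

The same four arguments the web demo collects become the fields of the query, so a batch of tickers can be scored by looping over this helper.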
## FinGPT Demos:
### Current State-of-the-arts for Financial Sentiment Analysis
* [FinGPT V3 (Updated on 10/12/2023)](./fingpt)
* What's new: **the best trainable and inferable FinGPT for sentiment analysis on a single RTX 3090, outperforming even GPT-4 and ChatGPT fine-tuning.**
* The [FinGPT v3](https://huggingface.co/FinGPT/fingpt-sentiment_llama2-13b_lora) series are LLMs fine-tuned with the LoRA method on news and tweet sentiment analysis datasets, achieving the best scores on most financial sentiment analysis benchmarks at low cost.
* FinGPT v3.3 uses Llama2-13B as its base model; FinGPT v3.2 uses Llama2-7B; FinGPT v3.1 uses ChatGLM2-6B.
* Benchmark Results:
| Weighted F1 | FPB | FiQA-SA | TFNS | NWGI | Devices | Time | Cost |
| --- | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| [FinGPT v3.3](https://huggingface.co/FinGPT/fingpt-sentiment_llama2-13b_lora) | **0.882** | 0.874 | **0.903** | **0.643** | 1 × RTX 3090 | 17.25 hours | $17.25 |
| FinGPT v3.2 | 0.850 | 0.860 | 0.894 | 0.636 | 1 × A100 | 5.5 hours | $22.55 |
| FinGPT v3.1 | 0.855 | 0.850 | 0.875 | 0.642 | 1 × A100 | 5.5 hours | $22.55 |
| FinGPT (8bit) | 0.855 | 0.847 | 0.879 | 0.632 | 1 × RTX 3090 | 6.47 hours | $6.47 |
| FinGPT (QLoRA) | 0.777 | 0.752 | 0.828 | 0.583 | 1 × RTX 3090 | 4.15 hours | $4.15 |
| OpenAI Fine-tune | 0.878 | **0.887** | 0.883 | - | - | - | - |
| GPT-4 | 0.833 | 0.630 | 0.808 | - | - | - | - |
| FinBERT | 0.880 | 0.596 | 0.733 | 0.538 | 4 × NVIDIA K80 | - | - |
| Llama2-7B | 0.390 | 0.800 | 0.296 | 0.503 | 2048 × A100 | 21 days | $4.23 million |
| BloombergGPT | 0.511 | 0.751 | - | - | 512 × A100 | 53 days | $2.67 million |
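The sentiment models above are trained in an instruction-tuning style: a task description, the input text, and an answer slot. A minimal sketch of building such a prompt (the exact wording is an assumption modeled on common instruction-tuning formats; check the FinGPT repo for the verbatim training template):

```python
# Illustrative instruction-style prompt for financial sentiment analysis.
# The template below is an assumption modeled on common instruction-tuning
# formats, not necessarily FinGPT's exact training template.
def sentiment_prompt(text: str) -> str:
    return (
        "Instruction: What is the sentiment of this news? "
        "Please choose an answer from {negative/neutral/positive}.\n"
        f"Input: {text}\n"
        "Answer: "
    )

prompt = sentiment_prompt("Company X beats quarterly earnings expectations.")
print(prompt)
```

At inference time the fine-tuned model completes the `Answer:` slot with one of the three labels, which is what the weighted-F1 scores in the table measure.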