# keras-sentiment-analysis-web-api
Web API built on Flask for Keras-based sentiment analysis using word embeddings, RNNs and CNNs.

The implementation of the classifiers can be found in [keras_sentiment_analysis/library](keras_sentiment_analysis/library):
* [cnn.py](keras_sentiment_analysis/library/cnn.py)
  * 1-D CNN with word embedding
  * Multi-channel CNN with categorical cross-entropy loss function
* [cnn_lstm.py](keras_sentiment_analysis/library/cnn_lstm.py)
  * 1-D CNN + LSTM with word embedding
* [ffn.py](keras_sentiment_analysis/library/ffn.py)
  * Feedforward network with GloVe word embedding
* [lstm.py](keras_sentiment_analysis/library/lstm.py)
  * LSTM with binary or categorical cross-entropy loss function
  * Bi-directional LSTM/GRU with categorical cross-entropy loss function (a minimal Keras sketch of this architecture is shown after this list)
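
As a rough illustration of the bidirectional LSTM classifier described above, here is a minimal Keras sketch. The vocabulary size, sequence length, embedding dimension and LSTM width below are illustrative assumptions, not the values used by `lstm.py`:

```python
# Minimal sketch of a bidirectional LSTM sentiment classifier in Keras.
# Hyper-parameters here are assumptions for illustration only.
from keras.models import Sequential
from keras.layers import Embedding, Bidirectional, LSTM, Dense

vocab_size = 5000      # assumed vocabulary size
max_len = 100          # assumed maximum sequence length
embedding_size = 100   # assumed embedding dimension

model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size, input_length=max_len))
model.add(Bidirectional(LSTM(64)))                 # bidirectional recurrent layer
model.add(Dense(2, activation='softmax'))          # two classes: negative / positive
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```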
# Usage
Run the following command to install Keras, Flask and the other dependencies:
```bash
sudo pip install -r requirements.txt
```
## Training (Optional)
As the trained models are already included in the [demo/models](demo/models) folder of the project, training is
not required. However, if you would like to tune the parameters and retrain the models, you can run the training with the
following command:
```bash
cd demo
python wordvec_bidirectional_lstm_softmax_train.py
```
Below shows the sample code in [wordvec_bidirectional_lstm_softmax_train.py](demo/wordvec_bidirectional_lstm_softmax_train.py):
```python
import numpy as np
import os
import sys


def main():
    random_state = 42
    np.random.seed(random_state)

    current_dir = os.path.dirname(__file__)
    # add the keras_sentiment_analysis module to the system path
    sys.path.append(os.path.join(current_dir, '..'))
    current_dir = current_dir if current_dir != '' else '.'

    output_dir_path = current_dir + '/models'
    data_file_path = current_dir + '/data/umich-sentiment-train.txt'

    from keras_sentiment_analysis.library.lstm import WordVecBidirectionalLstmSoftmax
    from keras_sentiment_analysis.library.utility.simple_data_loader import load_text_label_pairs
    from keras_sentiment_analysis.library.utility.text_fit import fit_text

    # build the text model (vocabulary, sequence length) and load (text, label) pairs
    text_data_model = fit_text(data_file_path)
    text_label_pairs = load_text_label_pairs(data_file_path)

    classifier = WordVecBidirectionalLstmSoftmax()
    batch_size = 64
    epochs = 20

    history = classifier.fit(text_data_model=text_data_model,
                             model_dir_path=output_dir_path,
                             text_label_pairs=text_label_pairs,
                             batch_size=batch_size, epochs=epochs,
                             test_size=0.3,
                             random_state=random_state)


if __name__ == '__main__':
    main()
```
The above command trains a bidirectional LSTM model with a softmax output on the "demo/data/umich-sentiment-train.txt"
dataset and stores the trained model in [demo/models/bidirectional_lstm_softmax_**](demo/models).
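
If you would like to inspect the training curves of your own run, the `history` returned by `classifier.fit(...)` above can be plotted with matplotlib. This is a minimal sketch that assumes `history` behaves like a standard Keras `History` object (i.e. it exposes a `.history` dict with accuracy entries); it is not part of the project's own utilities:

```python
# Sketch: plot training / validation accuracy from the fit() call above.
# Assumes `history` is a Keras History-like object; key names depend on the
# Keras version ('acc'/'val_acc' in older releases, 'accuracy'/'val_accuracy' in newer ones).
import matplotlib.pyplot as plt

acc = history.history.get('acc', history.history.get('accuracy'))
val_acc = history.history.get('val_acc', history.history.get('val_accuracy'))

plt.plot(acc, label='train accuracy')
plt.plot(val_acc, label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.savefig('bidirectional_lstm_softmax_training_history.png')
```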
If you would like to train other models, run any of the other training scripts in the same way:
* [wordvec_cnn_lstm_train.py](demo/wordvec_cnn_lstm_train.py): CNN + LSTM model with softmax output and categorical cross-entropy objective
* [wordvec_lstm_softmax_train.py](demo/wordvec_lstm_softmax_train.py): LSTM model with softmax output and categorical cross-entropy objective
* [wordvec_lstm_sigmoid_train.py](demo/wordvec_lstm_sigmoid_train.py): LSTM model with sigmoid output and binary cross-entropy objective
* [wordvec_cnn_train.py](demo/wordvec_cnn_train.py): CNN model with softmax output and categorical cross-entropy objective
* [wordvec_multi_channel_cnn_train.py](demo/wordvec_multi_channel_cnn_train.py): multi-channel CNN model with softmax output and categorical cross-entropy objective
* [wordvec_glove_ffn_train.py](demo/wordvec_glove_ffn_train.py): GloVe word embedding layer with a feedforward network and categorical cross-entropy objective
The figure below compares the training and validation accuracy of the various models, generated with the script [wordvec_compare_models.py](demo/wordvec_compare_models.py):
![model-comparison](demo/models/training-history-comparison.png)
## Predict Sentiments
With the trained models in [demo/models](demo/models), one can test the performance by running the predictors via the
following command:
```bash
cd demo
python wordvec_bidirectional_lstm_softmax_predict.py
```
Below shows the sample code in [wordvec_bidirectional_lstm_softmax_predict.py](demo/wordvec_bidirectional_lstm_softmax_predict.py):
```python
from random import shuffle

import numpy as np
import os
import sys


def main():
    random_state = 42
    np.random.seed(random_state)

    current_dir = os.path.dirname(__file__)
    # add the keras_sentiment_analysis module to the system path
    sys.path.append(os.path.join(current_dir, '..'))
    current_dir = current_dir if current_dir != '' else '.'

    model_dir_path = current_dir + '/models'
    data_file_path = current_dir + '/data/umich-sentiment-train.txt'

    from keras_sentiment_analysis.library.lstm import WordVecBidirectionalLstmSoftmax
    from keras_sentiment_analysis.library.utility.simple_data_loader import load_text_label_pairs

    text_label_pairs = load_text_label_pairs(data_file_path)

    classifier = WordVecBidirectionalLstmSoftmax()
    classifier.load_model(model_dir_path=model_dir_path)

    # predict the sentiment of 20 randomly chosen sentences from the training file
    shuffle(text_label_pairs)
    for i in range(20):
        text, label = text_label_pairs[i]
        print('Output: ', classifier.predict(sentence=text))
        predicted_label = classifier.predict_class(text)
        print('Sentence: ', text)
        print('Predicted: ', predicted_label, 'Actual: ', label)


if __name__ == '__main__':
    main()
```
Below is the console print-out from running the prediction script above:
```
Output: [ 5.74236214e-09 1.00000000e+00]
Sentence: by the way, the Da Vinci Code sucked, just letting you know...
Predicted: 0 Actual: 0
Output: [ 2.50778981e-10 1.00000000e+00]
Sentence: , she helped me bobbypin my insanely cool hat to my head, and she laughed at my stupid brokeback mountain cowboy jokes..
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 2.23617502e-09]
Sentence: I love Harry Potter..
Predicted: 1 Actual: 1
Output: [ 5.44211538e-08 1.00000000e+00]
Sentence: Oh, and Brokeback Mountain is a TERRIBLE movie...
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 2.92380893e-08]
Sentence: I love The Da Vinci Code...
Predicted: 1 Actual: 1
Output: [ 1.00000000e+00 2.23617502e-09]
Sentence: I love Harry Potter..
Predicted: 1 Actual: 1
Output: [ 1.53129953e-08 1.00000000e+00]
Sentence: Is it just me, or does Harry Potter suck?...
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 1.66275674e-10]
Sentence: I am going to start reading the Harry Potter series again because that is one awesome story.
Predicted: 1 Actual: 1
Output: [ 2.50778981e-10 1.00000000e+00]
Sentence: , she helped me bobbypin my insanely cool hat to my head, and she laughed at my stupid brokeback mountain cowboy jokes..
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 1.62028765e-10]
Sentence: So as felicia's mom is cleaning the table, felicia grabs my keys and we dash out like freakin mission impossible.
Predicted: 1 Actual: 1
Output: [ 1.12964793e-08 1.00000000e+00]
Sentence: friday hung out with kelsie and we went and saw The Da Vinci Code SUCKED!!!!!
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 1.53758450e-10]
Sentence: I want to be here because I love Harry Potter, and I really want a place where people take it serious, but it is still so much fun.
Predicted: 1 Actual: 1
Output: [ 6.24366930e-06 9.99993801e-01]
Sentence: Mission Impossible III-Sucks big-time!..
Predicted: 0 Actual: 0
Output: [ 1.00000000e+00 1.35581537e-08]
Sentence: I, too, like Harry Potter..
Predicted: 1 Actual: 1
```
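
The project also exposes these classifiers through a Flask web API (flaskr.py in the repository). As a rough illustration of how a trained predictor can be served over HTTP, below is a minimal sketch of a Flask endpoint wrapping `predict_class`; the route name, JSON request format and model path are assumptions for illustration only and do not necessarily match the routes defined in flaskr.py:

```python
# Sketch of a Flask endpoint wrapping the bidirectional LSTM predictor.
# The route name, request format and model path are illustrative assumptions;
# see flaskr.py in this repository for the actual web API implementation.
from flask import Flask, request, jsonify

from keras_sentiment_analysis.library.lstm import WordVecBidirectionalLstmSoftmax

app = Flask(__name__)

classifier = WordVecBidirectionalLstmSoftmax()
classifier.load_model(model_dir_path='./models')  # assumed location of the trained model


@app.route('/predict_sentiment', methods=['POST'])
def predict_sentiment():
    # expects a JSON body such as {"sentence": "I love Harry Potter"}
    sentence = request.get_json(force=True)['sentence']
    return jsonify({
        'sentence': sentence,
        'predicted_label': int(classifier.predict_class(sentence))
    })


if __name__ == '__main__':
    app.run(debug=True)
```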