pg2482@columbia.edu
Time each program takes to complete:
Part A takes 1m 29s to complete.
Part B takes 10m 22s to complete.
== Part A #2 Perplexity measurements ==
$ python perplexity.py A2.uni.txt Brown_train.txt
The perplexity is 1104.83292814
$ python perplexity.py A2.bi.txt Brown_train.txt
The perplexity is 57.2215464238
$ python perplexity.py A2.tri.txt Brown_train.txt
The perplexity is 5.89521267642
== Part A #3 Perplexity ==
$ python perplexity.py A3.txt Brown_train.txt
The perplexity is 13.0759217039
== Part A #4 Performance comparison ==
With linear interpolation, the model fits significantly better than the bigram model alone, though its perplexity is not as low as that of the trigram model alone.
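For reference, a hedged sketch of linear interpolation with equal weights (lambda = 1/3 for each model order, a common default in this assignment family; the actual weights used are not stated here). It assumes dictionaries mapping n-gram tuples to log2 probabilities, with missing n-grams treated as probability zero:

```python
import math

def interp_log2_prob(unigrams, bigrams, trigrams, w_prev2, w_prev, w):
    """Log2 of the interpolated probability of word w given two previous words.

    unigrams/bigrams/trigrams map n-gram tuples to log2 probabilities;
    an absent n-gram contributes probability 0 via 2 ** -inf.
    """
    lam = 1.0 / 3.0  # assumed equal interpolation weights
    p = (lam * 2 ** unigrams.get((w,), float("-inf"))
         + lam * 2 ** bigrams.get((w_prev, w), float("-inf"))
         + lam * 2 ** trigrams.get((w_prev2, w_prev, w), float("-inf")))
    return math.log(p, 2) if p > 0 else float("-inf")
```

Because the unigram and bigram terms are never zero for seen words, interpolation smooths away the trigram model's zero probabilities, which explains why its perplexity (13.1) lands between the bigram (57.2) and trigram (5.9) results.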
== Part A #5 Sample sentences perplexity ==
$ python perplexity.py Sample1_scored.txt Sample1.txt
The perplexity is 11.6492786046
$ python perplexity.py Sample2_scored.txt Sample2.txt
The perplexity is 1611241155.03
Sample1 belongs to the Brown dataset because its perplexity is far lower (much closer to 1), meaning the distribution of n-grams in Sample1 closely matches the distribution of n-grams in the training data.
Contents of Assignment_1.zip (16 files):
Assignment_1/
  nltk_data (24B)
  test.py (1KB)
  Homework1/
    README.txt (1KB)
    Brown_tagged_dev.txt (2.35MB)
    Sample2_scored.txt (16KB)
    solutionsA.py (14KB)
    Brown_dev.txt (1.32MB)
    Sample2.txt (365KB)
    solutionsB.py~ (5KB)
    Brown_tagged_train.txt (4.92MB)
    Sample1_scored.txt (73KB)
    solutionsB.py (15KB)
    perplexity.py (712B)
    Brown_train.txt (3.38MB)
    pos.py (864B)
    Sample1.txt (649KB)