# 500 Deep Learning Questions
<p align="center">
<strong>A comprehensive body of knowledge on Deep Learning and Artificial Intelligence</strong>
</p>
`500 Deep Learning Questions` is a collection of articles on the mathematical and technical aspects of Deep Learning and Artificial Intelligence that will help you build a strong foundation of knowledge in this domain.
**Notes**
- Please help us improve this resource as mentioned in the [section on contributing](#contributing)
- Please respect the authors' intellectual property rights and copyright. Piracy will be investigated
- Feel free to share the location of this content with others or fork the repository for contributing, but please ask for permission before copying and forwarding this content
Translators: TanJiyong (2018/06/27), Arjun Krishna-University of Waterloo (2020/10/28)
This was translated with Google Translate and partial human translation. Please help translate more of this resource to English as mentioned in the [section on contributing](#contributing)
**Please star and share this project if you like it!**
# Sections of README
1. [Sections of README](#sections-of-readme)
2. [How to use this Resource](#how-to-use-this-resource)
3. [Contributing](#contributing)
1. [How to help](#how-to-help)
2. [Notes on contributing](#notes-on-contributing)
4. [Communication](#communication)
5. [Table of Contents](#table-of-contents)
# How to Use This Resource
1. Go to the language section that you are most comfortable with (e.g. English)
2. Look at the Table of Contents and go to the topics that interest you most, or start from Chapter 1
3. The best way to truly learn something is to apply it in practice
   1. If you have just read the section on Recurrent Neural Networks, ask yourself:
      1. In what situations would I apply this?
         1. Speech recognition
         2. Translation
      2. What benefits does this machine learning technique have over others?
      3. What disadvantages does it have?
      4. Code a recurrent neural network in MATLAB, or using TensorFlow or PyTorch
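To make the last step concrete, here is a minimal sketch of a single recurrent cell in plain Python. The weights are made-up toy values and there is no training loop; the point is only to show how the hidden state carries information from earlier inputs to later time steps:

```python
import math

def rnn_step(x, h, w_xh, w_hh, b):
    """One step of a scalar Elman RNN cell: h' = tanh(w_xh*x + w_hh*h + b)."""
    return math.tanh(w_xh * x + w_hh * h + b)

def run_rnn(xs, w_xh=0.5, w_hh=0.8, b=0.0):
    """Feed a sequence through the cell, returning the hidden state at each step."""
    h = 0.0  # initial hidden state
    states = []
    for x in xs:
        h = rnn_step(x, h, w_xh, w_hh, b)
        states.append(h)
    return states

# Even when later inputs are zero, the hidden state decays gradually
# instead of dropping to zero -- the recurrence carries memory.
states = run_rnn([1.0, 0.0, 0.0])
```

A real model would use vector-valued states, learned weights, and a framework such as TensorFlow or PyTorch, but the recurrence `h' = tanh(W_xh x + W_hh h + b)` is the same idea.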
# Contributing
## How to help
1. Translate articles (especially to English)
2. Edit previous translations for grammar and technical errors
3. Suggest or create new articles
4. Share this project with your friends who want to contribute
5. We recommend using the Typora Markdown editor (https://typora.io/), as mentioned in [Notes on contributing](#notes-on-contributing)
## Steps to contribute
1. Fork the project
2. Make changes
   1. Add your name and affiliation when contributing, in the format 'Name-Affiliation'
      - e.g. Daxie-West Lake University, or Arjun Krishna-University of Waterloo
3. Review your changes for grammatical or technical errors
4. Preview formatting using the Typora Markdown Editor and Reader
   1. Recommended settings are in the section [Notes on Contributing](#notes-on-contributing)
5. Commit changes to your fork of the project
6. Create a pull request, clearly state what changes you made and why
7. Your changes will be reviewed
8. Congratulations!
## Notes on contributing
- We recommend that you use the Typora Markdown Editor and Reader to preview your changes before you create a pull request
- Recommended settings for Typora:
  - Open File -> Preferences
  - Under Syntax Support, enable:
    - Inline Math
    - Subscript
    - Superscript
    - Highlight
    - Diagrams
## Example Contribution
### 3.3.2 How do you find the optimal values of hyperparameters? (Contributor: Daxie - Stanford University)
Machine learning algorithms always come with hyperparameters that are hard to set, for example the weight-decay coefficient or the width of a Gaussian kernel. The algorithm does not learn these parameters; instead, you must set their values, and the values you choose have a large effect on the results. Common practices for setting hyperparameters are:
1. Guess and check: select parameters based on experience or intuition, and iterate.
2. Grid search: have the computer try values evenly spaced over a given range.
3. Random search: have the computer pick values at random from a given range.
4. Bayesian optimization: tune the hyperparameters with Bayesian optimization; the difficulty is that the Bayesian optimization algorithm has hard-to-set parameters of its own.
5. Local optimization from a good initial guess: this is the MITIE approach, which uses the BOBYQA algorithm with a carefully chosen starting point. Since BOBYQA only finds the nearest local optimum, the success of this method depends largely on having a good starting point. In MITIE's case a good starting point is known, but this is not a universal solution, because usually you won't know where a good starting point is. On the plus side, this approach is well suited to refining a local optimum. I will discuss this later.
6. LIPO, a recent global optimization method. It has no parameters of its own and has been shown to outperform random search.
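As a minimal sketch of approaches 2 and 3, here is grid search versus random search over a single hyperparameter. The objective function below is a made-up stand-in for "train a model and measure validation error" (the function, its optimum at 0.07, and the search range are assumptions for illustration, not part of the article):

```python
import random

def validation_error(lr):
    """Hypothetical stand-in for training a model and measuring validation error."""
    return (lr - 0.07) ** 2  # pretend the best learning rate is 0.07

def grid_search(lo, hi, n):
    """Try n evenly spaced values in [lo, hi]; return the best (value, error) pair."""
    candidates = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(((lr, validation_error(lr)) for lr in candidates), key=lambda t: t[1])

def random_search(lo, hi, n, seed=0):
    """Try n uniformly random values in [lo, hi]; return the best (value, error) pair."""
    rng = random.Random(seed)
    candidates = [rng.uniform(lo, hi) for _ in range(n)]
    return min(((lr, validation_error(lr)) for lr in candidates), key=lambda t: t[1])

best_grid = grid_search(0.0, 1.0, 11)    # tries 0.0, 0.1, ..., 1.0
best_rand = random_search(0.0, 1.0, 11)  # tries 11 random points in [0, 1]
```

With several hyperparameters, grid search needs exponentially many trials, while random search keeps sampling the full range of each parameter, which is one reason it often wins in practice.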
## Communication
1. Create an issue using the GitHub issue tracker
2. Email the project at: scutjy2015@163.com
3. Join the project's WeChat group:
   1. It is easier to join the group if you have made contributions that have been merged into this project
   2. "Deep Learning 500 Questions"
      1. Please add WeChat client 1: HQJ199508212176, client 2: Xuwumin1203, or client 3: tianyuzy
# Table of Contents
Chapter 1 Mathematical Foundations
1.1 The relationship between scalars, vectors, and tensors
1.2 What is the difference between a tensor and a matrix?
1.3 Matrix and vector multiplication results
1.4 Summary of vector and matrix norms
1.5 How to determine whether a matrix is positive definite?
1.6 Derivative and partial derivative calculation
1.7 What is the difference between derivatives and partial derivatives?
1.8 Eigenvalue decomposition and eigenvectors
1.9 What is the relationship between singular values and eigenvalues?
1.10 Why should machine learning use probabilities?
1.11 What is the difference between a variable and a random variable?
1.12 Common probability distributions
1.13 Understanding conditional probability through examples
1.14 What is the difference between joint probability and marginal probability?
1.15 Chain rule of conditional probability
1.16 Independence and conditional independence
1.17 Summary of expectation, variance, covariance, and correlation coefficients
Chapter 2 Fundamentals of Machine Learning
2.1 Illustrations of various common algorithms
2.2 What are supervised, unsupervised, semi-supervised, and weakly supervised learning?
2.3 What are the steps in supervised learning?
2.4 What is multi-instance learning?
2.5 What is the difference between classification and regression networks?
2.6 What is a neural network?
2.7 Advantages and disadvantages of common classification algorithms
2.8 Is accuracy sufficient for evaluating classification algorithms?
2.9 How to evaluate a classification algorithm?
2.10 What kind of classifier is the best?
2.11 The relationship between big data and deep learning
2.12 Understanding local optimization and global optimization
2.13 Understanding logistic regression
2.14 What is the difference between logistic regression and a naive Bayes classifier?
2.15 Why do you need a cost function?
2.16 How the cost function works
2.17 Why is the cost function non-negative?
2.18 Common cost functions
2.19 Why use cross entropy instead of the quadratic cost function?
2.20 What is a loss function?
2.21 Common loss functions
2.22 Why does logistic regression use a logarithmic loss function?
2.23 How does the logarithmic loss function measure loss?
2.24 Why do gradients need to be reduced?