Training Issues & CNN Development
Gradient Exploding & Vanishing
Avoid gradient exploding:
- weight initialization, such that |w_i| ≤ 1 in general
- weight re-normalization during training
- rescaling x so that |x| ≤ 1 (see the sketch after this list)
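A minimal sketch of these three mitigations, assuming PyTorch (the notes name no framework). The layer sizes are illustrative, and reading "weight re-normalization" as clamping each weight row back to unit L2 norm is my assumption, not something the notes specify:

```python
import torch
import torch.nn as nn

layer = nn.Linear(256, 256)            # sizes are illustrative

# Weight initialization such that |w_i| <= 1 in general.
nn.init.uniform_(layer.weight, -1.0, 1.0)

# Rescaling the input x so that |x| <= 1.
x = torch.randn(32, 256)
x = x / x.abs().max()

# Weight re-normalization during training: after each optimizer step,
# shrink any weight row whose L2 norm has grown beyond 1.
with torch.no_grad():
    norms = layer.weight.norm(dim=1, keepdim=True).clamp(min=1.0)
    layer.weight.div_(norms)
```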
Reduce gradient vanishing:
- Weight initialization: Xavier's method
  - Variance of the signal across layers does not change!
  - Variance of the backward gradient signal across layers does not change!
  - activation without ReLU: Var(w) = 2 / (n + m)
  - activation with ReLU: Var(w) = 2 / n (He initialization)
  - n: number of neurons in the previous layer (fan-in)
  - m: number of neurons in the "previous" layer as seen by backpropagation, i.e., the next layer (fan-out)
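A short sketch of both rules using PyTorch's built-in initializers (the framework choice is an assumption; the variance formulas above are the standard Xavier and He results):

```python
import torch.nn as nn

fc = nn.Linear(512, 256)   # n = 512 (fan-in), m = 256 (fan-out); sizes illustrative

# No ReLU (e.g., tanh): Xavier draws weights with Var(w) = 2 / (n + m),
# balancing the forward signal variance and the backward gradient variance.
nn.init.xavier_normal_(fc.weight)

# With ReLU, half of the activations are zeroed out, so He initialization
# compensates by using Var(w) = 2 / n instead.
nn.init.kaiming_normal_(fc.weight, nonlinearity='relu')
```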
Mini-Batch Issue
Issue of mini-batch:
- Different mini-batches often have different data distributions
- This causes different mini-batch input distributions for every layer!
- The distribution of one mini-batch changes over time for a layer!
- Each layer needs to continuously adapt to new distributions
Batch normalization:
- Normalizing alone reduces the variety of neurons' inputs/outputs, i.e., it reduces a layer's representation power.
- Solution: consider γ_k and β_k as part of the model parameters (see the sketch below)
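A minimal illustration, assuming PyTorch: with affine=True, nn.BatchNorm2d stores γ_k and β_k as learnable parameters (its weight and bias), which restores the representation power that plain normalization would remove:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=64, affine=True)  # gamma_k, beta_k are learnable

x = torch.randn(8, 64, 32, 32)           # a mini-batch of feature maps
y = bn(x)                                # normalize, then scale by gamma, shift by beta
print(bn.weight.shape, bn.bias.shape)    # one gamma and one beta per channel
```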
ResNet and Its Extensions
Problem of deeper networks:
- When an earlier layer changes, the latter layers change consequently
- More layers caused larger training and test error!
- A deeper network is not overfitting, but harder to optimize
Solution: use network layers to learn the residual mapping rather than directly learning a desired underlying mapping!
- If H(x) is an identity mapping, it is easier to push the residual F(x) = H(x) - x to zero than to fit an identity mapping by a stack of nonlinear layers
Why do ResNets work better? Skip connections make each layer easier to train and update (a sketch of a residual block follows).
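A minimal residual block, assuming PyTorch; the channel count and the conv/BN layout are illustrative assumptions rather than the exact block from the notes:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The stacked layers learn the residual F(x) = H(x) - x;
    the skip connection adds x back, giving H(x) = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        f = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(f + x)    # skip connection: the identity comes for free

print(ResidualBlock(64)(torch.randn(1, 64, 16, 16)).shape)
```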
ResNet extensions:
- DenseNets: dense skip connections within each block; fewer kernels (parameters) in each layer. Why?
- WideResNets: more kernels in each layer, with fewer layers
- ResNeXt: divide a ResNet block into smaller parallel transformations, then aggregate
- SENet: Squeeze-and-Excitation Networks (see the sketch after this list)
- PNASNet: progressive neural architecture search
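To make the SENet entry concrete, here is a minimal squeeze-and-excitation block, again assuming PyTorch; the reduction ratio of 16 matches the common default but is an assumption here:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze: pool each channel down to one statistic.
    Excite: learn a per-channel gate and rescale the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))               # squeeze: global average pooling
        w = self.gate(s).view(b, c, 1, 1)    # excitation: channel weights in (0, 1)
        return x * w                         # re-weight each channel

print(SEBlock(64)(torch.randn(2, 64, 8, 8)).shape)
```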