Abstract: For neural-network-based speech recognition, a method is proposed that uses the second derivative of the activation function to optimize weight initialization in the pre-training phase. By exploiting the non-linear region of the activation function and the Gaussian distribution of its inputs, a better variance for the weight distribution is sought in order to speed up training. Comparing the effect of different initial values on convergence speed at the same learning rate shows that this method accelerates the pre-training phase and improves the efficiency of neural network training.
Key words: speech recognition; deep neural network; pre-training; initialization; activation function
DOI: 10.3969/j.issn.1001-893x.2013.07.014
|
Funding: National Natural Science Foundation of China (61075020)
|
Improved initialization of pre-training in deep neural network |
ZHOU Jia-jun, OU Zhi-jian
Abstract: The second derivative of the activation function is used to optimize weight initialization in the pre-training phase of deep neural networks for speech recognition tasks. By exploiting the non-linear region of the activation function and the Gaussian distribution of its inputs, a method for finding a better variance is proposed in order to speed up training. A comparison of convergence rates under different weight initializations at the same learning rate shows that this method accelerates the pre-training phase and enhances the efficiency of neural network training.
Key words: speech recognition; deep neural network; pre-training; initialization; activation function
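The variance-selection idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: it assumes a sigmoid activation and i.i.d. Gaussian inputs, and picks the weight standard deviation so that the pre-activations have a target standard deviation that keeps them inside the sigmoid's non-linear (high-curvature) region. The function name, the target value, and the layer sizes are all hypothetical.

```python
import numpy as np

def init_weights(fan_in, fan_out, target_preact_std=1.0, input_std=1.0, rng=None):
    """Hypothetical sketch of variance-aware weight initialization.

    With inputs x_i ~ N(0, input_std^2) i.i.d. and weights w ~ N(0, Var(w)),
    the pre-activation z = sum_i w_i x_i has Var(z) = fan_in * Var(w) * input_std^2.
    Solving for Var(w) gives a weight std that places z near the target
    standard deviation, i.e. inside the sigmoid's non-linear region.
    """
    rng = np.random.default_rng() if rng is None else rng
    w_std = target_preact_std / (np.sqrt(fan_in) * input_std)
    return rng.normal(0.0, w_std, size=(fan_in, fan_out))

# Sanity check: the empirical pre-activation std should land near the target.
rng = np.random.default_rng(0)
W = init_weights(500, 200, target_preact_std=1.0, rng=rng)
X = rng.normal(0.0, 1.0, size=(10000, 500))  # simulated Gaussian layer inputs
Z = X @ W
print(Z.std())
```

The design choice mirrors the abstract's reasoning: rather than fixing the weight variance directly, it is derived from where the pre-activations should fall relative to the activation function's non-linear region.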