Research on Low Bit-Width Convolutional Neural Networks Based on Floating-Gate Devices
Information Technology and Network Security
Chen Yaqian, Huang Lu
(School of Microelectronics, University of Science and Technology of China, Hefei 230026, China)
Abstract: Floating-gate devices (Flash) can combine storage with computation, enabling processing-in-memory (PIM), but a single floating-gate cell can store data at most 4 bits wide. Targeting Nor Flash, this paper studies low bit-width quantization of convolutional neural network parameters, applying quantization-aware training to the classic AlexNet, VGGNet, and ResNet. With asymmetric quantization, the model parameters are quantized from 32-bit floating point to 4-bit fixed point, shrinking the model to 1/8 of its original size; on the Cifar10 dataset, the 4-bit quantized models lose less than 2% accuracy relative to the full-precision networks. Finally, the quantized models are accelerated on a Nor Flash array. Hspice simulation shows that the quantized model deployed in the Nor Flash array loses only 2.25% accuracy relative to the full-precision model, verifying the feasibility of deploying convolutional neural networks on Nor Flash.
CLC number: TP183
Document code: A
DOI: 10.19358/j.issn.2096-5133.2021.06.007
Citation format: Chen Yaqian, Huang Lu. Research on low bit-width convolutional neural networks based on floating-gate devices[J]. Information Technology and Network Security, 2021, 40(6): 38-42.
Key words: convolutional neural network; quantization; processing-in-memory; Nor Flash

0 Introduction

Convolutional neural networks (CNNs) are widely used in image recognition and related fields, and as networks grow deeper, their parameter counts keep climbing. AlexNet[1], for example, consists of five convolutional layers and three fully connected layers and has more than 50 million parameters; the full-precision model requires 250 MB of storage. The more capable VGG[2] and ResNet[3] networks far exceed AlexNet in both depth and parameter count. Running such networks means reading and operating on millions of parameters in every computation cycle, and this volume of parameter reads both limits computation speed and drives up power consumption. Hardware based on the von Neumann architecture, in which the compute and memory units are separate, runs into the memory-wall problem when deploying CNN models: far more time and energy go into shuttling data back and forth than into the computation itself.
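As a rough sanity check on these figures: 250 MB at 4 bytes (32 bits) per weight corresponds to roughly 62 million parameters, consistent with the "more than 50 million" count above, and the same weights stored at 4 bits apiece would occupy only about 31 MB, the 1/8 reduction pursued in this paper.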

Relative to von Neumann hardware, a processing-in-memory architecture merges the compute and memory units, which greatly reduces data movement and thereby cuts power consumption and speeds up computation[4]; deploying deep convolutional neural networks on processing-in-memory hardware is therefore an attractive prospect. The main devices used to realize processing-in-memory today are phase-change memory (PCM)[5], resistive RAM (ReRAM)[6], and floating-gate (Flash) devices, of which Flash draws wide attention because of its mature manufacturing process.
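As a concrete illustration of the scheme described in the abstract, the Python/NumPy sketch below quantizes a float32 weight tensor to 4-bit codes with a uniform asymmetric scheme (one code per flash cell) and models the in-array multiply-accumulate as an idealized dot product between wordline inputs and stored cell levels. This is a minimal sketch under common-scheme assumptions: the function names and the per-tensor scale/zero-point choice are ours, and it stands in for, rather than reproduces, the paper's quantization-aware training and Hspice-level array model.

import numpy as np

def asymmetric_quantize(w, n_bits=4):
    # Uniform asymmetric quantization to unsigned n_bits codes.
    # Per-tensor scale/zero-point; the paper's exact QAT recipe may differ.
    qmax = 2 ** n_bits - 1                    # 15 distinct levels above code 0
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / qmax            # width of one quantization step
    zero_point = int(round(-w_min / scale))   # integer code representing 0.0
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Map 4-bit codes back to approximate float32 weights.
    return scale * (q.astype(np.float32) - zero_point)

def ideal_flash_mac(v_in, q_cells):
    # Idealized in-array MAC: wordline inputs times stored cell levels,
    # summed along each bitline (Kirchhoff current summation).
    return v_in @ q_cells.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(27, 64)).astype(np.float32)  # e.g. 3x3x3 kernels, 64 filters
q, s, z = asymmetric_quantize(w)             # 4 bits per weight: 1/8 the storage of float32
print("max weight error:", float(np.abs(w - dequantize(q, s, z)).max()))

x = rng.random(27, dtype=np.float32)         # one flattened input patch
y_ref = x @ w                                # full-precision reference
y_arr = s * (ideal_flash_mac(x, q) - z * x.sum())  # zero-point corrected outside the array
print("max MAC error:", float(np.abs(y_ref - y_arr).max()))

One consequence of the asymmetric scheme is visible in the last lines: the zero-point contribution z·Σx must be corrected outside the array, a step a symmetric scheme would avoid.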



For the full text of this article, download: http://ihrv.cn/resource/share/2000003598






This content is original to the AET website and may not be reproduced without authorization.