Application of Electronic Technique
Performance Optimization of Sparse Deep Neural Networks on GPU
Shi Yucheng, Huang Jianqiang, Bian Haodong, Wu Li, Jia Jinfang, Wang Xiaoying
Department of Computer Technology and Application, Qinghai University, Xining 810016, China
Abstract: As neural networks grow ever deeper, sparse deep neural networks offer advantages in both computation and storage, but their performance still leaves room for optimization. This paper therefore proposes a GPU-based performance optimization method for sparse deep neural networks: the order of computation is adjusted to enhance data reuse, and, by exploiting the GPU's architecture together with CUDA programming techniques, performance is further improved through prefetching and related methods. On the official GraphChallenge datasets, the method achieves up to a 2.5x speedup over the corresponding cuSPARSE library functions.
Chinese citation format: Shi Yucheng, Huang Jianqiang, Bian Haodong, et al. Performance optimization of sparse deep neural network based on GPU[J]. Application of Electronic Technique, 2023, 49(12):14-19.
英文引用格式: Shi Yucheng,Huang Jianqiang,Bian Haodong,et al. Performance optimization of sparse deep neural network based on GPU[J]. Application of Electronic Technique,2023,49(12):14-19.
Key words: deep neural network; sparsification; heterogeneous platform; sparse matrix-matrix multiplication

0 Introduction

As research into the underlying principles of neural networks has deepened and computing power has steadily grown, more and more deep neural networks have emerged. In natural language processing [1], for example, Google proposed the Transformer [2] model; its resolution of the vanishing-gradient problem and its suitability for parallel training, among other advantages, have fueled the rise of large models, and ChatGPT [3] was trained on this foundation. However, deep neural networks of such massive scale pose a serious challenge to the responsiveness of deployed models. Because of the "memory wall" [4] and the "power wall" [5], sparse deep neural networks [6-7] have entered the research spotlight, and pairing them with GPU devices has raised training speed to a new level.
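To make the workload concrete: in the GraphChallenge sparse DNN benchmark that the paper evaluates against, inference through each layer is a sparse matrix-matrix multiplication (SpMM) followed by a bias add and a clamped ReLU. The sketch below illustrates this computation with SciPy on the CPU; it is not the authors' optimized CUDA implementation, and the bias and clamp values are illustrative defaults, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp

def sparse_dnn_forward(y, layers, bias=-0.3, ymax=32.0):
    """Sparse DNN inference: y_{k+1} = min(ReLU(y_k @ W_k + bias), ymax).

    y      : (num_inputs x num_neurons) sparse feature matrix
    layers : list of (num_neurons x num_neurons) sparse weight matrices
    The bias is applied only to nonzero entries of the product, following
    the GraphChallenge reference formulation.
    """
    for w in layers:
        z = (y @ w).tocsr()          # the SpMM step that dominates runtime
        z.data += bias               # bias on nonzero entries only
        z.data = np.minimum(np.maximum(z.data, 0.0), ymax)  # clamped ReLU
        z.eliminate_zeros()          # keep the activations sparse
        y = z
    return y

# Small random example (hypothetical sizes, for illustration only)
y0 = sp.random(4, 8, density=0.5, random_state=0, format="csr")
ws = [sp.random(8, 8, density=0.25, random_state=i + 1, format="csr")
      for i in range(3)]
out = sparse_dnn_forward(y0, ws)
print(out.shape)
```

An optimized GPU version replaces the `y @ w` step with a tuned SpMM kernel (or a cuSPARSE call); the paper's contribution lies in reordering this computation for data reuse and adding prefetching on top of it.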



For the full text of this article, please download: http://ihrv.cn/resource/share/2000005799






This content is original to the AET website; reproduction without authorization is prohibited.