Data Security Risks of Generative Large Models and Their Legal Governance
Cyber Security and Data Governance
Liu Yiming1, Lin Zihan2
1 Institute of Cyber Governance, Wuhan University, Wuhan 430072, China; 2 Shanghai Data Exchange, Shanghai 201203, China
Abstract: Generative large models have broad application prospects, but both their training and operation depend on massive amounts of data, which is highly likely to give rise to data security risks. Recognizing these risks is the prerequisite for resolving them, so a framework for understanding the data security risks of large model applications needs to be built from both static and dynamic perspectives. Drawing on the governance experience of the EU and the United States, and in view of the shortcomings of China's current governance of large model data security risks, this paper recommends establishing a classified regulatory path based on data security risk, improving the data security responsibility system covering the whole process of large model operation, and exploring innovative regulatory mechanisms grounded in inclusive and prudent regulation, so as to provide a solid rule-of-law guarantee for a trustworthy future of large model applications.
CLC number: D912.29    Document code: A    DOI: 10.19358/j.issn.2097-1788.2023.12.005
Citation: Liu Yiming, Lin Zihan. Data security risks of generative large models and their legal governance[J]. Cyber Security and Data Governance, 2023, 42(12): 27-33.
Key words: generative large model; data security risk; ChatGPT; risk classification

Introduction

Generative large models (hereinafter "large models") are artificial intelligence models that are trained on massive amounts of data, can be adapted to a wide range of downstream tasks through fine-tuning and similar techniques, and generate various kinds of content in response to user instructions. Large models have extremely broad application prospects and a low barrier to use: through open-source releases or open API tools, users can perform zero-shot or few-shot learning with a model and obtain recognition, understanding, decision-making, and generation capabilities that perform better at lower development and deployment cost. However, both the training of large models and the deployment of their applications depend on large volumes of data, and the resulting data security risks, such as leakage of personal privacy and data tampering, have become important issues that the law must address. Based on a systematic analysis of the data security risks of large models, this paper reviews the shortcomings of existing regulatory approaches at home and abroad and then puts forward suggestions for improving China's governance of large models, with a view to promoting the trustworthy and orderly development of large model applications.

1 Statement of the Problem

The combination of the widespread application of large models and their inherent technical limitations has raised concerns about the data security risks that large models give rise to.
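As a purely illustrative sketch of the low-barrier access pattern mentioned in the introduction, and of where user data flows in that pattern, the Python snippet below makes a minimal few-shot call to a hosted generative-model API. The endpoint URL, request fields, and environment variable are hypothetical placeholders, not any specific provider's interface; the point is that task adaptation requires nothing more than a prompt, and that the prompt, including any personal or proprietary data embedded in it, is transmitted to the model operator.

import os

import requests

# Hypothetical endpoint and schema, standing in for any hosted generative-model API.
API_URL = "https://api.example-llm.invalid/v1/generate"  # placeholder, not a real service
API_KEY = os.environ.get("LLM_API_KEY", "")              # assumed credential variable

# Few-shot adaptation: the downstream task is specified entirely in the prompt,
# so no fine-tuning or local model is needed -- hence the low barrier to use.
few_shot_prompt = (
    "Classify the sentiment of each customer review as positive or negative.\n"
    "Review: The battery lasts all day. Sentiment: positive\n"
    "Review: The screen cracked within a week. Sentiment: negative\n"
    "Review: Support resolved my issue quickly. Sentiment:"
)

# Note the data flow: the examples above (which in practice may contain personal
# information) leave the caller's environment and are sent to the model operator.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": few_shot_prompt, "max_tokens": 5},
    timeout=30,
)
response.raise_for_status()
print(response.json().get("text", ""))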


Article download link: http://ihrv.cn/resource/share/2000005873



This content is original to the AET website; reproduction without authorization is prohibited.