《電子技術(shù)應(yīng)用》
From illegal to legal: evolving regulatory frameworks for generative artificial intelligence
網(wǎng)絡(luò)安全與數(shù)據(jù)治理
Liu Xuerong
School of Law, Jilin University
Abstract: The anomic risks raised by generative artificial intelligence amid technological change challenge the existing regulatory framework for artificial intelligence. Viewed from its underlying technical mechanisms, generative AI currently operates as a layered industry structure of "foundation model-specialized model-service application", whose layers respectively face regulatory challenges such as the failure of algorithm-supervision tools, heightened risks of training-data infringement, unclear legal positioning of the individual layers, and blurred boundaries of responsibility between them. China's existing AI regulatory framework therefore needs to be reformed with layered regulation as its logical core. In regulatory methods, technical supervision tools such as prompt engineering and machine unlearning should be put to good use; in allocating responsibility, the actors at each layer should be disaggregated and liability traced back layer by layer. This shapes a layered "foundation model-specialized model-service application" regulatory framework aimed at effective supervision and the high-quality development of generative artificial intelligence.
CLC number: D922; TP399    Document code: A    DOI: 10.19358/j.issn.2097-1788.2024.06.009
Citation: Liu Xuerong. From illegal to legal: evolving regulatory frameworks for generative artificial intelligence[J]. 網(wǎng)絡(luò)安全與數(shù)據(jù)治理, 2024, 43(6): 58-63, 71.
Key words: generative artificial intelligence; algorithm black box; technical supervision; legal responsibility

Introduction

As artificial intelligence iterates and upgrades, deep regulation of it bears not only on the effectiveness of legal governance but also directly on technological development and application security. Generative artificial intelligence, as a principal driver of new quality productive forces, deserves particular attention. Compared with traditional AI, generative AI's reliance on deep learning makes its technical principles more complex and harder to understand, which in turn aggravates the technology-borne risks long seen in AI algorithm models, such as the algorithm black box, algorithmic discrimination, algorithmic alienation, and the abuse of algorithmic power. At the same time, the traditional tools of legal supervision over AI, such as algorithm explanation, algorithm auditing, and algorithm assessment, risk failing when confronted with generative AI, dealing a heavy blow to the stability and security of the legal regulatory system.

Although China's legal regulation of artificial intelligence has remained at the global forefront and has formed an algorithm-model regulatory system with Chinese characteristics [1], the regulatory rules issued so far for generative AI and deep-synthesis algorithms still dwell mainly on the information-security dimension derived from AI model governance, emphasizing the supervision of service applications while neglecting the supervision of the underlying technology [2]. They therefore cannot overcome the regulatory predicament created by the technical upgrading of AI models.

Under the double bind of mounting risks of technology spinning out of control and existing schemes that fail to deliver effective oversight, the difficulty of regulating generative AI has grown sharply. With generative AI poised for large-scale deployment, targeted legal regulatory schemes are needed to govern its risks. This article therefore starts from the underlying technology of generative AI. It first penetrates the algorithm models that generative AI employs and, having analyzed their technical principles, pinpoints the regulatory predicament. On the basis of these underlying technical characteristics it then explores feasible paths for the technical supervision of generative AI, remedying the failure of current legal supervision tools. Finally, it combines generative AI's underlying operating mechanisms with the corresponding operating entities to assign responsibility precisely layer by layer, preventing the evasion of legal liability through the abuse of "technological neutrality" and arriving at a legal regulatory model in which the underlying technology and the layered actors are organically coordinated.
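
To make the layered logic concrete, here is a minimal, purely illustrative sketch (not taken from the article; the party names "ModelLab", "FineTuneCo", and "ChatApp Inc." and the listed obligations are invented assumptions) of how the "foundation model-specialized model-service application" stack and a layer-by-layer responsibility backtrace might be modeled in Python:

from dataclasses import dataclass
from enum import Enum
from typing import List

class Tier(Enum):
    FOUNDATION_MODEL = 1      # general-purpose pretrained model
    SPECIALIZED_MODEL = 2     # domain model fine-tuned on the foundation model
    SERVICE_APPLICATION = 3   # end-user-facing product wrapping the models

@dataclass
class Layer:
    tier: Tier
    operator: str             # the (hypothetical) party developing or operating this layer
    obligations: List[str]    # duties a layered framework might attach to this layer

# One possible stack behind a deployed generative-AI service (all names invented).
STACK = [
    Layer(Tier.FOUNDATION_MODEL, "ModelLab", ["training-data compliance", "safety alignment"]),
    Layer(Tier.SPECIALIZED_MODEL, "FineTuneCo", ["domain-data licensing", "fine-tuning evaluation"]),
    Layer(Tier.SERVICE_APPLICATION, "ChatApp Inc.", ["content moderation", "user notices and logging"]),
]

def backtrace(stack: List[Layer], surfaced_at: Tier) -> List[Layer]:
    """List the layers to examine when a harmful output surfaces at `surfaced_at`,
    walking from that layer back down to the foundation model rather than stopping
    at the user-facing application."""
    ordered = sorted(stack, key=lambda layer: layer.tier.value)
    return [layer for layer in reversed(ordered) if layer.tier.value <= surfaced_at.value]

if __name__ == "__main__":
    for layer in backtrace(STACK, Tier.SERVICE_APPLICATION):
        print(f"{layer.tier.name}: {layer.operator} -> review {', '.join(layer.obligations)}")

The point of the sketch is only that the inquiry does not stop at the service operator: it proceeds downward, layer by layer, to whichever layer's obligations were actually breached.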


Download the full text of this article:

http://ihrv.cn/resource/share/2000006049


Author information:

Liu Xuerong

(School of Law, Jilin University, Changchun, Jilin 130000, China)



This content is original to the AET website. Reproduction without authorization is prohibited.