Institutional Construction and Path Innovation of Algorithm Audit from the Perspective of Generative Artificial Intelligence
Cybersecurity and Data Governance, Issue 8
Wang Zhaoyu
(School of Law, Tsinghua University, Beijing 100084, China)
Abstract: With the research, development, and deployment of China's generative artificial intelligence algorithms such as ERNIE Bot (文心一言), Tongyi Qianwen (通義千問), and ChatGLM, accelerating the construction and improvement of algorithm governance for generative AI has become an inherent requirement of perfecting the algorithm regulatory system. Within the algorithm governance system, the algorithm audit regime facilitates the correction of and accountability for algorithmic alienation and algorithmic risk, promotes substantive transparency in algorithmic fairness and digital justice, and bridges the tension between algorithm disclosure and trade secrets, making it a key institutional arrangement for algorithm regulation. From a whole-workflow governance perspective, only by "auditing algorithms with algorithms" and combining written compliance audits with technical compliance audits can the transparency, fairness, controllability, inclusiveness, and accountability of algorithms be effectively assessed from multiple perspectives, both inside and outside the algorithm. In algorithm audit practice, current law still needs to improve its differentiated regulatory pattern that blends rigid and flexible instruments and to consolidate precision governance based on classification and grading, so as to achieve institutional articulation of algorithm audits both internally and externally and to move algorithm governance from fragmented regulation toward a more integrated, agile, and precise governance pattern.
CLC number: D92
Document code: A
DOI:10.19358/j.issn.2097-1788.2023.08.002
Citation format: Wang Zhaoyu. Institutional construction and path innovation of algorithm audit from the perspective of generative artificial intelligence[J]. Cybersecurity and Data Governance, 2023, 42(8): 6-12.
System construction and legal innovation of algorithm audit for AI generated content
Wang Zhaoyu
(School of Law, Tsinghua University, Beijing 100084, China)
Abstract: With the development and application of generative AI algorithms in China, accelerating the construction and improvement of algorithm governance for generative AI has become key to improving the algorithm regulatory system. Algorithm audit is conducive to the correction of and accountability for algorithmic risk, the promotion of substantive transparency in algorithmic fairness and digital justice, and the alleviation of the tension between algorithm disclosure and trade secrets. Therefore, it functions as an important institutional means of algorithm governance. To audit algorithms with algorithms, algorithm audit should be a modular and precise assessment from the perspective of whole-workflow algorithm governance. As an effective evaluation of algorithm transparency, fairness, controllability, inclusiveness, and accountability, algorithm audit combines written audit with technical audit. Current law has established a differentiated regulatory pattern combining rigid and flexible instruments for the governance of generative AI, which requires further improvement. Precise governance of algorithms calls for the consolidation of AI classification and grading. Serving as an institutional articulation, algorithm audit helps move algorithm governance from fragmented regulation toward an integrated, agile, and precise governance pattern.
Keywords: algorithm audit; algorithm governance; transparency; protection of personal information

0    Introduction

With the development of information technology, algorithms have been quietly "encoding" our lives. AI image-generation tools such as MidJourney, Stable Diffusion, and DALL·E 2, together with the generative dialogue model ChatGPT, have brought AI Generated Content (AIGC) to sudden prominence, and the generative AI algorithms underpinning them exhibit emergent, extensible, and composite strengths, making them a new focal point of artificial intelligence. Generative AI algorithms can already provide efficient solutions in many application scenarios such as job recommendation, creative illustration, and smart healthcare. At the same time, externality risks such as prompt-injection attacks, "poisoned" content, and infringement by deep-synthesis images keep emerging, making governance and regulation of generative AI imperative. As an important tool of algorithm regulation, the algorithm audit system has already been established in relevant legal instruments and occupies a pivotal position in the algorithm governance system. In December 2022, the Opinions of the CPC Central Committee and the State Council on Building a Basic Data System to Better Leverage the Role of Data Elements (the "Data Twenty Measures") were released, further specifying the establishment of an algorithm review system covering the whole process of data element production, circulation, and use. In April 2023, the Cyberspace Administration of China issued the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment) (the "Generative AI Measures") for public consultation, the first normative document directly addressing generative AI. Beyond this, how to establish a lawful and effective algorithm audit system for generative AI urgently calls for a stronger response from both theory and practice.
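To make the "technical compliance audit" dimension discussed above more concrete, the sketch below illustrates one way an auditor might "use algorithms to audit algorithms": computing a single fairness indicator (the demographic parity gap) over a log of a generative system's job-recommendation decisions. This is a minimal illustration only; the data and the demographic_parity_gap helper are invented and are not drawn from the cited regulations or from the article itself, and a real technical audit would combine many such indicators with the written compliance review.

from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group_label, got_positive_outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    # Positive-outcome rate per group; the gap is the spread between groups.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented audit log: (applicant group, whether the model recommended an interview)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(log)
print(rates, round(gap, 2))  # a gap above a chosen threshold would be flagged for follow-up review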




本文詳細內(nèi)容請下載:http://ihrv.cn/resource/share/2000005460



