The 23 Asilomar AI Principles

In January 2017, at the Beneficial AI conference held in Asilomar, California, nearly a thousand experts in artificial intelligence and robotics jointly signed the 23 Asilomar AI Principles, calling on the whole world to strictly observe these principles while developing AI, so as to jointly safeguard the ethics, interests, and safety of humanity's future. The Asilomar AI Principles can be seen as an expanded version of Asimov's famous Three Laws of Robotics.

These 23 principles were drawn up under the leadership of the Future of Life Institute, with the aim of ensuring that humanity can steer clear of the potential risks that come with new technologies. The institute's prominent members include Stephen Hawking and Elon Musk, and it focuses on the potential threats posed by new technologies and issues such as artificial intelligence, biotechnology, nuclear weapons, and climate change.

This article draws on the institute's website, futureoflife.org/ai-pri; I have corrected the Chinese translation and added some personal views. Discussion is welcome.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

(Comment: this confines the goal of AI development to what is beneficial, guarding against the loss of control that aimless development could bring.)

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

(Comment: roughly speaking, for every hundred dollars you spend researching AI, one or two should go toward studying ethics and the other problems it spawns.)

· How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

(Comment: the worry is that a malfunctioning AI will take major, unexpected actions, or be exploited by hackers.)

· How can we grow our prosperity through automation while maintaining people's resources and purpose?

(Comment: the worry is that AI will seize resources from humans.)

· How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

· What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

(Comment: researchers and politicians should join hands to deal with this problem.)

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

(Comment: researchers should not squabble with one another either.)

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

(Comment: the worry is that some teams will neglect AI safety in a race to be first.)

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

(Comment: this is actually rather difficult; with a deep network, even its designers cannot fully work out how it operates.)
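
To make the difficulty concrete: one family of techniques for probing an opaque model is input attribution. The sketch below is my own illustration, not anything prescribed by the Principles, and the untrained model and random input in it are made-up stand-ins. It uses gradient saliency in PyTorch to ask which input features most influenced a toy classifier's decision.

```python
# A toy gradient-saliency probe: which input features most influenced the
# model's decision? Illustrative only: the untrained model and random
# input below are stand-ins, not a real system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small stand-in classifier; in practice this would be a trained network.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one hypothetical input
logits = model(x)
predicted = logits.argmax(dim=1).item()

# Backpropagate the predicted class's score to the input; the gradient
# magnitude per feature is a crude measure of that feature's influence.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze()

for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: influence {s:.3f}")
```

Even then, saliency scores only hint at correlations inside one particular network; they fall well short of the causal 'why' the principle asks for, which is exactly the difficulty noted above.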

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

(Comment: any AI-related ruling must have a competent authority step in.)

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

(Comment: whoever builds it benefits from it and answers for it.)

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.

(Comment: the question is, can we even understand some of the data an AI spits out?)

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

(Comment: in other words, we should not lightly assume limits to AI's capabilities, lest we underestimate its negative effects.)

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

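As a minimal illustration of what 'strict safety and control measures' could mean in practice (my own toy sketch; the checks, names, and thresholds are invented, not a real mechanism), consider a self-improvement loop that may only adopt a new version of itself after an external gate approves it:

```python
# Toy sketch of a gated self-improvement loop. Every name and threshold
# here is invented for illustration; this is not a real safety mechanism.

def passes_safety_checks(candidate: dict) -> bool:
    """Stand-in gate: in reality this might be behavioral test suites,
    resource caps, formal verification, or human sign-off."""
    return candidate["replication_rate"] <= 1.0 and candidate["skill"] <= 2.0

def propose_improvement(params: dict) -> dict:
    """Stand-in for the system proposing an 'improved' version of itself."""
    candidate = dict(params)
    candidate["skill"] = params["skill"] * 1.2
    return candidate

params = {"skill": 1.0, "replication_rate": 0.5}
for step in range(10):
    candidate = propose_improvement(params)
    if not passes_safety_checks(candidate):
        break              # the gate halts runaway improvement
    params = candidate     # only verified versions are ever adopted

print(params)  # growth stops once the gate refuses a candidate
```

The only point of the sketch is the control structure: the improving process never gets to apply a change the gate has not approved.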

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Note: if you repost this, please credit the source and keep the content intact.
