Where Are the Boundaries of AI Thinking? 2018.03 No.5 (Note)
Is there something that Deep Learning will never be able to learn?
This is a translation of the answer Yann LeCun wrote under the Quora question "Is there something that Deep Learning will never be able to learn?". Yann LeCun is the Director of AI Research at Facebook and a professor at NYU. As a translation exercise, I read his answer and rendered his view into Chinese. Because this is practice, I did not translate the answer word for word; instead I wrote down my own explanation of it.
The link to the original Quora discussion:
https://www.quora.com/Is-there-something-that-Deep-Learning-will-never-be-able-to-learn-1
Clearly, deep learning in its current form is quite limited. But when people figure out how to build human-level AI, something like deep learning will have to be part of the solution.
The ideas of deep learning are:
- (1) learning is an indispensable component of AI: this wasn't widely accepted in the 1980s and 1990s. But I've always been convinced of this, and more and more people are convinced of this now.
- (2) deep learning is really the idea that an AI system should learn abstract/high-level/hierarchical representations of the world. That has to be part of the solution to AI, regardless of the method through which the system learns these representations.
- (3) One question is whether human-level AI can be built around the central paradigm of machine learning, which is to minimize an objective function, and whether this minimization can be done through gradient-based methods (like stochastic gradient descent, where the gradient is computed with backprop). If not, we need to find new paradigms around which to build future algorithms for representation learning.
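The paradigm described in point (3) can be made concrete with a minimal sketch: minimizing a squared-error objective with stochastic gradient descent. The model (a line `y = w*x + b`), the data, and the learning rate below are illustrative assumptions, not anything from LeCun's answer; in deep learning the same loop runs over millions of parameters, with gradients computed by backpropagation instead of by hand.

```python
# Minimal sketch of "learning = minimizing an objective with SGD".
# Illustrative example only: a one-parameter-pair linear model with
# a hand-derived gradient of the squared-error loss.
import random

def sgd_fit_line(data, lr=0.01, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on 0.5*(pred-y)^2."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)          # "stochastic": visit samples in random order
        for x, y in data:
            pred = w * x + b
            err = pred - y            # d(0.5*err^2)/d(pred)
            w -= lr * err * x         # gradient step for w
            b -= lr * err             # gradient step for b
    return w, b

# Noise-free data drawn from y = 2x + 1; SGD should recover w near 2, b near 1.
points = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = sgd_fit_line(points)
```

Whether this objective-minimization loop, scaled up, is enough for human-level AI is exactly the open question LeCun raises.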
Beyond that, there is a philosophical and theoretical question: what are the tasks that are learnable, and those that cannot possibly be learned regardless of how much resources you throw at them? There has been quite a bit of work in learning theory about these questions. An interesting set of results is the "no free lunch" theorems, which show that a particular learning machine can tractably learn only a small number of tasks among the set of all possible tasks. No learning machine can learn all possible tasks efficiently. AI machines have to be "biased" to learn certain tasks. It seems humbling to us humans that our brains are not general learning machines, but it's true. Our brains are incredibly specialized, despite their apparent adaptability. There are problems that are inherently difficult for any computing device. This is why, even if we build machines with super-human intelligence, they will have limited abilities to outsmart us in the real world. They may best us at chess and Go, but if we flip a coin, they will be as bad as we are at predicting heads or tails.