Hinton大牛為什麼會對神經網路不滿?2018.03.NO.4(Note)

Why is Geoffrey Hinton suspicious of backpropagation and wants AI to start over?

為什麼 Geoffrey Hinton對反向傳播演算法如此懷疑,並且認為我們應該重新確定AI的發展方向?

This is an answer to the question "Why is Geoffrey Hinton suspicious of backpropagation and wants AI to start over?" on Quora. For practice, I read it and wrote down some notes in Chinese. The original answer was written by Raanan Hadar. Because this is a practice exercise, I will not translate the answer strictly word by word, but will just write down my own understanding of it.

這是Quora上「為什麼Geoffrey Hinton對反向傳播演算法如此懷疑,並且認為我們要重新定義AI的發展方向?」問題下的一個回答。為了練習我自己的英文科技板塊寫作和閱讀能力,我閱讀了這些內容並且用中文寫下了一些筆記(翻譯)。原回答是Raanan Hadar寫的。因為我主要是為了練習,所以我僅僅寫下了大概的內容理解(當然和翻譯也差不了太多),而不是特別嚴謹地還原翻譯這一個回答。

And it is my duty to provide the link to the discussion on Quora (問題鏈接如下):

Why is Geoffrey Hinton suspicious of backpropagation and wants AI to start over? (www.quora.com)

And about Hinton(一個神經網路大牛,文章高產如野牛):

Geoffrey Hinton - Wikipedia (en.wikipedia.org)

機器學習大牛系列之Geoffrey Hinton | 圖靈之心 (baijiahao.baidu.com)

雖然已經有了很多非常優秀的回答,但是我依然希望提一些邊緣的或者容易被忽略的方面。

Let's start by saying that the above are great answers. I want to add another subtle and different perspective that I think some people missed.

Hinton有一節必看的精彩課程,叫做「CNN(卷積神經網路)有什麼缺陷?」。同時,Hinton最近在《Outrageously Large Neural Networks》(超乎尋常地巨大的神經網路)方向上的研究,也有助於說明我的觀點。

Hinton has a must-see lecture called 'What is wrong with Convolutional Neural Nets'. I also think that Hinton's more recent endeavor(努力,儘力) into 'Outrageously Large Neural Networks' also helps to bring my point home…

我的感受(我也希望這同樣是Hinton的感受)是:他在某種程度上創造了一個「科學怪人」(Frankenstein's monster)。深度神經網路本身是龐大、笨重而低效的造物,它讓你可以藉助海量的數據和一台超級電腦來有效地解決一個學習任務。如今它們幾乎每一次都是在用暴力換取效率。卷積神經網路就是一個典型的例子:它們沒有標準的參照系(frame of reference)概念,對旋轉和尺度變化也不具有不變性。如今的通行做法不是讓網路具有不變性,而是讓它們「容忍」這些變化:也就是使用海量的數據和巨大的模型容量。

(註:Frankenstein,即弗蘭肯斯坦,是英國女作家 Mary Wollstonecraft Shelley 於1818年所著小說中的主人公。他是一個年輕的醫學研究者,創造了一個最終毀滅了他自己的怪物,即 Frankenstein's monster,「科學怪人」。)

My feeling, which I hope is also Hinton's, is that he has created a Frankenstein's monster of sorts. You see, deep neural nets are huge and bulky(巨大的), inefficient creatures that allow you to effectively solve a learning problem by getting huge amounts of data and a super computer. They currently trade efficiency for brute force almost every time. Convolutional neural nets are a prime example because they have no standard notion of a frame of reference and they are not invariant(不變的) to rotation and scale changes. The go-to solution today is not making them invariant but making them 'tolerant': using huge amounts of data and huge model capacity.
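The invariance-vs-tolerance point can be illustrated with a minimal NumPy sketch (my own illustration, not from the answer): a fixed vertical-edge filter, standing in for one CNN feature detector, loses its response entirely once the input is rotated, and the "tolerant" workaround is to train on rotated copies (data augmentation) instead of changing the model.

```python
import numpy as np

# A tiny 2D convolution (valid mode) with a fixed filter -- a stand-in
# for one CNN feature detector. Everything here is illustrative.
def conv2d(img, kernel):
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

# A vertical-edge filter: it responds strongly to vertical edges only.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

# An image containing a vertical edge, and the same image rotated 90°,
# which turns the vertical edge into a horizontal one.
img = np.zeros((8, 8))
img[:, :4] = 1.0
rotated = np.rot90(img)

resp_orig = np.abs(conv2d(img, kernel)).max()   # strong response (3.0)
resp_rot = np.abs(conv2d(rotated, kernel)).max()  # no response (0.0)

# The feature is NOT rotation invariant: its response collapses after
# rotation. The 'tolerant' workaround the answer describes is to leave
# the model alone and train on every rotated copy (data augmentation),
# paying with more data and more capacity instead of a better design.
augmented = [np.rot90(img, k) for k in range(4)]
```

The same argument applies to scale: rather than building scale invariance into the architecture, the standard recipe just feeds the net rescaled copies and lets a bigger model memorize the variations.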

這種「科學怪人」式的做法本身也存在著問題:

  • 如今,一些問題對大多數普通研究者來說根本無法觸及,原因是他們缺乏足夠的硬體。看看視頻方向的深度學習直到最近才擺脫的緩慢進展就知道了。
  • 同樣的低效,也使得深度網路在「小數據」問題上不那麼有用。

The Frankenstein's-monster way has its problems too:

  • Some problems today are simply out of reach for most casual researchers because they lack adequate hardware. Take a look at the slow pace deep learning for video made until recently.
  • This same inefficiency is what makes deep nets less useful for 『small data』 problems.

當然,如果你是Google,能拿到一億條標記數據和一台10 Petaflop的超級電腦,那你就高枕無憂了。問題在於,這是工程師的做法,而不是科學家的做法。Hinton認為,我們更應該投入精力讓神經網路變得更高效。改進之後,你仍然需要標記好的數據,但會比以前少得多。

If you are Google, and getting a 100-million-example labeled dataset and a 10-petaflop super computer is within your reach, you are golden. The problem is that this is the way of the engineer and not the scientist. Hinton believes we should invest instead in making neural nets more efficient. You would still need labeled data, but a lot less.

【subtle】:not easy to notice or understand unless you pay careful attention .

【Outrageously】:離譜地;異乎尋常地

