How should we evaluate the paper "Deep Learning" by LeCun and colleagues, published in Nature on 28 May?

Recently, the three AI heavyweights LeCun, Bengio, and Hinton published a Review in Nature titled "Deep Learning". How should this article be evaluated? How does it compare with earlier surveys published in computer-science journals?

Citation details: 436 | NATURE | VOL 521 | 28 MAY 2015

Link to the paper on the Nature website: http://www.nature.com/nature/journal/v521/n7553/pdf/nature14539.pdf

-- Update, 4 June --

In computer science it is rare for a paper to appear in Nature or Science (the last one that left a deep impression on me was Hinton's 2006 Science paper on dimensionality reduction). This time Nature has devoted a whole special section to deep learning, publishing several high-quality Review articles at once, covering probabilistic machine learning, reinforcement learning, robotics, and more. Can this be read as a sign that deep learning, long criticized for lacking rigorous theoretical support, is now being accepted by the academic community in a broader sense?


This is a popular-science style article; a Chinese version is on the CSDN front page, and I took part in the translation. Corrections are welcome!


According to Hinton himself, "you won't learn much from that paper." It is more of an overview, and it is worth reading if you want a sense of where deep learning came from and how it got here. (This is what Yangqing Jia said during a public Q&A.)

That said, there are some genuinely interesting things in it, such as the visualization website it points to, which is simply excellent.

There is also a bit of grumbling in the text:

"In the late 1990s, neural nets and backpropagation were largely forsaken by the machine-learning community and ignored by the computer-vision and speech-recognition communities. It was widely thought that learning useful, multistage, feature extractors with little prior knowledge was infeasible."

That passage was presumably written by LeCun himself; you can still sense a bit of old resentment, haha.


I"m not an expert on deep learning, I"m only interested in it and have done some side projects using CNNs. Here"s my a priori knowledge of the three authors:

LeCun: Inventor of the CNN.

Hinton: Coauthor of the famous 2012 AlexNet paper, which helped re-popularize deep learning with the power of GPUs.

Bengio: Honestly, I did not know of him before reading this review.

But judging from the two authors I did know, I was convinced this was going to be an excellent review. Here are some of my thoughts:

Traditionally, Nature does not publish many computer-science papers. But this February it published DeepMind's paper, Human-level control through deep reinforcement learning, and the current issue is a special issue on machine intelligence. Deep learning is clearly helping computer science get more attention from people in the traditional sciences, which is a very good thing.

The paper itself is extremely well written; if you want a general idea of what deep learning is, just read it. I have taken two deep-learning-related courses and read through two tutorials on CNNs, yet this paper still gave me a systematic view of the field and cleared up several points that had confused me.

Two relevant resources if you want to treat deep learning as more than a black box:

http://colah.github.io/ This is where Figure 1 in the review came from. C. Olah's blog has many insights on deep learning.

Home Page of Matthew Zeiler. He is Rob Fergus's student and founded a deep-learning company, Clarifai. Both he and his advisor have written great review and analysis papers on CNNs.


In Figure 1, every "element of" symbol (\in) has been typeset as the letter varepsilon. It really is a magazine rather than a journal, I suppose.

Nature's copy-editing standards... I have no words.
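
For anyone puzzled by the complaint, the two symbols are distinct LaTeX commands; a small illustrative note of my own, not something taken from the review:

$x \in S$          % set membership: "x is an element of S", the relation Figure 1 needs
$\varepsilon > 0$  % the Greek letter epsilon, an ordinary variable name, not a relation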


I just looked at it on CSDN; here is the link first: 深度學習-LeCun、Bengio和Hinton的聯合綜述(上)-CSDN.NET (part 1 of the Chinese translation). The paper points out some problems with multilayer neural networks and discusses CNNs applied to images and RNNs applied to text, which is where deep learning mainly gets used at the moment. I have not studied RNNs yet; I used SVMs before, so I am not really in a position to judge RNNs. For image recognition, though, CNNs are probably the acknowledged state of the art by now. I am cramming for IELTS at the moment, which is miserable...


Basically a few big names getting up in Nature, clapping along, and singing "We Will Rock You". That is about the gist of it.


Impressive, but not of much practical use.

It is positioned as popular science, but I think other people's tutorials are better, such as those by Fergus or Xiaogang (Wang).


The teachers in my lab keep calling him "樂村" (Lè cūn, roughly "happy village"), and I have to admit that transliteration is impeccable... I am just here for the jokes, please ignore me.

