What algorithm runs on the 48-chip TrueNorth array that simulates a mouse brain?
See: "IBM開發了一個人工大腦" ("IBM has built an artificial brain"), cnBeta.COM
What deep learning algorithm does it use, or what might it plausibly be?
Thanks for the invite. The famously sharp-tongued Yann LeCun once commented on IBM's TrueNorth chip on his Facebook page:
Now, what wrong with TrueNorth? My main criticism is that TrueNorth implements networks of integrate-and-fire spiking neurons. This type of neural net has never been shown to yield accuracy anywhere close to state of the art on any task of interest (like, say, recognizing objects from the ImageNet dataset). Spiking neurons have binary outputs (like neurons in the brain). The advantage of spiking neurons is that you don't need multipliers (since the neuron states are binary). But to get good results on a task like ImageNet you need about 8 bits of precision on the neuron states. To get this kind of precision with spiking neurons requires waiting multiple cycles so the spikes "average out". This slows down the overall computation.
The gist is that the TrueNorth architecture is poorly suited to this kind of numerical computation, and he gives an example of how hard it is to do multiplication with spiking neurons. Today's deep learning models rely heavily on multiplication, so porting a current deep learning model directly onto a TrueNorth chip would be both difficult and slow.
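LeCun's precision argument can be sketched numerically. The snippet below is a hypothetical illustration (not TrueNorth's actual encoding scheme): a real value is rate-coded as the average of stochastic binary spikes, and recovering it with useful precision requires averaging over many cycles, which is exactly the slowdown he describes.

```python
# Hypothetical rate-coding sketch: a value in [0, 1] is represented by
# binary spikes, and the estimate is the fraction of cycles that spiked.
# Recovering ~8-bit precision (1/256) needs many cycles of averaging.
import random

def rate_code(value, cycles, seed=0):
    """Encode `value` in [0, 1] as the mean of `cycles` binary spikes."""
    rng = random.Random(seed)
    spikes = [1 if rng.random() < value else 0 for _ in range(cycles)]
    return sum(spikes) / cycles

# The estimation error shrinks only as ~1/sqrt(cycles).
true_value = 0.7
for cycles in (8, 64, 512):
    estimate = rate_code(true_value, cycles)
    print(cycles, round(abs(estimate - true_value), 4))
```

The point of the sketch is the trade-off: a multiplier-free binary representation buys power efficiency, but each extra bit of precision costs roughly a quadrupling of the number of cycles averaged.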
Stepping away from today's deep learning models to build a new learning algorithm of one's own would demand a huge investment of researchers' time, and the point of doing so is unclear: given how successful deep learning has been, why spend years developing a new family of algorithms? By the time TrueNorth reaches mass production, AI on von Neumann architectures may well have advanced another step.
Then again, it's hard to say. Maybe IBM is playing a very long game. Perhaps deep learning will ultimately hit a dead end, and spiking neurons, which are closer to a model of the human brain, will be what actually produces intelligence.
Reference: Yann LeCun's comment: https://www.facebook.com/yann.lecun/posts/10152184295832143

First of all, this is the first time I've been invited to answer, so I'm flattered. (Parts of what follows are taken from boxi's article on 36kr: http://36kr.com/p/214445.html.) Seeing this question reminded me of TrueNorth, which I first heard about in 2014: a new chip IBM was building, rumored to break with the von Neumann architecture. I read up on it carefully at the time.
This chip treats digital processors as neurons and memory as synapses. Unlike a traditional von Neumann design, its memory, CPU, and communication components are fully integrated, so information is processed entirely locally. Because the amount of data handled locally is small, the classic bottleneck between memory and CPU in a conventional computer disappears. Neurons can also communicate with each other quickly and conveniently: as soon as a neuron receives spikes (action potentials) from other neurons, it acts in turn.
In other words, TrueNorth (NT) uses the chip itself to model neurons; what distinguishes it from the von Neumann architecture is that processing, storage, and communication are integrated together. See the figure below for details:
Let's be clear: we have not built the brain, or any brain. We have built a computer that is inspired by the brain. The inputs to and outputs of this computer are spikes. Functionally, it transforms a spatio-temporal stream of input spikes into a spatio-temporal stream of output spikes.
So NT's inputs and outputs are actually processed spatio-temporal signals, somewhat like a neural network operating on time-domain signals. From this angle, what NT does looks no different from deep learning. Yet IBM's researchers have said that NT does not currently use conventional DL algorithms. Why is that?
In fact, NT's architecture itself suggests that the algorithm it runs is probably not traditional DL. In a DL algorithm, each unit models a single neuron and each layer represents one functional region along a neural pathway; algorithmically, every unit has an activation function with its own parameters, and training is precisely the process of learning those parameters. In NT, by contrast, each core models a voxel, and different functional regions are represented by different clusters of chips. According to NT's architecture, connections should exist between those clusters, whereas in today's deep networks inter-layer connections exist only between adjacent layers. That is, traditional DL can be seen as imitating a directed neural pathway in which no information flows back through the network (a DBM does, in theory, treat the network as a single whole for information propagation, but its structure still has no direct cross-layer connections; an RNN has self-feedback, but that feedback does not run between layers, let alone across them). If NT really does what they describe, it is without question the architecture closest to how the brain processes information.
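To make the contrast with a standard DL unit concrete, here is a minimal, hypothetical integrate-and-fire neuron of the general kind TrueNorth implements (all parameters are invented for illustration): incoming binary spikes are simply accumulated toward a threshold, so when inputs are 0/1 no general-purpose multiplier is needed, unlike a conventional unit computing an activation of a weighted sum.

```python
# Minimal integrate-and-fire neuron sketch (hypothetical parameters).
# Each binary input spike adds a synaptic weight to the membrane
# potential; the neuron emits an output spike when a threshold is
# crossed, then resets. With 0/1 inputs this is add-or-skip, not multiply.

def integrate_and_fire(spike_train, weight=1.0, threshold=3.0, leak=0.0):
    """Return the output spike train for a stream of binary input spikes."""
    v = 0.0            # membrane potential
    out = []
    for s in spike_train:
        if s:
            v += weight    # accumulate: no multiplication when s is binary
        v -= leak          # optional constant leak per cycle
        if v >= threshold:
            out.append(1)
            v = 0.0        # reset after firing
        else:
            out.append(0)
    return out

print(integrate_and_fire([1, 1, 1, 0, 1, 1, 1]))  # fires on every 3rd input spike
```

Note there is no per-unit learned activation function here; the "program" lives in the weights, thresholds, and the wiring between neurons, which is why compiling a conventional trained DL model onto such cores is nontrivial.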
As for which algorithm NT actually uses: sorry for going on so long, but even after browsing IBM's research homepage I couldn't find a definitive answer. Because of NT's unusual architecture, IBM has built a new language for it:

If one were to measure activities of 1 million neurons in TrueNorth, one would see something akin to a night cityscape with blinking lights. Given this unconventional computing paradigm, compiling C++ to TrueNorth is like using a hammer for a screw. As a result, to harness TrueNorth, we have designed an end-to-end ecosystem complete with a new simulator, a new programming language, an integrated programming environment, new libraries, new (and old) algorithms as well as applications, and a new teaching curriculum (affectionately called "SyNAPSE University"). The goal of the ecosystem is to dramatically increase programmer productivity. Metaphorically, if TrueNorth is "ENIAC", then our ecosystem is the corresponding "FORTRAN."
I'm just a casual observer, and something this cutting-edge can't be fully digested overnight, but it really is a promising direction. I'll keep following it and update this answer over time.

This is the most I've ever written in one answer. If you didn't hate it, please leave an upvote; it's what keeps me updating.

References: IBM Research: Brain-inspired Chip; http://36kr.com/p/214445.html
Please excuse my language. I don't have Chinese typing here at the company ;((( But I like this question. It goes deep into the architecture itself instead of a superficial marketing slogan.

Fortunately I attended Modha's keynote talk at HPCA earlier this year. Tbh the talk was not that impressive despite the well-decorated IBM slides. In fact the group I work in right now is cooperating with IBM's TrueNorth team. As far as I know, the TrueNorth team is trying to port Caffe onto their processors.
In practice the real problem with TrueNorth (or other similar cortical processors) is that their performance, measured by classification error rate or detection accuracy, is rather poor compared with conventional platforms, i.e. CPUs, GPUs, or FPGAs. TN's selling point is lower power consumption, which is entirely expected because the architecture is re-designed from the bottom up and the circuit style is mostly asynchronous. The question is: do you really want to go back to that level of poor accuracy just for the power savings? (hehehehe)
What I learnt from my manager is that TN chips are not running any deep architecture right now. Their pedestrian detection/tracking demo at HPCA earlier this year was not that good at all, which also implies the models running on TN are not very effective. TrueNorth can simulate a neural network, but a high-performance deep architecture has many other important components, the simplest being convolution filters. I don't know how TN can simulate these parts.
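As a rough illustration of the point about convolution filters (hypothetical code, nothing to do with TN's actual toolchain): even a tiny 2D convolution is dominated by multiply-accumulate operations, which is exactly the workload that binary spiking hardware, designed to avoid multipliers, struggles to express directly.

```python
# Naive valid-mode 2D convolution (cross-correlation, as DL frameworks
# implement it), instrumented to count multiply-accumulate operations.

def conv2d(image, kernel):
    """Return (output feature map, number of multiply-accumulates)."""
    H, W = len(image), len(image[0])
    kH, kW = len(kernel), len(kernel[0])
    out, macs = [], 0
    for i in range(H - kH + 1):
        row = []
        for j in range(W - kW + 1):
            acc = 0.0
            for di in range(kH):
                for dj in range(kW):
                    acc += image[i + di][j + dj] * kernel[di][dj]
                    macs += 1          # one multiply-accumulate per tap
            row.append(acc)
        out.append(row)
    return out, macs

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, -1]]                  # a toy 2x2 difference filter
result, macs = conv2d(img, k)
print(result, macs)                    # 16 MACs for a single 3x3 input
```

Scale the same arithmetic to a real network (millions of taps per layer) and the mismatch with multiplier-free spiking cores becomes obvious.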
It seems that right now the TN team wants to know whether it is possible to build a software layer atop several interconnected TN chips so that the learning model TN chips simulate can be more complex and more capable as a whole. There must be smooth co-operation between conventional general purpose processors and a TN chip. A large chunk of NN by itself is meaningless.
Personally I prefer Yunji's DianNao-series papers. Their accelerator seems more flexible and has a better interface to a conventional CPU. I'm looking forward to its commercial use, that is, how much performance/training improvement it can bring if industry companies like Baidu adopt it.
Again, I apologize for my language...

OP, if the thing is simulating a mouse brain, where would an "algorithm" even come into it?
It is simply a neural network built according to the anatomical structure of the cortex.
Tune the neurons' parameters appropriately (mainly the time constant) and you can reproduce several common kinds of oscillation, which is good for yet another paper.

Neuromorphic engineering has two main goals: 1. by emulating how the brain runs, understand how the brain works better (the scientific contribution); 2. by borrowing the brain's way of operating, design better chips (the engineering contribution).

As someone who has actually surveyed this project, let me explain to everyone what this magical TrueNorth chip really is!
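The time-constant point can be sketched in a few lines (with invented parameters, not the project's actual model): a leaky integrate-and-fire neuron driven by a constant input fires periodically, and its firing period grows with the membrane time constant tau, which is why tuning tau shifts the rhythms the network produces.

```python
# Hypothetical sketch: how the membrane time constant tau shapes a
# neuron's rhythm. A leaky integrator dV/dt = (-V + drive)/tau is
# stepped with forward Euler until V crosses threshold; the time taken
# is one firing period of the driven neuron.

def firing_period(tau, v_thresh=1.0, drive=2.0, dt=0.001):
    """Seconds for the membrane potential to climb from 0 to threshold."""
    v, steps = 0.0, 0
    while v < v_thresh:
        v += dt * (-v + drive) / tau   # leaky integration toward `drive`
        steps += 1
    return steps * dt

# Larger tau -> slower integration -> longer period -> lower frequency.
for tau in (0.01, 0.05, 0.1):
    print(tau, firing_period(tau))
```

Analytically the period is tau * ln(drive / (drive - v_thresh)), so frequency scales inversely with tau; in a network, that single knob moves the whole population's oscillation band.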
At present, only two groups in the world are well known for spiking neural networks: the University of Manchester in the UK and IBM in the US.
IBM has been at this for almost ten years, and honestly the progress has not met the company's expectations. They have been simulating a mouse brain for years now, with nothing much to show for it. In the mid-to-late stages they had little choice but to edge toward deep learning, and started using the network to build Boltzmann machines. As for the results: so far only IBM itself plays with this, while everyone else plays with NVIDIA GPUs.
Some people say IBM is playing a long game. True enough: the more you can build on this thing, the faster you could implement and commercialize whatever new network concept comes along, leaving competitors miles behind. The flip side is that some company has to be willing to pour money into it, and IBM is exactly such a company!

And this is not a one-company project: a number of strong universities and the US government are backing it. In the grand scheme of things, that is money the US government is willing to spend!

I'll add the biological perspective on spiking neural networks when I get a chance.
Thanks for the invite. The earlier answers already cover more than I know, so here are just some personal thoughts. Even if TrueNorth uses an algorithm different from today's standard DL, the two can be seen as one family. What matters most about TN is that it changes the mode of computation, moving away from the von Neumann architecture, and the resulting benefit is lower power consumption. TN approaches the human brain at the hardware level, while DL mimics it at the algorithm level; hardware and software are two sides of one coin, and viewed abstractly their cores are not really different.
Thanks for the invite. I honestly don't know. Sorry.
Thanks for the invite. I'm a machine-learning beginner; marking this for now and will come back to answer.
First time being invited, thank you. But this really isn't an area I know.