The Most Complete Guide Ever to Tools for Drawing Neural Network Diagrams, Bar None!
Preface: We recently saw a question on Zhihu about how to draw neural network architecture diagrams, so the editorial team decided to put together a fairly complete and detailed introduction, which we hope will fill in any gaps and clear up any confusion you may have on the subject.
All documents can be downloaded at the end of the article.
LaTeX
We show only part of each article here; the full articles are available at the end of this post.
A TikZ library for drawing network node diagrams
Neural networks come up constantly in cybernetics and artificial intelligence, and network research in general often requires drawing network node diagrams. Below we introduce a TikZ library that makes drawing this kind of diagram very convenient.
The following example shows a Rearrangeable Clos Network.
A LaTeX package for drawing neural networks
The package is well designed overall and easy to use; its author used it to write a nicely laid-out document.
Linear regression may be visualised as a graph. The output is simply the weighted sum of the inputs:
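For instance, the output node of such a graph just computes a weighted sum; a minimal sketch in plain Python (our illustration, not code from the original article):

```python
# The output node of a linear-regression graph: a weighted sum of the
# inputs plus a bias term, y = w1*x1 + ... + wn*xn + b.

def linear_output(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# three input nodes feeding one output node
y = linear_output([1.0, 2.0, 3.0], weights=[0.5, -1.0, 2.0], bias=0.1)
```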
Logistic regression is a powerful tool, but it can only form simple hypotheses, since it operates on a linear combination of the input values (albeit applying a non-linear function to that combination). Neural networks are constructed from layers of such non-linear mixing elements, allowing the development of more complex hypotheses. This is achieved by stacking logistic regression networks to produce more complex behaviour. The inclusion of extra non-linear mixing stages between the input and the output nodes can increase the complexity of the network, allowing it to develop more advanced hypotheses. This is relatively simple:
The presence of multiple layers can be used to construct all the elementary logic gates. This in turn allows construction of advanced digital processing logic in neural networks – and this construction occurs automatically during the learning stage. Some examples are shown below, which take inputs of 0/1 and which return a positive output for true and a non-positive output for false:
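The gate construction described above can be sketched in a few lines of Python (an illustrative example with hand-picked weights, not the article's original code): each gate is a single linear unit taking 0/1 inputs and returning a positive output for true and a non-positive output for false, and XOR needs a second layer.

```python
def neuron(weights, bias, inputs):
    """A single linear unit: positive output means true, non-positive means false."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def AND(a, b):
    return neuron([1, 1], -1.5, [a, b])   # positive only when both inputs are 1

def OR(a, b):
    return neuron([1, 1], -0.5, [a, b])   # positive when at least one input is 1

def NOT(a):
    return neuron([-1], 0.5, [a])         # negating the weight inverts the gate

def XOR(a, b):
    # second layer: XOR(a, b) = OR(a, b) AND NOT(AND(a, b))
    h1 = 1 if OR(a, b) > 0 else 0
    h2 = 1 if AND(a, b) > 0 else 0
    return neuron([1, -1], -0.5, [h1, h2])
```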
From these, it becomes trivial to construct other gates. Negating the values produces the inverted gates, and these can be used to construct more complex gates. Thus, neural networks may be understood as "self-designing microchips", capable of both digital and analogue processing.
OmniGraffle
OmniGraffle is a diagramming application made by The Omni Group; it runs only on Mac OS X and the iPad, and formulas can be added with LaTeXiT. It can be used to draw diagrams, flowcharts, org charts and illustrations, and also to organize ideas, capture brainstorming results, draw mind maps, act as a style manager, or prototype web pages and PDF documents. Export to PDF for embedding in papers, or to SVG for embedding in web pages. Very convenient.
Python
draw_convnet
A Python script for illustrating Convolutional Neural Networks (ConvNets).
Partial code:
DSL
A DSL for deep neural networks, supporting Torch and Caffe
DNNGraph - A deep neural network model generation DSL in Haskell
It consists of several parts:
- A DSL for specifying the model. This uses the lens library for elegant, composable constructions, and the fgl graph library for specifying the network layout.
- A set of optimization passes that run over the graph representation to improve the performance of the model. For example, we can take advantage of the fact that several layer types (ReLU, Dropout) can operate in-place.
- A set of backends to generate code for each platform. Currently, we generate:
- Caffe (by generating model prototxt files)
- Torch (by generating Lua scripts)
- A set of useful CLI tools for exporting, visualizing and understanding a model (visualization of network structure, parameter density)
DSL Examples
Joseph Paul Cohen Ph.D
* Postdoctoral Fellow at Montreal Institute for Learning Algorithms at University of Montreal
* Friend of the Farlow Fellow at Harvard University
* National Science Foundation Graduate Fellow
Visualizing CNN architectures side by side with mxnet
Convolutional Neural Networks can be visualized as computation graphs with input nodes where the computation starts and output nodes where the result can be read. Here the models that are provided with mxnet are compared using the mx.viz.plot_network method. The output node is at the top and the input node is at the bottom.
Python + Graphviz
For a network with many nodes, writing the repetitive script code by hand inevitably takes a lot of effort. We wrote a simple dot-script generation tool in Python (MakeNN) that makes it easy to generate a neural-network structure diagram from a few input parameters.
Partial code:
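The original MakeNN script is not reproduced in full here; the following is a simplified sketch of the same idea (our own reconstruction): take the layer sizes as parameters and emit the repetitive dot code automatically.

```python
def make_nn_dot(layer_sizes):
    """Generate a dot script for a fully connected feed-forward network.

    layer_sizes -- e.g. [3, 4, 2] for 3 inputs, 4 hidden units, 2 outputs.
    """
    lines = ["digraph nn {", "    rankdir=LR;", "    node [shape=circle];"]
    # one node per unit, named l<layer>_<index>
    for l, size in enumerate(layer_sizes):
        for i in range(size):
            lines.append(f"    l{l}_{i};")
    # fully connect each layer to the next
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                lines.append(f"    l{l}_{i} -> l{l + 1}_{j};")
    lines.append("}")
    return "\n".join(lines)

dot_src = make_nn_dot([3, 4, 2])
```

Piping the returned string into `dot -Tpng` then produces the structure diagram.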
Graphviz - dot
Labels in dot offer a lot of room to play. Every node we have seen so far carries just a simple piece of text; what if we want a more complex structure, as in the figure below?
The corresponding code is as follows:
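A representative record-style label of the kind shown in the figure (our reconstruction, since the original listing is not included here):

```dot
digraph g {
    node [shape=record];
    // "|" splits the label into fields; <f0> etc. name ports
    struct1 [label="<f0> left | <f1> middle | <f2> right"];
    struct2 [label="{ a | b | c }"];   // braces rotate the field layout
    struct1:f1 -> struct2;
}
```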
And that is not even the most powerful part: labels also support an HTML-like format, so almost any node layout you can think of can be defined:
The corresponding code is as follows:
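An example of an HTML-like label (again our reconstruction, not the article's original listing):

```dot
digraph g {
    node [shape=plaintext];
    t [label=<
        <TABLE BORDER="0" CELLBORDER="1" CELLSPACING="0">
            <TR><TD BGCOLOR="lightblue" COLSPAN="2">header</TD></TR>
            <TR><TD>left</TD><TD PORT="p1">right</TD></TR>
        </TABLE>
    >];
}
```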
Next comes the concept of clusters. In dot, a subgraph whose name begins with cluster is laid out as a separate unit rather than handled within the parent graph's layout. For example:
The corresponding code is as follows:
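A cluster subgraph looks like this (illustrative reconstruction):

```dot
digraph g {
    // a subgraph whose name begins with "cluster" gets its own boxed layout
    subgraph cluster_0 {
        label = "layer 1";
        a0; a1; a2;
    }
    subgraph cluster_1 {
        label = "layer 2";
        b0; b1;
    }
    a0 -> b0;
    a2 -> b1;
}
```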
Without clusters, we can roughly imagine what the result would look like. You might wonder whether a node can point directly at a cluster. The answer is no! For that kind of requirement, lhead does the job:
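The lhead trick requires compound=true on the graph; an illustrative reconstruction:

```dot
digraph g {
    compound = true;                  // must be enabled for lhead/ltail
    subgraph cluster_0 {
        label = "hidden layer";
        h0; h1;
    }
    // the edge is still declared node-to-node, but lhead clips it at the
    // cluster boundary so it appears to point at the whole cluster
    input -> h0 [lhead = cluster_0];
}
```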
The generated image is as follows:
Keras
Using the Keras framework (with either the TensorFlow or Theano backend), you can draw the structure diagram of a convolutional neural network.
from keras.layers import Input, Convolution2D, Flatten, Dense, Activation
from keras.models import Sequential
from keras.optimizers import SGD, Adam
from keras.initializations import normal
from keras.utils.visualize_util import plot

# apply a 3x3 convolution with 64 output filters on a 256x256 image:
model = Sequential()
model.add(Convolution2D(64, 3, 3, border_mode='same', dim_ordering='th',
                        input_shape=(3, 256, 256)))
# now model.output_shape == (None, 64, 256, 256)

# add a 3x3 convolution on top, with 32 output filters:
model.add(Convolution2D(32, 3, 3, border_mode='same', dim_ordering='th'))
# now model.output_shape == (None, 32, 256, 256)

adam = Adam(lr=1e-6)
model.compile(loss='mse', optimizer=adam)
print("We finish building the model")

plot(model, to_file='model1.png', show_shapes=True)
from keras.layers import Input, Convolution2D, MaxPooling2D, Flatten, Dense
from keras.models import Model
from keras.utils.visualize_util import plot

inputs = Input(shape=(229, 229, 3))
x = Convolution2D(32, 3, 3, subsample=(2, 2), border_mode='valid', dim_ordering='tf')(inputs)
x = Flatten()(x)
loss = Dense(32, activation='relu', name='loss')(x)
model = Model(input=inputs, output=loss)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

# visualize model layout with pydot-ng
plot(model, to_file='model2.png', show_shapes=True)
from keras.layers import Input, Convolution2D, Flatten, Dense, Activation
from keras.models import Sequential
from keras.optimizers import SGD, Adam
from keras.initializations import normal
from keras.utils.visualize_util import plot

print("Now we build the model")
model = Sequential()
img_channels = 4  # output dimension, nothing to do with channels
img_rows = 80
img_cols = 80
model.add(Convolution2D(32, 8, 8, subsample=(4, 4),
                        init=lambda shape, name: normal(shape, scale=0.01, name=name),
                        border_mode='same', dim_ordering='th',
                        input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(64, 4, 4, subsample=(2, 2),
                        init=lambda shape, name: normal(shape, scale=0.01, name=name),
                        border_mode='same', dim_ordering='th'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, subsample=(1, 1),
                        init=lambda shape, name: normal(shape, scale=0.01, name=name),
                        border_mode='same', dim_ordering='th'))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(512, init=lambda shape, name: normal(shape, scale=0.01, name=name)))
model.add(Activation('relu'))
model.add(Dense(2, init=lambda shape, name: normal(shape, scale=0.01, name=name)))

adam = Adam(lr=1e-6)
model.compile(loss='mse', optimizer=adam)
print("We finish building the model")

plot(model, to_file='model3.png', show_shapes=True)
Netscope
Netscope is an online visualization tool for neural network architectures described in prototxt format. Address: http://ethereon.github.io/netscope/quickstart.html
It can be used to visualize Caffe network structures in prototxt format. Editor address: http://ethereon.github.io/netscope/#/editor
Click Launch Editor, paste your prototxt network description into the edit box, and press Shift+Enter; the network structure is then displayed graphically.
For example, take the LeNet network for MNIST: copy the contents of Caffe's examples/mnist/lenet_train_test.prototxt into the edit box, press Shift+Enter, and you immediately get the visualized structure diagram.
Caffe
python/draw_net.py is the script used to draw a network model, i.e. to turn a prototxt network definition into an image.
Drawing the LeNet model:
# sudo python python/draw_net.py examples/mnist/lenet_train_test.prototxt netImage/lenet.png --rankdir=TB
Draw Freely | Inkscape
Link: https://pan.baidu.com/s/1eSefQBg
Password: ku9f