Training FCN-8s on Your Own Dataset (NYUD as an Example)

Official FCN GitHub repo: shelhamer/fcn.berkeleyvision.org

My modified GitHub repo: yxliwhu/NYUD-FCN8s

Papers:

Fully Convolutional Networks for Semantic Segmentation

Evan Shelhamer*, Jonathan Long*, Trevor Darrell

PAMI 2016

arXiv:1605.06211

Fully Convolutional Networks for Semantic Segmentation

Jonathan Long*, Evan Shelhamer*, Trevor Darrell

CVPR 2015

arXiv:1411.4038

The official code provides complete (32s, 16s, 8s) models for PASCAL VOC, SIFT Flow, and PASCAL-Context, but for NYUD only the 32s version is provided. Here we use NYUD as an example to walk through the full FCN-8s training pipeline (there are many tutorials online, but they are either incomplete or contain errors).

Source Code Download and Dataset Preprocessing

Download the official source code:

git clone https://github.com/shelhamer/fcn.berkeleyvision.org.git

Download the pretrained VGG16 model and put it in the ilsvrc-nets folder of the FCN source tree:

cd fcn.berkeleyvision.org/ilsvrc-nets
wget http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel

Fetch the matching deploy file:

wget https://gist.githubusercontent.com/ksimonyan/211839e770f7b538e2d8/raw/0067c9b32f60362c74f4c445a080beed06b07eb3/VGG_ILSVRC_16_layers_deploy.prototxt

Download the dataset:

cd data/nyud/
wget http://people.eecs.berkeley.edu/~sgupta/cvpr13/data.tgz
tar -xvf data.tgz

At this point, the nyud folder should contain the extracted data folder.

The data folder has three subfolders: benchmarkData, colorImage, and pointCloud. benchmarkData/groundTruth stores all the segmentation ground truth we need, and colorImage stores the original RGB images. Because the groundTruth path assumed by the source code differs from this layout, we copy the groundTruth files to the expected location:

mkdir segmentation
cp data/benchmarkData/groundTruth/*.mat segmentation/
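To sanity-check a copied label file, you can peek at its nested MATLAB structure (a quick illustrative check; img_5001.mat is a hypothetical file name, and the indexing matches the load_label fix shown later):

import numpy as np
import scipy.io

mat = scipy.io.loadmat('segmentation/img_5001.mat')  # hypothetical file name
gt = mat['groundTruth'][0, 0][0, 0]['SegmentationClass']
print(gt.shape, gt.dtype, np.unique(gt)[:10])  # raw 1-indexed class ids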

We also need to merge train.txt and val.txt: create a new empty trainval.txt in the nyud folder and copy the contents of train.txt and val.txt into it, as scripted below. The nyud folder now additionally contains segmentation/ and trainval.txt.
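Equivalently, the merge can be scripted (run from inside data/nyud; assumes train.txt and val.txt exist there):

# concatenate the two split files into trainval.txt
with open('trainval.txt', 'w') as out:
    for split in ('train.txt', 'val.txt'):
        with open(split) as f:
            out.write(f.read())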

The data is ready; now we can train the FCN-32s network.

FCN-32s Network Training:

Copy the needed .py files into the nyud-fcn32s-color folder and remove the layer definitions for the other datasets:

cd fcn.berkeleyvision.org
cp *.py nyud-fcn32s-color/
cd nyud-fcn32s-color
rm pascalcontext_layers.py
rm voc_helper.py
rm voc_layers.py
rm siftflow_layers.py

Modify solver.prototxt; below is my version. For what the parameters mean, see: Caffe學習系列(7):solver及其配置 - denny402 - 博客園.

train_net: "trainval.prototxt"
test_net: "test.prototxt"
test_iter: 200
# make test net, but don't invoke it from the solver itself
test_interval: 999999999
display: 20
average_loss: 20
lr_policy: "fixed"
# lr for unnormalized softmax
base_lr: 1e-10
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 300000
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "snapshot/train"
test_initialization: false
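A side note on the tiny base_lr: the "lr for unnormalized softmax" comment refers to the loss being summed, not averaged, over all pixels (normalize=False in net.py), so gradient magnitudes scale with image area. A rough sanity check, assuming an illustrative 425x540 NYUD input:

# effective per-pixel step size is roughly base_lr * num_pixels
num_pixels = 425 * 540
print(1e-10 * num_pixels)  # ~2.3e-05, a conventional SGD step size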

Modifying solve.py:

An important note: when training the fcn32s model you must modify solve.py to pull in the VGG16 weights via transplant (a plain copy_from cannot reshape VGG16's fully connected fc6/fc7 weights into their convolutional counterparts). Specifically:

import caffe
import surgery, score

import numpy as np
import os
import sys

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

vgg_weights = '../ilsvrc-nets/VGG_ILSVRC_16_layers.caffemodel'
vgg_proto = '../ilsvrc-nets/VGG_ILSVRC_16_layers_deploy.prototxt'
#weights = '../ilsvrc-nets/vgg16-fcn.caffemodel'

# init
#caffe.set_device(int(sys.argv[1]))
caffe.set_device(0)
caffe.set_mode_gpu()

#solver = caffe.SGDSolver('solver.prototxt')
#solver.net.copy_from(weights)
solver = caffe.SGDSolver('solver.prototxt')
vgg_net = caffe.Net(vgg_proto, vgg_weights, caffe.TRAIN)
surgery.transplant(solver.net, vgg_net)
del vgg_net

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
test = np.loadtxt('../data/nyud/test.txt', dtype=str)

for _ in range(50):
    solver.step(5000)
    score.seg_tests(solver, False, test, layer='score')

You can see that I commented out:

#weights = '../ilsvrc-nets/vgg16-fcn.caffemodel'
......
#solver = caffe.SGDSolver('solver.prototxt')
#solver.net.copy_from(weights)

and added:

vgg_weights = '../ilsvrc-nets/VGG_ILSVRC_16_layers.caffemodel'
vgg_proto = '../ilsvrc-nets/VGG_ILSVRC_16_layers_deploy.prototxt'
......
solver = caffe.SGDSolver('solver.prototxt')
vgg_net = caffe.Net(vgg_proto, vgg_weights, caffe.TRAIN)
surgery.transplant(solver.net, vgg_net)
del vgg_net

An explanation of the transplant function can be found in surgery.py.
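In short, transplant copies parameters between nets layer by layer, matched by name, copying flat when shapes differ; that is what lets VGG16's fully connected fc6/fc7 weights initialize the corresponding convolution layers. A condensed sketch of the idea (the actual surgery.py implementation also logs what it drops and coerces):

def transplant_sketch(new_net, net, suffix=''):
    # copy params from net into new_net by layer name; mismatched shapes
    # are copied flat, reinterpreting e.g. FC weights as conv kernels
    for p in net.params:
        p_new = p + suffix
        if p_new not in new_net.params:
            continue  # layer absent from the new architecture: dropped
        for i in range(min(len(net.params[p]), len(new_net.params[p_new]))):
            old = net.params[p][i].data
            new = new_net.params[p_new][i].data
            if old.shape == new.shape:
                new[...] = old
            else:
                new.flat = old.flat  # shape coercion (element counts must match)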

Also, because of the path difference, the load_label function in nyud_layers.py needs to be modified as follows (the raw benchmark labels are 1-indexed over the full label set, hence the class_map lookup with value-1 that collapses them to the 40-class task):

#label = scipy.io.loadmat('{}/segmentation/img_{}.mat'.format(self.nyud_dir, idx))['segmentation'].astype(np.uint8)
label = scipy.io.loadmat('{}/segmentation/img_{}.mat'.format(self.nyud_dir, idx))['groundTruth'][0,0][0,0]['SegmentationClass'].astype(np.uint16)
for (x, y), value in np.ndenumerate(label):
    label[x, y] = self.class_map[0][value - 1]
label = label.astype(np.uint8)

That completes the configuration; start training:

cd nyud-fcn32s-color
mkdir snapshot
python solve.py

After roughly 150,000 iterations, the model reaches the accuracy reported in the paper.
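For reference, the mean IU that score.seg_tests prints is computed from a per-class confusion matrix. A minimal sketch of the same computation (fast_hist mirrors the logic in the repo's score.py; n = 40 classes for NYUD):

import numpy as np

def fast_hist(a, b, n):
    # n x n confusion matrix between flattened ground truth a and prediction b
    k = (a >= 0) & (a < n)
    return np.bincount(n * a[k].astype(int) + b[k], minlength=n**2).reshape(n, n)

def mean_iu(hist):
    # per-class intersection over union, averaged over observed classes
    iu = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    return np.nanmean(iu)

Accumulate hist = fast_hist(gt.flatten(), pred.flatten(), 40) over the test images, then report mean_iu(hist).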

Testing a Single Image

In the FCN source folder, find infer.py, rename it to test.py, and modify:

im = Image.open('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn32s-color/test.png')
......
net = caffe.Net('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn32s-color/deploy.prototxt',
                '/home/li/Downloads/nyud-fcn32s-color-heavy.caffemodel',
                caffe.TEST)

Here nyud-fcn32s-color-heavy.caffemodel is the trained model file and test.png is the test image.

My complete test.py:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import sys
import caffe
#import cv  # unused legacy OpenCV import
import scipy.io
#import pydensecrf.densecrf as dcrf
#from pydensecrf.utils import compute_unary, create_pairwise_bilateral, create_pairwise_gaussian, softmax_to_unary
import pdb

# load image, switch to BGR, subtract mean, and make dims C x H x W for Caffe
im = Image.open('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn32s-color/test.png')
in_ = np.array(im, dtype=np.float32)
in_ = in_[:, :, ::-1]
in_ -= np.array((104.00698793, 116.66876762, 122.67891434))
in_ = in_.transpose((2, 0, 1))

# load net
net = caffe.Net('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn32s-color/deploy.prototxt',
                '/home/li/Downloads/nyud-fcn32s-color-heavy.caffemodel',
                caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].data[...] = in_
# run net and take argmax for prediction
net.forward()
#pdb.set_trace()
out = net.blobs['score'].data[0].argmax(axis=0)

scipy.io.savemat('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn32s-color/out.mat', {'X': out})
#plt.imshow(out, cmap='gray')
plt.imshow(out)
plt.axis('off')
plt.savefig('testout_32s.png')

If you don't have a deploy file, you can create one as follows:

First, go to the folder of the model you are using, e.g. nyud-fcn32s-color. Open trainval.prototxt, select all, copy it into a new file named deploy.prototxt, then delete the Python data layer at the top and search (Ctrl+F) for every layer named loss and delete those layers as well.

Then add at the top of the file:

layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    # These dimensions are purely for sake of example;
    # see infer.py for how to reshape the net to the given input size.
    shape { dim: 1 dim: 3 dim: 425 dim: 540 }
  }
}

In shape { dim: 1 dim: 3 dim: 425 dim: 540 }, 425 and 540 are the height and width of the test image. The sketch below automates both steps.
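The two manual steps can also be scripted. This is a rough sketch, not the official workflow: it assumes that dropping the Python data layer and the SoftmaxWithLoss layer is all your net needs, and that caffe's compiled protos and PIL are importable:

from PIL import Image
from caffe.proto import caffe_pb2
from google.protobuf import text_format

h, w = Image.open('test.png').size[::-1]  # PIL size is (width, height)

net = caffe_pb2.NetParameter()
with open('trainval.prototxt') as f:
    text_format.Merge(f.read(), net)

# drop the Python data layer and the loss layer
kept = [l for l in net.layer if l.type not in ('Python', 'SoftmaxWithLoss')]

# prepend an Input layer sized to the test image
inp = caffe_pb2.LayerParameter(name='input', type='Input', top=['data'])
inp.input_param.shape.add().dim.extend([1, 3, h, w])

del net.layer[:]
net.layer.extend([inp] + kept)

with open('deploy.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net))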

FCN-16s Network Training:

For FCN-16s there is no corresponding source code, so we have to build everything ourselves. Fortunately the official repo provides code for other datasets, which we can adapt to generate the training files. Start by comparing the net.py files (used to generate the .prototxt files) of voc-fcn16s and voc-fcn32s:

The red boxes mark where the two files differ; comparing the two network structures makes the distinction clear: the 32s net upsamples score_fr back to input resolution with a single deconvolution, while the 16s net first upsamples by 2, fuses the result with a 1x1 prediction from pool4, and then upsamples by 16.

To render the network graph, run draw_net.py from the caffe/python folder; I won't expand on that here, but see the sketch below.
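draw_net.py is essentially a thin command-line wrapper around caffe.draw, so the graph can also be produced from Python. A hedged sketch (assumes pydot and graphviz are installed; the prototxt path is illustrative):

from caffe.proto import caffe_pb2
from caffe import draw
from google.protobuf import text_format

# parse the net definition and render it to an image
net = caffe_pb2.NetParameter()
with open('nyud-fcn16s-color/trainval.prototxt') as f:
    text_format.Merge(f.read(), net)
draw.draw_net_to_file(net, 'fcn16s.png', rankdir='TB')  # top-to-bottom layout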

Based on the comparison above, we can derive a new net.py from nyud-fcn32s-color/net.py:

cd fcn.berkeleyvision.org
mkdir nyud-fcn16s-color
cp nyud-fcn32s-color/net.py nyud-fcn16s-color/net.py

The modified net.py:

import caffe
from caffe import layers as L, params as P
from caffe.coord_map import crop

def conv_relu(bottom, nout, ks=3, stride=1, pad=1):
    conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
        num_output=nout, pad=pad,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    return conv, L.ReLU(conv, in_place=True)

def max_pool(bottom, ks=2, stride=2):
    return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)

def fcn(split, tops):
    n = caffe.NetSpec()
    n.data, n.label = L.Python(module='nyud_layers',
        layer='NYUDSegDataLayer', ntop=2,
        param_str=str(dict(nyud_dir='../data/nyud', split=split,
            tops=tops, seed=1337)))

    # the base net
    n.conv1_1, n.relu1_1 = conv_relu(n.data, 64, pad=100)
    n.conv1_2, n.relu1_2 = conv_relu(n.relu1_1, 64)
    n.pool1 = max_pool(n.relu1_2)

    n.conv2_1, n.relu2_1 = conv_relu(n.pool1, 128)
    n.conv2_2, n.relu2_2 = conv_relu(n.relu2_1, 128)
    n.pool2 = max_pool(n.relu2_2)

    n.conv3_1, n.relu3_1 = conv_relu(n.pool2, 256)
    n.conv3_2, n.relu3_2 = conv_relu(n.relu3_1, 256)
    n.conv3_3, n.relu3_3 = conv_relu(n.relu3_2, 256)
    n.pool3 = max_pool(n.relu3_3)

    n.conv4_1, n.relu4_1 = conv_relu(n.pool3, 512)
    n.conv4_2, n.relu4_2 = conv_relu(n.relu4_1, 512)
    n.conv4_3, n.relu4_3 = conv_relu(n.relu4_2, 512)
    n.pool4 = max_pool(n.relu4_3)

    n.conv5_1, n.relu5_1 = conv_relu(n.pool4, 512)
    n.conv5_2, n.relu5_2 = conv_relu(n.relu5_1, 512)
    n.conv5_3, n.relu5_3 = conv_relu(n.relu5_2, 512)
    n.pool5 = max_pool(n.relu5_3)

    # fully conv
    n.fc6, n.relu6 = conv_relu(n.pool5, 4096, ks=7, pad=0)
    n.drop6 = L.Dropout(n.relu6, dropout_ratio=0.5, in_place=True)
    n.fc7, n.relu7 = conv_relu(n.drop6, 4096, ks=1, pad=0)
    n.drop7 = L.Dropout(n.relu7, dropout_ratio=0.5, in_place=True)

    n.score_fr = L.Convolution(n.drop7, num_output=40, kernel_size=1, pad=0,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    n.upscore2 = L.Deconvolution(n.score_fr,
        convolution_param=dict(num_output=40, kernel_size=4, stride=2,
            bias_term=False),
        param=[dict(lr_mult=0)])

    n.score_pool4 = L.Convolution(n.pool4, num_output=40, kernel_size=1, pad=0,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    n.score_pool4c = crop(n.score_pool4, n.upscore2)
    n.fuse_pool4 = L.Eltwise(n.upscore2, n.score_pool4c,
        operation=P.Eltwise.SUM)
    n.upscore16 = L.Deconvolution(n.fuse_pool4,
        convolution_param=dict(num_output=40, kernel_size=32, stride=16,
            bias_term=False),
        param=[dict(lr_mult=0)])

    n.score = crop(n.upscore16, n.data)
    n.loss = L.SoftmaxWithLoss(n.score, n.label,
        loss_param=dict(normalize=False, ignore_label=255))

    return n.to_proto()

def make_net():
    tops = ['color', 'label']
    with open('trainval.prototxt', 'w') as f:
        f.write(str(fcn('trainval', tops)))
    with open('test.prototxt', 'w') as f:
        f.write(str(fcn('test', tops)))

if __name__ == '__main__':
    make_net()
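Note that the Deconvolution layers above are created with param=[dict(lr_mult=0)], i.e. frozen. They still upsample correctly because solve.py calls surgery.interp on every layer whose name contains 'up', filling the kernels with fixed bilinear weights. A minimal sketch of such a filter (the repo's surgery.py contains an equivalent upsample_filt):

import numpy as np

def bilinear_filt(size):
    # weights of a size x size bilinear interpolation kernel
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

print(bilinear_filt(4))  # kernel for the stride-2 upscore2 layer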

Run net.py to generate the .prototxt files:

cd nyud-fcn16s-color/
python net.py

Copy and modify solve.py (this time we initialize from the trained FCN-32s weights, so a plain copy_from suffices and transplant is no longer needed):

import caffe
import surgery, score

import numpy as np
import os
import sys

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

weights = '../nyud-fcn32s-color/nyud-fcn32s-color-heavy.caffemodel'
#vgg_weights = '../ilsvrc-nets/VGG_ILSVRC_16_layers.caffemodel'
#vgg_proto = '../ilsvrc-nets/VGG_ILSVRC_16_layers_deploy.prototxt'

# init
#caffe.set_device(int(sys.argv[1]))
caffe.set_device(0)
caffe.set_mode_gpu()
#caffe.set_mode_cpu()

solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from(weights)
#solver = caffe.SGDSolver('solver.prototxt')
#vgg_net = caffe.Net(vgg_proto, vgg_weights, caffe.TRAIN)
#surgery.transplant(solver.net, vgg_net)
#del vgg_net

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
test = np.loadtxt('../data/nyud/test.txt', dtype=str)

for _ in range(50):
    solver.step(5000)
    score.seg_tests(solver, False, test, layer='score')

Copy and modify solver.prototxt (the main change is base_lr, i.e. the learning rate):

train_net: "trainval.prototxt"
test_net: "test.prototxt"
test_iter: 200
# make test net, but don't invoke it from the solver itself
test_interval: 999999999
display: 20
average_loss: 20
lr_policy: "fixed"
# lr for unnormalized softmax
base_lr: 1e-12
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 300000
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "snapshot/train"
test_initialization: false

Then copy the needed .py files into the nyud-fcn16s-color folder:

cd fcn.berkeleyvision.org
cp *.py nyud-fcn16s-color/
cd nyud-fcn16s-color
rm pascalcontext_layers.py
rm voc_helper.py
rm voc_layers.py
rm siftflow_layers.py

Don't forget to modify nyud_layers.py as before.

Run solve.py to start training:

cd nyud-fcn16s-color
mkdir snapshot
python solve.py

Testing works the same way as for FCN-32s; the corresponding test.py:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import sys
import caffe
#import cv  # unused legacy OpenCV import
import scipy.io
#import pydensecrf.densecrf as dcrf
#from pydensecrf.utils import compute_unary, create_pairwise_bilateral, create_pairwise_gaussian, softmax_to_unary
import pdb

# load image, switch to BGR, subtract mean, and make dims C x H x W for Caffe
im = Image.open('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn16s-color/test2.png')
in_ = np.array(im, dtype=np.float32)
in_ = in_[:, :, ::-1]
in_ -= np.array((104.00698793, 116.66876762, 122.67891434))
in_ = in_.transpose((2, 0, 1))

# load net
net = caffe.Net('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn16s-color/deploy.prototxt',
                '/home/li/Downloads/16_170000.caffemodel',
                caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].data[...] = in_
# run net and take argmax for prediction
net.forward()
#pdb.set_trace()
out = net.blobs['score'].data[0].argmax(axis=0)

scipy.io.savemat('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn16s-color/out.mat', {'X': out})
#plt.imshow(out, cmap='gray')
plt.imshow(out)
plt.axis('off')
plt.savefig('testout2_170000.png')

FCN-8s Network Training:

The code changes mirror FCN-16s. Relative to the 16s net, upscore16 is replaced by a second 2x upsampling stage (upscore_pool4), a 1x1 prediction from pool3 (score_pool3) is fused into it, and a final 8x deconvolution (upscore8) produces the full-resolution score. The corresponding net.py:

import caffe
from caffe import layers as L, params as P
from caffe.coord_map import crop

def conv_relu(bottom, nout, ks=3, stride=1, pad=1):
    conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
        num_output=nout, pad=pad,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    return conv, L.ReLU(conv, in_place=True)

def max_pool(bottom, ks=2, stride=2):
    return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)

def fcn(split, tops):
    n = caffe.NetSpec()
    n.data, n.label = L.Python(module='nyud_layers',
        layer='NYUDSegDataLayer', ntop=2,
        param_str=str(dict(nyud_dir='../data/nyud', split=split,
            tops=tops, seed=1337)))

    # the base net
    n.conv1_1, n.relu1_1 = conv_relu(n.data, 64, pad=100)
    n.conv1_2, n.relu1_2 = conv_relu(n.relu1_1, 64)
    n.pool1 = max_pool(n.relu1_2)

    n.conv2_1, n.relu2_1 = conv_relu(n.pool1, 128)
    n.conv2_2, n.relu2_2 = conv_relu(n.relu2_1, 128)
    n.pool2 = max_pool(n.relu2_2)

    n.conv3_1, n.relu3_1 = conv_relu(n.pool2, 256)
    n.conv3_2, n.relu3_2 = conv_relu(n.relu3_1, 256)
    n.conv3_3, n.relu3_3 = conv_relu(n.relu3_2, 256)
    n.pool3 = max_pool(n.relu3_3)

    n.conv4_1, n.relu4_1 = conv_relu(n.pool3, 512)
    n.conv4_2, n.relu4_2 = conv_relu(n.relu4_1, 512)
    n.conv4_3, n.relu4_3 = conv_relu(n.relu4_2, 512)
    n.pool4 = max_pool(n.relu4_3)

    n.conv5_1, n.relu5_1 = conv_relu(n.pool4, 512)
    n.conv5_2, n.relu5_2 = conv_relu(n.relu5_1, 512)
    n.conv5_3, n.relu5_3 = conv_relu(n.relu5_2, 512)
    n.pool5 = max_pool(n.relu5_3)

    # fully conv
    n.fc6, n.relu6 = conv_relu(n.pool5, 4096, ks=7, pad=0)
    n.drop6 = L.Dropout(n.relu6, dropout_ratio=0.5, in_place=True)
    n.fc7, n.relu7 = conv_relu(n.drop6, 4096, ks=1, pad=0)
    n.drop7 = L.Dropout(n.relu7, dropout_ratio=0.5, in_place=True)

    n.score_fr = L.Convolution(n.drop7, num_output=40, kernel_size=1, pad=0,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    n.upscore2 = L.Deconvolution(n.score_fr,
        convolution_param=dict(num_output=40, kernel_size=4, stride=2,
            bias_term=False),
        param=[dict(lr_mult=0)])

    n.score_pool4 = L.Convolution(n.pool4, num_output=40, kernel_size=1, pad=0,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    n.score_pool4c = crop(n.score_pool4, n.upscore2)
    n.fuse_pool4 = L.Eltwise(n.upscore2, n.score_pool4c,
        operation=P.Eltwise.SUM)
    n.upscore_pool4 = L.Deconvolution(n.fuse_pool4,
        convolution_param=dict(num_output=40, kernel_size=4, stride=2,
            bias_term=False),
        param=[dict(lr_mult=0)])

    n.score_pool3 = L.Convolution(n.pool3, num_output=40, kernel_size=1, pad=0,
        param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)])
    n.score_pool3c = crop(n.score_pool3, n.upscore_pool4)
    n.fuse_pool3 = L.Eltwise(n.upscore_pool4, n.score_pool3c,
        operation=P.Eltwise.SUM)
    n.upscore8 = L.Deconvolution(n.fuse_pool3,
        convolution_param=dict(num_output=40, kernel_size=16, stride=8,
            bias_term=False),
        param=[dict(lr_mult=0)])

    n.score = crop(n.upscore8, n.data)
    n.loss = L.SoftmaxWithLoss(n.score, n.label,
        loss_param=dict(normalize=False, ignore_label=255))

    return n.to_proto()

def make_net():
    tops = ['color', 'label']
    with open('trainval.prototxt', 'w') as f:
        f.write(str(fcn('trainval', tops)))
    with open('test.prototxt', 'w') as f:
        f.write(str(fcn('test', tops)))

if __name__ == '__main__':
    make_net()

The solve.py file (initializing from the FCN-16s snapshot at iteration 170,000):

import caffe
import surgery, score

import numpy as np
import os
import sys

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except:
    pass

weights = '../nyud-fcn16s-color/snapshot/train_iter_170000.caffemodel'
#vgg_weights = '../ilsvrc-nets/VGG_ILSVRC_16_layers.caffemodel'
#vgg_proto = '../ilsvrc-nets/VGG_ILSVRC_16_layers_deploy.prototxt'

# init
#caffe.set_device(int(sys.argv[1]))
caffe.set_device(0)
caffe.set_mode_gpu()
#caffe.set_mode_cpu()

solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from(weights)
#solver = caffe.SGDSolver('solver.prototxt')
#vgg_net = caffe.Net(vgg_proto, vgg_weights, caffe.TRAIN)
#surgery.transplant(solver.net, vgg_net)
#del vgg_net

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# scoring
test = np.loadtxt('../data/nyud/test.txt', dtype=str)

for _ in range(50):
    solver.step(5000)
    score.seg_tests(solver, False, test, layer='score')

The test.py file:

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import sys
import caffe
#import cv  # unused legacy OpenCV import
import scipy.io
#import pydensecrf.densecrf as dcrf
#from pydensecrf.utils import compute_unary, create_pairwise_bilateral, create_pairwise_gaussian, softmax_to_unary
import pdb

# load image, switch to BGR, subtract mean, and make dims C x H x W for Caffe
im = Image.open('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn8s-color/test.png')
in_ = np.array(im, dtype=np.float32)
in_ = in_[:, :, ::-1]
in_ -= np.array((104.00698793, 116.66876762, 122.67891434))
in_ = in_.transpose((2, 0, 1))

# load net
net = caffe.Net('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn8s-color/deploy.prototxt',
                '/home/li/Downloads/8_55000.caffemodel',
                caffe.TEST)
# shape for input (data blob is N x C x H x W), set data
net.blobs['data'].reshape(1, *in_.shape)
net.blobs['data'].data[...] = in_
# run net and take argmax for prediction
net.forward()
#pdb.set_trace()
out = net.blobs['score'].data[0].argmax(axis=0)

scipy.io.savemat('/home/li/Documents/fcn.berkeleyvision.org/nyud-fcn8s-color/out.mat', {'X': out})
#plt.imshow(out, cmap='gray')
plt.imshow(out)
plt.axis('off')
plt.savefig('testout_8s_55000.png')

Result:

References:

FCN網路訓練 SIFTFLOW數據集 (FCN training on the SIFT Flow dataset)

shelhamer/fcn.berkeleyvision.org

Caffe學習系列(7):solver及其配置 - denny402 - 博客園

VGG_ILSVRC_16_layers_deploy.prototxt (gist.githubusercontent.com)
