Implementing Capsules in PyTorch

Runtime environment:

python 2.7, pytorch 0.4.0a0+6eca9e0, visdom 0.1.6.0 (for loss visualization)

Dataset: MNIST, using the decompressed raw files.

Reference paper: arxiv.org/pdf/1710.0982

Reference blog: 先讀懂CapsNet架構然後用TensorFlow實現:全面解析Hinton提出的Capsule - CSDN博客

Reference code: timomernick/pytorch-capsule

Code organization follows: PyTorch實戰指南

Preface

Capsule is a neural network architecture recently proposed by Hinton. Its aim is to replace the traditional practice of representing a feature with a single scalar by representing it with a vector. The output v of a capsule network therefore has shape [batch_size,10,16,1], where the norm of each vector v_j (shape [16,1]) is the probability P of the corresponding class.
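
To make this concrete, here is a rough sketch of how class scores would be read off such an output (the tensor `v` below is random, not from a trained network; the names are illustrative):

```python
import torch

# v stands in for the network output with shape [batch_size, 10, 16, 1]
v = torch.randn(4, 10, 16, 1)
probs = torch.sqrt((v ** 2).sum(dim=2)).squeeze(2)   # [4, 10]: one score (vector norm) per class
pred = probs.max(dim=1)[1]                           # predicted class = capsule with the largest norm
print(probs.shape, pred.shape)
```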

Basic concepts

loss function

The loss for each class is:

$$L_c = T_c \max(0,\, m^+ - \|v_c\|)^2 + \lambda\,(1 - T_c)\,\max(0,\, \|v_c\| - m^-)^2$$

The total loss sums the per-class losses over all classes; the code below additionally averages it over the batch:

$$\mathrm{loss} = \frac{1}{N}\sum_{n=1}^{N}\sum_{c} L_c^{(n)}$$

```python
def loss(labels, v):
    """
    input:
        labels: [batch_size, 10]
        v:      [batch_size, 10, 16, 1]
    """
    # shape before squeeze -> [batch_size, 10, 1, 1], after squeeze -> [batch_size, 10]
    v_norm = torch.sqrt(torch.sum(v ** 2, dim=2, keepdim=True)).squeeze()
    zero = torch.zeros([1]).double()
    lamda = torch.Tensor([0.5]).double()
    if conf.cuda:
        zero = zero.cuda()
        lamda = lamda.cuda()
    zero = Variable(zero)
    lamda = Variable(lamda)
    m_plus = 0.9
    m_minus = 0.1
    # shape -> [batch_size, 10]
    L = torch.max(zero, m_plus - v_norm) ** 2
    R = torch.max(zero, v_norm - m_minus) ** 2
    # equation 4 in the paper
    loss = torch.sum(labels * L + lamda * (1 - labels) * R, dim=1)  # shape -> [batch_size]
    loss = loss.mean()  # shape -> [1]
    return loss
```

activation function (squashing)

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2}\,\frac{s_j}{\|s_j\|}$$

It follows that $p(\hat{y} = j) = \|v_j\| < 1$.

```python
def squash(x, dim):
    # squashing is applied to each capsule; dim selects the capsule dimension
    sum_sq = torch.sum(x ** 2, dim=dim, keepdim=True)
    sum_sqrt = torch.sqrt(sum_sq)
    return (sum_sq / (1.0 + sum_sq)) * x / sum_sqrt
```
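
A quick numerical check of the norm-below-one property (a small sketch that assumes the `squash` function defined above and a random input):

```python
import torch

x = torch.randn(4, 1152, 8)                      # random capsule inputs
v = squash(x, dim=2)                             # squash along the capsule dimension
print(torch.sqrt((v ** 2).sum(dim=2)).max())     # the largest norm is strictly below 1
```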

Network architecture

The architecture below is walked through mainly in terms of how the tensor shapes change (I never found a good diagramming tool).

1.conv

The input is input([batch_size,1,28,28]).

With kernel([1,256,9,9]) and stride=1,

the output of the first conv layer is conv_output([batch_size,256,20,20]).
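
A minimal shape check of this layer (the name `conv1` and the dummy batch are illustrative, not the author's code):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(in_channels=1, out_channels=256, kernel_size=9, stride=1)
x = torch.randn(4, 1, 28, 28)     # a dummy MNIST-sized batch
print(conv1(x).shape)             # -> torch.Size([4, 256, 20, 20])
```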

2.capsconv

This is again a convolution. The input is input([batch_size,256,20,20]).

With kernel([256,32,9,9]) and stride=2, the convolution result is conv_result([batch_size,32,6,6]).

**Repeat the operation above 8 times, completely independently, and store the results as list_conv_result[conv_1_result,...,conv_8_result].**

Then stack these 8 results to get capsconv_stack([batch_size,8,32,6,6]); merge the last three dimensions into one to get capsconv_output([batch_size,8,1152]); swap dimensions 1 and 2 to get capsconv_output([batch_size,1152,8]); finally apply the activation (squash) once.

```python
#coding:utf-8
#author:selous
import torch
import torch.nn as nn
import utils


class ConvUnit(nn.Module):
    def __init__(self, in_channels):
        super(ConvUnit, self).__init__()
        self.conv0 = nn.Conv2d(in_channels=in_channels,
                               out_channels=32,  # fixme constant
                               kernel_size=9,    # fixme constant
                               stride=2,         # fixme constant
                               bias=True)

    def forward(self, x):
        return self.conv0(x)


class CapsConv(nn.Module):
    def __init__(self, in_channels=256, out_dim=8):
        super(CapsConv, self).__init__()
        self.in_channels = in_channels
        self.out_dim = out_dim

        def create_conv_unit(unit_idx):
            unit = ConvUnit(in_channels=in_channels)
            self.add_module("unit_" + str(unit_idx), unit)
            return unit

        # define the 8 independent convolution units
        self.conv = [create_conv_unit(i) for i in range(self.out_dim)]

    def forward(self, x):
        # input x with shape -> [batch_size, in_channels, height, width]
        # each unit outputs shape -> [batch_size, 32, 6, 6]
        x = [self.conv[i](x) for i in range(self.out_dim)]
        # stacked shape -> [batch_size, 8, 32, 6, 6]
        x = torch.stack(x, dim=1)
        # reshape and transpose -> [batch_size, 1152, 8]
        x = x.view(x.size(0), self.out_dim, -1).transpose(1, 2)
        # squash along the capsule dimension, shape stays [batch_size, 1152, 8]
        x = utils.squash(x, dim=2)
        return x
```

3.capsnet

The input is input[batch_size,1152,8]. First stack it into [batch_size,1152,10,1,8], then build a fully connected weight W[1,1152,10,8,16] and tile it to [batch_size,1152,10,8,16]. Matrix-multiplying the two gives mut_result[batch_size,1152,10,1,16].
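
A shape walk-through of this prediction step with random tensors (the variable names mirror the description above and are illustrative only):

```python
import torch

batch_size, in_features, out_features, in_dim, out_dim = 2, 1152, 10, 8, 16
x = torch.randn(batch_size, in_features, in_dim)                 # [2, 1152, 8]
x = torch.stack([x] * out_features, dim=2).unsqueeze(3)          # [2, 1152, 10, 1, 8]
W = torch.randn(1, in_features, out_features, in_dim, out_dim)   # [1, 1152, 10, 8, 16]
W = torch.cat([W] * batch_size, dim=0)                           # [2, 1152, 10, 8, 16]
mut_result = torch.matmul(x, W)                                  # [2, 1152, 10, 1, 16]
print(mut_result.shape)
```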

Next comes the dynamic routing algorithm, which determines a coupling coefficient c[batch_size,1152,10,1,1]. Setting the routing aside for a moment and assuming c has already been computed, multiply c[batch_size,1152,10,1,1] element-wise with mut_result[batch_size,1152,10,1,16] and sum over dimension 1 to get capsnet_result[batch_size,1,10,1,16].

Then apply the activation function once; after squeezing dimension 1 and transposing the last two dimensions, the output is capsnet_result[batch_size,10,16,1].

4.Dynamic Routing

First initialize b([1,1152,10,1]). Each routing iteration then works as follows: take the softmax of b over dimension 2 (the output-capsule dimension), stack the result into b_stack[batch_size,1152,10,1], and add a trailing dimension to get c[batch_size,1152,10,1,1]. Use the process described above to compute capsnet_result[batch_size,1,10,1,16]. Then use this result to update b: expand capsnet_result to [batch_size,1152,10,1,16], transpose it to [batch_size,1152,10,16,1], matrix-multiply it with mut_result[batch_size,1152,10,1,16] to get [batch_size,1152,10,1,1], average over dimension 0, squeeze to [1,1152,10,1], and add the result to b.

Iterate this loop three times to obtain the final coupling coefficients and output.
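
For reference, these steps correspond to the routing-by-agreement update in the paper (written in the paper's notation, where i indexes the 1152 input capsules and j the 10 output capsules):

$$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}, \qquad s_j = \sum_i c_{ij}\,\hat{u}_{j|i}, \qquad v_j = \mathrm{squash}(s_j), \qquad b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i}\cdot v_j$$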

```python
#coding:utf-8
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import utils
import config

conf = config.DefaultConf()


class CapsNet(nn.Module):
    """
    input : a group of capsules     -> shape [batch_size, 1152 (feature_num), 8 (in_dim)]
    output: a group of new capsules -> shape [batch_size, 10 (feature_num), 16 (out_dim)]
    """
    def __init__(self, in_features, out_features, in_dim, out_dim):
        super(CapsNet, self).__init__()
        # number of output features, 10
        self.out_features = out_features
        # number of input features, 1152
        self.in_features = in_features
        # dimension of input capsules
        self.in_dim = in_dim
        # dimension of output capsules
        self.out_dim = out_dim
        # fully connected parameter W with shape [1 (shared across the batch), 1152, 10, 8, 16]
        self.W = nn.Parameter(torch.randn(1, self.in_features, self.out_features, in_dim, out_dim))

    def forward(self, x):
        # input x, shape = [batch_size, in_features, in_dim] = [batch_size, 1152, 8]
        # (batch, in_features, in_dim) -> (batch, in_features, out_features, 1, in_dim)
        x = torch.stack([x] * self.out_features, dim=2).unsqueeze(3)
        W = torch.cat([self.W] * conf.batch_size, dim=0)
        # u_hat shape -> (batch_size, in_features, out_features, 1, out_dim) = (batch, 1152, 10, 1, 16)
        u_hat = torch.matmul(x, W)
        # b is used to generate the coupling coefficients c, shape -> [1, 1152, 10, 1]
        b = torch.zeros([1, self.in_features, self.out_features, 1]).double()
        if conf.cuda:
            b = b.cuda()
        b = Variable(b)
        for i in range(3):
            c = F.softmax(b, dim=2)
            # c shape -> [batch_size, 1152, 10, 1, 1]
            c = torch.cat([c] * conf.batch_size, dim=0).unsqueeze(dim=4)
            # s shape -> [batch_size, 1, 10, 1, 16]
            s = (u_hat * c).sum(dim=1, keepdim=True)
            # v shape -> [batch_size, 1, 10, 1, 16]
            v = utils.squash(s, dim=-1)
            v_1 = torch.cat([v] * self.in_features, dim=1)
            # (batch,1152,10,1,16) matmul (batch,1152,10,16,1) -> (batch,1152,10,1,1)
            # squeeze, then mean over the batch -> (1, 1152, 10, 1)
            update_b = torch.matmul(u_hat, v_1.transpose(3, 4)).squeeze(dim=4).mean(dim=0, keepdim=True)
            b = b + update_b
        # return shape -> [batch_size, 10, 16, 1]
        return v.squeeze(1).transpose(2, 3)
```

The rest of the supporting code is available on my GitHub: selous123/pytorch-example

