A Simple RNN Implementation in TensorFlow

Most of the TensorFlow RNN tutorials I found online are either overly simplistic or overly complex, so I decided to try writing RNN code in TF myself, working from simple toward more advanced. This article is mainly based on the code from 'TensorFlow人工智慧引擎入門教程之九 RNN/LSTM循環神經網路長短期記憶網路使用', but that code targets an older API version and now raises errors in TF, so I modified it with the help of '解讀tensorflow之rnn'. This first version implements the simplest possible RNN model.

For the theory behind RNNs, see the references at the end.

Since this experiment was done in Jupyter, some of the images and output do not transfer well into Zhihu; a better-rendered version is available at: RNNStudy/simpleRNN.ipynb

The steps are recorded below:

Import the required packages

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import tensorflow as tf

#from tensorflow.nn import rnn, rnn_cell
import numpy as np

First, let's look at the input data. This experiment uses the MNIST dataset, which can be inspected as follows:

print "Input data:"
print mnist.train.images
print "Shape of the input data:"
print mnist.train.images.shape
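For reference, a small sketch of what the shape print should show (values from the standard TF MNIST split, not output copied from the notebook):

# mnist.train.images has shape (55000, 784): 55,000 training images,
# each flattened from 28x28 pixels into a 784-dimensional row vector.
print mnist.train.images.shape  # (55000, 784)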

The 784 in the shape corresponds to a 28×28-pixel image. Converting one sample back into an image and plotting it gives the figure below:

%pylab inline
%matplotlib inline
import pylab

im = mnist.train.images[1]
im = im.reshape(-1, 28)
pylab.imshow(im)
pylab.show()

If we want to train an RNN on this data, we should use the structure n_input = 28, n_steps = 28. The small experiment below previews the reshape/transpose operations we will later apply to the input batches; its expected output is sketched after the code.

a = np.asarray(range(20))
b = a.reshape(-1, 2, 2)
print "Original 1-D array:"
print a
print "Effect of reshape:"
print b
c = np.transpose(b, [1, 0, 2])
d = c.reshape(-1, 2)
print "--------c-----------"
print c
print "--------d-----------"
print d
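A minimal sketch of what the experiment above produces (shapes derived by hand; NumPy's exact print formatting will differ):

# a: (20,)        -> the values 0 .. 19
# b: (5, 2, 2)    -> a grouped into five 2x2 blocks
# c: (2, 5, 2)    -> transpose([1, 0, 2]) swaps the first two axes, so
#                    c[0] = [[0, 1], [4, 5], [8, 9], [12, 13], [16, 17]]
#                    (the first row of every block) and c[1] holds the second rows
# d: (10, 2)      -> c flattened back to 2 columns: all first rows, then all second rows
print a.shape, b.shape, c.shape, d.shape  # (20,) (5, 2, 2) (2, 5, 2) (10, 2)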

Define some model parameters

"""
To classify images using a recurrent neural network, we consider every image row as a sequence of pixels.
Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 steps for every sample.
"""

# Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 100

# Network Parameters
n_input = 28    # MNIST data input (img shape: 28*28)
n_steps = 28    # timesteps
n_hidden = 128  # hidden layer num of features
n_classes = 10  # MNIST total classes (0-9 digits)

For building the RNN, refer to the Neural Network section of the docs. We start by creating two placeholders; for basic usage see the official documentation: 基本使用 - TensorFlow 官方文檔中文版

# tf Graph input
x = tf.placeholder("float32", [None, n_steps, n_input])
# Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
y = tf.placeholder("float32", [None, n_classes])

# Define weights
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),  # Hidden layer weights
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

First create a cell. The parameter needed here is the number of hidden units, n_hidden. After creating the cell, initialize its state.

This is what later causes a bug, discussed at the end.

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.0, state_is_tuple=True)
_state = lstm_cell.zero_state(batch_size, tf.float32)
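As a side note (my own sketch, not part of the original notebook): with state_is_tuple=True, zero_state returns a pair of tensors whose first dimension is the batch_size passed in, which is exactly how the batch size gets baked into the graph:

# zero_state(batch_size, tf.float32) returns a (c, h) state tuple in which
# both tensors have shape (batch_size, n_hidden), i.e. (128, 128) here.
# Because batch_size is fixed in these tensors, the graph built below will
# only accept feeds with exactly 128 rows -- the bug mentioned above and
# revisited at the end of the post.
print _state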

To make the raw input match what the model expects, we apply a series of transformations; the results are shown below. The reshaping can be understood by analogy with the small experiment above.

a1 = tf.transpose(x, [1, 0, 2])
a2 = tf.reshape(a1, [-1, n_input])
a3 = tf.matmul(a2, weights['hidden']) + biases['hidden']
a4 = tf.split(0, n_steps, a3)
print "-----------------------"
print "a1:"
print a1
print "-----------------------"
print "a2:"
print a2
print "-----------------------"
print "a3:"
print a3
print "-----------------------"
print "a4:"
print a4
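For reference, the shapes these tensors should have (a hand-derived sketch, with batch denoting the batch dimension):

# x  : (batch, n_steps, n_input)   = (batch, 28, 28)
# a1 : (n_steps, batch, n_input)   = (28, batch, 28)   time-major layout
# a2 : (n_steps * batch, n_input)  = (28*batch, 28)    rows merged for the matmul
# a3 : (n_steps * batch, n_hidden) = (28*batch, 128)   after the hidden projection
# a4 : list of n_steps tensors, each of shape (batch, n_hidden) = 28 x (batch, 128),
#      which is the list-of-steps format that tf.nn.rnn expects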

These transformations are mainly there to match the interface of tf.nn.rnn, which takes a list with one tensor per time step. For the function itself, see the official docs (Neural Network) or the earlier RNN article 解讀tensorflow之rnn.

outputs, states = tf.nn.rnn(lstm_cell, a4, initial_state=_state)
print "outputs[-1]:"
print outputs[-1]
print "-----------------------"

a5 = tf.matmul(outputs[-1], weights['out']) + biases['out']
print "a5:"
print a5
print "-----------------------"
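Again as a hand-derived sketch: outputs is a list with one tensor per time step; we keep only the last one and project it to class scores.

# outputs     : list of 28 tensors, each (batch, n_hidden)
# outputs[-1] : (batch, n_hidden)   hidden state after the last image row
# a5          : (batch, n_classes)  unnormalized class scores (logits)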

Define the cost and optimize it with gradient descent.

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(a5, y))

#AdamOptimizer
#optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)  # Adam Optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)  # Gradient Descent Optimizer
correct_pred = tf.equal(tf.argmax(a5, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
init = tf.initialize_all_variables()

Now train the model. Note that because I am working in Jupyter, an interactive environment, I use sess = tf.InteractiveSession(); in an ordinary .py script this may not be what you want, and you can switch to tf.Session() instead (a sketch of that variant follows the training code below).

sess = tf.InteractiveSession()
sess.run(init)
step = 1
# Keep training until reach max iterations
while step * batch_size < training_iters:
    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
    # Reshape data to get 28 seq of 28 elements
    batch_xs = batch_xs.reshape((batch_size, n_steps, n_input))
    # Fit training using batch data
    sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
    if step % display_step == 0:
        # Calculate batch accuracy
        acc = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys})
        # Calculate batch loss
        loss = sess.run(cost, feed_dict={x: batch_xs, y: batch_ys})
        print "Iter " + str(step*batch_size) + ", Minibatch Loss= " + "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc)
    step += 1
print "Optimization Finished!"
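For completeness, a minimal sketch of the same loop for a plain .py script, using an explicit tf.Session (same graph and variable names as above; note that any evaluation would also have to happen inside the with block before the session closes):

# Equivalent non-interactive version: wrap the training loop in a managed session.
with tf.Session() as sess:
    sess.run(init)
    step = 1
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape((batch_size, n_steps, n_input))
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
        if step % display_step == 0:
            acc, loss = sess.run([accuracy, cost], feed_dict={x: batch_xs, y: batch_ys})
            print "Iter " + str(step * batch_size) + ", Minibatch Loss= " + "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc)
        step += 1
    print "Optimization Finished!"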

Test the model's accuracy

test_len = batch_size
test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
test_label = mnist.test.labels[:test_len]
# Evaluate model
correct_pred = tf.equal(tf.argmax(a5, 1), tf.argmax(y, 1))
print "Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_data, y: test_label})

There is a bug when testing accuracy: test_len must equal batch_size. This is because batch_size was used as a parameter when initializing the model state, so a5 always produces a matrix with batch_size rows; if test_len and batch_size differ, the accuracy computation raises an error. I have not yet found a simple fix, so this is left for a later post (one possible direction is sketched below).
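One possible direction (my own untested sketch, not part of the original notebook): instead of passing an explicit zero state built with a fixed batch_size, let tf.nn.rnn build the zero state itself from the actual batch dimension by passing dtype, so the graph no longer hard-codes 128 rows:

# Sketch: drop the fixed-size initial_state and give tf.nn.rnn a dtype instead,
# so it creates a zero state that matches whatever batch is actually fed in.
outputs, states = tf.nn.rnn(lstm_cell, a4, dtype=tf.float32)
# With this change, a5 = tf.matmul(outputs[-1], weights['out']) + biases['out']
# should accept test batches of any size, so test_len need not equal batch_size.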

References:

解讀tensorflow之rnn

RNN以及LSTM的介紹和公式梳理

TensorFlow人工智慧引擎入門教程之九 RNN/LSTM循環神經網路長短期記憶網路使用

LSTM模型理論總結(產生、發展和性能等)

解析Tensorflow官方PTB模型的demo

