
Saving and Restoring Trained Model Parameters in TensorFlow

I had been wondering about this for a while: a model shouldn't have to be retrained from scratch every time you want a result from it; once trained, it should be reusable indefinitely.

TensorFlow provides the Saver class for exactly this: saving model parameters to disk and restoring them later. Its core is just two calls, saver.save() and saver.restore(), as the minimal sketch below shows; after that comes the full save-and-restore example from the TensorFlow-Examples project.
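A minimal, self-contained sketch of the Saver API (the variable, its shape, and the checkpoint path here are illustrative, not taken from the full example):

```python
import tensorflow as tf

# An illustrative variable; any variables in the graph are handled the same way
v = tf.Variable(tf.zeros([10]), name="v")
init = tf.global_variables_initializer()
saver = tf.train.Saver()  # by default covers all variables in the graph

# First session: initialize (training omitted) and save
with tf.Session() as sess:
    sess.run(init)
    save_path = saver.save(sess, "/tmp/minimal.ckpt")  # writes the checkpoint

# Second session: restore; restored variables need no re-initialization
with tf.Session() as sess:
    saver.restore(sess, "/tmp/minimal.ckpt")
    print(sess.run(v))
```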

Right now many projects still only share the model and the data, leaving the training to you; eventually they should all be usable straight out of the box.


```python
'''
Save and Restore a model using TensorFlow.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

import tensorflow as tf

# Parameters
learning_rate = 0.001
batch_size = 100
display_step = 1
model_path = "/tmp/model.ckpt"

# Network Parameters
n_hidden_1 = 256  # 1st layer number of features
n_hidden_2 = 256  # 2nd layer number of features
n_input = 784  # MNIST data input (img shape: 28*28)
n_classes = 10  # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])


# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

# Saver op to save and restore all the variables
saver = tf.train.Saver()

# Running first session
print("Starting 1st session...")
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)

    # Training cycle
    for epoch in range(3):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("First Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

    # Save model weights to disk
    save_path = saver.save(sess, model_path)
    print("Model saved in file: %s" % save_path)

# Running a new session
print("Starting 2nd session...")
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)

    # Restore model weights from previously saved model
    saver.restore(sess, model_path)
    print("Model restored from file: %s" % save_path)

    # Resume training
    for epoch in range(7):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=",
                  "{:.9f}".format(avg_cost))
    print("Second Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Accuracy:", accuracy.eval(
        {x: mnist.test.images, y: mnist.test.labels}))
```
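One limitation of the example above is that the second session rebuilds the whole graph in Python before calling saver.restore(). saver.save() also writes a .meta file containing the graph definition, so a separate script can restore without repeating the network code. A minimal sketch, assuming the checkpoint was saved to /tmp/model.ckpt as above ("Placeholder:0" is TF1's default name for the first unnamed placeholder; naming placeholders explicitly makes this lookup more robust):

```python
import tensorflow as tf

with tf.Session() as sess:
    # Rebuild the graph structure from the exported MetaGraph ...
    saver = tf.train.import_meta_graph("/tmp/model.ckpt.meta")
    # ... then load the trained variable values into it
    saver.restore(sess, "/tmp/model.ckpt")

    # Tensors are looked up by name; giving placeholders explicit names
    # (e.g. tf.placeholder(..., name="x")) avoids relying on defaults
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("Placeholder:0")  # assumed default name
```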

