Knowledge Layout: A BP Neural Network in TensorFlow

Environment used in this article:

CentOS 7.0, Python 3.5, TensorFlow 1.5, Jupyter Notebook

Preface

I have been learning TensorFlow recently. Since most people use this framework from Python, I switched over from Java. Because I am not very familiar with the Python API yet, learning has been fairly slow, but that has not stopped me from making progress. In my previous article I used gradient descent to fit a curve; in this one I fit the same kind of data with a BP (backpropagation) neural network. This article does not cover the theory behind BP neural networks (for that, see forideal's 知識布局-神經網路-數學原理, i.e. Knowledge Layout: Neural Networks, Mathematical Principles).

1. The data

import matplotlib.pyplot as plt
import numpy as np

x_data = np.linspace(-1, 1, 100)[:, np.newaxis]   # reshape to a column vector
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + 0.5 + noise

%matplotlib inline
plt.plot(x_data, y_data, "ro")
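The [:, np.newaxis] indexing is what turns the 1-D array returned by np.linspace into a 100x1 column vector, which is the shape the placeholders in the next section expect. A quick sketch of the effect (not from the original post):

import numpy as np

a = np.linspace(-1, 1, 5)    # 1-D array, shape (5,)
b = a[:, np.newaxis]         # column vector, shape (5, 1)
print(a.shape, b.shape)      # (5,) (5, 1)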

2. Building the network

import tensorflow as tf

# Simulated data
x_data = np.linspace(-1, 1, 100)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + 0.5 + noise

# Placeholders: containers for the input data. The sample count is unknown (None),
# the feature count is 1; values are fed in at run time through feed_dict.
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# First (hidden) layer
weights1 = tf.Variable(tf.random_normal([1, 10]))
basis = tf.Variable(tf.zeros([1, 10]) + 0.1)
weights_plus_b = tf.matmul(xs, weights1) + basis
l1 = tf.nn.relu(weights_plus_b)

# Output layer
# l2 = addLayer(l1, 10, 1, activity_function=None)
weights2 = tf.Variable(tf.random_normal([10, 1]))
basis2 = tf.Variable(tf.zeros([1, 1]) + 0.1)
weights_plus_b2 = tf.matmul(l1, weights2) + basis2
l2 = weights_plus_b2

# Loss: mean squared error. reduce_sum needs the indices to sum over,
# since reduce ops work across dimensions.
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # plain gradient descent

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Training
for i in range(10000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if i % 500 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

# Prediction (this rebuilds the same graph as l1/l2, so running l2 directly would work too)
weights_plus_c = tf.matmul(xs, weights1) + basis
l3 = tf.nn.relu(weights_plus_c)
weights_plus_b3 = tf.matmul(l3, weights2) + basis2
result = sess.run(weights_plus_b3, feed_dict={xs: x_data})

# Per-sample squared error
subtract_op = tf.square(tf.subtract(result, ys)) / 2
error = sess.run(subtract_op, feed_dict={ys: y_data})

# Visualization
plt.plot(x_data, result)
plt.plot(x_data, y_data, "ro")
plt.plot(x_data, error)
sess.close()
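The commented-out line l2 = addLayer(l1, 10, 1, activity_function=None) hints that the two layers were originally built with a small helper that was later inlined. A minimal sketch of what such a helper could look like, reconstructed from the inlined layer code above (only the name and signature come from the comment; the body is my assumption):

def addLayer(inputs, in_size, out_size, activity_function=None):
    # One fully connected layer: random-normal weights and biases initialized
    # to 0.1, matching the inlined layer code above
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    basis = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    weights_plus_b = tf.matmul(inputs, weights) + basis
    if activity_function is None:
        return weights_plus_b
    return activity_function(weights_plus_b)

With it, the two layers collapse to l1 = addLayer(xs, 1, 10, activity_function=tf.nn.relu) and l2 = addLayer(l1, 10, 1).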

3. Summary

Although the theory behind BP neural networks is not exactly simple (if your math is good, you can probably work it out quickly), training one with Google's TensorFlow is really not a complicated affair.

4. Complete code

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

# Simulated data
x_data = np.linspace(-1, 1, 100)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) + 0.5 + noise

# Placeholders: containers for the input data. The sample count is unknown (None),
# the feature count is 1; values are fed in at run time through feed_dict.
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# First (hidden) layer
weights1 = tf.Variable(tf.random_normal([1, 10]))
basis = tf.Variable(tf.zeros([1, 10]) + 0.1)
weights_plus_b = tf.matmul(xs, weights1) + basis
l1 = tf.nn.relu(weights_plus_b)

# Output layer
# l2 = addLayer(l1, 10, 1, activity_function=None)
weights2 = tf.Variable(tf.random_normal([10, 1]))
basis2 = tf.Variable(tf.zeros([1, 1]) + 0.1)
weights_plus_b2 = tf.matmul(l1, weights2) + basis2
l2 = weights_plus_b2

# Loss: mean squared error. reduce_sum needs the indices to sum over,
# since reduce ops work across dimensions.
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # plain gradient descent

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Training
for i in range(10000):
    sess.run(train, feed_dict={xs: x_data, ys: y_data})
    if i % 500 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

# Prediction (this rebuilds the same graph as l1/l2, so running l2 directly would work too)
weights_plus_c = tf.matmul(xs, weights1) + basis
l3 = tf.nn.relu(weights_plus_c)
weights_plus_b3 = tf.matmul(l3, weights2) + basis2
result = sess.run(weights_plus_b3, feed_dict={xs: x_data})

# Per-sample squared error
subtract_op = tf.square(tf.subtract(result, ys)) / 2
error = sess.run(subtract_op, feed_dict={ys: y_data})

# Visualization
plt.plot(x_data, result)
plt.plot(x_data, y_data, "ro")
plt.plot(x_data, error)
sess.close()
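This post targets TensorFlow 1.5. On TensorFlow 2.x, tf.placeholder, tf.Session, and the tf.train.GradientDescentOptimizer used above no longer exist, so purely as a sketch for readers on newer versions (this is my assumption, not part of the original post), the same two-layer network might look like this in Keras:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

x_data = np.linspace(-1, 1, 100)[:, np.newaxis].astype(np.float32)
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) + 0.5 + noise

# Same architecture: 1 -> 10 (ReLU) -> 1 (linear), trained with SGD on MSE
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1), loss="mse")
model.fit(x_data, y_data, epochs=1000, verbose=0)
print(model.evaluate(x_data, y_data, verbose=0))  # final MSE on the training data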
