Knowledge Layout: Gradient Descent in TensorFlow

Environment used in this article:

CentOS 7.0, Python 3.5, TensorFlow 1.5

Model: y = wx + b
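For reference (this derivation is added here, not part of the original post): with the mean-squared-error loss that the TensorFlow code below uses (`tf.reduce_mean(tf.square(...))`), the gradients and the gradient-descent updates for this model are:

```latex
\begin{align*}
L(w, b) &= \frac{1}{M}\sum_{i=1}^{M}\bigl(w x_i + b - y_i\bigr)^2 \\
\frac{\partial L}{\partial w} &= \frac{2}{M}\sum_{i=1}^{M}\bigl(w x_i + b - y_i\bigr)\,x_i \\
\frac{\partial L}{\partial b} &= \frac{2}{M}\sum_{i=1}^{M}\bigl(w x_i + b - y_i\bigr) \\
w &\leftarrow w - \eta\,\frac{\partial L}{\partial w}, \qquad
b \leftarrow b - \eta\,\frac{\partial L}{\partial b}
\end{align*}
```

Here η is the learning rate (0.1 in the code below), and each update is computed over a mini-batch rather than the full sum.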

The raw data is generated as follows:

```python
# partial Python code
M = 100
x_data = np.linspace(-1.0, 1.0, M)
y_data = 2.0 * x_data + np.random.randn(*x_data.shape) * 0.33 + 10.0
```

The data is plotted below:

[Figure: scatter plot of the raw data]

Fitting with TensorFlow

```python
import tensorflow as tf
import numpy as np

M = 100
x_data = np.linspace(-1.0, 1.0, M)
y_data = 2.0 * x_data + np.random.randn(*x_data.shape) * 0.33 + 10.0
# y_data = 2.0 * x_data + 10.0
print(x_data)
print(y_data)

X_p = tf.placeholder(tf.float32, name="X_p")
Y_p = tf.placeholder(tf.float32, name="Y_p")
b = tf.Variable(0.0, name="b")
W = tf.Variable(0.0, name="W")

with tf.device("/cpu:0"):
    y_a = tf.add(tf.multiply(W, X_p), b)

loss = tf.reduce_mean(tf.square(Y_p - y_a))
optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
logs_path = "/tmp/mytask"
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
sess.run(init)

batch_size = 10
for step in range(0, 100):
    itk = 0
    while itk < M:
        _, tttw, tttb, tttlos = sess.run(
            [train, W, b, loss],
            feed_dict={X_p: x_data[itk:itk + batch_size],
                       Y_p: y_data[itk:itk + batch_size]})
        itk += batch_size
        if itk >= M:
            # print the current w, b, and loss once per epoch
            print(step, tttw, tttb, tttlos)

sess.close()
writer.close()
```
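For comparison, the same mini-batch gradient descent can be sketched in plain NumPy. This is an illustrative reimplementation added here, not part of the original article (it is also useful because `tf.placeholder` and `tf.train.GradientDescentOptimizer` were removed in TensorFlow 2.x); the seed is added for reproducibility:

```python
import numpy as np

np.random.seed(0)
M = 100
x_data = np.linspace(-1.0, 1.0, M)
y_data = 2.0 * x_data + np.random.randn(M) * 0.33 + 10.0

w, b = 0.0, 0.0          # same zero initialization as the TF code
lr, batch_size = 0.1, 10  # same learning rate and batch size
for step in range(100):
    for i in range(0, M, batch_size):
        xb = x_data[i:i + batch_size]
        yb = y_data[i:i + batch_size]
        err = (w * xb + b) - yb            # prediction error on this batch
        w -= lr * 2.0 * np.mean(err * xb)  # dL/dw for the MSE loss
        b -= lr * 2.0 * np.mean(err)       # dL/db for the MSE loss

print(w, b)  # should approach w ≈ 2, b ≈ 10
```

The two update lines are exactly the gradients that TensorFlow's autodiff computes for `tf.reduce_mean(tf.square(...))`, so the trajectory matches the graph-based version.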

Run the training.

Training yields:

y = 2.00436x + 10.01
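As a sanity check (added here, not in the original article), the same coefficients can be recovered in closed form with ordinary least squares, e.g. via `np.polyfit`; with a fixed seed the fit lands close to the true w = 2, b = 10:

```python
import numpy as np

np.random.seed(0)
M = 100
x_data = np.linspace(-1.0, 1.0, M)
y_data = 2.0 * x_data + np.random.randn(M) * 0.33 + 10.0

# degree-1 least-squares fit: returns (slope, intercept)
w, b = np.polyfit(x_data, y_data, 1)
print(w, b)
```

Gradient descent should converge to essentially the same values, since the MSE loss for a linear model is convex with a unique minimum at the least-squares solution.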

The fit is plotted below:

```r
# The figure above was drawn with R; the R code is as follows
testx <- c(-1.0,-0.97979798,-0.95959596,-0.93939394,-0.91919192,-0.8989899,-0.87878788,-0.85858586,-0.83838384,-0.81818182,
           -0.7979798,-0.77777778,-0.75757576,-0.73737374,-0.71717172,-0.6969697,-0.67676768,-0.65656566,-0.63636364,-0.61616162,
           -0.5959596,-0.57575758,-0.55555556,-0.53535354,-0.51515152,-0.49494949,-0.47474747,-0.45454545,-0.43434343,-0.41414141,
           -0.39393939,-0.37373737,-0.35353535,-0.33333333,-0.31313131,-0.29292929,-0.27272727,-0.25252525,-0.23232323,-0.21212121,
           -0.19191919,-0.17171717,-0.15151515,-0.13131313,-0.11111111,-0.09090909,-0.07070707,-0.05050505,-0.03030303,-0.01010101,
           0.01010101,0.03030303,0.05050505,0.07070707,0.09090909,0.11111111,0.13131313,0.15151515,0.17171717,0.19191919,
           0.21212121,0.23232323,0.25252525,0.27272727,0.29292929,0.31313131,0.33333333,0.35353535,0.37373737,0.39393939,
           0.41414141,0.43434343,0.45454545,0.47474747,0.49494949,0.51515152,0.53535354,0.55555556,0.57575758,0.5959596,
           0.61616162,0.63636364,0.65656566,0.67676768,0.6969697,0.71717172,0.73737374,0.75757576,0.77777778,0.7979798,
           0.81818182,0.83838384,0.85858586,0.87878788,0.8989899,0.91919192,0.93939394,0.95959596,0.97979798,1.0)
testy <- c(8.28458759,7.79629265,8.17362304,8.27213101,7.69840653,8.08413119,8.26397948,8.18440599,8.51797938,7.88470418,
           8.63187917,8.12530851,8.66959248,8.5950761,8.61479325,8.89172848,8.10881392,8.83420417,9.20803741,8.96552946,
           9.12184878,9.01081236,9.3172948,8.8636621,8.40602432,8.73580997,8.46409313,9.0357428,9.35182371,9.11227591,
           8.79920517,9.12357337,9.45124231,9.00803146,9.12859124,9.41245596,9.17902763,9.8411853,9.8383521,9.35003574,
           9.69524735,9.21022084,9.61968701,9.80953522,10.16345299,9.49373811,9.84567657,10.18017292,9.58573027,9.94295931,
           10.0085984,9.83045808,10.05767504,10.12830055,10.37431194,10.44830402,11.12232404,10.75952115,9.91965398,10.3830691,
           10.36633372,10.72959958,10.24180925,10.69414025,10.5483207,10.39357757,10.55100368,10.89397831,10.86793964,10.75825462,
           10.78171771,10.86481307,10.36753103,11.02888703,10.6672981,10.89903609,11.01425273,10.79040527,11.13896229,10.95034015,
           11.57064916,10.57261039,11.97065347,11.5783904,11.65392091,11.65558255,11.3809545,11.55613567,11.42849825,11.50935929,
           11.90990357,11.92703731,11.45075855,11.61631159,11.74878793,11.1548748,11.93497609,11.8041486,12.1716475,11.87808539)
plot(testx, testy)
testy1 <- 2*testx + 10
lines(testx, testy1)
```

