Reinforcement Learning (13): Logistic Regression



logistic regression

A logistic regression model can be built with TensorFlow as follows:

```python
import tensorflow as tf


class LogisticRegression:
    def __init__(self, feature_num, learning_rate):
        # Assumed constructor: stores the hyperparameters that _build uses.
        self.feature_num = feature_num
        self.learning_rate = learning_rate
        self._build()

    def _build(self):
        # Placeholders for the input features and the binary (0/1) labels.
        self.x = tf.placeholder(tf.float32, [None, self.feature_num])
        self.y = tf.placeholder(tf.float32, [None, 1])
        self.w = tf.Variable(tf.zeros([self.feature_num, 1]))
        self.b = tf.Variable(tf.zeros([1]))
        # Predicted probability: sigmoid of the linear score.
        self.y_ = tf.nn.sigmoid(tf.matmul(self.x, self.w) + self.b)
        # Binary cross-entropy computed from the predicted probabilities.
        self.loss = -tf.reduce_mean(tf.reduce_sum(
            self.y * tf.log(self.y_) + (1 - self.y) * tf.log(1 - self.y_),
            reduction_indices=1
        ))
        self.optimizer = tf.train.GradientDescentOptimizer(
            self.learning_rate).minimize(self.loss)
```
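For reference, the loss minimized above is the standard binary cross-entropy over the predicted probabilities $\hat{y}_i = \sigma(x_i^\top w + b)$:

$$
L(w, b) = -\frac{1}{N}\sum_{i=1}^{N}\Big[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \,\Big]
$$

where $N$ is the batch size; `GradientDescentOptimizer` then performs plain gradient descent on $L$.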

The results on the Iris dataset are as follows.
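For context, here is a minimal sketch of how such an Iris experiment can be set up with the class above. It is only a sketch: it assumes TensorFlow 1.x graph mode, scikit-learn for loading the data, and the constructor shown earlier; binarizing the labels into "setosa vs. the rest" and the number of gradient steps are illustrative choices, not taken from the original experiment.

```python
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris

# Illustrative setup: binarize Iris into "setosa vs. the rest" so the task
# is a binary classification problem suitable for logistic regression.
iris = load_iris()
X = iris.data.astype(np.float32)
y = (iris.target == 0).astype(np.float32).reshape(-1, 1)

model = LogisticRegression(feature_num=X.shape[1], learning_rate=0.1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):  # illustrative number of gradient steps
        sess.run(model.optimizer, feed_dict={model.x: X, model.y: y})
    probs = sess.run(model.y_, feed_dict={model.x: X})
    accuracy = ((probs > 0.5) == (y > 0.5)).mean()
    print("training accuracy:", accuracy)
```

Because the logistic-regression loss is convex, plain gradient descent from zero-initialized weights is sufficient here.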

tf.nn.sigmoid_cross_entropy_with_logits

Logistic regression can also be built with tf.nn.sigmoid_cross_entropy_with_logits, as follows:

```python
import tensorflow as tf


class LogisticRegression:
    def __init__(self, feature_num, learning_rate):
        # Assumed constructor, as above.
        self.feature_num = feature_num
        self.learning_rate = learning_rate
        self._build()

    def _build(self):
        self.x = tf.placeholder(tf.float32, [None, self.feature_num])
        self.y = tf.placeholder(tf.float32, [None, 1])
        self.w = tf.Variable(tf.zeros([self.feature_num, 1]))
        self.b = tf.Variable(tf.zeros([1]))
        # Raw linear scores; the sigmoid is applied inside the loss op.
        self.logits = tf.matmul(self.x, self.w) + self.b
        # Cross-entropy computed directly from the logits, which is
        # numerically more stable than taking log(sigmoid(...)) by hand.
        self.loss = tf.reduce_mean(tf.reduce_sum(
            tf.nn.sigmoid_cross_entropy_with_logits(
                logits=self.logits, labels=self.y),
            reduction_indices=1
        ))
        self.optimizer = tf.train.GradientDescentOptimizer(
            self.learning_rate).minimize(self.loss)
```

The results on the Iris dataset are again as follows.
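The practical difference between the two versions is numerical stability: the first model computes `tf.log(self.y_)` after the sigmoid, which underflows to `log(0)` once a prediction saturates, while `tf.nn.sigmoid_cross_entropy_with_logits` works directly on the logits using the equivalent, stable rewriting `max(x, 0) - x * z + log(1 + exp(-|x|))` described in the TensorFlow documentation. A small NumPy sketch (illustrative values) of that difference:

```python
import numpy as np

def manual_loss(x, z):
    # Cross-entropy computed from the sigmoid output, as in the first model.
    y_ = 1.0 / (1.0 + np.exp(-x))
    return -(z * np.log(y_) + (1 - z) * np.log(1 - y_))

def stable_loss(x, z):
    # The rewriting used by sigmoid_cross_entropy_with_logits.
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

x, z = 40.0, 0.0                 # large logit with a negative label
print(manual_loss(x, z))         # inf: log(1 - sigmoid(40)) underflows to log(0)
print(stable_loss(x, z))         # ~40.0, the correct cross-entropy value
```

The two expressions are mathematically identical; only the floating-point behaviour differs, which is why the logits-based loss is generally preferred.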

Summary

This post showed how to implement logistic regression with TensorFlow; the code for the post can be found at logistic regression.

