Training MNIST with TensorFlow and Visualizing the Training Process

Author: admin    Date: 2017-10-12 10:54:28
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("/tmp/tensorflow/mnist/input_data", one_hot=True)

# Create the model
x = tf.placeholder(tf.float32, [None, 784], name="x")
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.zeros([10]), name="b")
y = tf.nn.softmax(tf.matmul(x, W) + b, name="y")
y_ = tf.placeholder(tf.float32, [None, 10], name="y_")
cross_entropy = -tf.reduce_mean(y_ * tf.log(y))
tf.summary.scalar("loss", cross_entropy)  # add the loss to the summaries
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter("C:\\tmp\\tensorflow\\mnist\\log",
                                     sess.graph)

for step in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    # collect the merged summaries on the same batch and log them for TensorBoard
    result = sess.run(merged, feed_dict={x: batch_xs, y_: batch_ys})
    train_writer.add_summary(result, step)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                   y_: mnist.test.labels}))

Running this program, adapted from the official TensorFlow demo, prints 0.9104, i.e. a recognition accuracy of 91.04% on the test set.
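As a short aside on how the accuracy lines above work: tf.argmax picks the predicted and true class indices, tf.equal compares them, and the mean of the resulting 0/1 values is the fraction of correct predictions. A minimal standalone sketch with made-up values (for illustration only, not from the post):

import tensorflow as tf

# Two fake softmax outputs and their one-hot labels: the first prediction is
# correct, the second is not, so the expected accuracy is 0.5.
y = tf.constant([[0.1, 0.8, 0.1],
                 [0.6, 0.3, 0.1]])
y_ = tf.constant([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))  # [True, False]
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    print(sess.run(accuracy))  # prints 0.5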


Using TensorBoard to watch how the loss changes during training:
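To view the curve, TensorBoard is started from the command line with its --logdir flag pointed at the directory the FileWriter above writes to, then opened in a browser (port 6006 by default), for example:

tensorboard --logdir=C:\tmp\tensorflow\mnist\log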


The loss curve jumped up and down quite a lot, so I lowered the learning rate, changing

train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)

to

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

After this change, the descending loss curve became much smoother.
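One caveat about the loss itself: -tf.reduce_mean(y_ * tf.log(y)) takes the log of raw softmax outputs, which can spike or become NaN if a predicted probability ever underflows to zero. Below is a minimal sketch of the numerically stable fused loss that TensorFlow 1.x provides, using the same shapes as the script above; this is an alternative, not what the post itself uses:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784], name="x")
W = tf.Variable(tf.zeros([784, 10]), name="W")
b = tf.Variable(tf.zeros([10]), name="b")
y_ = tf.placeholder(tf.float32, [None, 10], name="y_")

# Keep the raw logits; softmax and log are fused inside the loss op,
# so log(0) is never evaluated directly.
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

Note that this version averages per-example cross-entropies, so its scale is roughly ten times larger than the element-wise mean used above, and the learning rate may need re-tuning.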
