ElkanaTheGreat

Reputation: 91

MNIST TensorFlow - can't figure out what's wrong

I've been trying for hours to figure out why this isn't working, but I'm getting nowhere. I'd really appreciate some help.

It's basically a copy of the tutorial on the TensorFlow website, with a few tweaks to use a local data set. But I only get 10% accuracy, which is the same as random guessing!

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf

df = pd.read_csv('train.csv')
yi = df['label']
df = df.drop('label', axis=1)

labels=[]
for i in range(len(yi)):
    # convert each digit label to a one-hot vector
    label = [0] * 10
    label[yi[i]] = 1
    labels.append(label)

labels = np.array(labels)
df = df.values

df_train, df_test, y_train, y_test = train_test_split(df, labels)

x = tf.placeholder('float', [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder('float', [None, 10])

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.Session()

init = tf.global_variables_initializer()
sess.run(init)

def next_batch(num, data, labels):

    #get batches for training 

    idx = np.arange(0 , len(data))
    np.random.shuffle(idx)
    idx = idx[:num]
    data_shuffle = [data[i] for i in idx]
    labels_shuffle = [labels[i] for i in idx]

    return np.asarray(data_shuffle), np.asarray(labels_shuffle)

for _ in range(1000):
    df_train0, y_train0 = next_batch(100, df_train, y_train)
    sess.run(train_step, feed_dict={x: df_train0, y_: y_train0})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
print(sess.run(accuracy, feed_dict={x: df_test, y_: y_test}))

Upvotes: 0

Views: 137

Answers (2)

Jarad

Reputation: 18953

I don't know why this helps improve accuracy, so if anyone can give a better answer, please do!

I changed:

y = tf.nn.softmax(tf.matmul(x, W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

to be:

y = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
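My best guess (unverified) is numerical stability: the fused op computes the loss from the logits directly, whereas the manual formula applies tf.log to the softmax output, which can underflow to exactly 0 and yield inf/NaN losses. A minimal NumPy sketch of that failure mode (hypothetical values, just for illustration):

import numpy as np

probs = np.array([[1.0, 0.0]])   # softmax output that underflowed to 0
labels = np.array([[0.0, 1.0]])  # one-hot: the true class got probability 0

# The manual formula hits log(0) -> -inf, so the loss is inf and its
# gradient is NaN; from then on training silently goes nowhere.
loss = -np.sum(labels * np.log(probs))  # emits a divide-by-zero warning
print(loss)  # inf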

Full Code Example:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
import tensorflow as tf

def next_batch(num, data, labels):
  '''get batches for training'''

  idx = np.arange(0 , len(data))
  np.random.shuffle(idx)
  idx = idx[:num]
  data_shuffle = [data[i] for i in idx]
  labels_shuffle = [labels[i] for i in idx]

  return np.asarray(data_shuffle), np.asarray(labels_shuffle)

df = pd.read_csv('train.csv')
df_X = df.iloc[:, 1:]
df_y = df['label']

y_one_hot = MultiLabelBinarizer().fit_transform(df_y.values.reshape(-1, 1))

df_train, df_test, y_train, y_test = train_test_split(df_X.values, y_one_hot)

x = tf.placeholder('float', [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
y_ = tf.placeholder('float', [None, 10])

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.Session()

init = tf.global_variables_initializer()
sess.run(init)

for _ in range(1000):
  df_train0, y_train0 = next_batch(100, df_train, y_train)
  sess.run(train_step, feed_dict={x: df_train0, y_: y_train0})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
print(sess.run(accuracy, feed_dict={x: df_test, y_: y_test}))

Resulting accuracy: around 0.88

Upvotes: 1

Manolo Santos

Reputation: 1913

Your problem is that you are initializing W with zeros, so there is no gradient to modify the weights and all the logits will be 0:

W = tf.Variable(tf.zeros([784, 10]))

You should initialize it randomly in order to break the symmetry:

W = tf.Variable(tf.random_normal([784, 10]))

Edit: it isn't actually necessary to randomize here, as the target logits break the symmetry on their own. It would be necessary if there were a hidden layer, though. The real problem seems to be the scale of the input: dividing the pixel values by 255 should solve it.
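For reference, a minimal sketch of that fix applied to the question's data loading (assuming the same train.csv layout; pd.get_dummies stands in for the manual one-hot loop):

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('train.csv')
# one-hot encode the digit labels
labels = pd.get_dummies(df['label']).values.astype(np.float32)
# scale pixel values from [0, 255] down to [0, 1] -- the actual fix
pixels = df.drop('label', axis=1).values / 255.0

df_train, df_test, y_train, y_test = train_test_split(pixels, labels)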

Upvotes: 1
