Thorsten

Reputation: 3

Preparing my own picture data for TensorFlow

I am new to TensorFlow and neural networks. I experimented a bit with MNIST, but now I want to use my own pictures to build my own network. I made black pictures with white dots on them, and I want to train the network to count the dots.

My problem is getting my image data into TensorFlow. I googled a lot, found some information, and put it together into my own code, but it didn't work as I expected. Do you have a tip for getting my picture data into TensorFlow? Here is the error I get, followed by my code:

    Traceback (most recent call last):
      File "C:...", line 490, in apply_op
        preferred_dtype=default_dtype)
      File "C:...", line 741, in internal_convert_to_tensor
        ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
      File "C:...", line 614, in _TensorTensorConversionFunction
        % (dtype.name, t.dtype.name, str(t)))
    ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("Variable_2/read:0", shape=(5000, 100), dtype=float32)'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "...", line 83, in <module>
        trainnetwork(x)
      File "...", line 74, in trainnetwork
        prediction = neuralnetworkmodel(x)
      File "...", line 69, in neuralnetworkmodel
        output = tf.matmul (11,output_layer['weights']) + output_layer['biases']
      File "...", line 1816, in matmul
        a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
      File "C:...", line 1217, in _mat_mul
        transpose_b=transpose_b, name=name)
      File "C:...", line 526, in apply_op
        inferred_from[input_arg.type_attr]))
    TypeError: Input 'b' of 'MatMul' Op has type float32 that does not match type int32 of argument 'a'.

    import numpy as np
    import glob
    import scipy.ndimage
    import tensorflow as tf


    n_nodes_hl = 5000
    n_classes = 101

    x=tf.placeholder(tf.float32, [None, 22400])
    y=tf.placeholder(tf.int32,[None, n_classes])

    label_dataset = []
    img_dataset = []
    for image_file_name in glob.glob("C:\\Users\\Thorsten\\Desktop\\InteSystem\\Seg\\Baum???_sw_*.png"):
        print("loading ... ", image_file_name)
        # derive the correct label from the file name
        label = int(image_file_name[-6:-4])

        # load image data from png files into an array
        img_array = scipy.ndimage.imread(image_file_name, flatten=True)

        img_dataset.append(img_array)

        # One Hot Encoding
        label += 1
        i = 0
        label_data = []
        while i < 101:
            if i == label:
                label_data.extend([1])

            else:
                label_data.extend([0])
            i += 1

        label_dataset.append(label_data)  

    label_dataset = np.asarray((label_dataset), dtype=float)
    img_dataset = np.asarray((img_dataset), dtype=float)
    # feed_x = {x: img_dataset}
    # feed_y = {y: label_dataset}

    # print(feed_x)
    # print(feed_y)



    def neuralnetworkmodel(data):

        hidden_1_layer = {'weights' : tf.Variable(tf.random_normal([22400, n_nodes_hl])),
                           'biases' : tf.Variable(tf.random_normal([n_nodes_hl]))}

        output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl, n_classes])),
                         'biases': tf.Variable(tf.random_normal([n_classes]))}

        l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']),hidden_1_layer['biases'])
        l1 = tf.nn.relu(l1)

        output = tf.matmul (l1,output_layer['weights']) + output_layer['biases']

        return output

    def trainnetwork(x):
        prediction = neuralnetworkmodel(x)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y))
        optimizer = tf.train.GradientDescentOptimizer(.5).minimize(cost)
        runs = 10
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            # run the optimizer over the full dataset 'runs' times
            for _ in range(runs):
                sess.run([optimizer, cost], feed_dict={x: img_dataset, y: label_dataset})


    trainnetwork(x)
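One note on feeding the images themselves: scipy.ndimage.imread with flatten=True returns a 2-D greyscale array per image, so img_dataset ends up with shape (num_images, height, width), while the placeholder x expects flattened rows of 22400 values. A minimal sketch of the missing reshape, assuming each PNG really has height * width = 22400 pixels:

    # img_dataset has shape (num_images, height, width) after np.asarray;
    # flatten each image into one row of 22400 values so it matches the
    # placeholder x = tf.placeholder(tf.float32, [None, 22400])
    img_dataset = img_dataset.reshape(-1, 22400).astype(np.float32)

    # optional: scale raw pixel values from [0, 255] into [0, 1]
    img_dataset = img_dataset / 255.0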

Upvotes: 0

Views: 107

Answers (1)

Akshay Agrawal

Reputation: 937

It looks like the problem is in the line

    output = tf.matmul (11,output_layer['weights']) + output_layer['biases']

Replace it with

    output = tf.matmul (l1,output_layer['weights']) + output_layer['biases']

and the particular error you're seeing should go away.
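For context on why the typo produces this pair of errors: TensorFlow turns the Python integer literal 11 into an int32 tensor for argument 'a' of MatMul, then tries (and fails) to convert the float32 weight variable to int32 so the two arguments match. A minimal sketch that reproduces the mismatch, assuming TensorFlow 1.x as in the question:

    import tensorflow as tf

    w = tf.Variable(tf.random_normal([5000, 101]))  # float32, like output_layer['weights']

    try:
        # the integer literal becomes an int32 tensor, which clashes with
        # the float32 weights and raises the TypeError from the question
        bad = tf.matmul(11, w)
    except TypeError as err:
        print(err)

With l1 in place of 11, both arguments are float32 tensors and the op builds cleanly. (Separately, note that y is declared as tf.int32 while tf.nn.softmax_cross_entropy_with_logits expects float labels matching the logits, so a similar dtype error may surface there next.)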

Upvotes: 1
