khyc

Reputation: 23

TypeError: Image data of dtype object cannot be converted to float

I am using the code from the GAN tutorial on generating MNIST digits in TensorFlow.

(Link here: https://www.tensorflow.org/beta/tutorials/generative/dcgan)

When I ran it, I got:

Traceback (most recent call last):
  File "GAN_MNIST_tutorial.py", line 66, in <module>
    plt.imshow(np.array(generated_image[0, :, :, 0]), cmap='gray')
  File "C:\venv\lib\site-packages\matplotlib\pyplot.py", line 2677, in imshow
    None else {}), **kwargs)
  File "C:\venv\lib\site-packages\matplotlib\__init__.py", line 1589, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "C:\venv\lib\site-packages\matplotlib\cbook\deprecation.py", line 369, in wrapper
    return func(*args, **kwargs)
  File "C:\venv\lib\site-packages\matplotlib\cbook\deprecation.py", line 369, in wrapper
    return func(*args, **kwargs)
  File "C:\venv\lib\site-packages\matplotlib\axes\_axes.py", line 5660, in imshow
    im.set_data(X)
  File "C:\venv\lib\site-packages\matplotlib\image.py", line 678, in set_data
    "float".format(self._A.dtype))
TypeError: Image data of dtype object cannot be converted to float

Here is my code:

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf 
tf.__version__

import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))

    assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)

    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)

    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

generator = make_generator_model()

noise = tf.random.normal([1, 100])

generated_image = generator(noise, training=False)

plt.imshow(generated_image[0, :, :, 0], cmap='gray')

I've tried adding dtype='float32' to generated_image, and converting generated_image into a NumPy array, but to no avail. What is the problem?

Upvotes: 0

Views: 12451

Answers (2)

Vishnuvardhan Janapati

Reputation: 3278

I think you ran the code with TensorFlow 1.x. In TF 1.x (without eager execution enabled), operations (Ops) generate symbolic tensors that do not contain any values until you run those Ops in a session. The session executes those graph-mode symbolic tensors and returns concrete tensors (from which you can access NumPy arrays).

So I added a couple of lines at the end of your code to execute those symbolic tensors.

from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf 
tf.__version__

import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time

def make_generator_model():
    model = tf.keras.Sequential()
    model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())
    model.add(layers.Reshape((7, 7, 256)))

    assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size

    model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)

    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)

    model.add(layers.BatchNormalization())
    model.add(layers.LeakyReLU())

    model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)

    return model

generator = make_generator_model()

noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  generated_image = sess.run(generated_image)
plt.imshow(generated_image[0, :, :, 0], cmap='gray')
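To see why the error says "dtype object", here is a minimal sketch that reproduces the failure without TensorFlow. The SymbolicTensor class below is purely illustrative, standing in for a TF1 graph-mode tensor: NumPy cannot interpret it numerically, so wrapping it in np.array yields an object-dtype array, and the float cast that imshow performs internally then fails.

```python
import numpy as np

# Hypothetical stand-in for a TF1 graph-mode tensor with no concrete values.
class SymbolicTensor:
    pass

# NumPy cannot read numbers out of this object, so it falls back to dtype=object.
arr = np.array(SymbolicTensor())
print(arr.dtype)  # object

# matplotlib's imshow effectively attempts a float cast like this:
try:
    arr.astype(float)
except TypeError:
    print("Image data of dtype object cannot be converted to float")
```

Running the generated_image tensor through a session (as above) replaces the symbolic tensor with a real float32 array, so the cast succeeds.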

Upvotes: 3

Kevin Ling

Reputation: 550

This problem is resolved by upgrading from TF1 to TF2. I ran it on TF 1.14.x and it didn't work; after upgrading to TF 2.0, the code works.

Upvotes: -1
