jason

Reputation: 2106

TypeError: unsupported operand type(s) for /: 'Dimension' and 'float' in TensorFlow

I have code implementing a deep learning model in TensorFlow, using the Keras module:

self.n_clusters = 10
self.alpha = 0.01
clustering_layer = ClusteringLayer(self.n_clusters, alpha=self.alpha, name='clustering')(hidden)

My error comes mainly from the line above, so I attach it here.

It gives me the following error:

--> 118         clustering_layer = ClusteringLayer(self.n_clusters, alpha=self.alpha, name='clustering')(hidden)
    119         self.model = Model(inputs=self.autoencoder.input, outputs=clustering_layer)
    120 

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in __call__(self, inputs, *args, **kwargs)
    694         if all(hasattr(x, 'get_shape') for x in input_list):
    695           input_shapes = nest.map_structure(lambda x: x.get_shape(), inputs)
--> 696         self.build(input_shapes)
    697 
    698       # Check input assumptions set after layer building, e.g. input shape.

<ipython-input-13-8890754cc8a3> in build(self, input_shape)
     73         input_dim = input_shape[1]
     74         self.input_spec = InputSpec(dtype=K.floatx(), shape=(None, input_dim))
---> 75         self.clusters = self.add_weight(shape=(self.n_clusters, input_dim), initializer='glorot_uniform', name='clusters')
     76         if self.initial_weights is not None:
     77             self.set_weights(self.initial_weights)

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint, partitioner, use_resource, getter)
    532         trainable=trainable and self.trainable,
    533         partitioner=partitioner,
--> 534         use_resource=use_resource)
    535 
    536     if regularizer is not None:

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/training/checkpointable/base.pyc in _add_variable_with_custom_getter(self, name, shape, dtype, initializer, getter, overwrite, **kwargs_for_getter)
    495     new_variable = getter(
    496         name=name, shape=shape, dtype=dtype, initializer=initializer,
--> 497         **kwargs_for_getter)
    498 
    499     # If we set an initializer and the variable processed it, tracking will not

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in make_variable(name, shape, dtype, initializer, partition_info, trainable, caching_device, validate_shape, constraint, use_resource, partitioner)
   1871       validate_shape=validate_shape,
   1872       constraint=constraint,
-> 1873       use_resource=use_resource)
   1874   return v

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.pyc in variable(initial_value, trainable, collections, validate_shape, caching_device, name, dtype, constraint, use_resource)
   2232                          name=name, dtype=dtype,
   2233                          constraint=constraint,
-> 2234                          use_resource=use_resource)
   2235 
   2236 

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.pyc in <lambda>(**kwargs)
   2222              constraint=None,
   2223              use_resource=None):
-> 2224   previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
   2225   for getter in ops.get_default_graph()._variable_creator_stack:  # pylint: disable=protected-access
   2226     previous_getter = _make_getter(getter, previous_getter)

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.pyc in default_variable_creator(next_creator, **kwargs)
   2194         collections=collections, validate_shape=validate_shape,
   2195         caching_device=caching_device, name=name, dtype=dtype,
-> 2196         constraint=constraint)
   2197   elif not use_resource and context.executing_eagerly():
   2198     raise RuntimeError(

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/resource_variable_ops.pyc in __init__(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint)
    310           name=name,
    311           dtype=dtype,
--> 312           constraint=constraint)
    313 
    314   # pylint: disable=unused-argument

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/resource_variable_ops.pyc in _init_from_args(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, constraint)
    415               with ops.name_scope("Initializer"), ops.device(None):
    416                 initial_value = ops.convert_to_tensor(
--> 417                     initial_value(), name="initial_value", dtype=dtype)
    418               self._handle = _eager_safe_variable_handle(
    419                   shape=initial_value.get_shape(),

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in <lambda>()
   1858         initializer = initializer(dtype=dtype)
   1859       init_val = lambda: initializer(  # pylint: disable=g-long-lambda
-> 1860           shape, dtype=dtype, partition_info=partition_info)
   1861       variable_dtype = dtype.base_dtype
   1862   if use_resource is None:

/usr/local/python/2.7-conda5.2/lib/python2.7/site-packages/tensorflow/python/ops/init_ops.pyc in __call__(self, shape, dtype, partition_info)
    466       scale /= max(1., fan_out)
    467     else:
--> 468       scale /= max(1., (fan_in + fan_out) / 2.)
    469     if self.distribution == "normal":
    470       stddev = math.sqrt(scale)

TypeError: unsupported operand type(s) for /: 'Dimension' and 'float'

Line 118 is the location in my code. The error seems to occur inside the tensorflow package; it gives me TypeError: unsupported operand type(s) for /: 'Dimension' and 'float'. I tried both Python 2.7 and Python 3.6, with the same problem.

How can I deal with this situation?

A very similar situation is reported on GitHub; there the error could be addressed in the user's own code, but mine seems to happen in init_ops.pyc.

Upvotes: 2

Views: 5433

Answers (2)

pitfall

Reputation: 2621

I encountered a very similar issue -- I have a customized Layer, which

  • works perfectly fine in keras (keras.version=2.2.4) with the tf backend (tf.version=1.14)
  • raises the same error message as described in the question when I switch to tf.keras (tf.version=1.14)

What @Alexandre Passos suggested works, but it took me a while to figure out exactly what he meant, so I share my experience below to help future readers.

Below is an example of a customized layer:


from tensorflow.keras.layers import Layer

class CustomizedLayer(Layer):
    def __init__(self, num_feats, **kwargs):
        self.num_feats = num_feats
        super(CustomizedLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        weight_shape = (input_shape[-1], self.num_feats)  # <----- problem
        self.weight = self.add_weight(shape=weight_shape,
                                      ...)
        super(CustomizedLayer, self).build(input_shape)

    def call(self, inputs):
        ...

The issue happens in tf.keras because the entries of the input_shape passed to build are of type tf.Dimension rather than plain int, which is what add_weight requires and what keras passes.

For this reason, the same layer definition works in keras but not tf.keras.

Consequently, all you need to do is cast the tf.Dimension object to int, i.e. rewrite the weight_shape line as

weight_shape = (int(input_shape[-1]), self.num_feats)
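More generally, you can normalize the whole shape up front so the same build code works under both keras and tf.keras. A minimal, TensorFlow-free sketch (the helper name `as_int_shape` is my own, not a library function; it relies only on Dimension-like objects supporting `int()`):

```python
def as_int_shape(shape):
    """Cast each shape entry (possibly a Dimension-like object) to int; keep None."""
    return tuple(None if d is None else int(d) for d in shape)

# A plain tuple of ints (with None for unknown dims) is safe to pass to add_weight.
print(as_int_shape((None, 64)))  # (None, 64)
```

Inside build, you would then write `input_shape = as_int_shape(input_shape)` once at the top instead of casting individual entries.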

Upvotes: 3

Alexandre Passos

Reputation: 5206

The issue here seems to be that ClusteringLayer is passing a tf.Dimension object in the shape used to initialize the weights. Cast it to a plain int (e.g. int(self.n_clusters), and likewise any entry taken from input_shape) to bypass the Dimension object issues.
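To see why the cast helps: the Glorot initializer computes `scale /= max(1., (fan_in + fan_out) / 2.)` with fan values taken from the weight shape, so a shape entry that does not support float division blows up on exactly that line. A TensorFlow-free mock (the `Dim` class below is hypothetical, standing in for TF 1.x's Dimension) reproduces the failure mode and the fix:

```python
class Dim:
    """Mock of a Dimension-like object: knows its value but does not define '/'."""
    def __init__(self, value):
        self.value = value
    def __int__(self):
        return self.value
    def __add__(self, other):  # adding dimensions was allowed
        return Dim(self.value + int(other))

fan_in, fan_out = Dim(10), Dim(64)
try:
    scale = (fan_in + fan_out) / 2.  # mirrors the failing line in init_ops
except TypeError as e:
    print(e)  # unsupported operand type(s) for /: 'Dim' and 'float'

scale = (int(fan_in) + int(fan_out)) / 2.  # the fix: cast to int first
print(scale)  # 37.0
```

Casting at the source (when the shape tuple is built) means every downstream computation, including the initializer, sees ordinary Python ints.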

Upvotes: 6
