agupta231

Reputation: 1171

Changing the scale of a tensor in tensorflow

Sorry if I messed up the title, I didn't know how to phrase this. Anyway, I have a tensor of values, and I want every element to fall in the range 0 - 255 (or 0 - 1 works too). However, I don't want all the values to add up to 1 or 255 like softmax does; I just want to scale the values down.

Is there any way to do this?

Thanks!

Upvotes: 24

Views: 33802

Answers (6)

whitebrow

Reputation: 2005

If you want the maximum value to be the effective upper bound of the 0-1 range and there's a meaningful zero, then use this:

import tensorflow as tf
tensor = tf.constant([0, 1, 5, 10])
tensor = tf.divide(tensor, tf.reduce_max(tensor))
tf.print(tensor)

would result in:

[0 0.1 0.5 1]

Upvotes: 0

Lawhatre

Reputation: 1450

Let the input be

X = tf.constant([[0.65,0.61, 0.59, 0.62, 0.6 ],[0.25,0.31, 0.89, 0.52, 0.6 ]])

We can define a scaling function

def rescale(X, a=0, b=1):
  repeat = X.shape[1]
  # Tile each row's min (and max) across the row so the shape matches X
  xmin = tf.repeat(tf.reshape(tf.math.reduce_min(X, axis=1), shape=[-1,1]), repeats=repeat, axis=1)
  xmax = tf.repeat(tf.reshape(tf.math.reduce_max(X, axis=1), shape=[-1,1]), repeats=repeat, axis=1)
  # Min-max normalize to [0, 1], then map to the target range [a, b]
  X = (X - xmin) / (xmax - xmin)
  return X * (b - a) + a

This outputs X in the range [0, 1]:

>> rescale(X)

<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[1.        , 0.333334  , 0.        , 0.5000005 , 0.16666749],
       [0.        , 0.09375001, 1.        , 0.42187497, 0.54687506]],
      dtype=float32)>

To scale in range [0, 255]

>> rescale(X, 0, 255) 
<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[255.      ,  85.00017 ,   0.      , 127.50012 ,  42.50021 ],
       [  0.      ,  23.906252, 255.      , 107.57812 , 139.45314 ]],
      dtype=float32)>
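As a side note, the tf.repeat/tf.reshape calls can be dropped: reduce_min/reduce_max with keepdims=True return (batch, 1) tensors that broadcast against X. A minimal sketch of that variant (an alternative, not the answer's original code):

```python
import tensorflow as tf

def rescale(X, a=0, b=1):
    # keepdims=True keeps a trailing axis of size 1, so xmin/xmax
    # broadcast against X without any tf.repeat.
    xmin = tf.reduce_min(X, axis=1, keepdims=True)
    xmax = tf.reduce_max(X, axis=1, keepdims=True)
    X = (X - xmin) / (xmax - xmin)
    return X * (b - a) + a

X = tf.constant([[0.65, 0.61, 0.59, 0.62, 0.6],
                 [0.25, 0.31, 0.89, 0.52, 0.6]])
print(rescale(X, 0, 255))
```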

Upvotes: 3

Carl24k

Reputation: 26

In some contexts, you need to normalize each image separately - for example adversarial datasets where each image has noise. The following normalizes each image according to its own min and max, assuming the inputs have typical size Batch x YDim x XDim x Channels:

    cast_input = tf.cast(inputs,dtype=tf.float32)     # e.g. MNIST is integer
    input_min = tf.reduce_min(cast_input,axis=[1,2])  # result B x C
    input_max = tf.reduce_max(cast_input,axis=[1,2])
    ex_min = tf.expand_dims(input_min,axis=1)         #  put back inner dimensions
    ex_max = tf.expand_dims(input_max,axis=1)
    ex_min = tf.expand_dims(ex_min,axis=1)            # one at a time - better way?
    ex_max = tf.expand_dims(ex_max,axis=1)            # Now Bx1x1xC
    input_range = tf.subtract(ex_max, ex_min)
    floored = tf.subtract(cast_input,ex_min)          # broadcast
    scale_input = tf.divide(floored,input_range)

I would like to expand the dimensions in one shot like you can in NumPy, but tf.expand_dims seems to only accept one dimension at a time - open to suggestions here. Thanks!
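One way to get both axes back in a single step (a sketch, not the answerer's original code) is to pass keepdims=True to the reductions themselves, so the reduced axes stay as size 1 and the result is already B x 1 x 1 x C:

```python
import tensorflow as tf

def normalize_per_image(inputs):
    cast_input = tf.cast(inputs, dtype=tf.float32)
    # keepdims=True leaves axes 1 and 2 as size 1: shape is B x 1 x 1 x C,
    # ready to broadcast, with no tf.expand_dims calls needed.
    input_min = tf.reduce_min(cast_input, axis=[1, 2], keepdims=True)
    input_max = tf.reduce_max(cast_input, axis=[1, 2], keepdims=True)
    return (cast_input - input_min) / (input_max - input_min)

# Hypothetical batch: 2 images of 4x4 pixels with 3 channels
batch = tf.random.uniform([2, 4, 4, 3], maxval=255.0)
print(normalize_per_image(batch))
```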

Upvotes: 1

Will Glück

Reputation: 1282

You are trying to normalize the data. A classic normalization formula is this one:

normalize_value = (value − min_value) / (max_value − min_value)

The implementation in TensorFlow looks like this:

tensor = tf.div(
   tf.subtract(
      tensor, 
      tf.reduce_min(tensor)
   ), 
   tf.subtract(
      tf.reduce_max(tensor), 
      tf.reduce_min(tensor)
   )
)

All the values of the tensor will be between 0 and 1.

IMPORTANT: make sure the tensor has float/double values, or the output tensor will contain just zeros and ones. If you have an integer tensor, call this first:

tensor = tf.to_float(tensor)

Update: as of TensorFlow 2, tf.to_float() and tf.div() are deprecated; use tf.cast() and tf.divide() (or the / operator) instead:

tensor = tf.cast(tensor, dtype=tf.float32) # or any other tf.dtype, that is precise enough
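Putting the update together, a TF2-style sketch of the same min-max formula (my restatement, not the original TF1 code):

```python
import tensorflow as tf

tensor = tf.constant([10, 20, 40])            # integer input
tensor = tf.cast(tensor, tf.float32)          # cast first to avoid an all-0/1 output
tensor = (tensor - tf.reduce_min(tensor)) / (
    tf.reduce_max(tensor) - tf.reduce_min(tensor))
print(tensor)  # every value now lies in [0, 1]
```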

Upvotes: 47

Lerner Zhang

Reputation: 7150

According to the feature scaling article on Wikipedia, you can also try scaling to unit length:

x' = x / ‖x‖  (divide each element by the Euclidean norm of the vector)

It can be implemented using this segment of code:

In [3]: a = tf.constant([2.0, 4.0, 6.0, 1.0, 0])
In [4]: b = a / tf.norm(a)
In [5]: b.eval()
Out[5]: array([ 0.26490647,  0.52981293,  0.79471946,  0.13245323,  0.        ], dtype=float32)
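In TensorFlow 2, where .eval() and sessions are gone, the same computation runs eagerly; tf.math.l2_normalize does the division by the Euclidean norm in one call (a sketch of the TF2 equivalent):

```python
import tensorflow as tf

a = tf.constant([2.0, 4.0, 6.0, 1.0, 0.0])
b = a / tf.norm(a)            # manual division by the L2 norm
c = tf.math.l2_normalize(a)   # equivalent one-call form
print(b)
```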

Upvotes: 6

Albert

Reputation: 68360

sigmoid(tensor) * 255 should do it.
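Note that sigmoid squashes values into (0, 1) nonlinearly rather than rescaling by the tensor's min/max, so relative spacing is not preserved; a quick sketch:

```python
import tensorflow as tf

t = tf.constant([-3.0, 0.0, 3.0])
scaled = tf.sigmoid(t) * 255  # each element ends up in the open interval (0, 255)
print(scaled)
```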

Upvotes: 5
