Reputation: 1244
Suppose I am implementing a linear layer on some training data (a 3×5 matrix). The following code
import tensorflow as tf
import numpy as np
weights = tf.Variable(np.random.uniform(0.0, 1.0, 3))  # float64, NumPy's default dtype
bias = tf.Variable(0.0)  # float32
trainingData = np.array(np.arange(15).astype(float).reshape(3, 5))  # float64, shape (3, 5)
output = tf.expand_dims(weights, 0) @ trainingData + bias
produces a dtype mismatch error, complaining that it can't add a float32 tensor to a float64 tensor.
This can be fixed by instead changing the last line to say
tf.cast(tf.expand_dims(weights, 0) @ trainingData, tf.float32) + bias
OK, so it doesn't like adding a float32_ref to a float64, but it's OK with adding a float32_ref to a float32.
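For instance, a minimal variant where everything is float32 from the start runs for me without any cast (this is on TF 1.x; the names are just my own):
import tensorflow as tf
import numpy as np
# Everything below is float32, so the addition type-checks without a cast.
weights = tf.Variable(np.random.uniform(0.0, 1.0, 3).astype(np.float32))
bias = tf.Variable(0.0)  # float32 by default
trainingData = np.arange(15, dtype=np.float32).reshape(3, 5)
output = tf.expand_dims(weights, 0) @ trainingData + bias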
But I must be doing something wrong, because I'm doing something very simple, and it's throwing an error. (I'm new to TensorFlow.) I understand why it didn't like what I wrote, but what basic mistake am I making that's causing this problem?
I'm looking for an answer like "Oh, you should never initialize bias with a float like 0.0, because that will lead to typecasting errors more generally."
Upvotes: 1
Views: 100
Reputation: 4460
Oh, you should never use tf.Variable unless you have a very good reason. You should use tf.get_variable instead to avoid issues.
Oh, you should never use float64 as the data type unless you have a good reason. NumPy uses float64 by default, so you should write something like
W = tf.get_variable("w", initializer=np.random.randn().astype(np.float32))
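Putting both points together, a sketch of your example might look like this (TF 1.x; the variable names are just illustrative):
import tensorflow as tf
import numpy as np
# Same computation as in the question, but with tf.get_variable and
# float32 throughout, so no cast is needed before adding the bias.
trainingData = np.arange(15, dtype=np.float32).reshape(3, 5)
weights = tf.get_variable(
    "weights", initializer=np.random.uniform(0.0, 1.0, 3).astype(np.float32))
bias = tf.get_variable("bias", shape=[], initializer=tf.zeros_initializer())
output = tf.expand_dims(weights, 0) @ trainingData + bias  # all float32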
Upvotes: 1