gokul_uf

Reputation: 760

Using float64 in tf.layers

I know that I can set the data type of placeholders and tensors using the dtype=tf.<DTYPE> argument.

Is there a way to explicitly force the weights inside tf.layers (say tf.layers.conv2d) to be float64, or do the layer's weights always take the exact data type of their inputs?

I am trying to train with the following combinations of settings:

  1. Input: float32, weights: float32
  2. Input: float32, weights: float64
  3. Input: float64, weights: float32
  4. Input: float64, weights: float64

I would like to know whether the above combinations are possible, and how to explicitly prevent TensorFlow from changing the data type of one to match the other.
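For concreteness, this is roughly what I have in mind (a sketch of case 2, where the input is float32 but I would want the kernel to be float64; the shapes and layer arguments are just placeholders):

```python
import tensorflow as tf  # TF 1.x style API, as with tf.layers

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])  # input dtype I control

# The kernel created by tf.layers.conv2d ends up float32 here, matching x.
# Is there an argument that would force it to float64 without changing x?
y = tf.layers.conv2d(x, filters=8, kernel_size=3, padding="same")
```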

Upvotes: 0

Views: 715

Answers (1)

P-Gn

Reputation: 24581

I don't think you can do that efficiently. Most operations, such as tf.matmul, require their operands to have the same type, so you will end up upcasting your tf.float32 tensors to tf.float64 whenever you want the computation to happen at that precision.
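A minimal sketch of what that looks like in practice (assuming the TF 1.x tf.layers / tf.placeholder API from your question; shapes are arbitrary):

```python
import tensorflow as tf

x32 = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

# tf.layers.conv2d builds its kernel with the dtype of its input -- there is no
# separate weight-dtype argument -- so getting float64 weights means upcasting
# the input first.
x64 = tf.cast(x32, tf.float64)
y64 = tf.layers.conv2d(x64, filters=8, kernel_size=3)  # float64 kernel and output

# Mixing dtypes directly is rejected at graph construction, since most ops
# require matching operand types, e.g.:
# tf.matmul(tf.reshape(y64, [-1, 26 * 26 * 8]),
#           tf.ones([26 * 26 * 8, 10], dtype=tf.float32))  # -> TypeError
```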

From a computational point of view, consider that graphics cards are commonly much less capable at FP64 operations than at FP32. For example, the P5000, P6000 and GTX 1080 have an FP64-to-FP32 throughput ratio of only 1/32. The Titan V, with a ratio of 1/2, is one of the best you can get.

Finally, and specifically in deep learning, the precision of the computation has never really been a concern. If anything, adding noise to the computation (mostly via stochastic gradient descent) is what many people believe makes learning work, and models can be trained successfully with half-precision floating point.

Upvotes: 1
