Nuno Miguel

Reputation: 43

Missing 0th output from node ... When trying to use bfloat16 in tensorflow 2

So I'm trying to convert an existing project to use bfloat16, since that lets the code run on Tensor Cores. I'm calling mixed_precision.set_global_policy('mixed_bfloat16'), which according to the Keras documentation should be enough to just work. However, with this line in place I get the following error:
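For context, here is a minimal sketch of the kind of setup that triggers this for me (the shapes and layer sizes are placeholders, not my real model):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, mixed_precision

    # Global policy: compute in bfloat16, keep variables in float32
    mixed_precision.set_global_policy('mixed_bfloat16')

    model = tf.keras.Sequential([
        layers.Input(shape=(8, 8, 3)),
        layers.Conv2DTranspose(16, kernel_size=3, strides=2, padding='same'),
    ])
    model.compile(optimizer='adam', loss='mse')

    # Input data is uint8, as in my real project
    x = np.random.randint(0, 256, size=(4, 8, 8, 3), dtype=np.uint8)
    model.predict(x)  # this is where the "Missing 0-th output" error appears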

2 root error(s) found.
  (0) INTERNAL:  Missing 0-th output from node model/sequential/conv2d_transpose/conv2d_transpose
 (defined at C:\devlibs\Python\lib\site-packages\keras\backend.py:5530)

     [[model/sequential/conv2d_transpose_4/BiasAdd/_76]]
  (1) INTERNAL:  Missing 0-th output from node model/sequential/conv2d_transpose/conv2d_transpose
 (defined at C:\devlibs\Python\lib\site-packages\keras\backend.py:5530)

0 successful operations.
0 derived errors ignored. [Op:__inference_predict_function_1627]

Errors may have originated from an input operation.
Input Source operations connected to node model/sequential/conv2d_transpose/conv2d_transpose:
In[0] model/sequential/conv2d_transpose/stack (defined at C:\devlibs\Python\lib\site-packages\keras\layers\convolutional.py:1333)   
In[1] model/sequential/conv2d_transpose/conv2d_transpose/Cast (defined at C:\devlibs\Python\lib\site-packages\keras\mixed_precision\autocast_variable.py:146)   
In[2] model/sequential/reshape/Reshape (defined at C:\devlibs\Python\lib\site-packages\keras\layers\core\reshape.py:126) 

This error is thrown on the first layer during training, but if I force that layer to use float32 the error moves on to the next layer (essentially every layer is incompatible with bfloat16). It may be worth pointing out that my input data is uint8.
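By "force that layer to use float32" I mean overriding the global policy per layer, roughly like this (standard Keras behaviour: a layer-level dtype overrides the global policy for that layer only):

    from tensorflow.keras import layers

    # This layer computes and stores its variables in float32,
    # ignoring the global mixed_bfloat16 policy.
    conv_t = layers.Conv2DTranspose(16, kernel_size=3, strides=2,
                                    padding='same', dtype='float32')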

Upvotes: 3

Views: 300

Answers (0)
