AndrewJaeyoung

Reputation: 428

TensorFlow 2.5.0 incompatibility with NumPy 1.21+? (2021-10-05)

To everyone who stumbles upon this:

I was recently doing image classification (fitting a CNN to some labeled data) and wanted to do data augmentation using Keras's preprocessing layers. However, training throws a NotImplementedError. More specifically, it says the following verbatim:

NotImplementedError: Cannot convert a symbolic Tensor (sequential_3/sequential/random_rotation/rotation_matrix/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

Here's what I coded for my augmentation layer:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

angle = 15
data_augmentation = keras.Sequential([
    layers.experimental.preprocessing.RandomFlip('horizontal'),
    layers.experimental.preprocessing.RandomRotation(angle/360)
])

So I wanted a horizontal flip of all my images and a random rotation of up to 15 degrees (RandomRotation takes a fraction of a full turn, hence angle/360). I plugged this directly into my CNN:

layers_2 = [
    #image augmentation layer
    data_augmentation,
    
    #convolution layer
    keras.layers.Conv2D(16, 3, padding = 'same', activation = 'relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, padding = 'same', activation = 'relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, padding = 'same', activation = 'relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    
    #dropout for regularization
    keras.layers.Dropout(0.2),
    
    #MLP layer
    keras.layers.Dense(128, activation = 'relu'),
    keras.layers.Dense(64, activation = 'relu'),
    keras.layers.Dense(3, activation = 'softmax')
]

model_2 = keras.Sequential(layers_2)

model_2.compile(optimizer = tf.optimizers.Adam(),
                loss = tf.keras.losses.SparseCategoricalCrossentropy(),
                metrics = [tf.metrics.SparseCategoricalAccuracy()]
)

epochs_2 = 15

#fitting

history_2 = model_2.fit(
    normalized_train_ds,
    validation_data = normalized_val_ds,
    epochs = epochs_2
)

Where normalized_train_ds and normalized_val_ds are both normalized tf.data.Dataset objects.

Necessary context: I am running this on my local machine, in a Python 3.9.7 environment, with NumPy 1.21.2 and TensorFlow 2.5.0. A similar issue was reported back in February 2021, where someone running Python 3.9.1 and TensorFlow 2.4.1 hit the same error with NumPy 1.20+ (link to that issue: https://github.com/tensorflow/tensorflow/issues/47360).

Actual Question: Did I just write some bad code, or are my versions of Python, TensorFlow, and NumPy incompatible? I tried installing a previous version of NumPy (1.20+), but it throws the same error. If I run this on Google Colab notebooks, the issue disappears.

Upvotes: 3

Views: 9234

Answers (2)

user11530462

Reputation:

Agreed, this was an issue with TF 2.5 and earlier, but it was resolved recently.

With TF 2.7 and tf-nightly, I successfully trained some models that previously hit this NumPy incompatibility. You can also check here that the master branch now requires 'numpy >= 1.20'.

Also, check this GitHub issue, which has been resolved. Thanks!
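Taking the claims above at face value (TF 2.5 and earlier break with NumPy 1.20+, and TF 2.7 is the first version the answer confirms working), the compatibility rule can be sketched as a small version predicate. This is only an illustration of the reported cutoffs, not an official compatibility matrix; it compares just the major.minor components:

```python
def _mm(version):
    """Return the (major, minor) part of a version string as ints."""
    parts = version.split(".")
    return int(parts[0]), int(parts[1])

def symbolic_tensor_bug(tf_version, numpy_version):
    """True if this TF/NumPy pair is expected to raise the
    'Cannot convert a symbolic Tensor' NotImplementedError:
    TF older than 2.7 combined with NumPy 1.20 or newer,
    per the cutoffs reported in this thread."""
    return _mm(tf_version) < (2, 7) and _mm(numpy_version) >= (1, 20)

# The asker's combination (TF 2.5.0, NumPy 1.21.2) hits the bug:
print(symbolic_tensor_bug("2.5.0", "1.21.2"))   # True
# TF 2.7 with the same NumPy does not:
print(symbolic_tensor_bug("2.7.0", "1.21.2"))   # False
```

This also explains why downgrading NumPy to 1.20.x alone did not help the asker: any NumPy >= 1.20 still trips the bug on TF 2.5.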

Upvotes: 1

Konqi

Reputation: 103

According to this issue on GitHub, TensorFlow 2.5 and NumPy 1.20+ are not compatible: https://github.com/tensorflow/tensorflow/issues/47691
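If you must stay on TF 2.5, a commonly reported workaround is to pin NumPy below 1.20 (e.g. `pip install "numpy<1.20"`). Assuming that constraint from the linked issue, here is a sketch of a fail-fast guard that surfaces a clear message instead of the opaque NotImplementedError at fit() time:

```python
# Fail-fast guard for the TF 2.5 / NumPy >= 1.20 incompatibility.
# The "numpy<1.20" safe range is an assumption taken from the
# linked GitHub issue, not an official compatibility guarantee.

def require_numpy_below_1_20(version=None):
    """Raise ImportError with a clear message when NumPy >= 1.20 is
    paired with TensorFlow 2.5. If `version` is None, the installed
    NumPy's version string is used."""
    if version is None:
        import numpy
        version = numpy.__version__
    major, minor = (int(p) for p in version.split(".")[:2])
    if (major, minor) >= (1, 20):
        raise ImportError(
            f"NumPy {version} is incompatible with TensorFlow 2.5; "
            f'downgrade with: pip install "numpy<1.20"'
        )

require_numpy_below_1_20("1.19.5")    # passes silently
# require_numpy_below_1_20("1.21.2") would raise ImportError
```

Calling it with no argument at the top of a training script checks the environment you are actually running in.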

Upvotes: 2
