Reputation: 3659
I just installed TensorFlow v2.3 on Anaconda Python. I tried to test the installation using the Python command below:
$ python -c "import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))"
I got the following message:
2020-12-15 07:59:12.411952: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
hello, [[4.]]
From the message, it seems that the installation was successful. But what does This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX AVX2
mean exactly?
Am I using a tensorflow version with some limited features? Any side effects?
I am using Windows 10.
Upvotes: 159
Views: 306818
Reputation: 7
I ran the commands below to install Keras and TensorFlow for CPU and GPU:
conda create --name py36 python==3.6.13
conda install tensorflow
conda install keras
conda install tensorflow-gpu
conda install tensorflow-estimator==2.1.0
Upvotes: -13
Reputation: 13855
If you are using a GPU (e.g. NVIDIA), you can install the TensorFlow GPU build using
conda install tensorflow-gpu
Upvotes: 0
Reputation: 37
You have to create a new environment, or else install TensorFlow with GPU support in the current base environment. For that, use the following commands.
Creating a new environment:
conda create --name py36 python==3.6.13 (or any later Python version)
Installing TensorFlow for CPU:
conda install tensorflow
conda install keras
Installing TensorFlow for GPU:
conda install tensorflow-gpu
conda install tensorflow-estimator==2.1.0 (or any later version)
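After installing, a quick sanity check (my own sketch, not part of the original commands) will print the version and whether TensorFlow can see a GPU:
import tensorflow as tf

# Print the installed version and any GPUs visible to TensorFlow.
print("tensorflow version", tf.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices('GPU'))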
I hope this helps. Thank you.
Upvotes: 2
Reputation: 311
I have compiled the TensorFlow library a few times. If you get something like the following:
kosinkie_l@Fedora ~/project/build $ python -c "import tensorflow as tf; x = [[2.]]; print('tensorflow version', tf.__version__); print('hello, {}'.format(tf.matmul(x, x)))"
2022-08-09 15:31:03.414926: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
tensorflow version 2.10.0-rc0
hello, Tensor("MatMul:0", shape=(1, 1), dtype=float32)
kosinkie_l@Fedora ~/project/build $
this means that the CPU can execute these instructions, but the TensorFlow binary itself was not compiled to use them.
The message can be confusing, so I looked at the source code (tensorflow/core/platform/cpu_feature_guard.cc:193), which contains the following:
131 #ifndef __AVX__
132 CheckIfFeatureUnused(CPUFeature::AVX, "AVX", missing_instructions);
133 #endif // __AVX__
134 #ifndef __AVX2__
135 CheckIfFeatureUnused(CPUFeature::AVX2, "AVX2", missing_instructions);
136 #endif // __AVX2__
...
192 if (!missing_instructions.empty()) {
193 LOG(INFO) << "This TensorFlow binary is optimized with "
194 << "oneAPI Deep Neural Network Library (oneDNN) "
195 << "to use the following CPU instructions in performance-"
196 << "critical operations: " << missing_instructions << std::endl
197 << "To enable them in other operations, rebuild TensorFlow "
198 << "with the appropriate compiler flags.";
199 }
The method CheckIfFeatureUnused(CPUFeature::AVX, "AVX", missing_instructions) is only compiled in when the binary was built without AVX (the #ifndef __AVX__ guard); it checks whether the CPU can execute AVX and, if so, adds "AVX" to the missing_instructions collection, which is then printed out.
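To make the flow explicit, here is a minimal Python sketch (hypothetical helper names, not the actual TensorFlow code) of what the guard does: a feature is reported only when the binary was built without it but the CPU supports it.
def check_if_feature_unused(cpu_supports_feature, name, missing_instructions):
    # Mirrors CheckIfFeatureUnused: record the feature only if the CPU actually has it.
    if cpu_supports_feature:
        missing_instructions.append(name)

missing = []
# These calls are reached only because the binary was built without __AVX__ / __AVX2__.
check_if_feature_unused(True, "AVX", missing)
check_if_feature_unused(True, "AVX2", missing)
if missing:
    print("This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) "
          "to use the following CPU instructions in performance-critical operations: " + " ".join(missing))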
Upvotes: 10
Reputation: 19300
The message
This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)
to use the following CPU instructions in performance-critical operations: AVX AVX2
means that in places where performance matters (e.g. matrix multiplication in deep neural networks), certain optimized CPU instructions will be used. The installation seems to have been successful.
The oneDNN GitHub repository says:
oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform performance library of basic building blocks for deep learning applications. The library is optimized for Intel Architecture Processors, Intel Processor Graphics and Xe architecture-based Graphics. oneDNN has experimental support for the following architectures:
- Arm* 64-bit Architecture (AArch64)
- NVIDIA* GPU
- OpenPOWER* Power ISA (PPC64)
- IBMz* (s390x)
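As a side note (not from the original answer), newer TensorFlow builds that ship with oneDNN let you toggle these optimizations through the TF_ENABLE_ONEDNN_OPTS environment variable; whether your particular build honours it depends on the version. A small sketch:
import os

# '0' disables the oneDNN optimizations, '1' enables them; the variable must be
# set before TensorFlow is imported and only affects builds that include oneDNN.
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
import tensorflow as tf

print("tensorflow version", tf.__version__)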
Upvotes: 18
Reputation: 6216
You can disable these messages using:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'  # set before importing TensorFlow; '1' hides INFO, '2' also hides WARNING, '3' also hides ERROR
import tensorflow as tf
Source: https://stackoverflow.com/a/42121886
Upvotes: 23
Reputation: 19
when I used "verbose=0" in Model.fit() it occurred then I remove that and it solved
Upvotes: -3
Reputation: 4869
An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out.
Note: AVX stands for Advanced Vector Extensions.
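As a rough illustration (my own sketch, not part of the original answer), matrix multiplication is exactly the kind of performance-critical operation the message refers to; timing a large tf.matmul on your own machine shows the kind of work those vectorized instructions accelerate:
import time
import tensorflow as tf

# Two large random matrices; multiplying them is dominated by the kind of
# vectorized floating-point work that AVX/AVX2 accelerates.
a = tf.random.uniform((2000, 2000))
b = tf.random.uniform((2000, 2000))

start = time.perf_counter()
c = tf.matmul(a, b)
print("matmul of shape {} took {:.4f} s".format(a.shape, time.perf_counter() - start))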
Upvotes: 302