joke

Reputation: 43

There isn't much difference between AVX2 and AVX512 when using MKL?

CPU Environment: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz

First, I installed TensorFlow with pip install tensorflow==1.12.0 and downloaded tensorflow-benchmark.

Run 1: export MKL_VERBOSE=0; export MKL_ENABLE_INSTRUCTIONS=AVX512; python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=alexnet --batch_size=8

Run 2: export MKL_VERBOSE=0; export MKL_ENABLE_INSTRUCTIONS=AVX2; python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=alexnet --batch_size=8
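One way to see whether the flag is even reaching MKL is to turn verbose logging on rather than off before rerunning the benchmark. This is a sketch: it assumes an MKL-linked build, and the exact verbose log format varies by MKL version (each BLAS call is typically logged with the ISA MKL selected).

```shell
# Enable MKL's per-call logging and pin the ISA (hedged: only affects
# code paths that go through classic MKL, not MKL-DNN).
export MKL_VERBOSE=1
export MKL_ENABLE_INSTRUCTIONS=AVX2
echo "MKL_VERBOSE=$MKL_VERBOSE MKL_ENABLE_INSTRUCTIONS=$MKL_ENABLE_INSTRUCTIONS"
# Then rerun the benchmark and inspect the log, e.g.:
#   python tf_cnn_benchmarks.py --device=cpu --model=alexnet 2>&1 | grep MKL_VERBOSE
```

If no MKL_VERBOSE lines appear at all, the benchmark's hot path is not going through classic MKL, which would explain why the flag makes no difference.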

The speed is almost the same! I also tried different models and batch sizes.

Second, I also tested Caffe compiled with MKL. I found that MKL_ENABLE_INSTRUCTIONS=AVX512 does not perform much better than MKL_ENABLE_INSTRUCTIONS=AVX2.

Why?

Upvotes: 2

Views: 3451

Answers (1)

Preethi Venkatesh

Reputation: 61

I assume your intention is to test TensorFlow accelerated with MKL-DNN. Unlike the traditional MKL library, MKL-DNN provides math accelerations only for deep-learning operations. The terms MKL and MKL-DNN are often used interchangeably in the context of Intel-optimized TensorFlow, even though it is actually accelerated with Intel MKL-DNN. So, to answer your question: the MKL-DNN library does not yet support controlling ISA dispatch, which is why MKL_ENABLE_INSTRUCTIONS has no effect.

By the way, pip install tensorflow installs Google's official TensorFlow build, which does not come with MKL accelerations. To get Intel-optimized TensorFlow, please refer to the install guide: https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide. To check whether MKL-DNN is enabled in your build, use export MKLDNN_VERBOSE=1 instead of MKL_VERBOSE=1.
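The steps above can be sketched as follows. The package name and the IsMklEnabled() check are taken from the linked Intel guide for TF 1.x; verify them against your TensorFlow version before relying on them.

```shell
# Hypothetical verification workflow (commands shown as comments because they
# require an Intel-optimized TensorFlow install):
#   pip install intel-tensorflow==1.12.0
#   python -c "import tensorflow; print(tensorflow.pywrap_tensorflow.IsMklEnabled())"
#
# MKL-DNN has its own verbose switch, separate from classic MKL's MKL_VERBOSE:
export MKLDNN_VERBOSE=1
echo "MKLDNN_VERBOSE=$MKLDNN_VERBOSE"
# With this set, an MKL-DNN-enabled build prints one log line per primitive
# (convolution, pooling, ...) as the model runs; a stock build prints nothing.
```

Seeing MKL-DNN verbose lines during a benchmark run is the most direct confirmation that you are actually exercising the Intel-accelerated code path.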

Upvotes: 3
