Reputation: 93
We are trying to implement our own backend for the TensorFlow XLA compiler, but we've run into a somewhat embarrassing situation: we can't seem to make the existing CPU compiler do anything. Using the TensorFlow XLA example found here:
We build TensorFlow from source with TF_ENABLE_XLA=1 at the configure step and with debug information enabled. We then run the softmax example under LLDB and attempt to break on the call to CpuCompiler::Compile(). As seen below, the program runs to completion without stopping at Compile(), or apparently at any of the functions that would be expected to call it.
~/tflow$ lldb -- python3 mnist_softmax_xla.py
(lldb) target create "python3"
Current executable set to 'python3' (x86_64).
(lldb) settings set -- target.run-args "mnist_softmax_xla.py"
(lldb) b CpuCompiler::Compile
Breakpoint 1: no locations (pending).
WARNING: Unable to resolve breakpoint to any actual locations.
(lldb) r
Process 10331 launched: '/usr/bin/python3' (x86_64)
2 locations added to breakpoint 1
Process 10331 stopped and restarted: thread 1 received signal: SIGCHLD
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
2017-04-13 15:35:52.612498: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 8 visible devices
2017-04-13 15:35:52.616628: I tensorflow/compiler/xla/service/service.cc:183] XLA service 0x2b38780 executing computations on platform Host. Devices:
2017-04-13 15:35:52.616642: I tensorflow/compiler/xla/service/service.cc:191] StreamExecutor device (0): <undefined>, <undefined>
0.9195
Process 10331 exited with status = 0 (0x00000000)
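For reference, the XLA variant of the softmax example requests JIT compilation through the session config rather than through any build flag. A minimal sketch of that mechanism, using the TF 1.x API of the time (the global_jit_level knob, as an assumption about what mnist_softmax_xla.py does), looks like:

```python
import tensorflow as tf

# Sketch (TF 1.x era, assumed config): enable XLA JIT for the whole session.
# If this setting is ignored by the build, no computation ever reaches a
# backend Compile() call, and a breakpoint there will never fire.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
```

With this set and the breakpoint still never hit, the likely explanation is that the CPU path is not wired into the JIT at all in this build.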
Upvotes: 3
Views: 1733
Reputation: 93
Answered on GitHub:
https://github.com/tensorflow/tensorflow/issues/9194
CPU JIT is not enabled in TensorFlow tip of development yet. It should work for GPU.
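Consistent with that answer, one way to exercise the GPU backend instead would be to pin the computation to the XLA GPU device. This is a hypothetical sketch under the era's TF 1.x device-placement API (a CUDA-enabled build is assumed, and the variable names are illustrative, not from the example):

```python
import tensorflow as tf

# Sketch (TF 1.x era, assumes a CUDA build of TensorFlow): placing ops on the
# XLA_GPU device routes them through the XLA GPU backend, whose Compile()
# should then be reachable from a debugger breakpoint.
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
with tf.device("/device:XLA_GPU:0"):
    y = tf.nn.softmax(tf.matmul(x, w) + b)
```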
Upvotes: 1