Reputation: 830
I'm running a TF application for inference with a given model. However, it runs on the CPU rather than the GPU, even though the TensorFlow library was built with CUDA enabled. To get some insight into the model: does a TensorFlow model file (.pb) carry device information such as tf.device('/cpu:0') or tf.device('/gpu:0')?
Upvotes: 1
Views: 1253
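A .pb file stores a serialized GraphDef, and each NodeDef in it has a device field (an empty string means the runtime is free to place the op). As a sketch, you can inspect those fields directly; graph.pb below is a hypothetical path, and the helper name is my own:

```python
import tensorflow as tf

def list_device_assignments(pb_path):
    """Return {node_name: device} for every node with a non-empty device field."""
    graph_def = tf.compat.v1.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    # Each NodeDef carries a 'device' string; empty means no pinned placement.
    return {n.name: n.device for n in graph_def.node if n.device}
```

If this returns an empty dict, the graph has no baked-in placements and the CPU fallback is happening for some other reason (e.g. the GPU build not being picked up at runtime).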
Reputation: 73
After loading the GraphDef into a tf.Graph, move all the ops to the CPU using the (private) _set_device API: https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/framework/ops.py#L2255
gf = tf.GraphDef()
with open('graph.pb', 'rb') as f:
    gf.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(gf, name='')
    g = tf.get_default_graph()
    for op in g.get_operations():
        op._set_device('/device:CPU:*')
Upvotes: 1
Reputation: 24621
From the docs (emphasis mine):
Sometimes an exported meta graph is from a training environment that the importer doesn't have. For example, the model might have been trained on GPUs, or in a distributed environment with replicas. When importing such models, it's useful to be able to clear the device settings in the graph so that we can run it on locally available devices. This can be achieved by calling
import_meta_graph with the clear_devices option set to True.

with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('my-save-dir/my-model-10000.meta', clear_devices=True)
    new_saver.restore(sess, 'my-save-dir/my-model-10000')
Upvotes: 3