Manuel Reis

Reputation: 147

What really happens when inter-op parallelism is increased in TensorFlow?

I have read TensorFlow's documentation regarding inter-op and intra-op parallelism. However, I do not fully understand how inter-op parallelism influences TensorFlow under the hood.

My question is: do the threads from the inter-op thread pool actually train the model in parallel (i.e., does each one train on a different subset of the training batch, splitting the training iterations among the threads), or do they just parallelise non-conflicting branches of the execution graph?
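For concreteness, this is the kind of configuration I am asking about (TF 1.x; the thread counts here are just placeholders):

```python
import tensorflow as tf

# The two knobs from the documentation (TF 1.x ConfigProto).
config = tf.ConfigProto(
    inter_op_parallelism_threads=2,  # size of the inter-op thread pool
    intra_op_parallelism_threads=4,  # threads available to a single op
)
sess = tf.Session(config=config)
```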

Upvotes: 2

Views: 1212

Answers (1)

Yaroslav Bulatov

Reputation: 57903

Inter-op parallelism constrains how many ops the executor can launch in parallel. Intra-op parallelism constrains the number of CPU threads that Eigen uses to execute a single kernel. Splitting data into batches is a higher-level functionality that's handled by the client (i.e., Python libraries like tf.Estimator). The runtime can't distinguish between data and parameters -- both are just tensors that flow through the computational graph.
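To make the distinction concrete, here is a minimal sketch in TF 1.x graph mode (shapes and thread counts are arbitrary, picked for illustration). The two matmuls below have no data dependency, so the executor may dispatch them on different inter-op threads; within each matmul, Eigen may use up to the intra-op thread count:

```python
import tensorflow as tf

config = tf.ConfigProto(
    inter_op_parallelism_threads=2,  # executor may launch 2 ready ops at once
    intra_op_parallelism_threads=4,  # Eigen may use 4 threads inside one op
)

a = tf.random_normal([1000, 1000])
b = tf.random_normal([1000, 1000])

# x and y have no data dependency: they are independent branches of the
# graph, so the executor can run them on different inter-op threads.
x = tf.matmul(a, a)
y = tf.matmul(b, b)

# z depends on both, so it only runs once x and y have finished.
z = x + y

with tf.Session(config=config) as sess:
    sess.run(z)
```

In TF 2.x the same knobs are exposed as tf.config.threading.set_inter_op_parallelism_threads and tf.config.threading.set_intra_op_parallelism_threads.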

Upvotes: 1
