
Reputation: 5444

Simple TensorFlow example loading one copy of a model onto each GPU that's available

I'm looking at porting from a different production machine learning framework to TensorFlow. In our current system, for both training and inference, we load copies of our model onto as many GPUs as the machine has.

I would like to keep this way of load-balancing for now. Where can I find a simple example of loading one copy of a TF model onto each GPU that's available on a machine?

Upvotes: 1

Views: 251

Answers (1)

Yaroslav Bulatov

Reputation: 57953

Here's an example from https://github.com/rafaljozefowicz/lm/blob/master/language_model.py#L21

You wrap your model-creation code in a `_forward` function, then call it once per GPU, reusing variables after the first tower:

    # hps.num_gpus: number of towers; xs/ys/ws: per-GPU slices of the input batch
    for i in range(hps.num_gpus):
        # Pin this tower's ops to GPU i; share variables across towers
        # by reusing the scope on every tower after the first
        with tf.device(assign_to_gpu(i, ps_device)), \
             tf.variable_scope(tf.get_variable_scope(),
                               reuse=True if i > 0 else None):
            loss = self._forward(i, xs[i], ys[i], ws[i])
            losses += [loss]
            if mode == "train":
                # Compute this tower's gradients; write summaries only on the last tower
                cur_grads = self._backward(loss, summaries=(i == hps.num_gpus - 1))
                tower_grads += [cur_grads]

    # Average the per-tower losses into a single scalar loss
    self.loss = tf.add_n(losses) / len(losses)
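To make the pattern self-contained, here is a minimal sketch of the same idea. The `assign_to_gpu` helper below is my guess at what the linked helper does (it is not reproduced from the repo): it returns a device function that pins variables to a parameter-server device and every other op to the given GPU. `build_towers` and `forward` are hypothetical names standing in for your own model-building code.

```python
def assign_to_gpu(gpu_index, ps_device="/cpu:0"):
    """Return a device function for tf.device(): variable ops are placed
    on ps_device, all other ops on /gpu:<gpu_index>.
    (An assumed reimplementation, not the linked repo's exact code.)"""
    def _assign(op):
        # tf.device passes Operation objects; read the op type off node_def
        node = op.node_def if hasattr(op, "node_def") else op
        if node.op in ("Variable", "VariableV2", "VarHandleOp"):
            return ps_device
        return "/gpu:%d" % gpu_index
    return _assign


def build_towers(num_gpus, forward):
    """Build one loss tower per GPU and average them.

    `forward(tower_index)` should build one copy of the model and return
    its scalar loss; variables are shared across towers via scope reuse.
    """
    # Deferred import so assign_to_gpu stays importable without TensorFlow
    import tensorflow as tf

    losses = []
    for i in range(num_gpus):
        with tf.device(assign_to_gpu(i)), \
             tf.variable_scope(tf.get_variable_scope(),
                               reuse=True if i > 0 else None):
            losses.append(forward(i))
    return tf.add_n(losses) / len(losses)
```

The same averaged loss can then be fed to a single optimizer, which is the usual in-graph replication setup for data-parallel training.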

Upvotes: 1
