MatthewScarpino

Reputation: 5936

Advantage of using experiments in TensorFlow

Many of TensorFlow's example applications create Experiments and run one of the Experiment's methods by calling tf.contrib.learn.learn_runner.run. It looks like an Experiment is essentially a wrapper for an Estimator.

The code needed to create and run an Experiment looks more complex than the code needed to create, train, and evaluate an Estimator. I'm sure there's an advantage to using Experiments, but I can't figure out what it is. Could someone fill me in?

Upvotes: 7

Views: 1290

Answers (1)

Maxim

Reputation: 53768

tf.contrib.learn.Experiment is a high-level API for distributed training. Here's an excerpt from its documentation:

Experiment is a class containing all information needed to train a model.

After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training.

Just as tf.estimator.Estimator (and its derived classes) is a high-level API that hides low-level details like matrix multiplications and checkpoint saving, tf.contrib.learn.Experiment tries to hide the boilerplate you'd otherwise write for distributed computation: tf.train.ClusterSpec, tf.train.Server, jobs, tasks, and so on.
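To make that concrete, here's a minimal sketch of the Experiment pattern from the TF 1.x contrib API (removed in TF 2.x). `my_model_fn`, `train_input_fn`, and `eval_input_fn` are placeholders assumed to be defined elsewhere:

```python
import tensorflow as tf
from tensorflow.contrib.learn import Experiment, learn_runner

def experiment_fn(run_config, hparams):
    # Wrap an ordinary Estimator; the Experiment adds the distributed
    # train/eval orchestration on top of it.
    estimator = tf.estimator.Estimator(model_fn=my_model_fn,
                                       config=run_config)
    return Experiment(
        estimator=estimator,
        train_input_fn=train_input_fn,  # placeholder
        eval_input_fn=eval_input_fn,    # placeholder
        train_steps=10000)

# learn_runner inspects the TF_CONFIG environment variable to determine
# this process's role in the cluster (chief, worker, ps) and invokes the
# appropriate train/eval loop on the Experiment.
learn_runner.run(experiment_fn, run_config=tf.contrib.learn.RunConfig())
```

The same script can then be launched on every machine in the cluster; only TF_CONFIG differs per process, and learn_runner dispatches accordingly.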

You can train and evaluate the tf.estimator.Estimator on a single machine without an Experiment. See the examples in this tutorial.
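For comparison, the single-machine path drives the Estimator directly; no Experiment, no cluster configuration. Again, `my_model_fn` and the input functions are placeholders:

```python
import tensorflow as tf

# Plain Estimator workflow: train, then evaluate, on one machine.
estimator = tf.estimator.Estimator(model_fn=my_model_fn,  # placeholder
                                   model_dir="/tmp/model")
estimator.train(input_fn=train_input_fn, steps=10000)   # placeholder input_fn
metrics = estimator.evaluate(input_fn=eval_input_fn)    # placeholder input_fn
```

If you only ever run on a single machine, this is all you need, which is why the Experiment machinery looks like extra complexity in that setting.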

Upvotes: 7
