AWS SageMaker on GPU

I am trying to train a neural network (TensorFlow) on AWS, and I have some AWS credits. From my understanding, AWS SageMaker is the best fit for the job. I managed to load the JupyterLab console on SageMaker and tried to find a GPU kernel, since I know a GPU is best for training neural networks. However, I could not find such a kernel.

Would anyone be able to help in this regard?

Upvotes: 12

Views: 26725

Answers (2)

user1689987

Reputation: 1546

If you want to train your model in a SageMaker Studio notebook, make sure you choose both a GPU instance type and a GPU image type: https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-images.html https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html

For example, for TensorFlow on GPU: [screenshot of the Studio kernel/image selector]
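Once a GPU image and instance type are selected, a quick way to confirm the kernel actually sees the GPU is to list physical devices in TensorFlow (a minimal sketch, assuming TensorFlow is installed in the chosen image):

```python
import tensorflow as tf

# List the GPUs visible to this kernel; an empty list means the
# notebook is running on a CPU-only instance or CPU-only image.
gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {len(gpus)}")
```

If this prints 0 on an instance that should have a GPU, the image type (not just the instance type) is usually the culprit.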

Upvotes: -2

Olivier Cruchant

Reputation: 4037

You train models on GPU in the SageMaker ecosystem through two different components:

  1. You can instantiate a GPU-powered SageMaker Notebook Instance, for example a p2.xlarge (NVIDIA K80) or p3.2xlarge (NVIDIA V100). This is convenient for interactive development: you have the GPU right under your notebook, can run code on it interactively, and can monitor it via nvidia-smi in a terminal tab - a great development experience. However, when you develop directly on a GPU-powered machine, there are stretches of time when you do not use the GPU, for example while writing code or browsing documentation. All that time you pay for a GPU that sits idle, so this may not be the most cost-effective option for your use case.

  2. Another option is to use a SageMaker Training Job running on a GPU instance. This is the preferred option for training, because training metadata (data and model paths, hyperparameters, cluster specification, etc.) is persisted in the SageMaker metadata store, logs and metrics are stored in CloudWatch, and the instance automatically shuts itself down at the end of training. Developing on a small CPU instance and launching training tasks with the SageMaker Training API will help you make the most of your budget, while letting you retain the metadata and artifacts of all your experiments. You can see here a well documented TensorFlow example

Upvotes: 28

Related Questions