Alireza Akhavan

Reputation: 679

"Killed" error in TensorFlow when trying to load a convolutional pretrained model on Jetson TX1

I have a face recognition model based on an Inception-ResNet architecture.

When I run my TensorFlow code to load the trained model on an Nvidia Jetson TX1, it just outputs "Killed". How do I debug this?

What can I do? I think it's a memory problem.

Upvotes: 3

Views: 1690

Answers (3)

Alireza Akhavan

Reputation: 679

Finally I found the answer!

If you don't set a maximum fraction of GPU memory, TensorFlow allocates almost all of the free memory. My problem was a lack of GPU memory.

You can pass a configuration when creating the session.

I set per_process_gpu_memory_fraction in tf.GPUOptions to 0.8 and the problem was solved.

import tensorflow as tf

# Limit this process to at most 80% of the GPU's memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.8)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
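As a side note (not part of the original answer), TensorFlow also provides an allow_growth option that starts with a small allocation and grows GPU memory usage on demand, which can serve the same purpose on a memory-constrained board. A minimal sketch:

import tensorflow as tf

# Grow GPU memory on demand instead of reserving a fixed fraction up front
gpu_options = tf.GPUOptions(allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))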

Upvotes: 2

Jinguang

Reputation: 1

You could try reducing the batch_size, for example from 32 to 16. This will reduce memory consumption but will increase the training time. A sketch of what this looks like is shown below.
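For illustration only, here is a minimal self-contained TF1 sketch showing a batch of 16 rather than 32 being fed through a stand-in convolutional graph; the input shape and the tiny conv layer are assumptions, not the asker's actual model:

import numpy as np
import tensorflow as tf

batch_size = 16  # reduced from 32 to lower per-step memory use (hypothetical values)

# Hypothetical input placeholder for 160x160 RGB face crops
images = tf.placeholder(tf.float32, shape=[None, 160, 160, 3])
# Stand-in for the real network; the smaller the batch, the smaller the activations held in GPU memory
features = tf.layers.conv2d(images, filters=32, kernel_size=3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(batch_size, 160, 160, 3).astype(np.float32)
    out = sess.run(features, feed_dict={images: batch})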

Upvotes: 0

Mike Vella

Reputation: 10575

According to this issue, "Killed" on the Jetson means it ran out of memory. It may not be possible to run the Inception-ResNet model on the TX1.

Upvotes: 3
