Ajinkya Ambatwar

Reputation: 121

Train Deep learning Models with AMD

I am currently using a Lenovo IdeaPad PC with AMD Radeon graphics. I am trying to train an image classifier using convolutional neural networks. The dataset contains 50,000 images, and training takes too long. Can someone tell me how I can use my AMD GPU to speed up the process? I think AMD graphics cards do not support CUDA, so is there a way around this?

PS: I am using Ubuntu 17.10

Upvotes: 1

Views: 1551

Answers (1)

David Parks

Reputation: 32111

What you're asking for is OpenCL support, or in more grandiose terms: the democratization of accelerated devices. There seems to be tentative support for OpenCL; I see some people testing it as of early 2018, but it doesn't appear fully baked yet. The issue has been tracked for quite some time here:

https://github.com/tensorflow/tensorflow/issues/22

You should also be aware of development on XLA, an attempt to virtualize TensorFlow over an LLVM (or LLVM-like) compilation layer, making it more portable across hardware. It is cited as being in alpha as of early 2018.

https://www.tensorflow.org/performance/xla/

There isn't a simple solution yet, but these are the two efforts to follow along these lines.
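In the meantime, a quick way to check which devices your TensorFlow build can actually use (and confirm that, without CUDA or OpenCL support, your Radeon is not visible to it) is to list the local devices. This is a minimal sketch against the `device_lib` API of the TF 1.x era; on a stock CPU-only install you will only see a CPU entry:

```python
# List the compute devices TensorFlow can see. On a build without
# GPU support (e.g. no CUDA, no OpenCL), only a CPU device appears.
from tensorflow.python.client import device_lib

def available_devices():
    """Return (name, device_type) pairs for every device TensorFlow detects."""
    return [(d.name, d.device_type) for d in device_lib.list_local_devices()]

for name, kind in available_devices():
    print(kind, name)
```

If this prints only a `CPU` device, TensorFlow is training on the CPU regardless of what graphics hardware the machine has, which explains the slow training times.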

Upvotes: 2
