Reputation: 455
I noticed while looking at TensorFlow Serving's guide for setting up TF Serving with GPU support that it requires NVIDIA's nvidia-docker, which is currently only available for Linux-based systems. Since the TF Serving container I am currently using is itself a Linux machine, is it possible to configure nvidia-docker within the container so I can use my GPUs for model inference while running Docker on a host machine that runs Windows? Or is it a case where my host machine needs to be running Linux?
I'm under the impression that I can't run a Docker instance from within a Linux virtual machine due to the nested-virtualization requirements, so I was wondering if there is a workaround by extending Docker itself in some capacity.
Thanks in advance. Despite a thorough search, I couldn't find any resources that cover this in detail, and I'm relatively new to Docker and TensorFlow Serving.
Upvotes: 3
Views: 338
Reputation: 352
I don't believe NVIDIA has any intention of creating nvidia-docker for Windows, at least not in the foreseeable future, since it would require Windows containers that directly use the host's drivers. What I mean by that is that you won't be able to access the GPUs from a Linux container via a Windows host.
Besides, I think Docker as a platform works best on Linux, especially when it comes to production, assuming that's what you are after. All in all, my advice would be to stick to battle-tested setups like nvidia-docker + TF Serving on Linux. A community of people has done this before you, so troubleshooting and solving issues is easier.
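For reference, here's a minimal sketch of that Linux setup, assuming the standard tensorflow/serving:latest-gpu image and a SavedModel at a placeholder path /path/to/my_model (substitute your own directory and model name). The GPU flag depends on your versions: --runtime=nvidia with nvidia-docker 2, or --gpus all on Docker 19.03+ with the NVIDIA Container Toolkit installed.

```bash
# Pull the GPU-enabled TF Serving image
docker pull tensorflow/serving:latest-gpu

# Serve a SavedModel on the GPU over the REST port.
# /path/to/my_model and my_model are placeholders.
docker run --gpus all -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  -t tensorflow/serving:latest-gpu
```

Once it's up, you can sanity-check that the container sees the GPU (e.g. `docker exec <container> nvidia-smi`) and query the model at http://localhost:8501/v1/models/my_model.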
Upvotes: 1