Reputation: 3450
Right now I'm trying to load test a new deep learning model on my local server using Locust. My local server has an NVIDIA A6000 GPU with 48GB of VRAM. The problem is that our production model is served on an AWS g4dn.xlarge instance, which uses a Tesla T4 GPU. The T4 has significantly less VRAM than an A6000, along with less compute power.
Is there a way to artificially constrain my local environment so that it emulates the production environment? I'm assuming Locust itself doesn't have such functionality and that I may need to write a separate shell script, but I'm still curious whether this is possible.
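For the VRAM side, something along these lines is what I was imagining (a minimal sketch assuming the model is served with PyTorch; `set_per_process_memory_fraction` is PyTorch-specific, and the device index is just an example):

```python
import torch

T4_VRAM_GB = 16  # Tesla T4 capacity
device = 0
total_gb = torch.cuda.get_device_properties(device).total_memory / 1024**3

# Cap this process's CUDA allocations at roughly a T4's capacity;
# allocations beyond the cap raise an out-of-memory error, as they
# would on the smaller card.
torch.cuda.set_per_process_memory_fraction(T4_VRAM_GB / total_gb, device=device)
```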
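That would only constrain memory, though, not clocks or throughput, so latency numbers from the Locust run would still be optimistic. For the compute side, I was imagining a shell script around nvidia-smi (e.g. locking the GPU clocks with `-lgc` or lowering the power limit with `-pl`), but I don't know how faithfully that would approximate a T4.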
Upvotes: 0
Views: 12