I'm relatively new to Vertex AI Pipelines and I'm unable to get GPU utilization working in my pipeline. Despite specifying the hardware type for a particular component, it still runs on the CPU, as indicated in the figure below. I suspect there is a missing configuration in my pipeline setup.
embeddings_task = (
    generate_article_embeddings(
        transformer_model=TRANSFORMER_MODEL,
        article_dataset=articles.output,
    )
    .set_cpu_limit("32")
    .set_memory_limit("208G")
    .add_node_selector_constraint("NVIDIA_TESLA_T4")
    .set_gpu_limit("4")
)
I've installed all the GPU libraries needed for acceleration, but when I run the pipeline the component still defaults to the CPU.
Could someone please provide guidance on how to correctly enable GPU utilization in a Vertex AI Pipeline?
Any help would be greatly appreciated.
Upvotes: 1
Views: 1436
You need to make sure the CUDA dependencies are installed in your base container image; requesting a GPU for the task only schedules it on GPU hardware, it does not give the container a CUDA runtime.
I made this component for testing whether the GPU was available from PyTorch:
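One way to do that, assuming KFP v2 and a publicly available CUDA-enabled PyTorch image (the image tag below is only an example), is to pass `base_image` to the component decorator:

```python
from kfp import dsl

# Example only: any image with CUDA and PyTorch preinstalled will do,
# including one you build and push yourself.
CUDA_BASE_IMAGE = "pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime"


@dsl.component(base_image=CUDA_BASE_IMAGE)
def check_gpu() -> str:
    import torch

    # Returns "cuda" only when a GPU is visible inside the container.
    return "cuda" if torch.cuda.is_available() else "cpu"
```

Without a CUDA-capable base image, `torch.cuda.is_available()` returns False even when the task is scheduled on a GPU node.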
@dsl.component
def simple(
    word: str,
    number: int,
) -> None:
    # Imports must live inside the component body so they are
    # resolved in the container at runtime.
    import logging
    import torch

    log = logging.getLogger(__name__)
    log.info("Running Simple")
    if torch.cuda.is_available():
        device = torch.device("cuda")  # the first CUDA device
        log.info("PyTorch is using the GPU")
        log.info("Current device: %s", torch.cuda.current_device())
        log.info("Device name: %s", torch.cuda.get_device_name(device))
    else:
        log.info("PyTorch is using the CPU")

    # Create a tensor on the CPU
    x = torch.randn(100, number)
    if torch.cuda.is_available():
        # Move the tensor to the GPU
        x = x.to(device)
    # Perform some computations on the tensor
    y = x * 10**4
    # Move the tensor back to the CPU for printing
    y = y.to("cpu")
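To actually schedule that test component on a GPU, its task still needs the accelerator settings shown in the question. A minimal wiring sketch, assuming the `simple` component above is in scope (the pipeline name and resource values are illustrative):

```python
from kfp import dsl


@dsl.pipeline(name="gpu-smoke-test")
def gpu_pipeline():
    # Run the test component on a single T4; values are examples.
    task = simple(word="hello", number=64)
    task.set_cpu_limit("4")
    task.set_memory_limit("16G")
    task.add_node_selector_constraint("NVIDIA_TESLA_T4")
    task.set_gpu_limit("1")
```

If both pieces are in place (CUDA in the base image, accelerator settings on the task), the logs from `simple` should report that PyTorch is using the GPU.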
Upvotes: 0