user11953315

Reputation: 33

Google Data Fusion execution error "INVALID_ARGUMENT: Insufficient 'DISKS_TOTAL_GB' quota. Requested 3000.0, available 2048.0."

I am trying to load a simple CSV file from GCS to BigQuery using the Google Data Fusion free version. The pipeline fails with the following error:

com.google.api.gax.rpc.InvalidArgumentException: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Insufficient 'DISKS_TOTAL_GB' quota. Requested 3000.0, available 2048.0.
    at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:49) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72) ~[na:na]
    at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60) ~[na:na]
    at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97) ~[na:na]
    at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68) ~[na:na]

The same error is repeated for both the MapReduce and Spark execution pipelines. Any help in fixing this issue would be appreciated. Thanks.

Regards KA

Upvotes: 3

Views: 6935

Answers (2)

Ali Anwar

Reputation: 431

It means that the total disk size requested for the compute instances would put the project over its Compute Engine (GCE) quota. There are both project-wide and regional quotas. You can see the documentation here: https://cloud.google.com/compute/quotas

To resolve this, request an increase for this quota in your GCP project.
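Before filing the increase, it can help to confirm which limit you are actually hitting. A minimal gcloud sketch (the project ID below is a placeholder) that prints the project-wide Compute Engine quotas:

    # List the project-wide Compute Engine quotas; look for the
    # DISKS_TOTAL_GB entry and compare its limit and usage with the
    # "Requested 3000.0, available 2048.0" values from the error.
    gcloud compute project-info describe --project my-project-id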

Upvotes: 6

Blake Enyart

Reputation: 115

@Ksign provided the following answer to a similar question, which can be seen here.

The specific quota related to DISKS_TOTAL_GB is the Persistent disk standard (GB) as you can see in the Disk quotas documentation.

You can edit this quota by region in the Cloud Console of your project by going to the IAM & Admin page => Quotas and selecting only the metric Persistent Disk Standard (GB).
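If you prefer the command line over the Cloud Console, the same regional quota can be inspected with gcloud. A sketch, assuming the region where Data Fusion provisions its Dataproc cluster is us-west1 and the project ID is a placeholder:

    # Dump the region's quotas as JSON and print the lines around the
    # DISKS_TOTAL_GB metric; its limit should match the "available
    # 2048.0" value in the error, and the new value once the quota
    # increase is approved.
    gcloud compute regions describe us-west1 --project my-project-id \
        --format=json | grep -B 1 -A 2 '"DISKS_TOTAL_GB"'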

Upvotes: 4
