Reputation: 987
AWS Glue job not initiating; fails with exception "Failed to execute with exception Task allocated capacity exceeded limit".
I am using AWS Glue 3.0 to run PySpark jobs with 10 executors. The files to process are between 2 and 5 GB.
The Glue jobs do not initiate and fail with this error within 3 to 5 seconds. No "All logs" or "Error logs" are available. What could be the possible cause?
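The only error detail I have is the job run's error message, which can be read back via the Glue API even when no log streams exist. A minimal sketch, assuming boto3 credentials are configured and using a placeholder job name "my-glue-job":

```python
# Minimal sketch: read the failure reason recorded on recent job runs.
# "my-glue-job" is a placeholder; replace it with your job's name.
import boto3

glue = boto3.client("glue")

# Fetch the most recent runs and print their states and error messages.
for run in glue.get_job_runs(JobName="my-glue-job", MaxResults=5)["JobRuns"]:
    print(run["Id"], run["JobRunState"], run.get("ErrorMessage", ""))
```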
Upvotes: 2
Views: 1287
Reputation: 987
In the AWS account where I was initiating the Glue job, the quota below was set to zero, which was the reason for the failure.
In Service Quotas for the account (click the dropdown on the username at the top-right corner of the AWS console), search for Glue and check the Applied quota value for
Max task DPUs per account
. If the value is 0, increase it to at least the minimum number of DPUs used by the Glue job, or more.
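The same check can also be scripted. Below is a minimal sketch using boto3's Service Quotas API; the region, the loose "dpu" name filter, and the 100-DPU target are assumptions to adapt to your account:

```python
# Minimal sketch: list Glue quotas and request an increase if the applied
# value is 0. Assumes boto3 credentials with Service Quotas permissions;
# the region and the desired value of 100 DPUs are placeholders.
import boto3

client = boto3.client("service-quotas", region_name="us-east-1")  # region is an assumption

paginator = client.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="glue"):
    for quota in page["Quotas"]:
        # Match the console label "Max task DPUs per account" loosely by name.
        if "dpu" in quota["QuotaName"].lower():
            print(f"{quota['QuotaName']}: applied value = {quota['Value']} "
                  f"(code {quota['QuotaCode']})")
            if quota["Value"] == 0 and quota["Adjustable"]:
                client.request_service_quota_increase(
                    ServiceCode="glue",
                    QuotaCode=quota["QuotaCode"],
                    DesiredValue=100.0,  # at least the DPUs your job needs
                )
```

Note that quota increase requests may take time to be approved, so the console remains the quickest way to confirm the applied value.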
This is just a limit; increasing it does not incur any cost. It is only the maximum number of DPUs that can be used in the account.
The zero value was not set by us. After the jobs ran successfully for a few days, they started failing because the limit was changed automatically.
I found a post describing a different quota being set to 0 automatically because of an AWS issue (that issue may or may not be related). The URL below is given just for reference.
Upvotes: 2