Reputation: 2527
I'm running many small jobs with AWS Batch. The jobs can run with just 200 MB of memory, which I have verified using AWS Lambda. But when specifying the minimum memory for the job, I find that if I use any value smaller than 1024 MB, the job simply fails without ever starting. Does this mean I can only use at least 1024 MB of memory in this case? I thought I could use 512 MB because of the existence of t2.nano.
P.S. I find that t2.nano is only available in us-east-1, while I'm working in us-east-2. Maybe that is the cause?
Upvotes: 0
Views: 515
Reputation: 995
If you specify 512 MB for the job and none of your compute resources have 512 MB or more of memory available to satisfy this requirement, then the job cannot be placed in your compute environment.
Because of platform memory overhead and memory occupied by the system kernel, this number differs from the installed memory amount that is advertised for Amazon EC2 instances. For example, an m4.large instance has 8 GiB of installed memory. However, this does not always translate to exactly 8192 MiB of memory available for jobs when the compute resource registers.
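For reference, the memory requirement is set in the job definition's `containerProperties`. A minimal sketch (the job name, image, and command below are illustrative, not taken from your setup) might look like:

```json
{
  "jobDefinitionName": "small-job",
  "type": "container",
  "containerProperties": {
    "image": "amazonlinux",
    "vcpus": 1,
    "memory": 512,
    "command": ["echo", "hello"]
  }
}
```

A job defined this way is only placed on an instance that registers at least 512 MiB of *available* memory. Because of the overhead described above, an instance with only 0.5 GiB of installed memory (such as t2.nano) registers less than 512 MiB available, so it cannot host a 512 MiB job; that is likely why only instances with more installed memory satisfy your jobs.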
For more information, please check:
https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html
Upvotes: 1