Adam

Reputation: 442

serverless deployment error - Code storage limit exceeded

Deployment of my serverless project has started returning the following error

An error occurred while provisioning your stack... [lambda name][GUID] - Code storage limit exceeded..

I've tried deleting zip packages, thinking there was a limit to how many upload packages could be stored in the S3 bucket, but no luck.

As mentioned by Trent below, I've looked at the deployment limits, but with a compressed package of 2.1 MB (8 MB uncompressed) I can't see what limits I would be exceeding.

Any suggestions on what might be causing this?

(My) Solution:

I was hoping to get a better understanding of the underlying problem and treated this as a last resort, but deleting the stack from CloudFormation and redeploying the serverless project has fixed the issue.

Upvotes: 6

Views: 9799

Answers (3)

Frank

Reputation: 710

For anybody else finding this through Google, I hope this helps.

What is causing it?

AWS has a limit of 75GB on the size of all the deployment packages that can be uploaded per region. This includes all of your Lambda functions and all their historical versions combined in a given region.

The error can happen if you have a large number of Lambda functions that have been deployed many times. Each deployment creates a new version, and this adds up over time.

Solution 1

If you do not need to version your Lambda functions, you can turn off Lambda versioning by setting it in your serverless.yml.

```yaml
provider:
  name: aws
  versionFunctions: false
```

Solution 2

Alternatively, you can manually remove older Lambda versions. You can use the serverless-prune-plugin to automate the process for you. The plugin can be used to do a one-time clean up, or can be configured in your serverless.yml to auto prune older Lambda versions after each deployment.
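As a sketch of the auto-prune setup described above, the plugin can be registered and configured in serverless.yml like this (keeping the 3 most recent versions is an arbitrary choice, not a recommendation):

```yaml
plugins:
  - serverless-prune-plugin

custom:
  prune:
    automatic: true   # prune after each deployment
    number: 3         # how many old versions to keep per function
```

For a one-off clean-up you can also run the plugin manually, e.g. `sls prune -n 3`.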

Here are more details about this error: https://seed.run/docs/serverless-errors/code-storage-limit-exceeded

Upvotes: 8

ozbey

Reputation: 160

Lambda creates a version of your functions on each deployment, so frequent deploys can cause storage problems. Your solution is correct; however, you can also remove other unused versions of your functions with a simple script.

First, list the versions of your function:

```javascript
// AWS SDK for JavaScript (v2)
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

const params = {
  FunctionName: 'functionName'
};
lambda.listVersionsByFunction(params, function(err, data) {
  if (err) console.log(err, err.stack);
  else     console.log(data);
});
```

Then decide which ones you want to delete:

```javascript
const params2 = {
  FunctionName: 'functionName',
  Qualifier: '1' // version of the function you want to delete
};
lambda.deleteFunction(params2, function(err, data) {
  if (err) console.log(err, err.stack);
  else     console.log(data);
});
```
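To glue the two calls together you need to decide which versions are safe to delete. A minimal sketch of that selection logic is below; `versionsToPrune` is a hypothetical helper (not part of the AWS SDK) that takes the `Versions` array returned by `listVersionsByFunction` and keeps `$LATEST` plus the most recent numbered versions. Note that deleting a version that an alias still points to will fail.

```javascript
// Hypothetical helper: given the Versions array from
// lambda.listVersionsByFunction, return the version numbers to delete,
// always keeping $LATEST and the `keep` most recent numbered versions.
function versionsToPrune(versions, keep) {
  const numbered = versions
    .map(v => v.Version)
    .filter(v => v !== '$LATEST')
    .sort((a, b) => Number(a) - Number(b)); // oldest first
  return numbered.slice(0, Math.max(0, numbered.length - keep));
}

// Example: versions $LATEST, 1, 2, 3 with keep = 2 → delete ['1']
```

Each version number returned can then be passed as the `Qualifier` to `deleteFunction` as shown above.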

Upvotes: 0

Trent Bartlem

Reputation: 2253

http://docs.aws.amazon.com/lambda/latest/dg/limits.html

Every Lambda function is allocated a fixed amount of certain resources regardless of its memory setting, and code storage is limited both per function and per account.

Lambdas have invocation limits, but also deployment limits, which is where your problem lies. Look through the limits and work out which one has been breached.
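One way to check the account-level code storage limit against current usage is the `get-account-settings` call, which reports both in bytes for the current region (requires configured AWS credentials; output shape shown is from the AWS CLI's documented response):

```shell
aws lambda get-account-settings \
  --query '{LimitBytes: AccountLimit.TotalCodeSize, UsedBytes: AccountUsage.TotalCodeSize}'
```

If `UsedBytes` is at or near `LimitBytes`, old function versions are the usual culprit.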

Upvotes: 2
