Reputation: 539
I have around 50 AWS Lambda functions and a gulp task to deploy them: the script zips the functions and uploads the archive to S3, then uses the Lambda JS SDK to create/update the functions:
gulp.task('upload', function (callback) {
    var Promise = require('bluebird'); // promisifyAll is a Bluebird API
    var AWS = require('aws-sdk');
    var lambda = Promise.promisifyAll(new AWS.Lambda(), {
        filter: function (name) {
            return name.indexOf('Async') === -1;
        }
    });
    var promises = require('./lambda-config.js').lambda.map(function (lambdaConfig) {
        return lambda.getFunctionConfigurationAsync({
            FunctionName: lambdaConfig.FunctionName
        }).then(function () {
            return lambda.updateFunctionCodeAsync({
                FunctionName: lambdaConfig.FunctionName,
                S3Bucket: lambdaConfig.Code.S3Bucket,
                S3Key: lambdaConfig.Code.S3Key
            });
        }).catch(function () {
            return lambda.createFunctionAsync(lambdaConfig);
        });
    });
    Promise.all(promises).then(() => callback()).catch(callback);
});
I am getting a TooManyRequestsException error. The size of the zip is 13 MB and the unzipped version is 50 MB; I don't think the size is the problem, but rather the concurrent calls to the SDK.
Where can I find information about how many concurrent calls I can make with the AWS SDK? And how do you suggest I solve the TooManyRequestsException error? A code sample is appreciated.
Upvotes: 1
Views: 3850
Reputation: 106
It looks like there is a concurrency problem here. I had the same problem when collecting many promises and then executing them with Promise.all.
I solved it by using the Bluebird promise library, which has a similar function, Promise.map, that also handles concurrency.
The code would be something along these lines:
const Promise = require('bluebird');

Promise.map(arrayWithLambdas, lambda => {
    // Deploy each lambda
}, { concurrency: 5 }) // Control concurrency
    .then(() => {
        // Handle successful deploy of lambdas
    })
    .catch(() => {
        // Handle unsuccessful deploy of lambdas
    });
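If you would rather not add the Bluebird dependency, the same concurrency cap can be sketched with plain promises using a worker-pool pattern. This is only a minimal sketch; `mapConcurrent` is an illustrative name, not part of any SDK:

```javascript
// Run `worker` over `items` with at most `concurrency` tasks in flight.
// Results are returned in the same order as `items`.
function mapConcurrent(items, worker, concurrency) {
    let index = 0;
    const results = new Array(items.length);

    // Each "worker" repeatedly claims the next unprocessed index
    // until the list is exhausted.
    function next() {
        if (index >= items.length) return Promise.resolve();
        const i = index++;
        return Promise.resolve(worker(items[i])).then(function (res) {
            results[i] = res;
            return next();
        });
    }

    // Start up to `concurrency` workers in parallel.
    const workers = [];
    for (let k = 0; k < Math.min(concurrency, items.length); k++) {
        workers.push(next());
    }
    return Promise.all(workers).then(function () { return results; });
}
```

You would pass your array of Lambda configs as `items` and the deploy call as `worker`, with a `concurrency` low enough to stay under your account's limit.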
Upvotes: 0
Reputation: 4045
The size of the zip doesn't seem to be an issue (see http://docs.aws.amazon.com/lambda/latest/dg/limits.html).
I have received the TooManyRequestsException when exceeding the number of concurrent Lambda executions that AWS has authorized for the account. Here are a few things to consider around that kind of limit:
You start with a limit of about 100. See https://aws.amazon.com/lambda/faqs/ for details. As far as I understand, that means that, best case, if nothing else is going on, you can have at most 100 Lambda processes running at any given time.
There is a lag between the time a Lambda process ends and the time that ended process is credited back to your limit. I'm not sure there is clear guidance on how long that lag is; in my experience it ranges from a few seconds to maybe a minute or so. So if you have a process that uses 50 Lambdas and each Lambda takes 60 seconds to run, then best case, with a limit of 100, you can run that process twice per minute, though in reality it will probably be a little more constrained than that.
Your limit can be increased if you send Amazon a service limit increase request (Support -> Create Case -> Service limit increase). You will have to provide info such as number of requests per second, duration of request, etc etc.
VERY IMPORTANT: Lambda processes may automatically retry -- see https://aws.amazon.com/lambda/faqs/, esp. "What happens if my account exceeds the default throttle limit on concurrent executions?" and "What happens if my Lambda function fails during processing an event?". This means that if you've exceeded your limit and you keep testing, you may have a backlog of processes that are still retrying (and using up your limit).
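The throughput bound in the second point above can be checked with a back-of-envelope calculation. This sketch just encodes that arithmetic; the function name and formula are illustrative, not an AWS API:

```javascript
// Estimate the best-case number of full runs per minute for a workload
// that needs `lambdasPerRun` concurrent Lambdas, each running for
// `secondsPerLambda`, under a `concurrencyLimit` on the account.
function maxRunsPerMinute(concurrencyLimit, lambdasPerRun, secondsPerLambda) {
    // How many complete runs fit inside the concurrency limit at once.
    const runsInFlight = Math.floor(concurrencyLimit / lambdasPerRun);
    // Each slot turns over (60 / secondsPerLambda) times per minute.
    return runsInFlight * (60 / secondsPerLambda);
}

// The example above: limit 100, 50 Lambdas per run, 60 s each
// gives at most 2 runs per minute.
```

In practice the credit-back lag described above means real throughput will be somewhat lower than this estimate.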
Based on this, you might want to do the following:
Manage retries yourself, and don't use Amazon's built-in retry mechanism, especially for anything interactive and/or for testing purposes. e.g. in Node:
var lambda = new AWS.Lambda({
    region: REGION,
    maxRetries: 0,
    ....
});
You can then handle throttling yourself with something like:
lambda.invoke(lambda_params, function (err, obj) {
    if (err) {
        if (err.toString().match(/TooManyRequestsException/)) ...
    }
});
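With the SDK's built-in retries disabled, one way to manage retries yourself is exponential backoff. A minimal sketch, assuming a promise-returning operation; the function name, attempt count, and delays are illustrative, not part of the SDK:

```javascript
// Retry `op` (a function returning a Promise) when it fails with a
// throttling error, doubling the delay each time. Other errors, or
// running out of attempts, reject immediately.
function withBackoff(op, attempts, baseDelayMs) {
    return op().catch(function (err) {
        if (attempts <= 1 || !/TooManyRequestsException/.test(String(err))) {
            throw err; // out of attempts, or not a throttling error
        }
        return new Promise(function (resolve) {
            setTimeout(resolve, baseDelayMs);
        }).then(function () {
            return withBackoff(op, attempts - 1, baseDelayMs * 2);
        });
    });
}
```

You could then wrap each `lambda.invoke` (promisified) call in `withBackoff(...)` instead of relying on Amazon's automatic retry behavior.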
If you want to be sure to stop any Lambdas that may still be running out of control (whether due to retries or because you may have had a bug in your code), delete your Lambda function, e.g. from the command line:
aws lambda delete-function --function-name my_outofcontrol_func
Use the testing functionality a lot (AWS Dashboard -> Lambda -> choose your function -> Test) before you try to scale. The logs are also helpful -- if you invoke 50 Lambdas for a test, you can go to the log and see what happened to those 50 Lambdas.
When you need to scale, request the increase from Amazon ahead of time. It can take several days, and in my experience (and that of others I've heard from) they tend to grant less than you ask for. It helps if you have a history of actually using the capacity you already have. Beware: once you get to around 1000 nodes, it doesn't take much testing to blow through the free tier and start paying for each use (all the more reason to do the testing described above).
Upvotes: 2