mrksbnch

Reputation: 1842

SQS and Lambda: Limit max. amount of processed messages

When using SQS as an event source for a Lambda function, is there a way to limit the maximum number of "active" messages to x? So, imagine there's an SQS queue with 1000 messages, but instead of trying to process as many messages as possible (up to the default concurrency limit of 1000), we only want to process up to x messages at the same time. This obviously means it will take longer to process all messages, but it would give us better control over, e.g., writes to a database.

Also, if a message can't be processed (e.g. because an error occurred in the Lambda function), is it appended to the end of the queue (so all other messages come first), or is there a way to prioritise it after a certain waiting time (visibility timeout)?

Many thanks

Upvotes: 3

Views: 3082

Answers (2)

John Rotenstein

Reputation: 269091

From Reserving Concurrency for a Lambda Function - AWS Lambda:

You can configure a function with reserved concurrency to guarantee that it can always reach a certain level of concurrency. Reserving concurrency also limits the maximum concurrency for the function.

...

Your function can't scale out of control – Reserved concurrency also limits your function from using concurrency from the unreserved pool, capping its maximum concurrency. Reserve concurrency to prevent your function from using all the available concurrency in the region, or from overloading downstream resources.
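As a minimal sketch (the function name and limit here are hypothetical), reserved concurrency can be set with boto3:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap the function at 10 concurrent executions. The SQS event source
# will not scale the function beyond this limit, so at most 10 batches
# of messages are processed at the same time.
lambda_client.put_function_concurrency(
    FunctionName="my-queue-consumer",  # hypothetical function name
    ReservedConcurrentExecutions=10,
)
```

The same setting is also available in the console under the function's concurrency configuration.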

If a message is not processed within the visibility timeout period, it becomes visible on the queue again. There is no guarantee of ordering of messages in Amazon SQS unless you are using a FIFO queue, which has further limitations on in-flight messages.
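As a sketch of the retry side (queue URL and value are hypothetical), the visibility timeout is a plain queue attribute:

```python
import boto3

sqs = boto3.client("sqs")

# If a Lambda invocation fails, the message stays invisible until this
# timeout expires, after which it becomes receivable again. It is not
# moved to the end of the queue; standard queues have no strict order.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # hypothetical
    Attributes={"VisibilityTimeout": "300"},  # seconds
)
```

The AWS documentation suggests setting the source queue's visibility timeout to at least six times the function's timeout, so a message is not redelivered while an invocation is still working on it.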

Upvotes: 1

JohnB

Reputation: 948

As for throttling the queue itself, you could have added a delivery delay or enabled long polling, but since yours is event-driven this isn't an option. So that leaves throttling your Lambda to however many concurrent executions you want.
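For reference (not applicable to the event-driven setup above), both of those options are plain queue attributes; the queue URL and values here are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# DelaySeconds postpones when newly sent messages become visible;
# a non-zero ReceiveMessageWaitTimeSeconds turns on long polling.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # hypothetical
    Attributes={
        "DelaySeconds": "30",                  # delivery delay, up to 900
        "ReceiveMessageWaitTimeSeconds": "20", # long polling, max 20
    },
)
```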

As for the messages which can't be processed, that depends on the queue type:

- a standard queue, which applies no prioritisation to which message is picked up next
- a FIFO queue, which will try to process the message again, since it is next in line chronologically

But if you catch the error, you should send the message straight to a dead-letter queue to prevent unnecessary retries.
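A sketch of that wiring (queue URL, DLQ ARN, and retry count are hypothetical): a redrive policy on the source queue moves a message to the dead-letter queue once it has been received too many times:

```python
import json
import boto3

sqs = boto3.client("sqs")

# After 3 failed receives, SQS moves the message to the dead-letter
# queue instead of retrying it forever.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",  # hypothetical
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",  # hypothetical
            "maxReceiveCount": "3",
        })
    },
)
```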

Although, by throttling, you're giving up the scalability AWS provides, which works against its native architecture. I'd recommend going back to the database and seeing whether any of that work can be improved there instead, to avoid throttling.

Upvotes: 2
