Reputation: 5194
I have a Lambda function, eventFetcher, which calls an external API to fetch data and sends the response to SQS.
The API call happens in a loop for as long as the response keeps returning data.
The problem is that the data is so large that it takes more than 15 minutes (900 s) to complete the transaction.
Is there any hook or lifecycle method that I can attach to the Lambda so that it gets called before the Lambda is terminated? (In that hook, I want to perform some cleanup activities, like closing open connections.)
Any other alternative would also be appreciated.
Thanks!!!
Upvotes: 0
Views: 1535
Reputation: 9605
I tried to answer the same thing in this question: Attach additional info to Lambda time-out message?. The idea is to set a SIGALRM alarm for one second less than the function's remaining runtime, so the alarm handler fires just before Lambda terminates the function and can run your cleanup code. Below is an example:
import signal
import logging
import time

LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

def handler(event, context):
    # Set up the alarm for the remaining runtime minus one second
    signal.alarm(int(context.get_remaining_time_in_millis() / 1000) - 1)
    time.sleep(10)

def timeout_handler(_signal, _frame):
    print("handle the timeout here")
    raise Exception('Time exceeded')

# Register the SIGALRM handler at module load time, before the first invocation
signal.signal(signal.SIGALRM, timeout_handler)
And the corresponding log:
handle the timeout here
[ERROR] Exception: Time exceeded
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 13, in handler
time.sleep(10)
File "/var/task/lambda_function.py", line 19, in timeout_handler
raise Exception('Time exceeded')
END RequestId: 7792ab80-20bc-4759-8613-088511b7c933
REPORT RequestId: 7792ab80-20bc-4759-8613-088511b7c933 Duration: 1002.68 ms Billed Duration: 1003 ms Memory Size: 128 MB Max Memory Used: 48 MB Init Duration: 119.91 ms
Upvotes: 0
Reputation: 238081
There is no such hook or lifecycle event, but it is something you can implement yourself relatively easily.
Since your function checks for completion of the API call iteratively, you can measure the remaining execution time in each iteration and, when you get close to the 15-minute limit, send a notification to a dedicated SNS topic, as in the sketch below.
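A minimal sketch of that idea, assuming a pre-created SNS topic; the topic ARN and the fetch_page/send_to_sqs helpers are hypothetical placeholders, not something from the original question:

import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:eventFetcher-timeouts"  # hypothetical topic ARN

def handler(event, context):
    while True:
        # When less than ~30 seconds of runtime remain, notify and stop early
        if context.get_remaining_time_in_millis() < 30_000:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="eventFetcher is about to time out",
                Message="Stopping early and cleaning up before the 15-minute limit.",
            )
            # close open connections, persist progress, etc.
            break
        data = fetch_page()    # hypothetical helper: one external API call
        if not data:           # the API stopped returning data
            break
        send_to_sqs(data)      # hypothetical helper: forward the response to SQS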
Alternatively, you could configure a dead-letter queue (DLQ) for your SQS queue. If a message fails to be processed by your Lambda function, it will go to the DLQ. This could trigger another process or Lambda function that does all the cleanup for you; a rough configuration sketch follows.
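A rough sketch of the DLQ setup with boto3; the queue URL, DLQ ARN, and maxReceiveCount value are hypothetical and would need to match your own resources:

import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical identifiers; replace with your own queue URL and DLQ ARN
SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/eventFetcher-queue"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:eventFetcher-dlq"

# After maxReceiveCount failed processing attempts, a message is moved to the DLQ,
# which can in turn trigger another Lambda function that performs the cleanup.
sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            "maxReceiveCount": "3",
        })
    },
)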
Upvotes: 2