Garvit Jain

Reputation: 635

AWS Lambda S3 event infinite loop

I want to use S3 events to publish to AWS Lambda whenever a video file (.mp4) gets uploaded, so that it can be compressed. The problem is that the path to the video file is stored in RDS, so I want the path to remain the same after compression. From what I've read, replacing the file will trigger the ObjectCreated event again, leading to an infinite loop.

Is there any way to replace the file without triggering any event? What are my options?

Upvotes: 3

Views: 1872

Answers (2)

Garvit Jain

Reputation: 635

There is an inelegant solution to this problem, and it is not documented anywhere.

The event parameter passed to the Lambda function contains a userIdentity dict, which in turn contains a principalId. For an event that originated from AWS Lambda (such as the S3 object update described in the question), this principalId has the name of the Lambda function appended at the end.

Therefore, by checking the principalId you can deduce whether the event came from Lambda, and compress or skip accordingly.
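
A minimal sketch of such a handler, assuming Python (the compress_and_replace helper is a hypothetical placeholder; the function name is read from the standard AWS_LAMBDA_FUNCTION_NAME environment variable that the Lambda runtime sets):

    import os
    import urllib.parse

    # The Lambda runtime sets this variable to the name of the current function.
    FUNCTION_NAME = os.environ["AWS_LAMBDA_FUNCTION_NAME"]

    def handler(event, context):
        for record in event["Records"]:
            principal_id = record.get("userIdentity", {}).get("principalId", "")

            # Events caused by this function's own overwrite carry a
            # principalId ending in the function name -- skip them to
            # break the loop.
            if principal_id.endswith(FUNCTION_NAME):
                continue

            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            compress_and_replace(bucket, key)

    def compress_and_replace(bucket, key):
        # Placeholder: download the object, compress it (e.g. with ffmpeg),
        # and overwrite the same key so the path stored in RDS stays valid.
        ...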

Upvotes: 5

Chris Williams

Reputation: 35238

You are correct that you cannot completely distinguish your own Lambda's overwrite from a normal upload. From the documentation, the following events are supported:

  • s3:ObjectCreated:Put – An object was created by an HTTP PUT operation.
  • s3:ObjectCreated:Post – An object was created by an HTTP POST operation.
  • s3:ObjectCreated:Copy – An object was created by an S3 copy operation.
  • s3:ObjectCreated:CompleteMultipartUpload – An object was created by the completion of an S3 multipart upload.
  • s3:ObjectCreated:* – An object was created by one of the event types listed above or by a similar object creation event added in the future.
  • s3:ReducedRedundancyObjectLost – An S3 object stored with Reduced Redundancy has been lost.
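
As an illustration, a bucket can be subscribed to s3:ObjectCreated:* with a suffix filter so that only .mp4 uploads invoke the function; in this boto3 sketch the bucket name and function ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Subscribe a Lambda function to all object-created events for .mp4 keys.
    # The bucket name and function ARN are placeholders.
    s3.put_bucket_notification_configuration(
        Bucket="my-video-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:compress-video",
                    "Events": ["s3:ObjectCreated:*"],
                    "Filter": {
                        "Key": {"FilterRules": [{"Name": "suffix", "Value": ".mp4"}]}
                    },
                }
            ]
        },
    )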

The architecture that I would generally see for this type of problem uses two S3 buckets:

  • One S3 bucket stores the source material without any modification; this bucket triggers the Lambda event.
  • One S3 bucket stores the processed artifact, i.e. the compressed output.

By doing this you retain the original and can rerun the compression if needed.
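
A sketch of this approach, again assuming Python (the destination bucket name and the compress helper are placeholders; because the event trigger is configured on the source bucket only, writing to the second bucket fires no new event, and the key stays the same so the path in RDS only needs to point at the destination bucket):

    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical destination bucket for compressed output.
    DEST_BUCKET = "my-videos-compressed"

    def handler(event, context):
        for record in event["Records"]:
            src_bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

            # Download the original, compress it, and upload the result
            # under the same key in the second bucket.
            local_in = "/tmp/" + key.replace("/", "_")
            s3.download_file(src_bucket, key, local_in)
            local_out = compress(local_in)  # placeholder for e.g. an ffmpeg call
            s3.upload_file(local_out, DEST_BUCKET, key)

    def compress(path):
        # Stub: run the actual video compression and return the output path.
        return path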

Upvotes: 4
