Reputation: 4192
In AWS, I am using an S3 trigger to monitor a bucket and invoke a Lambda function. The function then reads the S3 object and populates a DynamoDB table.
The problem is that I now need to monitor multiple directories for changes, and each directory has metadata (not available in the object itself) that needs to be written to DynamoDB. I do not know of a way to pass this metadata from the trigger to the Lambda. I currently have the Lambda duplicated for each directory, with the metadata saved as environment variables for each copy. This works, but feels like a terrible hack.
How can I use a single Lambda to monitor multiple directories and pass the per-directory metadata from the trigger to the Lambda?
Upvotes: 1
Views: 1717
Reputation: 238937
Sadly, you can't add extra information to the S3 notification records. But if the folders are part of the same bucket, a single Lambda should be enough, in my opinion.
This is because you can differentiate between directories based on the prefixes of the S3 object keys.
For example, if you upload the following two files to the bucket:
dir1/file1.csv
dir2/file3.txt
your Lambda would be triggered for each of them. In the Lambda you can use basic if-else matching to check whether the object key's prefix is dir1 or dir2, and based on that choose different metadata to write to DynamoDB.
Very roughly, for two folders, your Lambda function could contain (pseudo-code):
if object_key.startswith('dir1/'):
    metadata = {some_metadata_for_dir1}
elif object_key.startswith('dir2/'):
    metadata = {some_metadata_for_dir2}

table.put_item(Item={'object_key': object_key, **metadata})
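For completeness, a minimal sketch of what the full handler could look like; the table name (my-table), the prefix-to-metadata mapping and its values, and the item layout are just assumptions for illustration:

import urllib.parse
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-table')  # assumed table name

# Per-directory metadata; the values here are placeholders.
PREFIX_METADATA = {
    'dir1/': {'source': 'dir1', 'owner': 'team-a'},
    'dir2/': {'source': 'dir2', 'owner': 'team-b'},
}

def handler(event, context):
    for record in event['Records']:
        # Object keys in S3 notification records are URL-encoded.
        object_key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # Pick the metadata whose prefix matches the object key.
        metadata = next(
            (meta for prefix, meta in PREFIX_METADATA.items()
             if object_key.startswith(prefix)),
            {},
        )
        table.put_item(Item={'object_key': object_key, **metadata})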
You are basically doing this anyway, but passing the metadata through environment variables to different Lambda functions. Obviously, if you have many folders to monitor, you can store the metadata outside of the Lambda, e.g. in another DynamoDB table, or in Parameter Store if the metadata changes often.
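As a rough sketch of the Parameter Store variant (the parameter naming scheme and JSON layout are assumptions), the Lambda could look the metadata up at runtime instead of hard-coding it:

import json
import boto3

ssm = boto3.client('ssm')

def metadata_for_prefix(prefix):
    # Assumes parameters such as /s3-metadata/dir1 hold the metadata as JSON.
    response = ssm.get_parameter(Name='/s3-metadata/' + prefix.rstrip('/'))
    return json.loads(response['Parameter']['Value'])

That way, adding a directory or changing its metadata only means creating or updating a parameter, not redeploying the Lambda.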
Hope this helps.
Upvotes: 1