Reputation: 4629
I have a problem regarding caching with S3. Basically, I have a Lambda function that reads a file from S3 which is used as configuration. This file is JSON, and I am using Python with boto3 to extract the needed info.
Snippet of my code:
import json
import boto3

s3 = boto3.resource('s3')
bucketname = "configurationbucket"
itemname = "conf.json"
obj = s3.Object(bucketname, itemname)
body = obj.get()['Body'].read()
json_parameters = json.loads(body)

def my_handler(event, context):
    # using json_parameters data
The problem is that when I change the JSON content and upload the file to S3 again, my Lambda still reads the old values, which I suppose is due to S3 caching somewhere.
Now I think there are two ways to solve this problem:
1. flush whatever cache is serving the old file, so the Lambda picks up the new content
2. force the Lambda to reload the file from S3 on every invocation

I prefer the first solution, because I think it will reduce computation time (reloading the file is an expensive procedure). So, how can I flush my cache? I couldn't find a simple way to do this in the console or in the AWS guides.
Upvotes: 0
Views: 2347
Reputation: 8593
The problem is that the code outside of the function handler is initialized only once per execution environment. It won't be re-initialized while the Lambda is warm. Move the S3 read inside the handler so the file is fetched on every invocation:
def my_handler(event, context):
    # read the configuration from S3 on every invocation
    obj = s3.Object(bucketname, itemname)
    body = obj.get()['Body'].read()
    json_parameters = json.loads(body)
    # use json_parameters data
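If downloading the file on every invocation turns out to be too expensive, a possible compromise (not part of the original answer, just a minimal sketch) is to keep the warm-container cache but invalidate it with a cheap metadata check: compare the object's ETag, fetched via a HEAD request with obj.load(), against the ETag of the copy you already parsed, and only re-download the body when they differ. The bucket and key names below are the ones from the question.

import json
import boto3

s3 = boto3.resource('s3')
bucketname = "configurationbucket"
itemname = "conf.json"

# Cached config and the ETag of the version we last read.
# These persist across invocations while the container is warm.
cached_parameters = None
cached_etag = None

def load_parameters():
    global cached_parameters, cached_etag
    obj = s3.Object(bucketname, itemname)
    obj.load()  # HEAD request: fetches metadata (including ETag), not the body
    if obj.e_tag != cached_etag:
        # Cold start, or the file changed on S3: download the body once
        cached_parameters = json.loads(obj.get()['Body'].read())
        cached_etag = obj.e_tag
    return cached_parameters

def my_handler(event, context):
    json_parameters = load_parameters()
    # use json_parameters data

The HEAD request is much cheaper than a full GET for large files; if the configuration is only a few KB, re-reading it every time as shown above is simpler and effectively just as fast.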
Upvotes: 4