Reputation: 949
I'm writing some end-to-end tests that reuse and mutate test data, and thus require files to be stored and modified in an external, central location. Because of the use case, concurrent access will be limited, and I decided to go with an optimistic concurrency solution on S3 rather than creating a database, which seemed like overkill for a small amount of test data.
My plan is to create a list of small files on S3, and each time a test needs data, get one file, delete it from the server, and then write back a new file after the data is mutated.
To minimize the chances of two users accessing the same file, I'd like to atomically get and delete the file. Is this possible? Or is the lag time on S3 object deletion large enough that it's not meaningful anyway?
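For reference, the get/delete/write-back cycle I have in mind looks roughly like this (a sketch using boto3; the bucket name and helper names are placeholders):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-test-data-bucket"  # placeholder bucket name

    def check_out(key):
        """Fetch one data file and delete it, claiming it for this test run."""
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
        s3.delete_object(Bucket=BUCKET, Key=key)  # a separate call from the GET above
        return body

    def check_in(key, data):
        """Write the mutated data back for the next test run."""
        s3.put_object(Bucket=BUCKET, Key=key, Body=data)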
Upvotes: 2
Views: 3178
Reputation: 179374
Delete operations are not atomic in the sense that you need them to be.
Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all regions.
http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel
When an object is deleted, there is no guarantee that subsequent requests for it will fail; particularly over the next few seconds, a GET may still return the object. The delete normally propagates very quickly, but a stale read is still distinctly possible. S3's design places a priority on newly created objects being immediately available, while the bucket index changes representing deletes and overwrites might be best described (though somewhat naïvely, I assume) as being handled with a lower priority.
Additionally, if you delete an object that someone else is in the middle of downloading, the deletion succeeds, but that in-progress download will never be interrupted, regardless of the size of the file or how long the download takes.
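To see what this means in practice, a GET issued right after a DELETE can go either way during that window (a sketch using boto3; the bucket and key are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "my-test-data-bucket"   # placeholder bucket name
    KEY = "testdata/file-001.json"   # placeholder key

    s3.delete_object(Bucket=BUCKET, Key=KEY)

    # Under eventual consistency, this GET may still succeed for a short
    # window after the delete and return the old object.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        print("still readable:", len(obj["Body"].read()), "bytes")
    except ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchKey":
            print("delete is now visible")
        else:
            raise

So if two test runs GET the same key before either DELETE has propagated, both will read the same file, and the second DELETE simply succeeds as a no-op.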
Upvotes: 3