Reputation: 565
I am working on a setup that goes something like this: Server1 and Server2 are EC2 instances, each with an EBS volume attached. Both servers run a Python/Flask application, but their configs differ and live on their respective EBS volumes. Apart from the config, the codebase is common, and I need to place it in a common location accessible to both Server1 and Server2. Can I use an S3 bucket for it?
This might be a very silly question, but since I could not find a definitive answer, I am asking it anyway. Would an S3 bucket be visible as a drive where the codebase could be hosted and then picked up by the application running on Servers 1 and 2? I found some utilities like TntDrive, but I want to know if there is a better/more elegant way to get it done. Simply put, can I make this S3 bucket usable as a shared drive on Servers 1 and 2?
+-----------+             +-----------+
| Server 1  |             | Server 2  |
+---------+-+             +--+--------+
+----+    |                  | +----+
|EBS1|    |                  | |EBS1|
+----+    |                  | +----+
          |  +------------+  |
          |  |            |  |
          +->|  S3 bucket |<-+
             |            |
             +------------+
Thanks!
Upvotes: 0
Views: 113
Reputation: 19563
Attempting to use S3 as an NFS or SAN volume will not behave as expected. It's not a block device, so block operations don't actually work on the "volume". S3 filesystem layers like s3fs work by copying files to a temporary directory before operating on them.
The best route here is to deploy your code to the instances automatically. I use a process that downloads and extracts a zip file from S3; it checks S3 every 5 minutes or so for new code.
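For illustration, here is a minimal sketch of that kind of poll-and-extract loop, assuming boto3 is installed and the instance role has s3:GetObject permission. The bucket name, object key, paths, and restart step are placeholders, not part of my actual setup:

# Poll S3 for a new code zip and extract it locally.
# Bucket, key, paths, and interval below are hypothetical examples.
import time
import zipfile
import boto3

BUCKET = "my-code-bucket"          # placeholder bucket holding the code zip
KEY = "releases/codebase.zip"      # placeholder object key
LOCAL_ZIP = "/tmp/codebase.zip"
DEPLOY_DIR = "/opt/myapp"          # where the shared codebase gets extracted
POLL_SECONDS = 300                 # roughly every 5 minutes

s3 = boto3.client("s3")
last_etag = None

while True:
    # HEAD the object; a changed ETag means a new build was uploaded.
    head = s3.head_object(Bucket=BUCKET, Key=KEY)
    etag = head["ETag"]
    if etag != last_etag:
        s3.download_file(BUCKET, KEY, LOCAL_ZIP)
        with zipfile.ZipFile(LOCAL_ZIP) as zf:
            zf.extractall(DEPLOY_DIR)
        last_etag = etag
        # In practice you would also reload the Flask app here,
        # e.g. via systemd or your process manager.
    time.sleep(POLL_SECONDS)

You could run something like this as a small daemon on each instance, or do the equivalent with a cron job and a shell script.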
There are other options to handle deployment. Choose whatever works best for your case. Just don't attempt to run the code directly from S3.
Upvotes: 1