Reputation: 846
I've looked at several tutorials on mounting S3 buckets using s3fs, but they seem geared towards key-based authentication. Right now, if I run "aws s3 ls" (or "aws s3 ls bucketname"), I can see a lot of directories, and the contents within those directories, thanks to the instance profile (IAM). So since I already have access and could technically cp files using the AWS CLI, I want to "mount" one of those directories so that when I access my EC2 instance remotely via an SFTP program, or via a remote Python interpreter, that S3 bucket and all its contents appear as just another directory attached to the system.
Is this possible? If so, what set of commands can I run to mount the S3 bucket that I already have access to via AWS CLI commands, so that it acts as just another directory on the local system?
Upvotes: 0
Views: 1889
Reputation: 4532
If you are on Debian, you can use these commands to get set up:
cd ~/
sudo apt-get install s3fs
# store the credentials s3fs will use (replace the placeholders)
echo ACCESS_KEY:SECRET_KEY > ~/.passwd-s3fs
# confirm the entry was added
cat ~/.passwd-s3fs
# s3fs requires the credentials file to be readable only by you
chmod 600 ~/.passwd-s3fs
# create a mount point and mount the bucket
mkdir ~/BUCKETNAME-s3-drive
s3fs BUCKETNAME ~/BUCKETNAME-s3-drive
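To unmount later, run fusermount -u ~/BUCKETNAME-s3-drive. If you want the mount to persist across reboots, s3fs-fuse also supports an /etc/fstab entry; a minimal sketch (BUCKETNAME and the mount path are placeholders, and allow_other requires user_allow_other to be enabled in /etc/fuse.conf):

# mount at boot once the network is up; when mounted by root,
# s3fs reads credentials from /etc/passwd-s3fs
BUCKETNAME /home/USER/BUCKETNAME-s3-drive fuse.s3fs _netdev,allow_other 0 0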
Fully explained here - https://impressto.net/connect-to-s3-from-your-local-file-system/
Upvotes: 0
Reputation: 4340
This question addresses using instance profile credentials with s3fs instead of access keys: s3fs with aws ec2 instance and using instance profiles
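For reference, the key piece from that question is the iam_role mount option, which tells s3fs to pull temporary credentials from the EC2 instance metadata instead of a password file. A minimal sketch, assuming the instance profile grants access to the bucket (BUCKETNAME and the mount-point name are placeholders):

sudo apt-get install s3fs
mkdir ~/BUCKETNAME-s3-drive
# iam_role=auto makes s3fs discover the instance profile's
# temporary credentials from the metadata service,
# so no ~/.passwd-s3fs file is needed
s3fs BUCKETNAME ~/BUCKETNAME-s3-drive -o iam_role=auto
# a sub-directory (prefix) of the bucket can be mounted instead:
# s3fs BUCKETNAME:/some/prefix ~/BUCKETNAME-s3-drive -o iam_role=auto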
Upvotes: 1