ThomasVdBerge

Reputation: 8140

Auto create S3 Buckets on localstack

Using localstack in my docker-compose mainly to mimic S3.

I know I can create buckets, that's not the issue. What I would like to do is automatically create the buckets when I run a docker-compose up.

Is there something built in already for localstack?

Upvotes: 83

Views: 53800

Answers (5)

Dmitri Algazin

Reputation: 3456

Just spent a day getting this right. Here we go!

The goal is: run docker-compose with localstack, create the S3 bucket s3://this-is-my-bucket, and copy some JSON files over from the project source.

LOCALSTACK_BUILD_VERSION: latest as of today, 2.1.1.dev

Docker compose:

localstack:
  image: localstack/localstack:latest
  networks:
    - hmpps
  container_name: localstack
  ports:
    - "4566-4597:4566-4597"
    - 8999:8080
    - 9080:9080
  environment:
    - SERVICES=s3
    - PORT_WEB_UI=9080
    - DEBUG=${DEBUG- }
    - DATA_DIR=/tmp/localstack/data
    - DOCKER_HOST=unix:///var/run/docker.sock
    - DEFAULT_REGION=eu-west-2
  volumes:
    - "./src/test/resources/test-data:/tmp/localstack/test-data"
    - "$PWD/src/test/resources/localstack/setup-s3.sh:/etc/localstack/init/ready.d/init-aws.sh"
    - "/var/run/docker.sock:/var/run/docker.sock"

Two important things here:

  1. src/test/resources/test-data - is the source folder for your JSON files
  2. src/test/resources/localstack/setup-s3.sh - is a script file that runs during docker-compose up


Script file setup-s3.sh

#!/usr/bin/env bash
set -e
export TERM=ansi
export AWS_ACCESS_KEY_ID=foobar
export AWS_SECRET_ACCESS_KEY=foobar
export AWS_DEFAULT_REGION=eu-west-2
export PAGER=

echo "S3 Configuration started"

aws --endpoint-url=http://localhost:4566 s3 mb s3://this-is-my-bucket
aws --endpoint-url=http://localhost:4566 s3 cp /tmp/localstack/test-data/ s3://this-is-my-bucket/chart --recursive

echo "S3 Configured"

Note the --recursive param on the aws s3 cp command. That took us 4 hours!

It does not work without --recursive. Crazy!

localstack docker logs:

2023-06-22 17:07:00 S3 Configuration started
2023-06-22 17:07:01 2023-06-22T16:07:01.070  INFO --- [   asgi_gw_0] localstack.request.aws     : AWS s3.CreateBucket => 200
2023-06-22 17:07:01 make_bucket: this-is-my-bucket
2023-06-22 17:07:01 2023-06-22T16:07:01.431  INFO --- [   asgi_gw_0] localstack.request.aws     : AWS s3.PutObject => 200
upload: ../../../tmp/localstack/test-data/1a.json to s3://this-is-my-bucket/chart/1a.json
2023-06-22 17:07:01 S3 Configured

The UI for the localstack S3 bucket is at http://s3.localhost.localstack.cloud:4566/this-is-my-bucket

Not sure if it's required, but we have these application-test.yml properties:

aws:
  region-name: eu-west-2
  endpoint-url: "http://localhost:4566"
  access_key: foobar
  secret_access_key: foobar
  s3:
    access_key: foobar
    secret_access_key: foobar
    bucket_name: this-is-my-bucket

The next goal is to write an integration test, point it to the localstack s3://this-is-my-bucket bucket, load the "chart/1a.json" file, and run the assertion magic.
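In the meantime, a quick way to sanity-check the uploaded data from the host is the plain AWS CLI pointed at the localstack endpoint (a minimal sketch, assuming the same dummy credentials as in setup-s3.sh):

#!/usr/bin/env bash
# Point the AWS CLI at localstack with the same dummy credentials the init script uses.
export AWS_ACCESS_KEY_ID=foobar
export AWS_SECRET_ACCESS_KEY=foobar
export AWS_DEFAULT_REGION=eu-west-2

# List what the init script uploaded...
aws --endpoint-url=http://localhost:4566 s3 ls s3://this-is-my-bucket/chart/

# ...and fetch one file to stdout to eyeball its contents.
aws --endpoint-url=http://localhost:4566 s3 cp s3://this-is-my-bucket/chart/1a.json -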

Thanks!

Upvotes: 3

Matthew Warman

Reputation: 3442

A change that came in with this commit, available since version 0.10.0.

When a container is started for the first time, it will execute files with extensions .sh that are found in /docker-entrypoint-initaws.d. Files will be executed in alphabetical order. You can easily create aws resources on localstack using awslocal (or aws) cli tool in the initialization scripts.

Before v1.1.0

version: '3.7'
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3
    ports:
      - "4566:4566"
      # - "4572:4572" Old S3 port
    volumes:
      - ./aws:/docker-entrypoint-initaws.d

After v1.1.0

The /docker-entrypoint-initaws.d directory usage has now been deprecated. The pluggable initialization hooks in /etc/localstack/init/<stage>.d have replaced the legacy init scripts, and the legacy scripts will be removed entirely in the 2.0 release.

Reference: localstack/localstack/issues/7257

Init hooks official documentation.

version: '3.7'
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3
    ports:
      - "4566:4566"
    volumes:
      - ./aws:/etc/localstack/init/ready.d

With a script in directory ./aws/buckets.sh:

#!/usr/bin/env bash
awslocal s3 mb s3://bucket

Will produce this output:

Before v1.1.0

...
localstack_1  | Starting mock S3 (http port 4572)...
localstack_1  | Waiting for all LocalStack services to be ready
localstack_1  | Ready.
localstack_1  | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initaws.d/buckets.sh
localstack_1  | make_bucket: bucket
localstack_1  |

After v1.1.0

localstack-1  | Ready.
localstack-1  | 2023-01-05T18:07:50.163  INFO --- [-functhread8] l.services.motoserver      : starting moto server on http://0.0.0.0:40443
localstack-1  | 2023-01-05T18:07:50.165  INFO --- [   asgi_gw_0] localstack.services.infra  : Starting mock S3 service on http port 4566 ...
localstack-1  | 2023-01-05T18:07:50.324  INFO --- [   asgi_gw_0] localstack.request.aws     : AWS s3.CreateBucket => 200
localstack-1  | make_bucket: bucket

Upvotes: 100

Ermiya Eskandary

Reputation: 23682

Important notice for users of LocalStack v1.1.3 and above

On the 1st of December 2022, LocalStack announced the deprecation of legacy init scripts (/docker-entrypoint-initaws.d) with the release of v1.3.0. The replacement, pluggable initialisation hooks, was introduced in v1.1.0.

They will be fully removed in v2.0.0, posing a risk to anyone still using /docker-entrypoint-initaws.d with the latest version of LocalStack.

This would be a breaking change.

For future-proofing, the accepted answer is still valid but I would recommend replacing /docker-entrypoint-initaws.d with /etc/localstack/init/ready.d.

This would mimic the previous intended behaviour, as it would be hooking into the READY stage where LocalStack can actually create your S3 buckets, while making sure you can still continue to update LocalStack.

This would be the best current docker-compose file:

version: '3.8'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - SERVICES=s3
    ports:
      - "4566:4566"
    volumes:
      - ./aws:/etc/localstack/init/ready.d

Upvotes: 12

SathOkh

Reputation: 836

I have been able to achieve this with Localstack using a kind of "workaround" (sketched as a script after the steps below):

  1. Start Localstack

  2. Create expected buckets, e.g.:

    aws --endpoint-url=http://localhost:4572 s3 mb s3://test1   
    
  3. The above line will update the s3_api_calls.json file in the Localstack data directory (by default on Linux it's /tmp/localstack/data)

  4. Back up the file

  5. Put the copied file in the Localstack directory ( /tmp/localstack/data by default) before starting the stack again

  6. After you start Localstack again, you should see something like 2019-03-21T08:38:28:INFO:localstack.utils.persistence: Restored 2 API calls from persistent file: /tmp/localstack/data/s3_api_calls.json in the startup log, and the bucket should be available: aws --endpoint-url=http://localhost:4572 s3 ls s3://test1
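A rough script version of steps 2-5 (a sketch only; the legacy 4572 S3 port, the default /tmp/localstack/data directory and the backup path are assumptions):

#!/usr/bin/env bash
# Step 2: create the bucket against the running Localstack (legacy S3 port).
aws --endpoint-url=http://localhost:4572 s3 mb s3://test1

# Steps 3-4: back up the persistence file that the call above updated.
cp /tmp/localstack/data/s3_api_calls.json ./s3_api_calls.json.bak

# Step 5: before starting the stack again, put the copied file back in place.
cp ./s3_api_calls.json.bak /tmp/localstack/data/s3_api_calls.json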

Upvotes: 8

Bigo

Reputation: 113

DATA_DIR: Local directory for saving persistent data (currently only supported for these services: Kinesis, DynamoDB, Elasticsearch, S3)
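For example, a minimal sketch of turning that persistence on when starting the container directly (the mounted host path and S3-only service list are assumptions, and DATA_DIR only applies to the older LocalStack versions that document it):

# Persist LocalStack state across restarts: DATA_DIR points inside the
# container, and the matching host directory is mounted so data survives.
docker run -d --name localstack \
  -p 4566:4566 \
  -e SERVICES=s3 \
  -e DATA_DIR=/tmp/localstack/data \
  -v /tmp/localstack:/tmp/localstack \
  localstack/localstack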

Upvotes: -2
