Nio

Reputation: 527

docker-compose: how to use minio in- and outside of the docker network

I have the following docker-compose.yml to run a local environment for my Laravel app.

version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - .:/var/www:delegated
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
      AWS_BUCKET: Bucket
      AWS_ENDPOINT: http://s3:9000
    links:
      - database
      - s3
  database:
    image: mariadb:10.3
    ports:
      - 63306:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server /data

As you can see, I use MinIO as S3-compatible storage. This works very well, but when I generate a URL for a file (Storage::disk('s3')->url('some-file.txt')), I obviously get a URL like http://s3:9000/Bucket/some-file.txt, which does not work outside of the Docker network.

I've already tried setting AWS_ENDPOINT to http://127.0.0.1:9000, but then Laravel can't connect to the MinIO server...

Is there a way to configure Docker / Laravel / MinIO to generate URLs which are accessible both inside and outside of the Docker network?
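
One possible approach, sketched here as an assumption about the Laravel setup rather than a verified fix: Laravel's default config/filesystems.php passes a 'url' option to the s3 disk, read from the AWS_URL environment variable, and Storage::url() uses that value as the base URL while AWS_ENDPOINT is only used for API calls. Assuming the filesystems config follows the Laravel default, the app service environment could look like this (the bucket appears in AWS_URL because MinIO serves path-style URLs):

    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
      AWS_BUCKET: Bucket
      # internal endpoint, used by the SDK for API calls
      AWS_ENDPOINT: http://s3:9000
      # host-reachable base URL used by Storage::url(); assumes the
      # default filesystems.php, which reads env('AWS_URL')
      AWS_URL: http://localhost:9000/Bucket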

Upvotes: 21

Views: 54102

Answers (6)

spaceemotion

Reputation: 1546

Adding the "s3" alias to my local hosts file did not do the trick. But explicitly binding the ports to 127.0.0.1 worked like a charm:

s3:
    image: minio/minio:RELEASE.2022-02-05T04-40-59Z
    restart: "unless-stopped"
    volumes:
        - s3data:/data
    environment:
        MINIO_ROOT_USER: minio
        MINIO_ROOT_PASSWORD: minio123
    # Allow all incoming hosts to access the server by using 0.0.0.0
    command: server --address 0.0.0.0:9000 --console-address ":9001" /data
    ports:
        # Bind explicitly to 127.0.0.1
        - "127.0.0.1:9000:9000"
        - "9001:9001"
    healthcheck:
        test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/minio/health/live"]
        interval: 30s
        timeout: 20s
        retries: 3
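
To verify from the host that the API is reachable, you can hit the same liveness endpoint the healthcheck above uses:

curl -I http://127.0.0.1:9000/minio/health/live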

Upvotes: 1

Melchia

Reputation: 24314

I didn't find a complete setup of MinIO using docker-compose anywhere, so here it is:

version: '2.4'

services:
  s3:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - storage-minio:/data
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it's failing to start sometimes

volumes:
  storage-minio:
    external: true

In the command section, --address sets the S3 API address and --console-address sets the address of the web console; see the image below. Use the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values to sign in.

[image: MinIO console sign-in page]
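
Note that with this layout the S3 API listens on port 9099, so clients must point there rather than at the console port. For example, with a recent MinIO client (the alias name "local" is arbitrary):

mc alias set local http://localhost:9099 minioadmin minioadmin
mc ls local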

Upvotes: 1

GnanaJeyam

Reputation: 3170

For those who are looking for an S3 integration test against a MinIO object server, especially a Java implementation.

docker-compose file:

version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123

The actual IntegrationTest class:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;

import java.io.File;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer = new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
            .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";
    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
        s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
                .withEndpointConfiguration(endpoint)
                .withPathStyleAccessEnabled(true)
                .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception {
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        // read the full stream; a single read() into a fixed-size buffer
        // is not guaranteed to fill it
        byte[] actualContent = object.getObjectContent().readAllBytes();
        Assertions.assertEquals(content, new String(actualContent));
    }
}
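
For completeness, the test assumes JUnit 5, Testcontainers, and the AWS SDK for Java v1 on the test classpath. A Gradle sketch (the versions are illustrative, not prescribed by the answer):

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.2'
    testImplementation 'org.testcontainers:testcontainers:1.16.3'
    testImplementation 'com.amazonaws:aws-java-sdk-s3:1.12.150'
}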

Upvotes: -1

guesswho

Reputation: 532

How about binding the address? (not tested)

...
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server --address 0.0.0.0:9000 /data

Upvotes: 8

terrywb

Reputation: 3956

I expanded on the solutions in this question to create a setup that works for me both on localhost and on a server with an accessible DNS name.

The localhost solution is essentially the solution described above.

Create localhost host mapping

echo "127.0.0.1       my-minio-localhost-alias" | sudo tee -a /etc/hosts

Set HOSTNAME, use 'my-minio-localhost-alias' for localhost

export HOSTNAME=my-minio-localhost-alias

Create hello.txt

Hello from Minio!

Create docker-compose.yml

This compose file contains the following containers:

  • minio: minio service
  • minio-mc: command line tool to initialize content
  • s3-client: command line tool to generate presigned urls

version: '3.7'
networks:
  mynet:
services:
  minio:
    container_name: minio
    image: minio/minio
    ports:
    - published: 9000
      target: 9000
    command: server /data
    networks:
      mynet:
        aliases:
        # For localhost access, add the following to your /etc/hosts
        # 127.0.0.1       my-minio-localhost-alias
        # When accessing the minio container on a server with an accessible dns, use the following
        - ${HOSTNAME}
  # When initializing the minio container for the first time, you will need to create an initial bucket named my-bucket.
  minio-mc:
    container_name: minio-mc
    image: minio/mc
    depends_on:
    - minio
    volumes:
    - "./hello.txt:/tmp/hello.txt"
    networks:
      mynet:
  s3-client:
    container_name: s3-client
    image: amazon/aws-cli
    environment:
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: minioadmin
    depends_on:
    - minio
    networks:
      mynet:

Start the minio container

docker-compose up -d minio

Create a bucket in minio and load a file

docker-compose run minio-mc mc config host add docker http://minio:9000 minioadmin minioadmin
docker-compose run minio-mc mc mb docker/my-bucket
docker-compose run minio-mc mc cp /tmp/hello.txt docker/my-bucket/hello.txt
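
To confirm the upload (using the same "docker" alias registered above):

docker-compose run minio-mc mc ls docker/my-bucket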

Create a presigned URL that is accessible inside AND outside of the docker network

docker-compose run s3-client --endpoint-url http://${HOSTNAME}:9000 s3 presign s3://my-bucket/hello.txt
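
The command prints a presigned URL with ${HOSTNAME} as its host, so the same link works from inside the network and, thanks to the host mapping, from the host machine; you can check it with curl:

curl "<presigned-url-printed-above>"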

Upvotes: 12

Kārlis Ābele

Reputation: 1021

Since you are mapping port 9000 on the host to that service, you should be able to access it via s3:9000 if you simply add s3 to your hosts file (/etc/hosts on Mac/Linux).

Add the line 127.0.0.1 s3 to your hosts file and you should be able to access the s3 container from your host machine via http://s3:9000/path/to/file.

This means you can use the s3 hostname both inside and outside the Docker network.
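
Concretely, as a sketch (the liveness path is MinIO's standard health endpoint):

echo "127.0.0.1 s3" | sudo tee -a /etc/hosts
curl -I http://s3:9000/minio/health/live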

Upvotes: 2
