AEGG

Reputation: 23

How do I expand a Docker MinIO deployment in DISTRIBUTED_MODE?

Name and Version bitnami/minio:2022.8.22-debian-11-r1

The Docker startup command is as follows. The initial deployment has 4 nodes and is running well:

docker run -d --restart=always --name minio --network host \
          --ulimit nofile=65536:65536 \
          -v "/etc/localtime":/etc/localtime:ro \
          -v "/data/minio/data":/data \
          -v "/data/minio/hosts":/etc/hosts \
          -v "/data/logs/minio":/opt/logs \
          -e LANG=C.UTF-8 \
          -e MINIO_ROOT_USER=xxxxxxxxxx  \
          -e MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxx  \
          -e MINIO_DISTRIBUTED_MODE_ENABLED=yes \
          -e MINIO_DISTRIBUTED_NODES=minio-1,minio-2,minio-3,minio-4 \
          -e MINIO_SKIP_CLIENT=yes \
          -e MINIO_HTTP_TRACE=/opt/bitnami/minio/log/minio-http.log \
          -e MINIO_PROMETHEUS_AUTH_TYPE="public" \
          bitnami/minio:2022.8.22-debian-11-r1

I want to expand to 8 nodes, but MinIO fails to start with the following configuration:

docker run -d --restart=always --name minio --network host \
          --ulimit nofile=65536:65536 \
          -v "/etc/localtime":/etc/localtime:ro \
          -v "/data/minio/data":/data \
          -v "/data/minio/hosts":/etc/hosts \
          -v "/data/logs/minio":/opt/logs \
          -e LANG=C.UTF-8 \
          -e MINIO_ROOT_USER=xxxxxxxxxx  \
          -e MINIO_ROOT_PASSWORD=xxxxxxxxxxxxxx  \
          -e MINIO_DISTRIBUTED_MODE_ENABLED=yes \
          -e MINIO_DISTRIBUTED_NODES=minio-1,minio-2,minio-3,minio-4,minio-5,minio-6,minio-7,minio-8 \
          -e MINIO_SKIP_CLIENT=yes \
          -e MINIO_HTTP_TRACE=/opt/bitnami/minio/log/minio-http.log \
          -e MINIO_PROMETHEUS_AUTH_TYPE="public" \
          bitnami/minio:2022.8.22-debian-11-r1

I get the following errors in the log:

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-1:9000 offline temporarily; caused by Post "http://minio-1:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.89:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-4:9000 offline temporarily; caused by Post "http://minio-4:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.57:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-2:9000 offline temporarily; caused by Post "http://minio-2:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.139:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-6:9000 offline temporarily; caused by Post "http://minio-6:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.140:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-7:9000 offline temporarily; caused by Post "http://minio-7:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.159:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-8:9000 offline temporarily; caused by Post "http://minio-8:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.161:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()

API: SYSTEM()
Time: 17:44:21 UTC 09/23/2022
Error: Marking minio-5:9000 offline temporarily; caused by Post "http://minio-5:9000/minio/storage/data/v47/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 10.13.1.112:9000: connect: connection refused (*fmt.wrapError)
9: internal/logger/logger.go:259:logger.LogIf()
8: internal/logger/logonce.go:104:logger.(*logOnceType).logOnceIf()
7: internal/logger/logonce.go:135:logger.LogOnceIf()
6: internal/rest/client.go:243:rest.(*Client).Call()
5: cmd/storage-rest-client.go:152:cmd.(*storageRESTClient).call()
4: cmd/storage-rest-client.go:526:cmd.(*storageRESTClient).ReadAll()
3: cmd/format-erasure.go:396:cmd.loadFormatErasure()
2: cmd/format-erasure.go:332:cmd.loadFormatErasureAll.func1()
1: internal/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()
ERROR Unable to initialize backend: http://minio-2:9000/data drive is already being used in another erasure deployment. (Number of drives specified: 8 but the number of drives found in the 2nd drive's format.json: 4)

I know there is a problem with my configuration, but I don't know how to change it to expand the cluster. I hope someone who has solved a similar problem can point me in the right direction.

Upvotes: 1

Views: 2263

Answers (1)

Matt

Reputation: 11

It's not your configuration; you simply can't expand MinIO this way. Once the drives are enrolled in the cluster and erasure coding is configured, nodes and drives cannot be added to the same MinIO Server deployment.

Instead, you would add another Server Pool containing the new nodes and drives to your existing cluster; a rough sketch of what that looks like is below.
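
With the upstream MinIO binary (or the official minio/minio image), pool expansion is expressed by restarting every node, old and new, with the full list of pools on the command line. This is only a sketch: the hostnames, port and /data path are taken from your setup, and you would need to check the Bitnami image's documentation for whether and how a second pool can be expressed through its MINIO_DISTRIBUTED_NODES variable.

    # Sketch only: pool expansion with the upstream MinIO server command.
    # Every node, old and new, must be restarted with the full pool list;
    # minio-{1...4} is the existing pool, minio-{5...8} the new one.
    minio server \
        http://minio-{1...4}:9000/data \
        http://minio-{5...8}:9000/data

MinIO then treats the four new nodes as a second pool; existing objects stay in the first pool and new writes are spread across both.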

Alternatively, you could temporarily back up your data or replicate it to S3 or another MinIO instance, delete your 4-node configuration, replace it with a new 8-node configuration, and bring MinIO back up (see the mc sketch below).
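
If you go the backup-and-rebuild route, mc mirror can copy each bucket out and back in. A minimal sketch, where the alias names (old4, backup, new8), the keys and the bucket name are all placeholders:

    # Sketch only: alias names, keys and bucket names are placeholders.
    mc alias set old4   http://minio-1:9000 ACCESS_KEY SECRET_KEY
    mc alias set backup https://backup-target.example.com ACCESS_KEY SECRET_KEY

    # 1. Copy each bucket to the temporary target (repeat per bucket).
    mc mirror old4/mybucket backup/mybucket

    # 2. Tear down the 4-node deployment, start a fresh 8-node one,
    #    register it as an alias and copy the data back.
    mc alias set new8 http://minio-1:9000 ACCESS_KEY SECRET_KEY
    mc mirror backup/mybucket new8/mybucket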

Take a look at our multi-tenant deployment guide: https://docs.minio.io/docs/multi-tenant-minio-deployment-guide

Upvotes: 1
