I am having trouble persisting dockerized RabbitMQ user accounts set up through the management panel. Upon restart they disappear and I believe it is related to new mnesia databases being created on each restart.
I tried binding a docker volume to /var/lib/rabbitmq:
version: '3.1'
services:
  rabbitmq:
    image: rabbitmq:management-alpine
    volumes:
      - rabbitdata1:/var/lib/rabbitmq/
    ports:
      - "5672:5672"
      - "15672:15672"
volumes:
  rabbitdata1:
    driver: local
When I look at the contents of the mounted directory, I get:
$ docker exec -ti local_rabbitmq_1 /bin/bash
bash-5.0# ls /var/lib/rabbitmq/mnesia/
rabbit@0eceaaa217c8 rabbit@0eceaaa217c8-plugins-expand
rabbit@0eceaaa217c8-feature_flags [email protected]
But when I restart the service, it seems like a new instance gets created for the new PID and all changes are lost:
$ docker exec -ti local_rabbitmq_1 /bin/bash
bash-5.0# ls /var/lib/rabbitmq/mnesia/
rabbit@0eceaaa217c8 rabbit@ac5afbef3c81
rabbit@0eceaaa217c8-feature_flags rabbit@ac5afbef3c81-feature_flags
rabbit@0eceaaa217c8-plugins-expand rabbit@ac5afbef3c81-plugins-expand
[email protected] [email protected]
I also tried setting the RABBITMQ_NODENAME environment variable so that instead of the above rabbit@0eceaaa217c8 and rabbit@ac5afbef3c81 I would get a constant string for the .pid and mnesia directories, but then RabbitMQ would not even restart:
2020-04-10 10:41:34.657 [info] <0.309.0> Running boot step database defined by app rabbit
2020-04-10 10:41:34.685 [error] <0.308.0> CRASH REPORT Process <0.308.0> with 0 neighbours exited with reason: {{failed_to_cluster_with,[foo@190e6343c238],"Mnesia could not connect to any nodes."},{rabbit,start,[normal,[]]}} in application_master:init/4 line 138
2020-04-10 10:41:34.686 [info] <0.44.0> Application rabbit exited with reason: {{failed_to_cluster_with,[foo@190e6343c238],"Mnesia could not connect to any nodes."},{rabbit,start,[normal,[]]}}
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{{failed_to_cluster_with,[foo@190e6343c238],\"Mnesia could not connect to any nodes.\"},{rabbit,start,[normal,[]]}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{{failed_to_cluster_with,[foo@190e6343c238],"Mnesia could not connect to any nodes."},{rabbit,start,[normal,[]]}}})
Is there any other way to retain changes between RabbitMQ docker service restarts?
Maybe there is some other directory that could do the trick?
I checked for other potential candidates but only found /etc/rabbitmq/ and /opt/rabbitmq/, which seem to be the configuration and installation directories respectively:
bash-5.0# find . -name 'rabbitmq'
./etc/rabbitmq
./var/log/rabbitmq
./var/lib/rabbitmq
./opt/rabbitmq
./opt/rabbitmq/etc/rabbitmq
Upvotes: 4
Views: 2235
Create two folders, data and etc, and place the following three files in etc:
enabled_plugins
[rabbitmq_management,rabbitmq_prometheus].
rabbitmq.conf
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
loopback_users.guest = false
listeners.tcp.default = 5672
#default_pass = admin
#default_user = admin
hipe_compile = false
#management.listener.port = 15672
#management.listener.ssl = false
management.tcp.port = 15672
management.load_definitions = /etc/rabbitmq/definitions.json
definitions.json
{
  "users": [
    {
      "name": "admin",
      "password": "admin",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "policies": [
    {
      "vhost": "/",
      "name": "ha",
      "pattern": "",
      "apply-to": "all",
      "definition": {
        "ha-mode": "all",
        "ha-sync-batch-size": 256,
        "ha-sync-mode": "automatic"
      },
      "priority": 0
    }
  ],
  "permissions": [
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ],
  "queues": [
    {
      "name": "job-import.triggered.queue",
      "vhost": "/",
      "durable": true,
      "auto_delete": false,
      "arguments": {}
    }
  ],
  "exchanges": [
    {
      "name": "lob-proj-dx",
      "vhost": "/",
      "type": "direct",
      "durable": true,
      "auto_delete": false,
      "internal": false,
      "arguments": {}
    }
  ],
  "bindings": [
    {
      "source": "lob-proj-dx",
      "vhost": "/",
      "destination": "job-import.triggered.queue",
      "destination_type": "queue",
      "routing_key": "job-import.event.triggered",
      "arguments": {}
    }
  ]
}
Run Docker
docker run --restart=always -d -p 5672:5672 -p 15672:15672 --mount type=bind,source=E:\docker\rabbit\data,target=/var/lib/rabbitmq/ --mount type=bind,source=E:\docker\rabbit\etc,target=/etc/rabbitmq/ --name rabbitmq --hostname my-rabbit rabbitmq:3.7.28-management
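If you prefer Docker Compose over docker run, a rough equivalent (a sketch, assuming the etc and data folders sit next to the compose file; the relative paths and service name are illustrative) would be:

```yaml
version: '3.1'
services:
  rabbitmq:
    image: rabbitmq:3.7.28-management
    hostname: my-rabbit   # fixed hostname so the mnesia directory name stays stable
    restart: always
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - ./data:/var/lib/rabbitmq/   # node data (mnesia)
      - ./etc:/etc/rabbitmq/        # enabled_plugins, rabbitmq.conf, definitions.json
```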
Users, queues, exchanges, and bindings will now be persisted across restarts.
Upvotes: 0
In the documentation for the rabbitmq Docker Hub image it notes (under "Running the daemon"):

One of the important things to note about RabbitMQ is that it stores data based on what it calls the "Node Name", which defaults to the hostname. What this means for usage in Docker is that we should specify -h/--hostname explicitly for each daemon so that we don't get a random hostname and can keep track of our data.

The equivalent Docker Compose setting is hostname:. It defaults to the container ID, which changes every time the container is recreated, which is why you're not seeing data persisted and why the filenames have 12-hex-digit IDs in their names.
services:
  rabbitmq:
    image: rabbitmq:management-alpine
    hostname: rabbitmq # <-----
    volumes:
      - rabbitdata1:/var/lib/rabbitmq/
    ports:
      - "5672:5672"
      - "15672:15672"
(The only thing hostname: sets is what the container thinks its own host name is. It has no connection to the networking setup at all. Setting it usually isn't necessary, unless you have software like this that specifically looks at it.)
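To confirm the fix actually persists accounts, one quick check (a sketch using standard rabbitmqctl and docker-compose commands; the service name rabbitmq matches the compose file above, and the test credentials are made up):

```shell
# create a user through rabbitmqctl inside the running container
docker-compose exec rabbitmq rabbitmqctl add_user testuser testpass

# recreate the container entirely (a plain restart would keep the old
# container ID and would not exercise the hostname problem)
docker-compose up -d --force-recreate rabbitmq

# once the node is back up, the user should still be listed
docker-compose exec rabbitmq rabbitmqctl list_users
```

Before the hostname: fix, the last command would show only the default guest user, because the recreated container started a fresh rabbit@<new-container-id> mnesia database.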
Upvotes: 7