Reputation: 13
I wanted a MongoDB container to use a host JSON file to create a database, and, once created, every later container of the mongo image should be able to share it. I thought a bind mount and a volume might work; the following steps show how I dealt with it.
First I mounted the host file doc_info_product.json and a named volume mymongo_data into mongo, a container of the mongo image, with this instruction:
docker run -d --name mongo --mount type=bind,source=V:\repository\How-to\doc_info_product.json,target=/data/db/doc_info.json -v mymongo_data:/data mongo:latest
Then I entered mongo and used mongoimport to create a database from the JSON file:
docker exec -it mongo bash
/# mongoimport --db mydb --collection doc --type json --file /data/db/doc_info.json --jsonArray
Now, in the mongo container, I had a database named mydb with a collection doc in it, as confirmed with show dbs and show collections.
Then I launched another container called mongo2 that also mounted mymongo_data:
docker run -d -v mymongo_data:/data --name mongo2 mongo:latest
But when I checked mongo2 with show dbs, I didn't get mydb. So how can I get mydb in another container, and why couldn't I?
Upvotes: 0
Views: 154
Reputation: 2812
EDIT: I originally answered this from a non-mongo-specific standpoint, but I've now realised that it is actually a mongo-specific problem when using a non-Linux host: https://hub.docker.com/_/mongo/
WARNING (Windows & OS X): The default Docker setup on Windows and OS X uses a VirtualBox VM to host the Docker daemon. Unfortunately, the mechanism VirtualBox uses to share folders between the host system and the Docker container is not compatible with the memory mapped files used by MongoDB (see vbox bug, docs.mongodb.org and related jira.mongodb.org bug). This means that it is not possible to run a MongoDB container with the data directory mapped to the host.
The way around this is to use a docker volume instead of a mount from the host:
# Create a volume for persistent data
$ docker volume create mongodata
mongodata
# Start the mongo container, mount the db volume and
# also my downloads directory as a place to get a file to import
$ docker run -d --name mongo -v mongodata:/data/db \
-v ~/Downloads/:/json mongo:latest
0755cc15f7550dce7fc4bef28da90216a95d5763df98518786533b6314c231d7
# Exec into the container and do the import
$ docker exec -it mongo bash
root@0755cc15f755:/# mongoimport --db mydb --collection doc \
--type json --file /json/test.json
2018-04-28T14:32:49.102+0000 connected to: localhost
2018-04-28T14:32:49.118+0000 imported 1 document
# Show the db is present and exit
root@0755cc15f755:/# mongo
> show dbs
admin 0.000GB
local 0.000GB
mydb 0.000GB
>
bye
root@0755cc15f755:/# exit
# After exiting the exec, stop the first container
$ docker stop mongo
mongo
# Start a new container using the same volume for data
$ docker run -d --name mongo2 -v mongodata:/data/db mongo:latest
b6bda766217c6fe4ed355c1faaa5880471b6841eb68c8dd75a3cb72aa5c39ff5
# Exec into this
$ docker exec -it mongo2 bash
# Show the data is still there!
root@b6bda766217c:/# mongo
> show dbs
admin 0.000GB
local 0.000GB
mydb 0.000GB
That's your best bet given that you're not on native Linux.
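If you want to see where that data actually lives, docker volume inspect will show a mountpoint inside Docker's own storage (on the VirtualBox-based setups, that path is inside the VM rather than on your host), which is why it sidesteps the shared-folder problem:
# Inspect the volume; the mountpoint shown is the usual default for the
# local driver, but your exact output may differ
$ docker volume inspect --format '{{ .Mountpoint }}' mongodata
/var/lib/docker/volumes/mongodata/_data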
WARNING: I don't know enough about mongo to know for sure what would happen, but in general, running two databases simultaneously with the same data directory is a really bad idea. mongo might use a lock in /data/db to detect this is happening and act accordingly, or it might not. So if you were thinking of doing that, it's always better to be explicit: make sure all but one of the containers have the data directory mounted read-only. You can do this by appending :ro to the end of a volume mount, e.g. -v mongodata:/data/db:ro.
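For instance, a sketch of how that might look (the container name mongo-readonly is made up for illustration, and whether mongod actually tolerates a read-only data directory is something you'd need to verify):
# Hypothetical: share the same volume, but read-only, so this
# container cannot write to the data files
$ docker run -d --name mongo-readonly -v mongodata:/data/db:ro mongo:latest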
Also: I don't have access to Windows to test this, but I also think there is a problem with mounting a single file into the container, which is what you do with your JSON file. I think this would result in an empty mount in the container. So instead, do as I did and mount the directory which contains your JSON files, not the JSON file itself.
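Applied to your setup, that might look something like the following (untested on Windows, so treat it as a sketch; it assumes your JSON file lives in V:\repository\How-to on the host):
# Mount the whole directory containing the JSON file, plus the data volume
$ docker run -d --name mongo -v mymongo_data:/data/db \
    -v V:\repository\How-to:/json mongo:latest
# Then import from the mounted directory
$ docker exec -it mongo mongoimport --db mydb --collection doc \
    --type json --file /json/doc_info_product.json --jsonArray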
Upvotes: 1