marty_c

Reputation: 6429

Allow docker container to connect to a local/host postgres database

I've recently been playing around with Docker and QGIS and have installed a container following the instructions in this tutorial.

Everything works great, although I am unable to connect to a localhost postgres database that contains all my GIS data. I figure this is because my postgres database is not configured to accept remote connections and have been editing the postgres conf files to allow remote connections using the instructions in this article.

I'm still getting an error message when I try to connect to my database running QGIS in Docker:

could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433?

The postgres server is running, and I've edited my pg_hba.conf file to allow connections from a range of IP addresses (172.17.0.0/32). I had previously queried the IP address of the docker container using docker ps, and although the IP address changes, it has so far always been in the range 172.17.0.x.

Any ideas why I can't connect to this database? Probably something very simple I imagine!

I'm running Ubuntu 14.04; Postgres 9.3

Upvotes: 317

Views: 437750

Answers (18)

Manuel Lazo

Reputation: 862

Connect to an external database (on your workstation) from a Docker container on macOS

I experienced something like this on macOS, and the fix was to perform the following steps:

  • Add the extra_hosts entry to the docker-compose file:
extra_hosts:
  external_db: "192.168.0.128" # IP of your workstation
  • Stop the container and recreate it:
docker-compose stop api
docker-compose up -d --force-recreate api
  • Enable remote connections on the postgres db in the file /opt/homebrew/var/postgresql@12/postgresql.conf:
listen_addresses = '*'      # what IP address(es) to listen on;
  • Allow the user to be authenticated from remote connections in the file /opt/homebrew/var/postgresql@12/pg_hba.conf:
host    all             all             0.0.0.0/0    trust
  • Finally, restart the postgres service:
brew services restart postgresql@12
  • Get inside the container and test the connection:
docker-compose exec api bash
psql -U username -W -h external_db -d database

I hope this information is helpful. Happy coding!

Manuel Lazo

Upvotes: 0

Sanaulla

Reputation: 1609

Allow a Docker Container to Connect to a Local Database

1. Add the Docker Interface to the Firewall's docker Zone:

sudo firewall-cmd --zone=docker --change-interface=docker0
sudo firewall-cmd --permanent --zone=docker --change-interface=docker0
sudo systemctl restart firewalld
Check the Changes

a. sudo ip addr show docker0

docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
link/ether 02:42:72:c5:a8:50 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

b. firewall-cmd --get-active-zones

docker
  interfaces: docker0
public
  interfaces: enp3s0

2. Add Database Port Number in Firewall

sudo firewall-cmd --add-port=3306/tcp --permanent
sudo firewall-cmd --list-all
sudo service firewalld restart
Check the Changes

a. sudo netstat -tnlp | grep mysql

  tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      10275/mysqld        
  tcp        0      0 127.0.0.1:33060         0.0.0.0:*               LISTEN      10275/mysqld

b. sudo firewall-cmd --list-all

  public (active)
    ports: 80/tcp 8080/tcp 7070/tcp 3306/tcp

3. Change bind-address in /etc/mysql/mysql.conf.d/mysqld.cnf

      bind-address            = 0.0.0.0

sudo service mysql restart

4. Add a Database User for '%' (any host)

ALTER USER 'admin'@'localhost' IDENTIFIED WITH mysql_native_password BY 'P@ssword123';
ALTER USER 'admin'@'%' IDENTIFIED WITH mysql_native_password BY 'P@ssword123';
FLUSH PRIVILEGES;

sudo service mysql restart

5. Run the Docker Container with --network="bridge"

sudo docker run -d -it --restart=always --network="bridge" --name webapp -v /srv/code:/opt/code/ -p 81:80 webapp zsh

Now you can access the local database from the webapp Docker container with the following credentials:
host: 172.17.0.1
adapter: mysql
database: database_name
port: 3306
username: admin
password: P@ssword123
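
To verify the connection from inside the container, a quick check along these lines should work (a sketch; it assumes the mysql client is installed in the webapp image and uses the example credentials above):

# open a shell inside the running container
sudo docker exec -it webapp bash
# then connect to MySQL on the host via the docker0 gateway address
mysql -h 172.17.0.1 -P 3306 -u admin -p database_name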

Allow the Local System to Connect to the Docker Database (Alternative)

In Ubuntu:

First, check whether the Docker database port is available on your system with the following command:

sudo iptables -L -n

Sample OUTPUT:

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:3306
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:22
 

Here 3306 is the Docker database port on IP 172.17.0.2. If this port is not listed, run the following command:

sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

Now you can access the Docker database from your local system with the following configuration:

  host: 172.17.0.2 
  adapter: mysql
  database: DATABASE_NAME
  port: 3307
  username: DATABASE_USER
  password: DATABASE_PASSWORD
  encoding: utf8

In CentOS:

First, check whether the Docker database port is open in your firewall with the following command:

sudo firewall-cmd --list-all

Sample OUTPUT:

  target: default
  icmp-block-inversion: no
  interfaces: eno79841677
  sources: 
  services: dhcpv6-client ssh
  ports: 3307/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  sourceports: 
  icmp-blocks: 
  rich rules:
 

Here 3307 is the Docker database port on IP 172.17.0.2. If this port is not listed, run the following command:

sudo firewall-cmd --zone=public --add-port=3307/tcp

On a server, you can add the port permanently:

sudo firewall-cmd --permanent --add-port=3307/tcp
sudo firewall-cmd --reload

Now you can access the Docker database from your local system using the configuration above.

Upvotes: 3

Anil M

Reputation: 17

If you are using Docker Desktop on Windows, enter the port details while running the container (screenshot omitted).

Upvotes: -1

s3had

Reputation: 29

If you are unable to connect even after following all the steps above, check that the host machine's firewall allows port 5432 or 5433:

sudo ufw allow 5433/tcp

Upvotes: 0

Escapola

Reputation: 199

3-STEP SOLUTION

1. Update docker-compose file

First, since the database runs on the host, we need to make the host reachable from the container by adding the following configuration to the container service:

services:
    ...
    my-web-app:
        ...
        extra_hosts:
          -  "host.docker.internal:host-gateway"
        ...
    ...

2. Update /etc/postgresql/12/main/pg_hba.conf

If the file doesn't exist under this directory, use find / -name 'pg_hba.conf' to locate it.

Add the following line under the comment tag # IPv4 local connections:

host    all             all             172.17.0.1/16           md5

3. Update /etc/postgresql/12/main/postgresql.conf

If the file doesn't exist under this directory, use find / -name 'postgresql.conf' to locate it.

Find the following line (it might be commented out):

#listen_addresses = 'localhost'

And change it to the following so that Postgres accepts connections from both localhost and 172.17.0.1:

listen_addresses = 'localhost,172.17.0.1'

You can also change it to the following, but this is not recommended for production, since it makes the database listen on all interfaces (any IP address that can reach the host may attempt to connect):

listen_addresses = '*'

Finally, don't forget to:

  1. Restart the postgres service using sudo systemctl restart postgresql
  2. Update the connection string host to host.docker.internal (see the example below)
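
For example, the updated connection string could look like this (a sketch; the DATABASE_URL variable name, user, password, and database are placeholders for your app's own settings):

# hypothetical example; substitute your own credentials and database name
export DATABASE_URL="postgresql://myuser:mypassword@host.docker.internal:5432/mydb"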

Upvotes: 10

Shubham

Reputation: 2210

You can pass --network=host to the docker run command so that localhost inside the container refers to the host.

Ex:

docker run --network=host docker-image-name:latest

If you also want to pass environment variables, use the --env-file parameter to make them available inside the container.

Ex:

docker run --network=host --env-file .env-file-name docker-image-name:latest

Note: pass these parameters before the Docker image name, otherwise they will not take effect. (I ran into this, so heads up!)

Upvotes: 16

c9s

Reputation: 1917

You can add multiple listening addresses for better security.

listen_addresses = 'localhost,172.17.0.1'

Setting listen_addresses = '*' isn't a good option; it is dangerous and exposes your PostgreSQL database to the wild west.
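
To keep the exposure narrow on the authentication side as well, the matching pg_hba.conf entry can be scoped to the Docker bridge subnet rather than 0.0.0.0/0 (a sketch; adjust the subnet and auth method to your setup):

host    all             all             172.17.0.0/16           md5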

Upvotes: 4

Chris

Reputation: 2384

Simple Solution

Docker 18.03 onwards offers a built-in solution for this. Inside your Docker container, simply set the DB host to host.docker.internal. This will be forwarded to the host the Docker container is running on.

Documentation for this is here: https://docs.docker.com/docker-for-mac/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
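
For instance, a quick test from inside the container could look like this (a sketch; it assumes the psql client is installed in the container and that a database named mydb exists on the host):

# connect from the container to PostgreSQL running on the host
psql -h host.docker.internal -p 5432 -U postgres -d mydb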

Upvotes: 217

Igr Pn

Reputation: 161

Let me try to explain what I did.

Postgresql

First of all I did the configuration needed to make sure my Postgres Database was accepting connections from outside.

Open pg_hba.conf and add the following line at the end:

host    all             all             0.0.0.0/0               md5

Open postgresql.conf, look for listen_addresses, and modify it like this:

listen_addresses = '*'

Make sure the line above is not commented with a #

-> Restart your database

Note: this is not the recommended configuration for a production environment.

Next, I looked for my host's IP. I was using localhost's IP (127.0.0.1), but the container doesn't see it, so the Connection refused message from the question shows up when the container runs. After a long search on the web, I read that the container can reach the host through its local network IP (the one your router assigns to every device that connects to it, not the IP that gives you access to the internet). That said, I opened a terminal and did the following:

Look for local network ip

Open a terminal or CMD

(MacOS/Linux)

$ ifconfig

(Windows)

$ ipconfig

This command will show your network configuration information. And looks like this:

en4: 
    ether d0:37:45:da:1b:6e 
    inet6 fe80::188d:ddbe:9796:8411%en4 prefixlen 64 secured scopeid 0x7 
    inet 192.168.0.103 netmask 0xffffff00 broadcast 192.168.0.255
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect (100baseTX <full-duplex>)
    status: active

Look for the one that is active.

In my case, my local network ip was 192.168.0.103

With this done, I ran the container.

Docker

Run the container with the --add-host parameter, like this:

$ docker run --add-host=aNameForYourDataBaseHost:yourLocalNetWorkIp --name containerName -di -p HostsportToBind:containerPort imageNameOrId

In my case I did:

$ docker run --add-host=db:192.168.0.103 --name myCon -di -p 8000:8000 myImage

I’m using Django, so the 8000 port is the default.

Django Application

The configuration to access the database was:

In settings.py

DATABASES = {
    'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'myDataBaseName',
            'USER': 'username',
            'PASSWORD': '123456',
            'HOST': '192.168.0.103',
            'PORT': 5432,
    }
}

References

About -p flag: Connect using network port mapping

About docker run: Docker run documentation

Interesting article: Docker Tip #35: Connect to a Database Running on Your Docker Host

Upvotes: 9

singhpradeep

Reputation: 1723

The solution posted here did not work for me, so I am posting this answer to help anyone facing a similar issue.

Note: This solution works for Windows 10 as well, please check comment below.

OS: Ubuntu 18
PostgreSQL: 9.5 (Hosted on Ubuntu)
Docker: Server Application (which connects to PostgreSQL)

I am using docker-compose.yml to build application.

STEP 1: Please add host.docker.internal:<docker0 IP>

version: '3'
services:
  bank-server:
    ...
    depends_on:
      ....
    restart: on-failure
    ports:
      - 9090:9090
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

To find the IP of the docker0 interface, i.e. 172.17.0.1 (in my case), you can use:

$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

OR

$> ip a
1: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

STEP 2: In postgresql.conf, change listen_addresses to listen_addresses = '*'

STEP 3: In pg_hba.conf, add this line

host    all             all             0.0.0.0/0               md5

STEP 4: Now restart the postgresql service using sudo service postgresql restart

STEP 5: Use the host.docker.internal hostname to connect to the database from the server application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB

Enjoy!!

Upvotes: 37

Just in case the above solutions don't work for anyone: use the statement below to connect from Docker to the host's Postgres (on Mac):

psql --host docker.for.mac.host.internal -U postgres

Upvotes: 2

helmbert

Reputation: 37934

TL;DR

  1. Use 172.17.0.0/16 as IP address range, not 172.17.0.0/32.
  2. Don't use localhost to connect to the PostgreSQL database on your host, but the host's IP instead. To keep the container portable, start the container with the --add-host=database:<host-ip> flag and use database as hostname for connecting to PostgreSQL.
  3. Make sure PostgreSQL is configured to listen for connections on all IP addresses, not just on localhost. Look for the setting listen_addresses in PostgreSQL's configuration file, typically found in /etc/postgresql/9.3/main/postgresql.conf (credits to @DazmoNorton).

Long version

172.17.0.0/32 is not a range of IP addresses, but a single address (namely 172.17.0.0). No Docker container will ever get that address assigned, because it's the network address of the Docker bridge (docker0) interface.

When Docker starts, it creates a new bridge network interface that you can easily see by calling ip a:

$ ip a
...
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

As you can see, in my case, the docker0 interface has the IP address 172.17.42.1 with a netmask of /16 (or 255.255.0.0). This means that the network address is 172.17.0.0/16.

The IP address is randomly assigned, but without any additional configuration, it will always be in the 172.17.0.0/16 network. For each Docker container, a random address from that range will be assigned.

This means that if you want to grant access to your database from all possible containers, use 172.17.0.0/16.
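
Putting the TL;DR into practice, the run command could look like this (a sketch; the host IP, container name, and image name are placeholders for your own):

# give the container a stable hostname ("database") that resolves to the host's LAN IP
docker run --add-host=database:192.168.1.10 --name qgis-app my-qgis-image
# inside the container, point QGIS/psql at host "database", port 5432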

Upvotes: 269

Harlin

Reputation: 1139

To set up something simple that allows a PostgreSQL connection from the Docker container to my localhost, I used this in postgresql.conf:

listen_addresses = '*'

And added this to pg_hba.conf:

host    all             all             172.17.0.0/16           password

Then do a restart. My client in the Docker container (which was at 172.17.0.2) could then connect to PostgreSQL running on my localhost using the host, database, username, and password.
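
As an illustration, a connection attempt from inside such a container might look like the following (a sketch; 172.17.0.1 is the host's usual address on the docker0 bridge, and the user and database names are placeholders):

# from inside the container, connect to PostgreSQL on the host
psql -h 172.17.0.1 -p 5432 -U myuser -d mydb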

Upvotes: 6

Sarath Ak

Reputation: 8649

For docker-compose you can try just adding

network_mode: "host"

example :

version: '2'
services:
  feedx:
    build: web
    ports:
    - "127.0.0.1:8000:8000"
    network_mode: "host"

https://docs.docker.com/compose/compose-file/#network_mode

Upvotes: 8

Max Malysh

Reputation: 31545

Simple solution

Just add --network=host to docker run. That's all!

This way the container will use the host's network, so localhost and 127.0.0.1 will point to the host (by default they point to the container). Example:

docker run -d --network=host \
  -e "DB_DBNAME=your_db" \
  -e "DB_PORT=5432" \
  -e "DB_USER=your_db_user" \
  -e "DB_PASS=your_db_password" \
  -e "DB_HOST=127.0.0.1" \
  --name foobar foo/bar

Upvotes: 68

baxang

Reputation: 3807

Docker for Mac solution

17.06 onwards

Thanks to @Birchlabs' comment, it is now tons easier with this special Mac-only DNS name:

docker run -e DB_PORT=5432 -e DB_HOST=docker.for.mac.host.internal

From 17.12.0-ce-mac46, docker.for.mac.host.internal should be used instead of docker.for.mac.localhost. See the release notes for details.

Older version

@helmbert's answer explains the issue well. But Docker for Mac does not expose the bridge network, so I had to use this trick to work around the limitation:

$ sudo ifconfig lo0 alias 10.200.10.1/24

Open /usr/local/var/postgres/pg_hba.conf and add this line:

host    all             all             10.200.10.1/24            trust

Open /usr/local/var/postgres/postgresql.conf and change listen_addresses:

listen_addresses = '*'

Reload the service and launch your container:

$ PGDATA=/usr/local/var/postgres pg_ctl reload
$ docker run -e DB_PORT=5432 -e DB_HOST=10.200.10.1 my_app 

This workaround does basically the same thing as @helmbert's answer, but uses an IP address attached to lo0 instead of the docker0 network interface.

Upvotes: 87

Hrishi

Reputation: 454

Another solution is a service volume. You can define a service volume and mount the host's PostgreSQL data directory in that volume. Check out the compose file below for details.

version: '2'
services:
  db:   
    image: postgres:9.6.1
    volumes:
      - "/var/lib/postgresql/data:/var/lib/postgresql/data" 
    ports:
      - "5432:5432"

By doing this, another PostgreSQL service will run inside the container, but it uses the same data directory that the host PostgreSQL service is using.

Upvotes: -5

Mikko P

Reputation: 477

One more thing needed for my setup was to add

172.17.0.1  localhost

to /etc/hosts

so that Docker would point to 172.17.0.1 as the DB hostname and not rely on a changing outer IP to find the DB. Hope this helps someone else with this issue!

Upvotes: -1
