Reputation: 231
Issue Details:
I have a custom pg_hba.conf file on my local machine, and I want to copy it to my PostgreSQL Docker container to override the default configuration. Here are the two solutions I've tried:
Solution Attempt 1: Using a Volume Mount in docker-compose.yml
In my docker-compose.yml, I added the following volume mount for my db service:

volumes:
  - ./pg_config/pg_hba.conf:/etc/postgresql/12/main/pg_hba.conf
This should mount my custom pg_hba.conf into the PostgreSQL container. However, when I start the container, it doesn't use my custom file and the configuration remains unchanged. Worse, my local pg_hba.conf gets reverted to the same file that the container generates. Super weird. Am I understanding volumes incorrectly?
docker-compose.yml:

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    env_file:
      - .env
    restart: "on-failure"
    depends_on:
      db:
        condition: service_healthy # Wait for the db service to be healthy
  db:
    image: kartoza/postgis:12.0
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
      - ./pg_config/pg_hba.conf:/etc/postgresql/12/main/pg_hba.conf
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USER}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      PGUSER: ${DATABASE_USER}
    restart: "on-failure"
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready" ]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  postgres_data:
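Independent of which approach is used, it helps to confirm which file the server actually loaded before debugging further. This is a minimal sketch, assuming the service name db from the compose file above and that psql is available inside the image:

```shell
# Ask the running server which hba file it is actually using
docker compose exec db sh -c 'psql -U "$POSTGRES_USER" -c "SHOW hba_file;"'

# Compare that file's contents with the local copy
docker compose exec db cat /etc/postgresql/12/main/pg_hba.conf
```

If the path reported by SHOW hba_file differs from the mount target, the image is generating or pointing at its own copy, and mounting over /etc/postgresql/12/main/pg_hba.conf will have no effect.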
Solution Attempt 2: Creating a Separate Dockerfile for PostgreSQL
I also tried creating a separate Dockerfile for PostgreSQL (let's call it Dockerfile.postgres) in which I set up the PostgreSQL image, added the custom pg_hba.conf file, and configured the database. Even with this approach, my custom pg_hba.conf file is not used when I start the container. The local file isn't reset this time, but the file in the container remains unchanged.

Dockerfile.postgres:
# Use the official PostgreSQL image as the base image
FROM kartoza/postgis:12.0

# Copy the custom pg_hba.conf into the container
COPY ./pg_config/pg_hba.conf /etc/postgresql/12/main/pg_hba.conf

# Build arguments must be declared before ${...} can expand in ENV
# (and passed in via build args; otherwise these expand to empty strings)
ARG DATABASE_NAME
ARG DATABASE_USER
ARG DATABASE_PASSWORD

# Set environment variables
ENV POSTGRES_DB=${DATABASE_NAME} \
    POSTGRES_USER=${DATABASE_USER} \
    POSTGRES_PASSWORD=${DATABASE_PASSWORD} \
    PGUSER=${DATABASE_USER}

# Expose the PostgreSQL port
EXPOSE 5432

# Set the command to run when the container starts
CMD ["postgres"]
docker-compose.yml:

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    env_file:
      - .env
    restart: "on-failure"
    depends_on:
      db:
        condition: service_healthy # Wait for the db service to be healthy
  db:
    build:
      context: .
      dockerfile: Dockerfile.postgres
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql
    restart: "on-failure"
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready" ]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  postgres_data:
Question:
I need help understanding why neither of these solutions is working as expected. How can I copy my custom pg_hba.conf file into my PostgreSQL container and ensure it is used for configuration? Thanks
Upvotes: 1
Views: 895
Reputation: 26111
First, bind-mount the pg_hba.conf file into a different directory inside the container, and place a script named 50-hba.sh (the name can be anything) into the docker-entrypoint-initdb.d directory. Scripts in that directory run after Postgres initializes a new data directory, so this one can run the necessary commands. The relevant part of docker-compose.yml is below:
postgres:
  image: "postgres:16.2-alpine"
  volumes:
    - "my-postgres-data:/var/lib/postgresql/data"
    - "./pg_hba.conf:/opt/postgres/pg_hba.conf"
    - "./50-hba.sh:/docker-entrypoint-initdb.d/50-hba.sh:ro"
  environment:
    POSTGRES_USER: "${POSTGRES_USER}"
    POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    POSTGRES_DB: "${POSTGRES_DB}"
  ports:
    - "5432:5432"
  networks:
    - my-db-network
  restart: always
The content of 50-hba.sh is below:
#!/bin/bash

SOURCE_FILE="/opt/postgres/pg_hba.conf"
# The data directory is /var/lib/postgresql/data, matching the volume mount above
TARGET_FILE="/var/lib/postgresql/data/pg_hba.conf"

# Check if the symlink already exists
if [ -L "$TARGET_FILE" ]; then
    echo "Symlink already exists, no action needed."
else
    # Check if the target file already exists
    if [ -e "$TARGET_FILE" ]; then
        # Delete the existing file
        rm "$TARGET_FILE"
        echo "Existing file pg_hba.conf deleted."
    fi
    # Create the symlink
    ln -s "$SOURCE_FILE" "$TARGET_FILE"
    echo "Symlink for pg_hba.conf created."
fi
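As a quick sanity check, the same symlink logic can be exercised locally on throwaway paths (every path and file content below is illustrative, not part of the real setup):

```shell
# Exercise the symlink logic from 50-hba.sh on temporary files
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo "host all all all md5" > "$tmp/pg_hba.conf"       # stand-in for the custom file
echo "generated defaults"   > "$tmp/data/pg_hba.conf"  # stand-in for initdb's file

SOURCE_FILE="$tmp/pg_hba.conf"
TARGET_FILE="$tmp/data/pg_hba.conf"

if [ -L "$TARGET_FILE" ]; then
    echo "Symlink already exists, no action needed."
else
    # A regular file exists here, so it is removed and replaced by the symlink
    [ -e "$TARGET_FILE" ] && rm "$TARGET_FILE"
    ln -s "$SOURCE_FILE" "$TARGET_FILE"
    echo "Symlink for pg_hba.conf created."
fi

# The data-directory copy now tracks the source file
cat "$TARGET_FILE"
```

After the run, $TARGET_FILE is a symlink and reading it returns the custom file's contents, which is exactly what the init script arranges inside the container.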
This script checks whether the symlink for pg_hba.conf already exists. If it doesn't, it creates a symlink to the specified pg_hba.conf file. Make sure to adjust file paths and permissions according to your setup.
Why did I create a symlink instead of directly overwriting the existing file? The answer lies within the documentation of Postgres on the Docker Hub, which states:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue on with your scripts.
So the script only executes when a new data volume is initialized. If you intend to update pg_hba.conf in the future, a one-time overwrite at init time won't suffice. With a symlink, however, any update to the bind-mounted file is always reflected in the data directory, so the configuration stays up to date.
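Two practical follow-ups, assuming the service name postgres from the compose snippet above: because the data-directory entry is a symlink to the bind-mounted file, later edits only need a configuration reload; and because init scripts run only against an empty data directory, an already-initialized volume has to be recreated for 50-hba.sh to run at all. A sketch:

```shell
# After editing ./pg_hba.conf on the host, reload the configuration
# (pg_reload_conf() re-reads pg_hba.conf without restarting the server):
docker compose exec postgres sh -c 'psql -U "$POSTGRES_USER" -c "SELECT pg_reload_conf();"'

# If the volume was already initialized, force a fresh initdb so the
# init script runs -- WARNING: this destroys the existing database:
docker compose down -v
docker compose up -d
```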
Upvotes: 0