Reputation: 13
I am trying to run a project in Docker with nginx + php-fpm on Ubuntu 20.10.
The docker-compose.yml is taken from the standard examples:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./hosts:/etc/nginx/conf.d
      - ./sites:/var/www
      - ./logs/nginx:/var/log/nginx
    links:
      - php
  php:
    build: ./images/php
    links:
      - mysql
    volumes:
      - ./sites:/var/www
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - /etc/mysql:/etc/mysql
      - ./logs/mysql:/var/log/mysql
      - ./mysql:/var/lib/mysql
      - ./mysql-files:/var/lib/mysql-files
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sait
Locally everything works: I can define virtual hosts in ./hosts and reach them. How do I make this configuration serve the virtual hosts from the outside? The A-records are registered; via the external IP address I correctly reach the default project, but via the domain name I get ERR_CONNECTION_REFUSED. The ports are open in ufw, and access over the external IP works normally.
nginx domain.conf:
server {
    listen 80;
    index index.php index.html;
    server_name ________ www.___________;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/________________;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Dockerfile:
FROM php:7.4-fpm

RUN apt-get update && apt-get install -y \
        curl \
        wget \
        git \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libpng-dev \
        libonig-dev \
        libzip-dev \
        libmcrypt-dev \
    && pecl install mcrypt-1.0.3 \
    && docker-php-ext-enable mcrypt \
    && docker-php-ext-install -j$(nproc) iconv mbstring mysqli pdo_mysql zip \
    && docker-php-ext-configure gd --with-freetype --with-jpeg \
    && docker-php-ext-install -j$(nproc) gd

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

ADD php.ini /usr/local/etc/php/conf.d/40-custom.ini

WORKDIR /var/www

CMD ["php-fpm"]
Output of sudo lsof -nP -i | grep LISTEN:
systemd-r 605 systemd-resolve 13u IPv4 26775 0t0 TCP 127.0.0.53:53 (LISTEN)
cupsd 667 root 6u IPv6 29160 0t0 TCP [::1]:631 (LISTEN)
cupsd 667 root 7u IPv4 29161 0t0 TCP 127.0.0.1:631 (LISTEN)
container 777 root 8u IPv4 31394 0t0 TCP 127.0.0.1:43289 (LISTEN)
sshd 803 root 3u IPv4 30762 0t0 TCP *:22 (LISTEN)
sshd 803 root 4u IPv6 30764 0t0 TCP *:22 (LISTEN)
docker-pr 2342 root 4u IPv6 40932 0t0 TCP *:3306 (LISTEN)
docker-pr 2521 root 4u IPv6 43169 0t0 TCP *:443 (LISTEN)
docker-pr 2535 root 4u IPv6 44053 0t0 TCP *:80 (LISTEN)
Upvotes: 1
Views: 680
Reputation: 81
I suggest modifying your Dockerfile to copy the vhosts.conf into the container, then symlink it so nginx can use it:
RUN apt-get -y update \
    && apt-get -y upgrade \
    && apt-get install -y nginx fonts-noto-color-emoji \
    && rm -rf /var/lib/apt/lists/* \
    && groupadd -g 1001 nginx \
    && useradd -u 1001 -ms /bin/bash -g nginx nginx \
    && chown -R nginx:nginx /var /run \
    && ln -s /etc/nginx/sites-available/vhosts.conf /etc/nginx/sites-enabled/vhosts.conf \
    && chmod +x ./start.sh /usr/local/bin/install-php-extensions \
    && install-php-extensions curl mysqli memcache bz2 zip
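The copy step itself is not shown above; a minimal sketch, assuming vhosts.conf sits next to the Dockerfile (adjust the source path to your layout):

# Assumption: vhosts.conf is in the build context next to the Dockerfile
COPY vhosts.conf /etc/nginx/sites-available/vhosts.conf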
Additionally, I see that you use links, which is deprecated and shouldn't be used. Instead, create an external Docker bridge network and put the containers inside that network, as in the sketch below.
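A minimal compose sketch using a shared external network instead of links; the network name web is just an example, and it has to be created first with docker network create web:

version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - web
  php:
    build: ./images/php
    networks:
      - web
  mysql:
    image: mysql
    networks:
      - web

networks:
  web:
    external: true

Containers on the same network can still reach each other by service name (e.g. fastcgi_pass php:9000 keeps working), so nothing in the nginx config needs to change.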
Upvotes: 0