Uri Yakir

Reputation: 141

NGINX + Dockerized Gunicorn - How secure & efficient is my webserver setup?


I’m working on an iOS app that needs to communicate with a server. As part of that communication, the app sends a private cookie that must be transferred **securely**.
After a ton of research and frustration, I’ve successfully managed to set up my web server in the following manner:

  1. My entire setup runs on an AWS EC2 instance running Linux.

  2. My routes are defined with FastAPI

  3. The web server is deployed with Gunicorn launching multiple Uvicorn workers, as recommended by the official Uvicorn docs: gunicorn -w 4 -k uvicorn.workers.UvicornWorker example:app

  4. The web server is launched on port 8080 using a Docker container:
    Dockerfile

... docker setup ...
EXPOSE 8080
CMD ["gunicorn", "-b", "0.0.0.0:8080", "-w", "2", "-k", "uvicorn.workers.UvicornWorker", "main:app"]
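Since the container publishes port 8080, one way to keep it off the public interface (in addition to a firewall rule) is to bind the published port to loopback when starting the container, so only the local NGINX proxy can reach it. A hedged sketch — the image name is hypothetical:

```shell
# Publish the app port only on the loopback interface, so the container
# is reachable by the local NGINX proxy but not from outside the machine.
docker run -d --name myapp -p 127.0.0.1:8080:8080 myapp-image
```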
  5. My web server runs behind an NGINX reverse proxy. The proxy listens on ports 80 and 443 and forwards requests to my web server (which sits on port 8080).
    My NGINX .conf file is very minimal and looks like this:
server {
    server_name example.*;

    location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    }
}
  6. NGINX uses certbot-generated certificates to support HTTPS-only communication.
    The certificates were generated using python-certbot-nginx, with the following command: sudo certbot --nginx -d example.club -d www.example.club

  7. Finally, to ensure that no one bypasses my proxy and sends requests directly to my web server, I’ve configured my machine to only allow communication to port 8080 from the machine’s own IP address.
    Ports 80 and 443 are obviously open to any IP address.
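For reference, restricting port 8080 like this can be done at the host firewall level. A minimal sketch using iptables (assuming the proxy connects over loopback, which matches the proxy_pass to 127.0.0.1 above):

```shell
# Accept connections to the app port only from the loopback interface
# (i.e. from NGINX on the same machine), and drop everything else.
sudo iptables -A INPUT -p tcp --dport 8080 -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP
```

On AWS, the same effect can be achieved by simply not opening port 8080 in the instance's security group.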

Since I’m a newbie to web servers in general and web server deployment in particular, I would like to know:
how efficient and secure is this setup?
Do you have recommendations or other things I should implement to make sure no private data leaks out, while also being able to handle the request load?


Thanks!

Upvotes: 1

Views: 641

Answers (1)

Tim Dithmer

Reputation: 438

Without knowing the exact configuration in detail, here are some things to think about. Overall, the setup seems about right.

  1. "I’ve configured my machine to only allow communication to port 8080 from the machine’s IP address." -> Have you really used the external IP of the machine, or are you using a localhost/127.0.0.1 value there? Since you proxy_pass to 127.0.0.1, it is fine to only allow connections over the loopback adapter.
  2. I don't know the SSL parameters you use for NGINX. There is a lot you can configure there. Try https://www.ssllabs.com/ssltest/ to see if the config is good enough. A+ is nearly impossible to reach without imposing a lot of restrictions on your user base, but an A is definitely a good grade to aim for.
  3. You might want to set up an HTTP -> HTTPS redirect for everything except the path certbot needs, and then set the HSTS header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
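A sketch of point 3 as an NGINX config fragment — the domain names and webroot path are assumptions, and if certbot's nginx plugin manages your renewals it may handle the challenge path for you:

```nginx
# Redirect all plain-HTTP traffic to HTTPS, except the ACME
# challenge path used for certificate renewal (webroot method).
server {
    listen 80;
    server_name example.club www.example.club;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```

Inside the HTTPS server block, the HSTS header is a single directive:

```nginx
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```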

Hope this helps a little bit. But as I said, overall I don't see a security breach there. The entry points I see are your NGINX, certbot, and your dockerized web app. You have to trust NGINX and certbot. You might want to make sure that you automatically install security updates for those, maybe with unattended-upgrades, so you don't "forget" them. The same goes for Docker and all the rest of the OS-level software that comes from your package manager.
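On Debian/Ubuntu, enabling unattended-upgrades is a few commands (assuming an apt-based distribution, which may not match your EC2 image):

```shell
# Install and enable automatic security updates on Debian/Ubuntu.
sudo apt-get update
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```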

Greetings

Upvotes: 1
