Reputation: 2929
I am currently using nginx to run two node apps on one VPS. It works: going to mydomain1.com and mydomain2.com correctly routes to the node apps running on ports 1337 and 1338. However, I can also go to mydomain1.com:1337 or mydomain1.com:1338 and reach either app directly, which doesn't seem correct. Can I, and if so how, prevent appending the port to cross-access the apps?
Here are my files located in /etc/nginx/conf.d, mydomain1.conf and mydomain2.conf:
mydomain1.conf
server {
    listen 80;
    server_name mydomain1.com;

    location / {
        proxy_pass http://localhost:1337;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
mydomain2.conf
server {
    listen 80;
    server_name mydomain2.com;

    location / {
        proxy_pass http://localhost:1338;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I also added the following to /etc/nginx/nginx.conf:
server_names_hash_bucket_size 64;
Upvotes: 1
Views: 1960
Reputation: 1622
As other answers have noted, the best thing is to both set up a firewall and only bind your node application to listen on localhost.
For the firewall you don't have to mess with iptables; you can use ufw (Uncomplicated Firewall). This makes securing a web server's ports pretty painless: HTTP:80, HTTPS:443, SSH:22 and FTP:21.
For a Debian/Ubuntu system, that process looks like this (first ensuring you are root or using sudo):
$ apt install ufw
$ ufw default deny incoming
$ ufw allow ssh
$ ufw allow http
$ ufw allow https
$ ufw allow ftp
$ ufw enable
If you type ufw status, you can see what rules you have active.
For only binding your node process to localhost (so that anything that isn't on your server can't connect to that port), you can pass the hostname as the second parameter of server.listen() (or app.listen() for Express):
const PORT = process.env.PORT || 1337;
const HOST = "127.0.0.1"; // or "localhost"; you can use either of these

app.listen(PORT, HOST, () => {
  console.log(`Listening on http://${HOST}:${PORT}`);
});
Upvotes: 2
Reputation: 5941
It was sensible to set up your firewall like this and it obviously provided a solution.
In case you are interested, the actual reason your node apps were accessible is that, unless you bind them to a specific IP, they listen for traffic on all of your network interfaces, so they will respond to requests on your public IP.
Bind your node apps (or any server running on your machine) to 127.0.0.1 and they will only respond to internal requests, including those proxied by Nginx, but not to the outside world.
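For example, here is a minimal sketch using Node's built-in http module; the port 1337 is just the one from the question, so adapt it to your app:
const http = require("http");

const server = http.createServer((req, res) => {
  res.end("hello from the app behind nginx\n");
});

// server.listen(1337) with no host would bind to all interfaces (0.0.0.0),
// making the app reachable at http://your-public-ip:1337 as well.

// Bound to 127.0.0.1, only local clients (such as nginx's proxy_pass to
// http://localhost:1337) can connect; requests from outside are refused.
server.listen(1337, "127.0.0.1", () => {
  console.log("Listening on 127.0.0.1:1337");
});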
Upvotes: 1
Reputation: 2929
I'm adding an answer to my own question, even though I accepted the answer that got me going in the right direction. It might prove useful to someone hitting this thread down the line.
There are definitely more robust ways out there, but for me, using Linux iptables with the rules below allows only port 80 and port 22 (for my SSH sessions), so a user cannot reach mydomain1.com:1337 or mydomain1.com:1338. It also includes a few basic attack protections. (Note that these ACCEPT rules only lock the other ports down if the INPUT chain's default policy, or a final rule, drops everything else.)
# Flush existing rules
iptables -F

# Block null packets
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

# SYN-flood attack protection
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# XMAS packet protection
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

# Allow localhost (loopback) traffic
iptables -A INPUT -i lo -p all -j ACCEPT

# Now we can allow web server and SSH traffic:
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

# Allow related, established connections
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
Upvotes: 0
Reputation: 2463
This seems more of a firewall issue. The domain name just gets you to the machine, and if the node servers are running on 'exposed' ports, then of course you can access them. You should probably lock down the exposed ports, and maybe enforce domain redirecting within the application's routes (see the sketch below).
Web servers probably only need to expose port 80 for HTTP, 443 for HTTPS, 22 for SSH and maybe 21 for FTP.
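To sketch the domain-redirecting idea for an Express app: the middleware and the ALLOWED_HOST value below are hypothetical, not part of the original setup, and assume nginx passes the original Host header through (as the proxy_set_header Host $host lines in the question do):
const express = require("express");
const app = express();

// Hypothetical canonical hostname for this app.
const ALLOWED_HOST = "mydomain1.com";

// Redirect any request whose Host header doesn't match the canonical
// domain (e.g. one made via another domain or a raw IP) back to it.
app.use((req, res, next) => {
  if (req.hostname !== ALLOWED_HOST) {
    return res.redirect(301, `http://${ALLOWED_HOST}${req.originalUrl}`);
  }
  next();
});

app.get("/", (req, res) => res.send("hello from mydomain1"));

app.listen(1337, "127.0.0.1");
This only helps once a request actually reaches the app, though; blocking the ports with a firewall and binding to localhost, as in the other answers, is what stops mydomain1.com:1338 from being reachable at all.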
Upvotes: 2