Grateful

Reputation: 10175

How does a Node.js "server" compare with Nginx or Apache servers?

I have been studying Node.js recently and came across some material on writing simple Node.js-based servers. For example, the following:

var express = require("express"),
http = require("http"), app;

// Create our Express-powered HTTP server
// and have it listen on port 3000
app = express();
http.createServer(app).listen(3000);

// set up our routes
app.get("/hello", function (req, res) {
    res.send("Hello World!");
});

app.get("/goodbye", function (req, res) {
    res.send("Goodbye World!");
});

Now, although I seem to understand what's going on in the code, I am slightly confused by the terminology. When I hear the term server, I think about stuff like Apache or Nginx. I am used to thinking about them as being like a container that can hold my web applications. How does a Node.js server differ from an Nginx or Apache server? Isn't it true that a Node.js-based server (i.e. code) can still be placed behind something like Nginx to run? So why are both called "servers"?

Upvotes: 145

Views: 89273

Answers (4)

sjohn

Reputation: 1

By the definition of a server, yes, they are all servers.

In the client/server model, a client (a web browser is one common kind of client) connects to a server, which provides services that the client then does something with (in this case, rendering the HTML provided by the web server into a web page). Put another way, a server is a specialized type of daemon, a background process that loops forever, listening for connections on one or more sockets and answering requests. Apache does run multiple workers to handle web requests from clients concurrently; depending on its MPM it is multi-process or multi-threaded.

I once wrote a C-based CGI program that connected over a socket to a Perl chat server, which passed its HTML output back to the CGI program and then on to Apache.

You could also, for example, write a simple daemon in Perl that listens for requests and returns the client's external IP address, for use in dynamic DNS. Then write a client that connects to the server, gets the IP, and updates a DNS record via an API call to somewhere.

Something like this...

The Server

#!/usr/local/bin/perl
$ver = "0.10";

use Socket;

$dir = "$ENV{'HOME'}/bin/localdns";
if (fork) { exit(0); }
&init;
&socket;

sub webprint {
    send(SOCKET,"@_",0);
}

while (1) {
    my ($raddr, $remote_addr);
    # $time = time();
    if (($raddr = accept(SOCKET, SSOCKET))) {
        ($client_port, $client_addr) = unpack_sockaddr_in($raddr);
        $remote_addr = inet_ntoa($client_addr);
        if (0) { # not reading the socket
            setsockopt(SOCKET, SOL_SOCKET, SO_LINGER, pack("ii", 0, 0));
            $rlen = sysread(SOCKET, $inp, 1024);
            if ($rlen < 1024) {
                if ($rlen < 1) {
                    close(SOCKET);
                    next;
                } else {
                    $inp =~ s/\0+//;
                    while ($inp !~ /\n/ && $rlen > 0) {
                        $rlen = sysread(SOCKET, $temp, 1024);
                        if ($rlen < 1024) { $temp =~ s/\0+//; }
                        $inp .= $temp;
                    }
                }
            } else {
                while ($inp !~ /\n/ && $rlen > 0) {
                    $rlen = sysread(SOCKET, $temp, 1024);
                    if ($rlen < 1024) { $temp =~ s/\0+//; }
                    $inp .= $temp;
                }
            }
        } # we are not reading the socket
    } else {
        next;
    }
    # print STDERR "$remote_addr\n";
    # print STDOUT "$remote_addr\n";
    webprint "$remote_addr";
    # webprint "</b></nobr>";
    close(SOCKET);
}
close(SOCKET);
close(SSOCKET);

###################
# server routines #
###################

# things done once as the server start up
sub init {
    chop($host = `hostname`); 
 
    # prevent sigpipe crashes
    $SIG{PIPE} = 'IGNORE';
    
    # allow signal alarm to break some routines that lock
    $SIG{'ALRM'} = \&sig_alarm;
    
    # allow sighup to do routine things
    $SIG{HUP} = \&sig_hup;
    
    # server conditional code
    $mainhost = "myhost.com";
    $mysql_host = "localhost";
    $svip = "127.0.0.1";
    $port = "43636";

    open(STDERR, ">>$dir/error_ipserv") || die "Can't open $dir/error_ipserv";
    $oldfh = select STDERR; $|=1; select($oldfh);
}

sub socket {   
    getservbyport($port, "tcp");
    $proto = getprotobyname("tcp");
    socket(SSOCKET, PF_INET, SOCK_STREAM, $proto) || die "Socket $!\n";
    setsockopt(SSOCKET, SOL_SOCKET, SO_REUSEADDR, pack("l", 1)) || die "setsockopt: $!\n";
    # this should work for any local or external address
    bind(SSOCKET, sockaddr_in($port, INADDR_ANY)) || die "Bind $!\n";
    # for specific ip
    # bind(SSOCKET, sockaddr_in($port, inet_aton("$svip"))) || die "Bind $!\n";
    listen(SSOCKET,512) || die "Listen $!\n";
    vec ($rin, $sockweb_fd = fileno(SSOCKET), 1) = 1;
    print STDERR "Socket is set up and listening.\n";
}

# good for timeout of connects that do not connect fast enough
sub sig_alarm {
    return;
}

# good to tell the server to do things at routine
# intervals or to save scores or shutdown even
sub sig_hup {
    return;
}

The Client

#!/usr/local/bin/perl
use Socket;
$remote_hostname = "myhost.com";
$port = "43636";

socket(SHANDLE, PF_INET, SOCK_STREAM, getprotobyname('tcp')) || die $!;
my $dest = sockaddr_in ($port, inet_aton($remote_hostname));
connect (SHANDLE, $dest) || die $!;

## ONE WAY TO SEND
## $data = "Hello";
## send(SHANDLE, $data, 0);

## ANOTHER WAY TO SEND
#select (SHANDLE);
#print "\n";

# THIS
# recv(SHANDLE, $buffer, 1024, 0);

# OR
$newip = <SHANDLE>;
close(SHANDLE);
print "$newip\n";

Upvotes: 0

slebetman

Reputation: 113866

It's a server, yes.

A node.js web application is a full-fledged web server just like Nginx or Apache.

You can indeed serve your node.js application without using any other web server. Just change your code to:

app = express();
// note: binding to port 80 usually requires elevated privileges
http.createServer(app).listen(80); // serve HTTP directly

Indeed, some projects use node.js as the front-end load balancer for other servers (including Apache).

Note that node.js is not the only development stack to do this. Web development frameworks in Go, Java and Swift also do this.

Why?

In the beginning was CGI. CGI was fine and worked OK: Apache would get a request, find that the URL needed to execute a CGI app, execute that CGI app with the request data passed in as environment variables, read its stdout, and serve the data back to the browser.

The problem is that it is slow. That was OK when the CGI app was a small, statically compiled C program, but a group of small, statically compiled C programs became hard to maintain. So people started writing in scripting languages. Then that became hard to maintain too, and people started developing object-oriented MVC frameworks. Now we had trouble: EVERY REQUEST had to compile all those classes and create all those objects just to serve some HTML, even if there was nothing dynamic to serve (because the framework needed to figure out that there was nothing dynamic to serve).

What if we don't need to create all those objects every request?

That was what people thought. And from trying to solve that problem came several strategies. One of the earliest was to embed interpreters directly in web servers like mod_php in Apache. Compiled classes and objects can be stored in global variables and therefore cached. Another strategy was to do pre-compilation. And yet another strategy was to run the application as a regular server process and talk with the web server using a custom protocol like FastCGI.

Then some developers started simply using HTTP as their app-to-server protocol. In effect, the app is also an HTTP server. The advantage of this is that you don't need to implement a new, possibly buggy, possibly untested protocol, and you can debug your app directly using a web browser (or, also commonly, curl). And you don't need a modified web server to support your app, just any web server that can do reverse proxying or redirects.

Why use Apache/Nginx?

When you serve a node.js app, note that you are the author of your own web server. Any potential bug in your app is a directly exploitable bug on the internet. Some people are (justifiably) not comfortable with this.

Adding a layer of Apache or Nginx in front of your node.js app means you have a battle-tested, security-hardened piece of software on the live internet as an interface to your app. It adds a tiny bit of latency (the reverse proxying) but most consider it worth it.

This used to be the standard advice in the early days of node.js. But these days there are also sites and web services that expose node.js directly to the internet. The http.Server module has by now been battle-tested enough on the internet to be trusted.

Upvotes: 225

Vamshi Krishna

Reputation: 157

Assume there is a hotel named Apache Hotel which has a waiter for each customer.

As soon as the customer orders a salad, the waiter goes to the chef and tells him. While the chef prepares the food, the waiter waits. Here,

Chef => File System,

Waiter => Thread,

Customer => Event.

Even when the customer orders water, the waiter brings it only after serving the salad; the waiter keeps waiting until the chef has prepared the salad. This state is referred to as blocking. As the hotel grows, each customer needs a dedicated waiter to serve them, which increases the number of blocked threads (waiters).

Now, coming to the Node Hotel, there is only one waiter for all the customers. If the first customer orders soup, the waiter tells the chef and moves on to the second customer. When the food is ready, the waiter delivers it to the customer. Here no customer waits on another customer's order. This state is referred to as non-blocking. The single waiter (thread) serves all the customers and keeps them happy.

Thus Node, which runs your application in a single thread, can be very fast for this kind of I/O-bound work.

Upvotes: 14

Naeem Shaikh

Reputation: 15715

NodeJs creates its own server. As you can see, the terminology is quite clear:

http.createServer(app).listen(3000);

Create a server and listen for HTTP requests on port 3000.

We used Nginx in one of our projects, but more as a load balancer for multiple Node.js instances.

Say you have two Node.js instances running on ports 3000 and 3001. You can still use Nginx as a server listening for the actual HTTP calls on port 80, and have it redirect each request to one of the Node.js servers (or some other server), much like a load balancer. So you can still use whatever Nginx provides together with Node.js.
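As a rough sketch of that setup (the upstream name is made up; the ports are the ones from the example above), the Nginx side might look something like this:

```nginx
# Hypothetical Nginx config: listen on port 80 and load-balance
# across two local Node.js instances.
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;

    location / {
        proxy_pass http://node_app;
        # Pass the original host and client address through to Node.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Each Node.js instance remains a complete HTTP server in its own right; Nginx just decides which one gets each request.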

A good question on this has already been asked here.

Upvotes: 23
