David

Reputation: 5937

Optimize Ubuntu and nginx to handle static files

I am experimenting with nginx (first time) to serve static files (400 kB). I have installed Ubuntu 14.04 on a Linode server (2 GB RAM, 2 cores, 3 TB transfer) and nginx.

Open files are set to 9000, gzip is on, worker_processes is 2, and worker_connections is 4000.

Using JMeter with 50 users and a 10 second ramp-up, I am seeing 800 ms sample times, and CPU and memory are obviously not a factor. At 100 users this increases to 5-6 seconds; the outbound transfer speed should be 250 Mbps, which explains that.

But are there optimizations that would make the process handle the load more gracefully, i.e. 2 seconds instead of 5-6?

nginx file:

    user www-data;
    worker_processes 2;
    pid /run/nginx.pid;

    events {
        worker_connections 4000;
        multi_accept on;
        use epoll;
    }

    http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 15;
        #types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";

        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 9;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css text/html application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # nginx-naxsi config
        ##
        # Uncomment it if you installed nginx-naxsi
        ##

        #include /etc/nginx/naxsi_core.rules;

        ##
        # nginx-passenger config
        ##
        # Uncomment it if you installed nginx-passenger
        ##

        #passenger_root /usr;
        #passenger_ruby /usr/bin/ruby;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

Upvotes: 1

Views: 1555

Answers (2)

Marki555

Reputation: 6850

As already mentioned, if you want to use gzip, use the http_gzip_static module so that nginx doesn't have to gzip the file on each request. However, you need to create the gzipped versions of the files yourself; nginx will serve a .gz file only when it finds one (it won't create them).
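A minimal sketch of what that looks like (the location path is illustrative, and nginx must be built with --with-http_gzip_static_module):

    location /static/ {
        # Serve style.css.gz instead of style.css when the client
        # accepts gzip and the .gz file exists on disk; otherwise
        # fall back to the uncompressed file.
        gzip_static on;
    }

The compressed copies can be generated ahead of time with e.g. gzip -9 -k style.css, where -k keeps the original file alongside the .gz version.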

There are more parameters which can be tuned for maximum performance while serving static files:

sendfile on;
open_file_cache         max=2500 inactive=120s;
open_file_cache_valid   10s;
open_file_cache_min_uses 2;
open_file_cache_errors  on;

Sendfile enables faster serving of static files by reducing the number of times the file's data is copied between user and kernel space (it tells the kernel not to copy the file's contents into nginx's memory, but to send them directly to the network socket).

The open file cache avoids checking the filesystem for file changes on each request, as there is no reason to check that 1000 times per second. You can tune the values per the nginx manual. There is not much benefit in increasing the validity window beyond a few seconds.

Keepalive is very important if you serve multiple files to the browser (typically a few JavaScript and CSS files and a few images). Without it, the client would need to create a new TCP connection for each of them, which is quite slow (I see you have this enabled already). If you typically serve only one file to each user, you could disable keepalive, but with nginx that brings very little benefit; you would just avoid wasting server memory on idle open sockets.
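The relevant directives, as a starting point (values are illustrative, not tuned recommendations):

    keepalive_timeout  15;     # close idle client connections after 15 s
    keepalive_requests 100;    # cap requests served over one connection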

multi_accept on can also have negative performance effects; you need to benchmark it to see what works better for you. The same goes for accept_mutex. Or, if you have nginx v1.9.1 or later, you can use listen ... reuseport to give each worker its own listen socket, which should have the best performance.
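With a recent enough nginx, the reuseport variant would look like this (a sketch only; the listen address, server name and root are whatever your server block already uses):

    server {
        # nginx >= 1.9.1: each worker gets its own SO_REUSEPORT listen
        # socket and the kernel distributes incoming connections.
        listen 80 reuseport;
        server_name example.com;   # hypothetical
        root /var/www/html;        # hypothetical
    }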

For really high performance you may need to adjust also TCP/IP stack parameters of the server.
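As a rough illustration of the kind of settings involved (hypothetical starting values for /etc/sysctl.conf, not drop-in recommendations; benchmark before and after):

    # Larger backlog for connections queued waiting for accept()
    net.core.somaxconn = 4096
    net.ipv4.tcp_max_syn_backlog = 4096
    # Recycle sockets stuck in FIN-WAIT sooner
    net.ipv4.tcp_fin_timeout = 15

Apply the changes with sysctl -p after editing.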

Use buffering for log files so that nginx doesn't have to write to them as often, for example access_log /var/log/nginx/access.log common buffer=1k;. Each nginx worker will write to the logfile only when it has 1 kB of data ready. Note that if you use AWStats or other log-analysis software, it can have issues if timestamps in the logfile go backwards. In that case, estimate the buffer size based on req/s so that each worker fills it in under 0.5 s (e.g. with 200-byte log lines, 1000 req/s and 2 workers, each worker produces about 100 kB of log data per second, so we can set the buffer to 64 kB).
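Newer nginx versions also accept a flush= parameter alongside buffer= (available since nginx 1.3.10), which bounds how stale a buffered log entry can get:

    # Flush when 64 kB is buffered or after 1 minute, whichever comes first
    access_log /var/log/nginx/access.log combined buffer=64k flush=1m;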

Upvotes: 3

Brad

Reputation: 163272

Gzipping on the fly is killing your performance. Since you're serving up static files, consider compressing them ahead of time.

There is a separate module that enables this for you: http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html

Upvotes: 2
