dotokija

Reputation: 1147

.NET Core 3.1 Web API hosted behind nginx works with HTTP POST, but not with HTTPS

This is my first .NET Core app. It is a simple Web API that receives POST updates formatted as JSON, parses them, and stores them in a DB. When tested in VS, it works correctly for both HTTP and HTTPS requests.
I decided to host it on a Linux machine (Debian) with nginx as a reverse proxy. I obtained Let's Encrypt certificates, with the plan to allow only HTTPS requests (with redirects from HTTP).
It works correctly over HTTP (nginx proxies POST requests from http://mydomain to http://127.0.0.1:5000), but it refuses to work with https://mydomain.
The error is 500: Internal Server Error.
I tried at least a dozen different suggestions, from increasing buffers to raising the number of open files for the dotnet service and in Linux.
I performed remote debugging by attaching VS to the dotnet process on Linux, and I reach the first breakpoint when sending HTTP. With HTTPS, however, the request never reaches VS, so I suspect the problem lies either in the nginx config or in Kestrel (I don't really understand how it works; I just read that it's some sort of internal dotnet web server).

To make troubleshooting easier (solving one problem at a time), I don't use HTTP redirection for now: I run two separate server blocks, one for HTTP and one for HTTPS.

Here is the code from Startup.cs (app.UseHttpsRedirection is commented out; enabling it makes no difference, and I also tried commenting out UseAuthorization and UseHsts):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
        // Remove to use HTTP only
        app.UseHsts(); // HTTP Strict Transport Security
    }

    app.UseForwardedHeaders();
    //app.UseHttpsRedirection();

    app.UseRouting();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

from appsettings.json:

"https_port": 443,
"Logging": {
  "LogLevel": {
    "Default": "Information",
    "Microsoft": "Warning",
    "Microsoft.Hosting.Lifetime": "Information"
  }
},
"AllowedHosts": "*"

from launchSettings.json (I removed https://localhost:5001 as suggested by the documentation; it makes no difference either):

  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": false,
      "launchUrl": "webapi",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }

"WebAppJson": {
  "commandName": "Project",
  "launchBrowser": false,
  "launchUrl": "webapi",
  "applicationUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }

Here is the nginx config. First, my site's default.conf file (I toggled the HTTPS redirect off so I can test both cases):

server 
{
    root /var/www/webapi;
    listen 80;
    server_name  mydomain;
    #  return 301 https://mydomain$request_uri;
    access_log           /var/log/nginx/test-api-access.log;
    error_log            /var/log/nginx/test-api-error.log;
    
    location / {
        proxy_pass         http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

}

server 
{
    root /var/www/webapi;
    listen 443 ssl http2;
    server_name     mydomain;
    access_log            /var/log/nginx/test-api-access.log;
    error_log             /var/log/nginx/test-api-error.log;
    large_client_header_buffers 32 16k;
    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem; # managed by Certbot

    location / 
    {
        proxy_pass       https://127.0.0.1;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_buffering off;
        proxy_read_timeout 7200;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

}

and nginx.conf file:

user www-data;
worker_processes auto;
#my setting below changes response to 400
#worker_rlimit_nofile 16000;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 16384;
    multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    #limit_conn_zone $binary_remote_addr zone=addr:10m;
    #limit_conn addr 1000;
    
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ## my settings for more buffers for dotnet:
    proxy_buffer_size   128k;
    proxy_buffers   32 256k;
    proxy_busy_buffers_size   256k;
    large_client_header_buffers 4 16k;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    # my default config is there in default.conf
    include /etc/nginx/sites.d/*.conf;
}

With this config I send an HTTP POST via Postman to http://mydomain and get response 200. When I send an HTTPS POST with the same config in Postman to https://mydomain, I get response 500.

The /var/log/nginx/test-api-error.log shows this:

    [alert] 5735#5735: *71441 socket() failed (24: Too many open files) while connecting to upstream, client: 127.0.0.1, server: mydomain, request: "POST / HTTP/1.1", upstream: "https://127.0.0.1:443/", host: "127.0.0.1"

I tried changing and adding every possible parameter in nginx.conf. The only one where I see a difference is the one commented out above, worker_rlimit_nofile 16000;. When I enable it, I get a different response:

400 Bad Request
Request Header Or Cookie Too Large

with no error written to the log file.

Now, the question is: how do I proceed? Is the nginx config wrong? Is there a Kestrel setting to change? (I don't know anything about Kestrel.) Or is it the app config? Any ideas, please; I've been banging my head against this for days. I just hope it is something simple that I overlooked.

What I haven't done so far is check which certificates the app accepts (I assumed a signed cert is always accepted), or check whether the app is really responding on port 443 in production (I can see that something is listening on port 443; in any case, if nothing were there I would expect a 502 response, so I guessed it's not that).

Upvotes: 0

Views: 1489

Answers (1)

Steffen Ullrich

Reputation: 123340

listen 80;
...
location / {
    proxy_pass         http://127.0.0.1:5000;


listen 443 ssl http2;
...
location / 
{
    proxy_pass       https://127.0.0.1;

Given that you want to reach the same internal backend from the outside over both HTTP and HTTPS, the proxy_pass directive should be the same in both server blocks, i.e. http://127.0.0.1:5000. HTTPS is terminated at nginx, which then talks plain HTTP to the backend.

As written, the HTTPS server block proxies to https://127.0.0.1, i.e. port 443, which is nginx itself. Every incoming HTTPS request therefore loops back into nginx and opens yet another upstream connection, until the worker exhausts its file descriptors. That is exactly the "socket() failed (24: Too many open files) while connecting to upstream ... https://127.0.0.1:443/" alert in your error log.
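A minimal sketch of the corrected HTTPS server block, based on the config in the question (mydomain and the Certbot paths are the question's placeholders, not real values):

server 
{
    listen 443 ssl http2;
    server_name mydomain;

    access_log /var/log/nginx/test-api-access.log;
    error_log  /var/log/nginx/test-api-error.log;

    ssl_certificate     /etc/letsencrypt/live/mydomain/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;   # managed by Certbot

    location / {
        # Terminate TLS here and forward plain HTTP to Kestrel on port 5000,
        # exactly as the port-80 server block already does.
        proxy_pass         http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        # Lets the app (via UseForwardedHeaders) see that the original request was HTTPS.
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}

Once HTTPS works this way, you can re-enable the commented-out return 301 https://mydomain$request_uri; line in the port-80 block to get the HTTP-to-HTTPS redirect you planned. Incidentally, the fastcgi_buffers and fastcgi_buffer_size directives in both blocks have no effect on a proxy_pass backend and can be dropped.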

Upvotes: 3
