Reputation: 3676
I am not finding help on this on the web, nor can I locate documentation for what I need done... The problem is that my solution caused almost as much trouble as the problem it fixed. Now I need a solution to the solution. Any answer, or a link to documentation on controlling nginx error output, would be great.
The initial solution and circumstance: I have a PCI-secured web server. Attacks against it are routine. Everything was fine (low processing load, 30-40MB log files) until OpenVAS was released on the web.
Then my server's traffic spiked: nginx usage went up 400% and PHP back-end usage went up 5000%. The logs jumped to 130+MB. The increase was entirely the result of many new IPs probing the box with the new toolkit. It hosed the back end and I was forced to respond with something drastic...
Now a cron job routinely reviews the logs; any IP making >X00 requests in an hour, requesting known URL hacks, or tripping other logic is added to a block list, and nginx is reloaded to brush off the attack. Most things went back to normal after this.
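To illustrate the mechanism, here is a minimal sketch of the block list as a cron-regenerated include file; the path and addresses are placeholders, not my real setup:
# /etc/nginx/conf.d/blocklist.conf (hypothetical path) -- rewritten by
# the cron job, then picked up with `nginx -s reload`. Files in conf.d
# are included at the http level by the stock nginx.conf, and `deny`
# is valid there, so every server block inherits the list.
deny 192.0.2.10;       # example single offender
deny 198.51.100.0/24;  # example offending range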
The new problem:
What I need help with is how nginx handles the deny.
When I used to go to my error log, I would find 10-100 lines of errors or notices that I needed to review, correct, and push fixes for to keep the box secure and tidy.
However, nginx considers every deny an error... My weekly error log is now 300,000 lines of:
2015/11/15 23:01:30 [error] 22040#0: *432212 access forbidden by rule, client: z.z.z.z, server: secure.mydomain.com, request: "GET /cadcd01153160add.aspx HTTP/1.1", host: "x.x.x.x"
With 100 lines of errors somewhere in there that I need to address...
Yes, I can filter them out, but this function now duplicates the access-log entries into the error log, so the original 35MB access log + 100 error lines are now 270MB of logs on a regular basis. If this continues I may have to get a separate drive for the logs, because the attacks are increasing. So the single box can handle the load and thwart the attacks, but the fix created an Achilles' heel of log file size.
How does one suppress or reclassify a "403" so it is not treated as an error, and keep it from flooding the log files? That would let me reduce the duplicate entries, shrink the error log, and go about my routines more easily.
Upvotes: 2
Views: 1577
Reputation: 157
We faced such a problem when creating CMS Effcore. We settled on the construction "deny all;" combined with a discarded error log. Note that, unlike access_log, error_log has no special "off" value (nginx treats "off" as a file name), so the log is pointed at /dev/null instead. An example of its application is shown below.
server {
    listen      127.0.0.1:80;
    server_name 127.0.0.1;

    # block access to "web.config" (dot escaped so it matches a literal ".")
    location ~ /web\.config$ {deny all; error_log /dev/null;}

    # block access to ".htaccess", ".git", etc.
    location ~ /\. {deny all; error_log /dev/null;}

    # single entry point: index.php?q={REQUEST_URI}
    location / {
        root          %%_root;
        fastcgi_index index.php;
        fastcgi_pass  127.0.0.1:9000;
        include       %%_include;
        fastcgi_param SCRIPT_NAME     /index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }
}
Upvotes: 1
Reputation: 636
What you can do, and what worked for me in the past, is to redefine the log level in the block that is filtering traffic. In my case I was using a location directive, so all I had to do was:
location ~* /lucee/ {
    # only emerg-level messages are logged for this location, so the
    # error-level "access forbidden by rule" entries disappear
    error_log /var/log/nginx/error.log emerg;
    deny all; # block access
}
Setting the log level to emerg prevents error-level messages from being logged, and it applies only to this specific location, so all your other error messages should stay the same.
Upvotes: 0
Reputation: 3676
I just found this and it looks promising: https://www.nginx.com/resources/admin-guide/logging-and-monitoring/#syslog
I can filter which events are logged, or direct certain error statuses to a specific log file. This could separate the 403s from the errors I need to review.
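As a sketch of what that separation might look like, assuming nginx 1.7.0 or later (for the if= parameter of access_log); the file paths are placeholders:
# route 403 responses into their own access log so the main
# access log stops duplicating the blocked requests
map $status $is_403 {
    403     1;
    default 0;
}
map $status $not_403 {
    403     0;
    default 1;
}

server {
    # ... existing server configuration ...

    # denied requests land in blocked.log; everything else stays
    # in the normal access log
    access_log /var/log/nginx/blocked.log combined if=$is_403;
    access_log /var/log/nginx/access.log  combined if=$not_403;
}
The error log itself can't be filtered by status, only by severity level, so the per-location error_log trick from the other answers would still be needed for the "access forbidden by rule" lines.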
Upvotes: 0