Florian

Reputation: 341

fail2ban detect bad website crawlers

Recently my server was attacked by a bad website crawler that searched for ~70,000 folders on my website in ~2 minutes. This caused some server lag, and it could also become a problem if the crawler finds an exploit in phpMyAdmin or something else.

Now my question: Is there a fail2ban config that can block such scans?

I already found an older one (link), but it doesn't match the lines in my log.

The lines in my log look like this:

domain.tld:80 <Attacker IP (v4 & v6)> - - [17/Apr/2017:20:45:28 +0200] "GET /ph4/ HTTP/1.1" 404 4123 "http://www.google.com" "Googlebot/2.1 (+http://www.google.com/bot.html)"

Upvotes: 1

Views: 704

Answers (1)

Florian

Reputation: 182

My regex is not perfect, but it can help. Because your log lines start with the virtual host (domain.tld:80), the pattern has to skip that first field so that <HOST> captures the attacker's IP instead of the domain:

failregex = ^\S+ <HOST> .* \".*\" (400|401|403|404|405|413|414|429) .*

This regex will match lines with the HTTP status codes 400, 401, 403, 404, 405, 413, 414, and 429. You can find the meaning of these codes on this link.
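For completeness, here is a minimal sketch of how this could be wired into fail2ban. The filter name, log path, and thresholds are my assumptions, so adjust them to your setup:

# /etc/fail2ban/filter.d/apache-badcrawler.conf (hypothetical filter name)
[Definition]
# \S+ skips the leading vhost field (domain.tld:80) so <HOST>
# captures the client IP; match common client-error status codes.
failregex = ^\S+ <HOST> .* \".*\" (400|401|403|404|405|413|414|429) .*
ignoreregex =

# added to /etc/fail2ban/jail.local
[apache-badcrawler]
enabled  = true
port     = http,https
filter   = apache-badcrawler
logpath  = /var/log/apache2/access.log
maxretry = 50
findtime = 120
bantime  = 3600

With maxretry = 50 inside a findtime of 120 seconds, a mass scan like yours (~70,000 requests in ~2 minutes) gets banned almost immediately, while a regular visitor who hits a few dead links is left alone. You can verify the filter against your real log before enabling the jail:

fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/apache-badcrawler.conf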

I hope it helps.

Upvotes: 2
