Reputation: 51
I am using Ubuntu Server 14.04 32bit for the following.
I am trying to use blocklists to add regional blocks (China, Russia, ...) to my firewall rules, and I am struggling with how long my first script takes to complete and with understanding why a second script fails to work.
I had originally used http://whatnotlinux.blogspot.com/2012/12/add-block-lists-to-iptables-from.html as an example and tidied up / changed parts of that script, ending up with pretty much what's below:
#!/bin/bash
# Blacklist's names & URLs array
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
for key in "${!blacklists[@]}"; do
    # Download blacklist
    wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
    iptables -D INPUT -j $key # Delete current iptables chain link
    iptables -F $key          # Flush current iptables chain
    iptables -X $key          # Delete current iptables chain
    iptables -N $key          # Create current iptables chain
    iptables -A INPUT -j $key # Link current iptables chain to INPUT chain
    # Read blacklist
    while read line; do
        # Drop description, keep only IP range
        ip_range=$(echo -n $line | sed -e 's/.*:\(.*\)-\(.*\)/\1-\2/')
        # Test if it's an IP range
        if [[ $ip_range =~ ^[0-9].*$ ]]; then
            # Add to the blacklist
            iptables -A $key -m iprange --src-range $ip_range -j LOGNDROP
        fi
    done < <(zcat /tmp/blacklist_$key.gz | iconv -f latin1 -t utf-8 - | dos2unix)
done
# Delete files
rm /tmp/blacklist*
exit 0
This appears to work fine for short test lists, but manually adding many (200,000+) entries to iptables this way takes an EXORBITANT amount of time, and I'm not sure why. Depending on the list, I have calculated it taking upwards of 10 hours to complete, which just seems silly.
After viewing the format of the iptables-save output, I created a new script that uses iptables-save to save the working iptables rules and then appends lines in the expected format for each block, such as -A bogon -m iprange --src-range 0.0.0.1-0.255.255.255 -j LOGNDROP, and eventually uses iptables-restore to load the file, as seen below (a sketch of the resulting rules file follows the script):
#!/bin/bash
# Blacklist's names & URLs arrays
declare -A blacklists
blacklists[china]="http://www.example.com"
#blacklists[key]="url"
iptables -F # Flush iptables chains
iptables -X # Delete all user created chains
iptables -P FORWARD DROP # Drop all forwarded traffic
iptables -N LOGNDROP # Create LOGNDROP chain
iptables -A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
iptables -A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
iptables -A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
iptables -A LOGNDROP -j DROP # Drop after logging
# Build first part of iptables-rules
for key in "${!blacklists[@]}"; do
    iptables -N $key          # Create chain for current list
    iptables -A INPUT -j $key # Link INPUT to current list chain
done
iptables-save | sed '$d' | sed '$d' > /tmp/iptables-rules.rules # Save WORKING iptables rules and drop the last 2 lines (COMMIT & trailing comment)
for key in "${!blacklists[@]}"; do
    # Download blacklist
    wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
    # Rewrite each list entry as an iptables-restore rule and append it
    zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*:/-A\ $key\ -m\ iprange\ --src-range\ / | sed s/$/\ -j\ LOGNDROP/ >> /tmp/iptables-rules.rules
done
echo 'COMMIT' >> /tmp/iptables-rules.rules
iptables-restore < /tmp/iptables-rules.rules
# Delete files
rm /tmp/blacklist*
rm /tmp/iptables-rules.rules
exit 0
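For reference, the file handed to iptables-restore ends up in standard iptables-save format. With a single china list it would look roughly like the sketch below (only one appended rule is shown, reusing the example range from above):
# Generated by iptables-save ...
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:LOGNDROP - [0:0]
:china - [0:0]
-A INPUT -j china
-A LOGNDROP -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied TCP: " --log-level 7
-A LOGNDROP -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied UDP: " --log-level 7
-A LOGNDROP -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied ICMP: " --log-level 7
-A LOGNDROP -j DROP
-A china -m iprange --src-range 0.0.0.1-0.255.255.255 -j LOGNDROP
COMMIT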
This works great for most lists on the testbed, however there are specific lists that, if included, produce the error iptables-restore: line 389971 failed, which always refers to the last line (COMMIT). I've read that, because iptables-restore only applies the ruleset when it reaches COMMIT, any failure while loading the rules is reported against that last line.
The truly odd thing is that when testing these same lists on Ubuntu Desktop 14.04 64-bit, the second script works just fine. I have also tried running the script on the Desktop machine, using iptables-save to save a "properly" formatted version of the ruleset, and then loading that file into iptables on the server with iptables-restore, and I still receive the error.
I am at a loss as to how to troubleshoot this, why the initial script takes so long to add rules to iptables, and what could potentially be causing problems with the lists in the second script.
Upvotes: 3
Views: 4924
Reputation: 6818
If you need to block a multitude of IP addresses, use ipset instead.
Step 1: Create the IPset:
# Hashsize of 1024 is usually enough. Higher numbers might speed up the search,
# but at the cost of higher memory usage.
ipset create BlockAddress hash:ip hashsize 1024
Step 2: Add the addresses to block into that IPset:
# Put this in a loop, the loop reading a file containing list of addresses to block
ipset add BlockAddress $IP_TO_BLOCK
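For instance, the loop could be as simple as the sketch below (/tmp/blocklist.txt is an assumed file name, one address per line):
#!/bin/bash
# Sketch: read an assumed file of addresses, one per line, into the set created in step 1
while read -r IP_TO_BLOCK; do
    ipset add BlockAddress "$IP_TO_BLOCK"
done < /tmp/blocklist.txt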
Finally, replace all those lines to block with just one line in netfilter:
iptables -t raw -A PREROUTING -m set --match-set BlockAddress src -j DROP
Done. iptables-restore will be mucho fasta.
IMPORTANT NOTE: I strongly suggest NOT putting domain names into netfilter rules; each name has to be DNS-resolved first, and if DNS is not properly configured and/or too slow, it will fail. Rather, do a pre-resolve (or periodic resolve) of the domain names to block, and feed the resulting IP addresses into the "file containing list of addresses to block". It should be an easy script, invoked from crontab every 5 minutes or so.
EDIT 1:
This is an example of a cronjob I use to get facebook.com's addresses, invoked every 5 minutes:
#!/bin/bash
fbookfile=/etc/iptables.d/facebook.ip
for d in www.facebook.com m.facebook.com facebook.com; do
dig +short "$d" >> "$fbookfile"
done
sort -n -u "$fbookfile" -o "$fbookfile"
Every half hour, another cronjob feeds those addresses to ipset:
#!/bin/bash
ipset flush IP_Fbook
while read ip; do
ipset add IP_Fbook "$ip"
done < /etc/iptables.d/facebook.ip
Note: I have to do this because dig +short facebook.com, for instance, returns exactly ONE IP address, and after some observation the returned address changes every ~5 minutes. Since I'm too occupied to make an optimized version, I took the easy way out and do a flush/rebuild only every 30 minutes to minimize CPU spikes.
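For completeness, the two cronjobs above could be wired up with /etc/crontab entries along these lines (the script paths here are assumptions):
# Assumed install locations for the two scripts shown above
*/5  * * * *  root  /usr/local/sbin/facebook-resolve.sh  # refresh /etc/iptables.d/facebook.ip
*/30 * * * *  root  /usr/local/sbin/facebook-ipset.sh    # flush and rebuild the IP_Fbook set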
Upvotes: 3
Reputation: 334
Download Zones
#!/bin/bash
# http://www.ipdeny.com/ipblocks/
zone=/path_to_folder/zones
if [ ! -d $zone ]; then mkdir -p $zone; fi
wget -c -N http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar -C $zone -zxvf all-zones.tar.gz >/dev/null 2>&1
rm -f all-zones.tar.gz >/dev/null 2>&1
Edit your Iptables bash script and add the following lines:
#!/bin/bash
ipset=/sbin/ipset
iptables=/sbin/iptables
zone=/path_to_folder/zones # same zones folder the download script extracts into
route=/path_to_blackip/
$ipset -F
$ipset -N -! blockzone hash:net maxelem 1000000
for ip in $(cat $zone/{cn,ru}.zone $route/blackip.txt); do
    $ipset -A blockzone $ip
done
$iptables -t mangle -A PREROUTING -m set --match-set blockzone src -j DROP
$iptables -A FORWARD -m set --match-set blockzone dst -j DROP
Example: "blackip.txt" is your own IP blacklist, and "cn,ru" selects the China and Russia zone files.
Source: blackip
Upvotes: 0
Reputation: 51
The following is how I ended up solving this using ipsets as well.
#!/bin/bash
# Blacklist names & URLs array
declare -A blacklists
blacklists[China]="url"
# blacklists[key]="url"
# etc...
for key in "${!blacklists[@]}"; do
    # Download blacklist
    wget --output-document=/tmp/blacklist_$key.gz -w 3 ${blacklists[$key]}
    # Create ipset for current blacklist
    ipset create $key hash:net maxelem 400000
    # TODO method for determining appropriate maxelem
    while read line; do
        # Add addresses from list to ipset
        ipset add $key $line -quiet
    done < <(zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed s/.*://)
    # Add rules to iptables
    iptables -D INPUT -m set --match-set $key src -j $key # Delete link to list chain from INPUT
    iptables -F $key # Flush list chain if it existed
    iptables -X $key # Delete list chain if it existed
    iptables -N $key # Create list chain
    iptables -A $key -p tcp -m limit --limit 5/min -j LOG --log-prefix "Denied $key TCP: " --log-level 7
    iptables -A $key -p udp -m limit --limit 5/min -j LOG --log-prefix "Denied $key UDP: " --log-level 7
    iptables -A $key -p icmp -m limit --limit 5/min -j LOG --log-prefix "Denied $key ICMP: " --log-level 7
    iptables -A $key -j DROP # Drop after logging
    iptables -A INPUT -m set --match-set $key src -j $key
done
I'm not wildly familiar with ipsets, but this makes for a much faster method of downloading, parsing and adding blocks.
I've added an individual chain for each list to get more verbose logging that records which blocklist a dropped IP came from, should you use multiple lists. On my actual box I'm using around 10 lists and have added several hundred thousand addresses with no problem!
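As a possible further speed-up (not tested here): instead of calling ipset add once per entry, the same parsed output can be fed to ipset restore so that each list is loaded in a single call. A rough sketch for one list, assuming the same blacklist format and key name as in the script above:
# Sketch: bulk-load one parsed blacklist with ipset restore instead of per-entry adds
key=China
ipset -exist create $key hash:net maxelem 400000
zcat /tmp/blacklist_$key.gz | sed '1,2d' | sed 's/.*://' | sed '/^$/d' | sed "s/^/add $key /" | ipset -exist restore
The iptables chain setup and --match-set rules would stay exactly as in the script above.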
Upvotes: 2