Waqas Ikram

Reputation: 71

Infinispan JGroups firewall ports

When using JGroups with a component such as Infinispan, which ports do I need to open in the firewall?

My JGroups configuration file is:

    <config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
    <UDP
         mcast_addr="${test.jgroups.udp.mcast_addr:239.255.0.1}"
         mcast_port="${test.jgroups.udp.mcast_port:46655}"
         bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"
         bind_port="${test.jgroups.bind.port:46656}"
         port_range="${test.jgroups.bind.port.range:20}"
         tos="8"
         ucast_recv_buf_size="25M"
         ucast_send_buf_size="1M"
         mcast_recv_buf_size="25M"
         mcast_send_buf_size="1M"

         loopback="true"
         max_bundle_size="64k"
         ip_ttl="${test.jgroups.udp.ip_ttl:2}"
         enable_diagnostics="false"
         bundler_type="old"

         thread_naming_pattern="pl"

         thread_pool.enabled="true"
         thread_pool.min_threads="3"
         thread_pool.max_threads="10"
         thread_pool.keep_alive_time="60000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="Discard"
         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="2"
         oob_thread_pool.max_threads="4"
         oob_thread_pool.keep_alive_time="60000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Discard"

         internal_thread_pool.enabled="true"
         internal_thread_pool.min_threads="1"
         internal_thread_pool.max_threads="4"
         internal_thread_pool.keep_alive_time="60000"
         internal_thread_pool.queue_enabled="true"
         internal_thread_pool.queue_max_size="10000"
         internal_thread_pool.rejection_policy="Discard"
         />

    <PING timeout="3000" num_initial_members="3"/>
    <MERGE2 max_interval="30000" min_interval="10000"/>
    <FD_SOCK bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}" start_port="${test.jgroups.fd.sock.start.port:9780}" port_range="${test.jgroups.fd.sock.port.range:10}" />
    <FD_ALL timeout="15000" interval="3000" />
    <VERIFY_SUSPECT timeout="1500" bind_addr="${test.jgroups.udp.bind_addr:match-interface:eth0}"/>

    <pbcast.NAKACK2
        xmit_interval="1000"
        xmit_table_num_rows="100"
        xmit_table_msgs_per_row="10000"
        xmit_table_max_compaction_time="10000"
        max_msg_batch_size="100"/>
    <UNICAST3
        xmit_interval="500"
        xmit_table_num_rows="20"
        xmit_table_msgs_per_row="10000"
        xmit_table_max_compaction_time="10000"
        max_msg_batch_size="100"
        conn_expiry_timeout="0"/>

    <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
    <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
    <tom.TOA/> <!-- the TOA is only needed for total order transactions -->

    <UFC max_credits="2m" min_threshold="0.40"/>
    <MFC max_credits="2m" min_threshold="0.40"/>
    <FRAG2 frag_size="30k"/>
    <RSVP timeout="60000" resend_interval="500" ack_on_delivery="false"/>
    </config>

Now, I have the following ports open in the firewall (Chain INPUT (policy ACCEPT)):

    ACCEPT     udp  --  anywhere             anywhere            multiport dports 46655:46676 /* 205 ipv4 */ state NEW
    ACCEPT     tcp  --  anywhere             anywhere            multiport dports 9780:9790 /* 204 ipv4 */ state NEW

But still, after running the Infinispan embedded cache for a few minutes, I'm getting:

    2014-11-05 18:21:38.025 DEBUG org.jgroups.protocols.FD_ALL - haven't received a heartbeat from <hostname>-47289 for 15960 ms, adding it to suspect list

It works fine if I turn off iptables. Thanks in advance.

Upvotes: 1

Views: 2497

Answers (4)

Waqas Ikram

Reputation: 71

This was happening because the switch was dropping UDP packets, due to limits configured on the switch for UDP traffic.
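
One way to confirm this kind of network-level loss (a rough sketch; the interface name and port are taken from the question's config and may differ in your environment) is to capture the discovery traffic on both nodes and compare:

    # On the sending node: check that multicast discovery traffic actually leaves eth0
    tcpdump -ni eth0 udp port 46655

    # On the receiving node: run the same capture; packets that show up on the
    # sender but never arrive here are being dropped in between (e.g. by the switch)
    tcpdump -ni eth0 udp port 46655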

Upvotes: 0

Federico Sierra

Reputation: 5208

In my environment I have two blades in the same chassis with IP bonding (mode 1) configured. When the eth0 interface is active on both blades, everything works perfectly. When eth0 is active on one blade and eth1 is active on the other, the cluster gets created, but after a while it starts throwing the "haven't received a heartbeat" exception, and no process leaves the cluster even after a long period of time (hours).

There's a bug in JGroups that may be associated with your problem: Don't receive heartbeat in NIC Teaming configuration after NIC switch.

Workaround: switch to TCP.
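
For reference, a minimal sketch of what a TCP-based stack could look like (the initial_hosts entries and port 7800 are placeholders, and the protocol list is a trimmed-down variant of the UDP config in the question, not a tested drop-in replacement):

    <config xmlns="urn:org:jgroups"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
        <TCP bind_addr="${test.jgroups.tcp.bind_addr:match-interface:eth0}"
             bind_port="${test.jgroups.tcp.bind_port:7800}"
             port_range="5"/>
        <!-- TCPPING discovers members from a static host list instead of IP multicast -->
        <TCPPING initial_hosts="hostA[7800],hostB[7800]" port_range="5"
                 timeout="3000" num_initial_members="2"/>
        <MERGE2 max_interval="30000" min_interval="10000"/>
        <FD_SOCK/>
        <FD_ALL timeout="15000" interval="3000"/>
        <VERIFY_SUSPECT timeout="1500"/>
        <pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="1000"/>
        <UNICAST3 xmit_interval="500"/>
        <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
        <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
        <MFC max_credits="2m" min_threshold="0.40"/>
        <FRAG2 frag_size="30k"/>
    </config>

With a stack like this, the firewall only needs the TCP bind_port range (7800 plus port_range in this sketch) and the FD_SOCK ports open; no multicast rules are required.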

See also:

Upvotes: 1

Bela Ban

Reputation: 2186

What's mode 1 again? Failover, I assume? Not load balancing, right? Perhaps you need to use iptables -i INTF, where INTF is either eth0 or eth1? I'm not an expert in iptables, but perhaps you need to reference the logical name instead, e.g. iptables -i bond0? I suggest using Wireshark to see which packets are received, and/or enabling logging of the DROPs in iptables on both boxes.
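
A rough sketch of those two suggestions (bond0 is my guess at the bonded interface name, and the port ranges are copied from the question, so treat both as assumptions):

    # Accept JGroups traffic on the bonded interface rather than eth0/eth1
    iptables -A INPUT -i bond0 -p udp --dport 46655:46676 -j ACCEPT
    iptables -A INPUT -i bond0 -p tcp --dport 9780:9790   -j ACCEPT

    # Log whatever is about to be dropped so it shows up in the kernel log
    iptables -A INPUT -j LOG --log-prefix "IPT DROP: " --log-level 4
    iptables -A INPUT -j DROP

Dropped packets then appear in dmesg / /var/log/messages with the "IPT DROP:" prefix, which makes it easy to see whether the JGroups traffic is the thing being rejected.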

Upvotes: 0

Bela Ban

Reputation: 86

How did you set up your iptables rules? I used (on Fedora 18):

    iptables -A INPUT -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A INPUT  -p udp --dport 46655:46676  -j ACCEPT
    iptables -A OUTPUT -p udp --dport 46655:46676  -j ACCEPT
    iptables -A INPUT  -p tcp --dport 9780:9790    -j ACCEPT
    iptables -A OUTPUT -p tcp --dport 9780:9790    -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -j DROP

This works for me, between 2 hosts.
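
One hedged addition on top of this, if UDP multicast discovery is used: the final DROP rule also discards incoming IGMP queries, and on a switch with IGMP snooping that can cause the multicast group to stop being forwarded after a few minutes. It may therefore be worth explicitly allowing IGMP and the multicast address from the question's config:

    # Add these before the final DROP rule (or insert them with iptables -I)
    iptables -A INPUT -p igmp -j ACCEPT
    iptables -A INPUT -d 239.255.0.1 -j ACCEPT

Checking the packet counters with iptables -L -n -v afterwards shows which rules the cluster traffic is actually hitting.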

Upvotes: 3
