Reputation: 675
I'm new to ActiveMQ's replicated LevelDB store, so I may have assumed some things incorrectly based on my limited understanding.
I'm setting up three ActiveMQ instances in AWS with ZooKeeper, which then determines which of the ActiveMQ instances is the master. The ZooKeeper nodes are deployed in a private subnet and the ActiveMQ instances in a public subnet; there's no problem with the ZooKeeper/ActiveMQ communication.
For security purposes:
Question/Issue: I can't find where to configure which port these ActiveMQ instances use to communicate with each other.
Why this is an issue: I need to restrict the ports that are open on these ActiveMQ instances, and I cannot simply allow all access coming from the public subnet.
An example of the port restrictions is below. I am using security groups to restrict access in AWS. When I tried allowing all ports within the public subnet, the ActiveMQ instances could see that the others were alive and were able to elect a master and slaves. Port 45818 is not the same after every setup from scratch, so I assume it is assigned randomly.
Sample logs below:
Promoted to master
Using the pure java LevelDB implementation.
Master started: tcp://**.*.*.**:45818
Once I removed that allow-all rule, I got the stack trace below:
Not enough cluster members have reported their update positions yet.
org.apache.activemq.leveldb.replicated.MasterElector
If my understanding of the stack trace above is right, it means the current ActiveMQ instance does not know about the other instances. So I need to know how to configure the port these ActiveMQ instances use to check on each other, so that I can restrict/allow access to it.
Here is the part of my ActiveMQ configuration that points to the ZooKeeper addresses. Other settings are at their default values.
ActiveMQ version: 5.13.4
<persistenceAdapter>
  <replicatedLevelDB directory="activemq-data"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="testzookeeperip1:2181,testzookeeperip2:2181,testzookeeperip3:2181"
      hostname="testhostnameofactivemqinstance"/>
</persistenceAdapter>
If any information is lacking, I'll update this question ASAP. Thanks.
Upvotes: 0
Views: 239
Reputation: 4602
This is more a hint than a qualified answer, but it's too long for a comment.
You configured a dynamic port with bind="tcp://0.0.0.0:0". I haven't used a fixed port for this setting myself, but the configuration doc says you can set one.
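A minimal sketch of what that could look like, reusing the placeholders from the question (61619 is the default replication port named in the replicated LevelDB documentation; any fixed port you open in the security group should do):

<persistenceAdapter>
  <!-- bind uses a fixed port instead of 0 (dynamic); 61619 is the documented default -->
  <replicatedLevelDB directory="activemq-data"
      replicas="3"
      bind="tcp://0.0.0.0:61619"
      zkAddress="testzookeeperip1:2181,testzookeeperip2:2181,testzookeeperip3:2181"
      hostname="testhostnameofactivemqinstance"/>
</persistenceAdapter>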
The bind port will be used for the replication protocol with the master, so obviously you cannot cut it off entirely, but it should be OK to allow only the ZK machines to communicate there.
I have not analyzed the traffic between the brokers, but as I understand replicated LevelDB, ZK decides on the active master, not the brokers, so there should be no communication between the brokers on that port.
The external broker address is configured on the transportConnectors element in the <broker> section of the config file, but I guess you already have that covered.
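For reference, a typical transportConnector in the stock activemq.xml looks roughly like this (61616 is the default OpenWire port; clients connect here, independently of the replication bind port):

<transportConnectors>
  <!-- client-facing port; separate from the replicatedLevelDB bind port -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000"/>
</transportConnectors>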
I suggest you configure the bind to a fixed port and allow communication to that port from the ZK machines and, if required, from the cluster partners. Clients only get access to the transport ports. Additionally allow communication to the ZKs on 2181, and that should be it.
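Summed up as security group rules, that would roughly be (assuming the fixed ports sketched above):

- 61619 (replication bind): open between the broker instances and, per the hint above, for the ZK machines
- 61616 (transportConnector): open for clients
- 2181: outbound from the brokers to the ZK nodes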
Upvotes: 0