Reputation: 769
I have a strange issue. I am using Apache Storm version 0.10.0 and ZooKeeper version 3.5.1, with 4 different VMs on the same network.
I start:
ZooKeeper at localhost:2181 on the 1st VM (IP XXX.XXX.5.60),
Nimbus & UI on the 2nd VM (IP XXX.XXX.5.61),
supervisor 1 on the 3rd VM and supervisor 4 on the 4th VM (IPs XXX.XXX.5.67 & XXX.XXX.5.68).
This is the storm.yaml of the Nimbus:
storm.zookeeper.servers:
    - "XXX.XXX.5.60"
nimbus.host: "XXX.XXX.5.61"
storm.local.dir: "/home/stresstest/data"
This is the storm.yaml of the supervisors:
storm.zookeeper.servers:
    - "XXX.XXX.5.60"
nimbus.host: "XXX.XXX.5.61"
storm.local.dir: "/home/stresstest/data"
supervisor.slots.ports:
    - 6700
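Each entry under `supervisor.slots.ports` is one worker slot, so listing a single port gives each supervisor exactly one worker. As a sketch, a supervisor that should offer two worker slots would list two ports (the second port number here is illustrative):

```yaml
supervisor.slots.ports:
    - 6700
    - 6701
```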
As I saw zookeeper accepted the connections normally:
2015-11-27 04:16:06,438 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x1000000d4ad000b with negotiated timeout 20000 for client /XXX.XXX.5.67:41315
2015-11-27 04:16:06,439 [myid:] - INFO [SyncThread:0:ZooKeeperServer@678] - Established session 0x1000000d4ad000c with negotiated timeout 20000 for client /XXX.XXX.5.68:59833
As you can see above, each supervisor has 1 worker slot. From the UI I see that I have 2 Supervisors and 2 Total slots. When I submit a topology to Nimbus, it consumes 1 worker.
And the problem begins here. When I rebalance the topology to consume 2 workers, it does this:
Id Host Uptime Slots Used slots Version
b38878ae-8eea-4265-9c98-2b6db1ef0bb0 vlan5-dhcp105.xxx.gr 18m 31s 1 1 0.10.0
d463df62-5d18-460f-86f4-18dff93f544a vlan5-dhcp105.xxx.gr 13m 55s 1 1 0.10.0
It appears that the topology uses 2 workers, but it's effectively the same one: the worker host is the same for both workers/supervisors. So when I send data to Nimbus, only 1 worker is processing and the other one is waiting for data (both workers have downloaded the topology). Why is this happening?
Upvotes: 3
Views: 811
Reputation: 769
I managed to fix this. Both supervisors had the same hostname (it was assigned during the initialization of the Xen hypervisor), so I believe the VMs were conflicting with each other. When I changed one VM's hostname, it worked.
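A quick way to confirm this kind of conflict is to compare what each VM reports as its hostname; if both supervisors print the same name, Storm registers them under a single host. A sketch (the replacement hostname is illustrative):

```shell
# Run on each supervisor VM; identical output on both indicates the conflict.
hostname

# On one VM, assign a distinct hostname (requires root; the name is
# illustrative), then restart the supervisor so it re-registers:
# sudo hostnamectl set-hostname storm-supervisor-2
```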
Upvotes: 0
Reputation:
Maybe that is because of using the same storm.local.dir path for Nimbus and the supervisors. Just change the path on your supervisors so that they use different paths, then try the rebalance again; I think it will work.
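For instance, the storm.local.dir line in each node's storm.yaml could point at its own directory (the paths below are illustrative; these are separate files on separate VMs):

```yaml
# storm.yaml on the Nimbus VM:
storm.local.dir: "/home/stresstest/data-nimbus"

# storm.yaml on a supervisor VM:
storm.local.dir: "/home/stresstest/data-supervisor"
```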
Upvotes: 0
Reputation: 94
I had the same kind of problem in our project, and the finding is that we cannot increase the number of workers with the rebalance command. Rebalancing can only be used to decrease the number of workers. For example, if the topology launcher provides the number of workers as 2, you can rebalance the topology down to 1 worker using the rebalance -n 1 command. The parallelism hint (number of executors) can also be increased or decreased using the rebalance command.
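The rebalance invocations described above look roughly like this (topology and component names are hypothetical; the commands are shown commented out since they need a running cluster):

```shell
# Submit the topology with workers=2 set by the launcher, then scale down:
# storm rebalance mytopology -n 1          # decrease workers from 2 to 1
# storm rebalance mytopology -e mySpout=4  # adjust executor count for a component
```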
Upvotes: 0