Reputation: 2134
I currently have a problem using Kafka with Storm. Until a few days ago I used a Windows computer, but I have now switched to a Mac.
My Kafka queue is filled with approx. 4.8 million messages.
What happens is that after approx. 4,600 processed messages I end up with 10,240 open file descriptors that I never get rid of.
Once I reach 10,240 open file descriptors, Storm tries to write a worker heartbeat file to its local state directory and fails with:
java.io.FileNotFoundException: /var/folders/2p/3xcy9hp10gd852_06v0dzg440000gn/T/c5a2bffa-ff9a-4093-9002-79cee98385dc/workers/5a809959-53b7-48f4-848e-ed585007d9ed/heartbeats/1416569332681 (Too many open files)
at java.io.FileOutputStream.open(Native Method) ~[na:1.8.0_25]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_25]
at org.apache.commons.io.FileUtils.openOutputStream(FileUtils.java:367) ~[commons-io-2.4.jar:2.4]
at org.apache.commons.io.FileUtils.writeByteArrayToFile(FileUtils.java:2094) ~[commons-io-2.4.jar:2.4]
at org.apache.commons.io.FileUtils.writeByteArrayToFile(FileUtils.java:2078) ~[commons-io-2.4.jar:2.4]
at backtype.storm.utils.LocalState.persist(LocalState.java:86) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.utils.LocalState.put(LocalState.java:66) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.worker$do_heartbeat.invoke(worker.clj:68) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.daemon.worker$fn__4527$exec_fn__1096__auto____4528$heartbeat_fn__4529.invoke(worker.clj:357) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.timer$schedule_recurring$this__1639.invoke(timer.clj:99) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.timer$mk_timer$fn__1622$fn__1623.invoke(timer.clj:50) ~[storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at backtype.storm.timer$mk_timer$fn__1622.invoke(timer.clj:42) [storm-core-0.9.2-incubating.jar:0.9.2-incubating]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
After some googling, I found advice to increase the max file limits, which I did as follows (launchctl):
launchctl limit
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 1064 1064
maxfiles 1638400 2048000
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1064
virtual memory (kbytes, -v) unlimited
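As far as I understand, the worker JVM inherits the shell's ulimit rather than the launchctl values, so the 4096 open files reported above may be the effective cap. To verify what the JVM actually sees, here is a minimal diagnostic sketch of my own (not part of the topology) using the com.sun.management MXBean:

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // The cast works on Unix-like JVMs (OS X, Linux); it is not available on Windows.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount());
        System.out.println("max fds:  " + os.getMaxFileDescriptorCount());
    }
}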
Still no good; it keeps crashing. It also looks like the Kafka spout emits more tuples than my topology can acknowledge.
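For what it's worth, the only backpressure knob I know of in Storm 0.9.x is topology.max.spout.pending, which caps the number of un-acked tuples each spout task may have in flight (it only takes effect when tuples are emitted with message IDs, which the Kafka spout does). A sketch of setting it at submit time; the topology name, the commented-out wiring, and the value 5000 are placeholders of mine:

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class SubmitWithSpoutCap {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
        // builder.setBolt("my-bolt", new MyBolt()).shuffleGrouping("kafka-spout");
        Config conf = new Config();
        // Cap un-acked tuples per spout task so the spout cannot outrun the bolts.
        conf.setMaxSpoutPending(5000);
        StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
    }
}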
Any hint on that?
Thanks!
PS: I am using Storm 0.9.2-incubating, for both storm-core and storm-kafka.
Upvotes: 2
Views: 2167
Reputation: 2134
After fiddling around, I found out that incorrect HTTP client usage was the cause of this error. I'm using Storm's hook feature, and sometimes a connection waits for a response that the server never sends; the request then just idles, hence the growing number of open sockets, one per HTTP connection.
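In practice that means giving each request a timeout and releasing the connection when done, so a request can never idle forever. A minimal sketch of that pattern, assuming Apache HttpClient 4.x (the timeout values and class name are placeholders, not exactly what I shipped):

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HookHttpCall {
    // One shared, pooled client instead of a new client per hook invocation.
    private static final CloseableHttpClient CLIENT = HttpClients.custom()
            .setDefaultRequestConfig(RequestConfig.custom()
                    .setConnectTimeout(2000)   // give up connecting after 2 s
                    .setSocketTimeout(2000)    // don't wait forever for a response
                    .build())
            .build();

    public static void send(String url) throws Exception {
        CloseableHttpResponse response = CLIENT.execute(new HttpGet(url));
        try {
            // Fully consume the body so the connection is returned to the pool
            // rather than lingering as an open socket / file descriptor.
            EntityUtils.consume(response.getEntity());
        } finally {
            response.close();
        }
    }
}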
Upvotes: 1