Reputation: 1181
In my Storm topology, I transfer big batches of JSON data through the Kafka spout to the ElasticSearch bolt.
The problem is that the Log4j2 configuration shipped with Apache Storm uses the UDP protocol in its syslog appender, both for the cluster and for the worker:
Log4j2/Worker.xml:
<Syslog name="syslog" format="RFC5424" charset="UTF-8" host="localhost" port="514"
protocol="UDP" appName="[${sys:storm.id}:${sys:worker.port}]" mdcId="mdc" includeMDC="true"
facility="LOCAL5" enterpriseNumber="18060" newLine="true" exceptionPattern="%rEx{full}"
messageId="[${sys:user.name}:${sys:logging.sensitivity}]" id="storm" immediateFail="true"
immediateFlush="true"/>
As a result, I'm receiving the following error during topology submission:
ERROR Unable to write to stream UDP:localhost:514 for appender syslog org.apache.logging.log4j.core.appender.AppenderLoggingException: Error flushing stream UDP:localhost:514
This happens because the message length exceeds what fits in a UDP datagram.
Is it possible to change the default protocol of the Apache Storm syslog appender from UDP to TCP?
Upvotes: 0
Views: 1290
Reputation: 3651
This doesn't really have much to do with Storm, as Storm just uses whatever settings Log4j2 supports. I'd have a look at https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender, in particular the example given there for a TCP syslog appender.
<Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514"
protocol="TCP" appName="MyApp" includeMDC="true"
facility="LOCAL0" enterpriseNumber="18060" newLine="true"
messageId="Audit" id="App"/>
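Applied to your worker.xml, that would roughly mean keeping the appender as-is and switching only the `protocol` attribute (port 514/TCP is an assumption here; your syslog daemon must actually accept TCP connections on whatever port you configure):

```xml
<!-- log4j2/worker.xml: same appender as in the question, switched to TCP.
     Assumes a syslog receiver is listening on TCP port 514 on localhost. -->
<Syslog name="syslog" format="RFC5424" charset="UTF-8" host="localhost" port="514"
        protocol="TCP" appName="[${sys:storm.id}:${sys:worker.port}]" mdcId="mdc" includeMDC="true"
        facility="LOCAL5" enterpriseNumber="18060" newLine="true" exceptionPattern="%rEx{full}"
        messageId="[${sys:user.name}:${sys:logging.sensitivity}]" id="storm" immediateFail="true"
        immediateFlush="true"/>
```

Note that if your receiver is rsyslog, it listens on UDP by default; you would also need to enable its TCP input (the `imtcp` module) on the matching port, otherwise the appender's TCP connection will be refused.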
Upvotes: 0