Reputation: 1224
I set up a three-node kafka and zookeeper cluster more than half a year ago, and I am using the zookeeper server that came with the kafka package. As it turns out, the default configuration results in too much log information being gathered from zookeeper (2GB from each node so far).
How can I turn the log level down, and is it OK for me to delete old zookeeper logs abruptly?
Upvotes: 1
Views: 2294
Reputation: 1961
It is possible to lower the logging level through the log4j configuration.
For instructions on configuring log4j, see: https://logging.apache.org/log4j/2.x/manual/configuration.html
Upvotes: 0
Reputation: 2913
zookeeper uses log4j v1.2 for its logging infrastructure, as you can read here. You should edit your log4j.properties file to set your logging level higher (so it'll only log "more important" events).
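As a minimal sketch of what that change might look like: I'm assuming the ZooKeeper bundled with Kafka is started against config/log4j.properties in the Kafka installation, and that the appender and logger names below match a typical Kafka-shipped file; check your own file for the exact names before editing.

    # config/log4j.properties (path assumed; adjust to wherever your
    # ZooKeeper's log4j.properties actually lives)

    # Raise the root threshold so only warnings and errors are written;
    # keep whatever appender name your existing file already references
    log4j.rootLogger=WARN, stdout

    # If your file defines a dedicated zookeeper logger, raise it as well
    # (logger name assumed)
    log4j.logger.org.apache.zookeeper=WARN

Restart the node afterwards so the new configuration is picked up.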
You should be able to delete old logs without any issue. Just make sure they're not being read or written to by other programs (including zookeeper) first.
It may be easier to just set up a task that periodically prunes old logs, though; that way you don't lose the granularity provided by the default configuration in case something goes wrong.
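A sketch of such a cleanup job, assuming the log directory and file pattern below (point them at wherever your rotated zookeeper logs actually end up, and make sure nothing still has the matched files open):

    # Delete rotated ZooKeeper log files older than 30 days
    find /var/log/zookeeper -name '*.log.*' -mtime +30 -delete

    # Crontab entry to run the same cleanup every night at 02:00
    0 2 * * * find /var/log/zookeeper -name '*.log.*' -mtime +30 -delete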
Upvotes: 1