lingxiao

Reputation: 1224

How to reduce the amount of logging for `zookeeper`?

I set up a three-node Kafka and ZooKeeper cluster more than half a year ago, using the ZooKeeper server that ships with the Kafka package. As it turns out, the default configuration generates far too much ZooKeeper log output (2GB from each node so far).

How can I turn the log level down, and is it OK for me to simply delete ZooKeeper's old log files?

Upvotes: 1

Views: 2294

Answers (2)

xmorera

Reputation: 1961

It is possible to:

  1. Use a rolling log appender so that old log files are deleted automatically
  2. Log only warnings and errors. This is a log4j configuration setting; I've seen instances where the level is left at INFO and the logs grow substantially. (A sketch covering both points follows below the link.)

For instructions on configuring log4j: https://logging.apache.org/log4j/2.x/manual/configuration.html
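Since the ZooKeeper bundled with Kafka uses log4j 1.2 (as the answer below notes), here is a minimal sketch in log4j 1.2 properties syntax covering both points. The appender name, log path, and size limits are assumptions; adapt them to whichever log4j.properties your ZooKeeper actually loads.

```
# Sketch only: WARN-level root logger plus a size-capped rolling file appender
# (log4j 1.2 syntax). Appender name, path, and sizes are illustrative assumptions.
log4j.rootLogger=WARN, ROLLINGFILE

log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.File=/var/log/zookeeper/zookeeper.log
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# Keep at most 10 rolled files, so the total stays around 100MB per node
log4j.appender.ROLLINGFILE.MaxBackupIndex=10
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{1}: %m%n
```

With MaxBackupIndex set, old files are rotated out automatically, and WARN keeps routine INFO chatter out of the log entirely.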

Upvotes: 0

mech

Reputation: 2913

ZooKeeper uses log4j 1.2 for its logging infrastructure, as its documentation notes. You should edit your log4j.properties file to set the logging level higher, so it only logs "more important" events (e.g. WARN and above).
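For example, a sketch of the relevant lines (exact property names vary by distribution, so treat these as assumptions to check against your own file):

```
# Stock ZooKeeper ships a conf/log4j.properties whose root level is driven by a
# variable near the top; raising it from INFO to WARN quiets routine chatter.
zookeeper.root.logger=WARN, CONSOLE

# The Kafka-bundled scripts typically point at Kafka's own config/log4j.properties;
# there you can raise log4j.rootLogger, or a ZooKeeper package logger if one is defined:
log4j.logger.org.apache.zookeeper=WARN
```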

You should be able to delete old logs without any issue. Just make sure they're not still being read or written by another process (including ZooKeeper itself) first.

It may be easier to just set up a task that periodically prunes old logs, though; that way you don't lose the granularity provided by the default configuration in case something goes wrong.
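For instance, a cron entry along these lines could do the pruning (the log directory, filename pattern, and 30-day window are assumptions to adjust for your setup):

```
# Sketch: every night at 03:00, delete rolled ZooKeeper log files older than 30 days.
0 3 * * * find /var/log/zookeeper -name 'zookeeper.log.*' -mtime +30 -delete
```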

Upvotes: 1
