Reputation: 177
I'm encountering an issue where a Kafka → Logstash pipe consumes too much CPU (about 300% when starting, and 100% after a few seconds). Otherwise it basically works: the pipe delivers events from Kafka into Elasticsearch with no error messages.
Logstash is running in a Docker container, using the latest version of Logstash (2.1.1, pulled from https://hub.docker.com/_/logstash/):
docker run --rm --link kafka:kafka --link elasticsearch:elasticsearch -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash-kafka-elasticsearch.conf
The config file looks like this:
input {
  kafka {
    topic_id => 'mytopic'
    zk_connect => 'kafka:2181'
  }
}
output {
  elasticsearch {
    hosts => ['elasticsearch:9200']
  }
  stdout { codec => rubydebug }
}
I have other Logstash pipes which work well, and their CPU usage is normal (for example, a pipe with an http input and a kafka output takes ~0% CPU). I tried commenting out the elasticsearch output, leaving only stdout, and the issue persisted, so it seems Elasticsearch is not the problem.
Can anybody offer suggestions?
Upvotes: 3
Views: 1470
Reputation: 46
The logstash-input-kafka plugin had a busy-wait bug in its consumer loop: it checked whether the queue was empty and, if so, skipped to the next iteration instead of blocking until an event arrived.
This has been fixed in this pull request and version 2.0.3 of the plugin has been released with it.
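To illustrate the difference, here is a minimal, self-contained Ruby sketch (not the actual plugin code) contrasting the spinning pattern with the blocking one. `busy_consume`, `blocking_consume`, and the `:done` sentinel are made up for this example:

```ruby
require "thread"

# Buggy pattern (simplified): the consumer re-checks an empty queue on
# every iteration instead of blocking -- this pegs a CPU core even when
# no events are arriving.
def busy_consume(queue, results)
  loop do
    next if queue.empty?   # tight loop: spins while the queue is empty
    event = queue.pop
    break if event == :done
    results << event
  end
end

# Fixed pattern: Queue#pop blocks until an item is available, so the
# consumer thread sleeps instead of spinning.
def blocking_consume(queue, results)
  loop do
    event = queue.pop      # blocks when the queue is empty
    break if event == :done
    results << event
  end
end

queue   = Queue.new
results = []
consumer = Thread.new { blocking_consume(queue, results) }
3.times { |i| queue << i }
queue << :done             # sentinel to stop the consumer
consumer.join
puts results.inspect       # => [0, 1, 2]
```

Both variants produce the same results; the difference only shows up in CPU usage while the queue is idle, which matches the symptom in the question.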
To test this, please update the plugin using:
bin/plugin install --version 2.0.3 logstash-input-kafka
Upvotes: 3