Reputation: 3019
Let's imagine that I have a Logstash instance running, but would like to stop it cleanly, for example to change its configs.
How can I stop the Logstash instance while ensuring that it finishes sending the bulks to Elasticsearch? I don't want to lose any logs while stopping Logstash.
Upvotes: 11
Views: 25105
Reputation: 5250
kill -TERM {logstash_pid}
Commands such as service logstash stop also send a SIGTERM signal. As of Logstash 7.15, kill -SIGHUP {logstash_pid} reloads the config file and restarts the pipeline.
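The stop-and-wait pattern can be sketched as a small shell script. Since a real Logstash instance is assumed not to be running here, a background `sleep` stands in for the Logstash process; substitute the real PID in practice:

```shell
#!/bin/sh
# Graceful-stop pattern: send SIGTERM, then wait until the process
# has actually exited (for Logstash, this is when the pipeline has
# finished flushing). A dummy 'sleep' stands in for Logstash.
sleep 30 &
LOGSTASH_PID=$!

kill -TERM "$LOGSTASH_PID"        # ask the process to shut down cleanly

# 'wait' only returns once the process has really exited
wait "$LOGSTASH_PID" 2>/dev/null
echo "process $LOGSTASH_PID stopped"
```

The key point is the `wait`: returning from `kill` only means the signal was delivered, not that the shutdown has completed.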
If you run Logstash using Docker, the command docker stop <logstash_container>
sends a SIGTERM
signal. If you then start the container again, variables like :sql_last_value
reset to "1970-01-01".
docker restart <logstash_container>
sends a SIGHUP
, so it is useful for reloading the pipeline while keeping the previous values of variables such as :sql_last_value
.
Upvotes: 1
Reputation: 10578
With Logstash 2.3:
Logstash keeps all events in main memory during processing. Logstash responds to a SIGTERM by attempting to halt inputs and waiting for pending events to finish processing before shutting down.
Source: https://www.elastic.co/guide/en/logstash/2.3/pipeline.html
Upvotes: 2
Reputation: 11571
Logstash 1.5 flushes the pipeline before shutting down in response to a SIGTERM signal, so you should be able to shut it down with service logstash stop
, the init.d script, or whatever it is that you usually use.
With Logstash 1.4.x a SIGTERM signal shuts down Logstash abruptly without allowing the pipeline to flush all in-flight messages, but you can send SIGINT to force a flush. However, some plugins (like the redis input plugin) don't handle this gracefully and hang indefinitely.
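Since some plugins may hang on shutdown, a common workaround is a timeout-and-escalate wrapper: signal the process, poll for exit, and fall back to SIGKILL after a deadline. This is a sketch of that pattern; a stubborn dummy process that ignores SIGTERM stands in for a hung Logstash, and the 5-second timeout is an arbitrary choice:

```shell
#!/bin/sh
# Send SIGTERM, poll for exit, and escalate to SIGKILL on timeout.
# A dummy process that ignores TERM simulates a hung Logstash plugin.
sh -c 'trap "" TERM; sleep 60' &
pid=$!

kill -TERM "$pid"
timeout=5
while kill -0 "$pid" 2>/dev/null && [ "$timeout" -gt 0 ]; do
    sleep 1                       # give the process a chance to flush
    timeout=$((timeout - 1))
done

if kill -0 "$pid" 2>/dev/null; then
    echo "still running after timeout, sending SIGKILL"
    kill -KILL "$pid"             # last resort: in-flight events are lost
fi
wait "$pid" 2>/dev/null
echo "done"
```

Note that escalating to SIGKILL defeats the purpose of a clean shutdown (any in-flight events are lost), so the timeout should be generous enough for a normal flush.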
Upvotes: 14