I am using Logstash 8.12. My Logstash pipeline reads data from SQL Server and sends it to Azure Elasticsearch. It works fine when ES is available. When the Elasticsearch node is unreachable, I get the expected error in my log file, but the problem is that my DB connection to SQL Server stays open (status running, sleeping, or suspended) for hours while ES is unreachable.
Sometimes it keeps retrying and I see this log line every minute or two: "Attempted to send a bulk request but Elasticsearch appears to be unreachable or down". So at times it is apparently not shutting Logstash down either.
I tried socketTimeout (set to 1 hour) in the JDBC connection string to make sure the DB connection closes after a given time in such situations. It doesn't seem to work.
input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://#{DbServer};database=#{DbName};socketTimeout=3600000;encrypt=true;trustServerCertificate=true;"
    jdbc_user => "#{DbUserName}"
    jdbc_password => "#{DbPassword}"
    type => "ABC_name"
    schedule => "#{schedule_for_pipeline}"
    statement => "SELECT TOP 100 * FROM ABC"
  }
}
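As far as I can tell, socketTimeout only bounds how long the driver waits on a socket read, so a session that SQL Server reports as running or suspended may never trip it. The Microsoft JDBC driver also has a per-statement queryTimeout property (in seconds, unlike socketTimeout's milliseconds), which I have not tried yet. A minimal sketch of what that would look like, with a hypothetical 300-second value:

input {
  jdbc {
    # queryTimeout (seconds) asks the driver to cancel any statement that runs
    # longer than the given value; 300 here is a hypothetical placeholder
    jdbc_connection_string => "jdbc:sqlserver://#{DbServer};database=#{DbName};queryTimeout=300;encrypt=true;trustServerCertificate=true;"
    # ... remaining settings as in the pipeline above
  }
}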
I don't see a need to set up a dead letter queue (DLQ) just to close the connection, and I don't have a use case for another pipeline that processes messages from a DLQ.
I tried max_retries and resurrect_delay (set to 3 hours) in the elasticsearch output plugin to make sure it attempts another API call only after a certain time; that doesn't seem to work either.
output {
  elasticsearch {
    document_id => "%{id}"
    index => "#{IndexName.Text}"
    manage_template => true
    template => "#mapping.json"
    template_name => "#{template}"
    template_overwrite => true
    action => "update"
    doc_as_upsert => true
    cloud_id => "#{CloudId.Text}"
    api_key => "#{ApiKeyValue.Text}"
    ssl => true
    resurrect_delay => 10800
  }
}
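In case it is relevant to an answer: the only other direction I can think of is decoupling the input from the output, so that a scheduled jdbc run can finish and release its connection even while the output is stuck retrying. My understanding is that a persisted queue buffers events on disk between the two stages instead of applying backpressure to the input. A minimal sketch of pipelines.yml, where the pipeline id and config path are hypothetical placeholders:

- pipeline.id: sql_to_es                               # hypothetical id
  path.config: "/etc/logstash/conf.d/sql_to_es.conf"   # hypothetical path
  queue.type: persisted   # buffer events on disk instead of blocking the input
  queue.max_bytes: 1gb    # cap on the on-disk queue; backpressure resumes once full

I have not verified that this actually lets the connection close once the scheduled statement completes, so I would still welcome a more direct way.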
So I am looking for help on how to close the DB connection in this scenario, when Elasticsearch is unreachable for multiple hours.