Max

Reputation: 538

Logstash Input -> JDBC in some properties or parameterizable file?

I am using Logstash to ingest data into Elasticsearch. I am using the jdbc input, and I need to parameterize the jdbc input settings, such as the connection string, password, etc., since I have 10 .conf files where each one has 30 jdbc inputs and 30 outputs inside.

So, since each file has the same settings, I would like to know if it is possible to do something generic, or to reference that information from somewhere else?

I have this 30 times:

input {
  # Number 1
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.test ORDER BY id ASC"
        tags => "001"
  }

  # Number 2
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.test2 ORDER BY id ASC"
        tags => "002"
  }


  [.........]

  # Number X
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.testx ORDER BY id ASC"
        tags => "00x"
  }

}

filter { 

  mutate { 
    add_field => { "[@metadata][mitags]" => "%{tags}" }
  }

  # Number 1
  if "001" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }

  # Number 2
  if "002" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }

  [......]

  # Number x
  if "002" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }


  mutate {
    remove_field => [ "@version","@timestamp","tags" ]
  }



} 

output {

  # Number 1
  if "001" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "001"
              document_type => "001"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/001.json"
              template_name => "001"
              template_overwrite => true
        }
  } 

   # Number 2
  if "002" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "002"
              document_type => "002"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/002.json"
              template_name => "002"
              template_overwrite => true
        }
  }

  [....]

   # Number x
  if "00x" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "002"
              document_type => "00x"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/00x.json"
              template_name => "00x"
              template_overwrite => true
        }
  }

}

Upvotes: 1

Views: 1198

Answers (1)

leandrojmp

Reputation: 7473

You will still need one jdbc input for each query you need to run, but you can improve your filter and output blocks.
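For the connection settings themselves (connection string, user, password), Logstash supports environment-variable substitution in pipeline configuration using the `${VAR}` or `${VAR:default}` syntax, so the repeated values can live outside the .conf files. A minimal sketch, assuming the variables `JDBC_URL`, `JDBC_USER` and `JDBC_PASS` are exported in the environment Logstash runs in:

```
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
    jdbc_driver_class => "com.informix.jdbc.IfxDriver"
    # Values are substituted from the environment when the pipeline is loaded
    jdbc_connection_string => "${JDBC_URL}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASS}"
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM public.test ORDER BY id ASC"
    tags => "001"
  }
}
```

This way, changing the database endpoint or credentials means updating the environment (or a keystore entry) once, instead of editing 30 blocks in 10 files.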

In your filter block you use the field [@metadata][mitags] to distinguish your inputs, but you apply the same mutate filter to every one of them. If that is the case, you don't need the conditionals: the same mutate filter can be applied to all your inputs unconditionally.

Your filter block could be reduced to something like this:

filter {
    mutate { 
        add_field => { "[@metadata][mitags]" => "%{tags}" }
    }
    mutate { 
        rename => [ "codigo", "[properties][codigo]" ] 
    }
    mutate {
        remove_field => [ "@version","@timestamp","tags" ]
    }
}

In your output block you use the tag only to change the index, document_type and template, so you don't need conditionals for that; you can use the value of the field as a parameter.

output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "%{[@metadata][mitags]}"
        document_type => "%{[@metadata][mitags]}"
        document_id => "%{id}"
        manage_template => true
        template => "/home/unitech/logstash/templates/%{[@metadata][mitags]}.json"
        template_name => "iol-fue"
        template_overwrite => true
    }
}

But this only works if the field [@metadata][mitags] holds a single value, which seems to be the case here.

EDIT: Kept for historical reasons. As noted in the comments, the template option does not allow dynamic parameters, since the template is only loaded when Logstash starts; the other options work fine.
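One workaround (a sketch, not from the original answer): disable template management in the parameterized output and install the index templates once, outside the pipeline, e.g. via the Elasticsearch `_template` API. The output then stays fully dynamic:

```
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{[@metadata][mitags]}"
    document_type => "%{[@metadata][mitags]}"
    document_id => "%{id}"
    # Templates are installed separately (e.g. PUT _template/001),
    # so no per-tag conditionals are needed here
    manage_template => false
  }
}
```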

Upvotes: 0
