Milor123

Reputation: 555

Kibana doesn't create the index for my data

What do I need?

I need to be able to create an index from the data in my log files.


This is /etc/logstash/conf.d/apache-01.conf (I've tried using /dev/null as the sincedb path and deleting the .sincedb_xxx files from /var/lib/logstash/plugins/inputs/file):

input {
    file {
        path => "/test/domainname*"
        start_position => "beginning"
        id => "NEWTRY2"
    }
}

filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }

    geoip {
        source => "clientip"
    }
}

output {
  elasticsearch {
    hosts =>  ["localhost:9200"]
    index => "new-index2"
  }
}
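
For reference, here is the variant of the input block with the sincedb explicitly disabled, which is one of the things I tried above (a minimal sketch; sincedb_path is a standard file-input option, and "/dev/null" makes the input forget its read position on every restart):

input {
    file {
        path => "/test/domainname*"
        start_position => "beginning"
        sincedb_path => "/dev/null"
        id => "NEWTRY2"
    }
}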

When I execute the command:

curl http://localhost:9200/_cat/indices
green  open .kibana                     N2gR01kcSMaT74Pj93NqwA 1 0     1 0   4kb   4kb
yellow open metricbeat-6.4.3-2018.11.08 rpBMeq-XS7yGeOd49Wakhw 1 1 14285 0 7.9mb 7.9mb

Normally it should also list an index named something like logstash-2018.11.01.
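
To double-check whether the index from my output block received anything at all, the count API can be queried directly (new-index2 is the name from the config above; if the index does not exist, Elasticsearch returns index_not_found_exception):

curl 'http://localhost:9200/new-index2/_count?pretty'
curl 'http://localhost:9200/_cat/indices?v'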

The log file /var/log/logstash/logstash-plain.log shows this:

[2018-11-08T10:05:15,808][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2018-11-08T10:05:17,493][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-08T10:05:17,825][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-08T10:05:17,834][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-08T10:05:17,997][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-11-08T10:05:18,048][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-08T10:05:18,051][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-08T10:05:18,075][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-11-08T10:05:18,093][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-08T10:05:18,109][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-08T10:05:18,212][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-11-08T10:05:18,440][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_ae5e62ef229d5a1776eda86789823900", :path=>["/test/domainname*"]}
[2018-11-08T10:05:18,528][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x45edc3cf run>"}
[2018-11-08T10:05:18,566][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-08T10:05:18,576][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2018-11-08T10:05:18,784][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
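
The "No sincedb_path set" line above shows where the file input remembers how far it has already read, which is why files that were ingested once are not re-read after a restart. This is how I deleted that state between attempts (a sketch assuming the standard systemd service; the sincedb filename is taken verbatim from the log line):

sudo systemctl stop logstash
# show the remembered read positions (keyed by file inode)
sudo cat /var/lib/logstash/plugins/inputs/file/.sincedb_ae5e62ef229d5a1776eda86789823900
# remove the state so the files are read from the beginning again
sudo rm /var/lib/logstash/plugins/inputs/file/.sincedb_ae5e62ef229d5a1776eda86789823900
sudo systemctl start logstash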

The /test/ folder and the files in it have permission 777 and are owned by the logstash user.
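
One way to verify this from the logstash user's point of view (note that every parent directory also needs the execute bit, not just the files themselves):

sudo -u logstash stat /test/domainname*
sudo -u logstash head -n 1 /test/domainname*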

When I execute the command curl -XGET 'http://localhost:9200/_cluster/state?pretty', the output contains lines like the following (I had deleted the indices with curl -XDELETE localhost:9200/* because I want to change the log files and re-index everything):

"index-graveyard" : {
  "tombstones" : [
    {
      "index" : {
        "index_name" : "logstash-2018.11.01",
        "index_uuid" : "K8CPa4gYTSO-l4NfnrtTog"
      },
      "delete_date_in_millis" : 1541649663075
    },
    {
      "index" : {
        "index_name" : "logstash-2018.09.01",
        "index_uuid" : "-thB_LnfQlax6tLcS11Srg"
      },
      "delete_date_in_millis" : 1541649663075
    },
    {
      "index" : {
        "index_name" : "logstash-2018.10.31",
        "index_uuid" : "Fm8XcdcTTT2U-Xm1Vw0Gbw"
      },
      "delete_date_in_millis" : 1541649663075
    },
    {
      "index" : {
        "index_name" : "logstash-2018.08.31",
        "index_uuid" : "_FqmkcRNTKOx1oJbnpeyjw"
      },
      "delete_date_in_millis" : 1541649663075
    },
    {
      "index" : {
        "index_name" : "logstash-2018.11.02",
        "index_uuid" : "ZU04EZDaS_eeqD0auI9o5Q"
      },
      "delete_date_in_millis" : 1541649663075
    },
    {
      "index" : {
        "index_name" : ".kibana",
        "index_uuid" : "sZEoKhVlRRy7e8gAAnAEZw"
      },
      "delete_date_in_millis" : 1541653339359
    },
    {
      "index" : {
        "index_name" : "metricbeat-6.4.1-2018.11.06",
        "index_uuid" : "T5UZFMHiRJSMsBjTw40ztA"
      },
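
The graveyard section can also be fetched on its own without dumping the whole cluster state, using the standard filter_path response-filtering parameter:

curl 'http://localhost:9200/_cluster/state/metadata?pretty&filter_path=metadata.index-graveyard'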

What have I tried?

Important: the first time I ran this, everything worked fine, but now it doesn't.

Note: I am a beginner on this topic. Thank you.

Upvotes: 0

Views: 131

Answers (1)

mike b

Reputation: 514

When debugging Logstash pipelines that use file inputs, I like to simplify things with stdin and stdout:

input {
  stdin {}
}
filter {
  # ... your filter ...
}
output {
  stdout { codec => rubydebug }
}

Then:

cat mylogfile | logstash -f mypipeline.conf
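
The same smoke test can also be run without a separate pipeline file by passing the config inline with -e (a sketch reusing the grok pattern from the question; on package installs the binary lives at /usr/share/logstash/bin/logstash):

cat /test/domainname* | logstash -e '
  input { stdin {} }
  filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }
  output { stdout { codec => rubydebug } }'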

The goal is to see if we can get data through at all. If so, then something is wrong with either the file input config or the Elasticsearch output; tweak each until you figure out which one isn't working. Also, make sure you can actually read the file by running something like stat /path/to/your/file.

Generally, the problems with the file input are permissions or sincedb, but it sounds like you've been able to eliminate both. In that case, I'd expect stdin to succeed.
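
It's also worth letting Logstash validate the pipeline file itself; the -t flag (--config.test_and_exit) parses the config and exits without running it (paths below assume a standard package install):

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/apache-01.conf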

Upvotes: 1
