Reputation: 7448
I am using ELK and have the following document structure:
{
"_index": "prod1-db.log-*",
"_type": "db.log",
"_id": "AVadEaq7",
"_score": null,
"_source": {
"message": "2016-07-08T12:52:42.026+0000 I NETWORK [conn4928242] end connection 192.168.170.62:47530 (31 connections now open)",
"@version": "1",
"@timestamp": "2016-08-18T09:50:54.247Z",
"type": "log",
"input_type": "log",
"count": 1,
"beat": {
"hostname": "prod1",
"name": "prod1"
},
"offset": 1421607236,
"source": "/var/log/db/db.log",
"fields": null,
"host": "prod1",
"tags": [
"beats_input_codec_plain_applied"
]
},
"fields": {
"@timestamp": [
1471513854247
]
},
"sort": [
1471513854247
]
}
I want to change the message field to not_analyzed. How can I use the Elasticsearch Mapping API to achieve that? For example, how do I use the PUT Mapping API to add a new type to the existing index?
I am using Kibana 4.5 and Elasticsearch 2.3.
UPDATE
I tried the following template.json in Logstash:
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "message" : {
          "type" : "string",
          "index" : "not_analyzed"
        }
      }
    }
  }
}
but got the following errors when starting Logstash:
logstash_1 | {:timestamp=>"2016-08-24T11:00:26.097000+0000", :message=>"Invalid setting for elasticsearch output plugin:\n\n output {\n elasticsearch {\n # This setting must be a path\n # File does not exist or cannot be opened /home/dw/docker-elk/logstash/core_mapping_template.json\n template => \"/home/dw/docker-elk/logstash/core_mapping_template.json\"\n ...\n }\n }", :level=>:error}
logstash_1 | {:timestamp=>"2016-08-24T11:00:26.153000+0000", :message=>"Pipeline aborted due to error", :exception=>#<LogStash::ConfigurationError: Something is wrong with your configuration.>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/config/mixin.rb:134:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:63:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/agent.rb:473:in `start_pipeline'"], :level=>:error}
logstash_1 | {:timestamp=>"2016-08-24T11:00:29.168000+0000", :message=>"stopping pipeline", :id=>"main"}
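The error says Logstash cannot open /home/dw/docker-elk/logstash/core_mapping_template.json, i.e. the file does not exist at that path inside the Logstash container. Since this is a docker-elk setup, the template file has to be mounted into the container. A sketch of what the docker-compose.yml volume entry might look like (the host path is an assumption; the container path must match the template setting from the error):

logstash:
  volumes:
    - ./logstash/core_mapping_template.json:/home/dw/docker-elk/logstash/core_mapping_template.json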
Upvotes: 4
Views: 16594
Reputation: 1070
If you haven't specified any mappings for your fields at index creation, then the first time you index a document into your index, Elasticsearch automatically chooses the best mapping for each of the fields based on the data provided. Looking at the document you have provided in the question, Elasticsearch will already have assigned an analyzer to the message field. Once it's assigned you cannot change it. The only way to do that is to create a fresh index.
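As a rough sketch of that workflow on Elasticsearch 2.3 (the index names prod1-db.log-old and prod1-db.log-new are placeholders; the Reindex API is available from 2.3 onwards): create the new index with the desired mapping, then copy the existing documents over.

PUT /prod1-db.log-new
{
  "mappings": {
    "db.log": {
      "properties": {
        "message": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "prod1-db.log-old" },
  "dest": { "index": "prod1-db.log-new" }
}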
Upvotes: 3
Reputation: 3770
You can't change the mapping of an index once it exists, except by adding new fields to objects or multi-fields.
If you want to use the Mapping API for that, your request would look like this:
PUT /prod1-db.log-*/_mapping/db.log
{
"properties": {
"message": {
"type": "string",
"index": "not_analyzed"
}
}
}
However, I would recommend that you create a JSON file with your mappings and add it to your Logstash config.
A template file might look like this (you need to customize it):
{
"template": "logstash-*",
"mappings": {
"_default_": {
"properties": {
"action" : {
"type" : "string",
"fields" : {
"raw" : {
"index" : "not_analyzed",
"type" : "string"
}
}
},
"ad_domain" : {
"type" : "string"
},
"auth" : {
"type" : "long"
},
"authtime" : {
"type" : "long"
},
"avscantime" : {
"type" : "long"
},
"cached" : {
"type" : "boolean"
}
}
}
}
}
And the elasticsearch entry in your Logstash config looks like this:
elasticsearch {
template => "/etc/logstash/template/template.json"
template_overwrite => true
}
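A fuller output block might look like this (the hosts value and template_name are assumptions, adjust them to your cluster; template_overwrite makes Logstash replace any template with that name already stored in Elasticsearch):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    template => "/etc/logstash/template/template.json"
    template_name => "logstash"
    template_overwrite => true
  }
}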
Upvotes: 6