Reputation: 19000
I have an index with a ts field representing epoch milliseconds:

"ts": {
  "type": "long"
}

Note that the type is long, not a date type with an epoch_millis format.
I run date histogram aggregation on that field:
{
  ...
  "aggs": {
    "agg_name": {
      "date_histogram": {
        "field": "ts",
        "interval": "1d",
        "format": "yyyy-MM-dd",
        "min_doc_count": 1
      }
    }
  }
}
I'm trying to aggregate documents on a per-day basis. The operation runs OK; however, a bucket in the output looks like this:
{
  "key_as_string": "yyyy-MM-dd1577836800000",
  "key": 1577836800000,
  "doc_count": 3
}
My question: how can I get key_as_string rendered as something like 2020-01-01, without changing the field's type?
Upvotes: 1
Views: 2049
Reputation: 5135
You can use the Painless scripting language to achieve what you want, but it will not be efficient, since Painless scripts add significant per-document overhead for field manipulation. The scripting approach: set up a formatter, convert the long to an Instant from the epoch milliseconds, and format it the way you want. As mentioned, this is not efficient, but it will do the job.
{
  "aggs": {
    "dt_terms": {
      "terms": {
        "script": {
          "source": """
            // Format the epoch-millis long value as a yyyy-MM-dd string
            DateTimeFormatter df = DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneId.systemDefault());
            return df.format(Instant.ofEpochMilli(doc['ts'].value));
          """,
          "lang": "painless"
        }
      }
    }
  }
}
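The Painless source above is essentially Java, so the conversion logic can be sanity-checked standalone. This sketch uses a fixed UTC zone instead of ZoneId.systemDefault() so the output is deterministic; the class name is my own:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class EpochFormat {
    // Mirrors the Painless script: render an epoch-milliseconds long as yyyy-MM-dd.
    static String format(long epochMillis) {
        DateTimeFormatter df = DateTimeFormatter.ofPattern("yyyy-MM-dd")
                .withZone(ZoneOffset.UTC); // fixed zone for reproducible output
        return df.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        // 1577836800000 ms is 2020-01-01T00:00:00Z
        System.out.println(format(1577836800000L)); // prints 2020-01-01
    }
}
```

One thing to be aware of: because the script uses ZoneId.systemDefault(), the day a timestamp falls into depends on the server's timezone, so keys near midnight can differ from a UTC-based rendering.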
Since you do not want to modify the original field's data type, another way to achieve this is with a copy_to field. If you add a copy_to target with the date data type to your mapping, as below, you don't have to run the script at all; you simply aggregate on the field the value is copied to.
PUT /index_name/_mapping
{
  "properties": {
    "ts": {
      "type": "long",
      "copy_to": "ts_d"
    },
    "ts_d": {
      "type": "date"
    }
  }
}
With the above mapping in place, you can use the Reindex API so that existing documents are indexed under the new mapping (copy_to only applies at index time), and then use the query below, which now works on the date data type.
GET /_search
{
  "aggs": {
    "agg_name": {
      "date_histogram": {
        "field": "ts_d",
        "interval": "1d",
        "format": "yyyy-MM-dd",
        "min_doc_count": 1
      }
    }
  }
}
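The reindex step mentioned above could be sketched as follows. This is only an illustration: index_name and index_name_v2 are placeholder names, and the destination index is assumed to have been created with the new mapping beforehand (_reindex does not copy mappings).

POST /_reindex
{
  "source": {
    "index": "index_name"
  },
  "dest": {
    "index": "index_name_v2"
  }
}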
Upvotes: 3