ksrini

Reputation: 1642

Elasticsearch analyzer tokens for alphanumeric value with dot

I have a text field that has this value-

term1-term2-term3-term4-term5-RWHPSA951000155.2013-05-27.log

When I check using the analyze API (default analyzer), I get this -

    {
    "tokens": [
        {
            "token": "text",
            "start_offset": 2,
            "end_offset": 6,
            "type": "<ALPHANUM>",
            "position": 1
        },
        {
            "token": "term1",
            "start_offset": 9,
            "end_offset": 14,
            "type": "<ALPHANUM>",
            "position": 2
        },
        {
            "token": "term2",
            "start_offset": 15,
            "end_offset": 20,
            "type": "<ALPHANUM>",
            "position": 3
        },
        {
            "token": "term3",
            "start_offset": 21,
            "end_offset": 26,
            "type": "<ALPHANUM>",
            "position": 4
        },
        {
            "token": "term4",
            "start_offset": 27,
            "end_offset": 32,
            "type": "<ALPHANUM>",
            "position": 5
        },
        {
            "token": "term5",
            "start_offset": 33,
            "end_offset": 38,
            "type": "<ALPHANUM>",
            "position": 6
        },
        {
            "token": "rwhpsa951000155.2013",
            "start_offset": 39,
            "end_offset": 59,
            "type": "<ALPHANUM>",
            "position": 7
        },
        {
            "token": "05",
            "start_offset": 60,
            "end_offset": 62,
            "type": "<NUM>",
            "position": 8
        },
        {
            "token": "27",
            "start_offset": 63,
            "end_offset": 65,
            "type": "<NUM>",
            "position": 9
        },
        {
            "token": "log",
            "start_offset": 66,
            "end_offset": 69,
            "type": "<ALPHANUM>",
            "position": 10
        }
    ]
}

I am particularly curious about this token: rwhpsa951000155.2013. How did that happen? Currently my search for RWHPSA951000155 fails because of this. How can I get Elasticsearch to recognize RWHPSA951000155 and 2013 as separate tokens?

Note that if the value is term1-term2-term3-term4-term5-RWHPSA.2013-05-27.log, then RWHPSA and 2013 are split into separate tokens. So it has something to do with 951000155.

Thanks,

Upvotes: 4

Views: 4671

Answers (1)

Dan Tuffery

Reputation: 5924

The Standard Analyzer is tokenizing rwhpsa951000155.2013 as a product number.

> Splits words at hyphens, unless there's a number in the token, in which case the whole token is interpreted as a product number and is not split.

You can add a `pattern_replace` char filter to replace the `.` with a whitespace. The standard tokenizer will then split the term the way you want.

POST test
{
    "settings": {
        "index": {
            "analysis": {
                "char_filter": {
                    "my_pattern": {
                        "type": "pattern_replace",
                        "pattern": "\\.",
                        "replacement": " "
                    }
                },
                "analyzer": {
                    "my_analyzer": {
                        "tokenizer": "standard",
                        "char_filter": [
                            "my_pattern"
                        ]
                    }
                }
            }
        }
    },
    "mappings": {
        "my_type": {
            "properties": {
                "test": {
                    "type": "string",
                    "analyzer": "my_analyzer"
                }
            }
        }
    }
}
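The combined effect of the char filter plus tokenizer can be sketched in plain Python. This is only a rough approximation (the real standard tokenizer has many more rules, including the product-number behaviour that glued rwhpsa951000155.2013 together in the first place), but it shows why replacing the dot before tokenization fixes the problem:

```python
import re

def char_filter(text):
    # Mimic the pattern_replace char filter: swap every '.' for a space
    return re.sub(r"\.", " ", text)

def tokenize(text):
    # Simplified stand-in for the standard tokenizer on this input:
    # split on whitespace and hyphens
    return [t for t in re.split(r"[\s\-]+", text) if t]

value = "term1-term2-term3-term4-term5-RWHPSA951000155.2013-05-27.log"
print(tokenize(char_filter(value)))
# RWHPSA951000155 and 2013 now come out as separate tokens
```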

Calling the analyze API:

curl -XGET 'localhost:9200/test/_analyze?analyzer=my_analyzer&pretty=true' -d 'term1-term2-term3-term4-term5-RWHPSA951000155.2013-05-27.log'

Returns:

{
  "tokens" : [ {
    "token" : "term1",
    "start_offset" : 0,
    "end_offset" : 5,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "term2",
    "start_offset" : 6,
    "end_offset" : 11,
    "type" : "<ALPHANUM>",
    "position" : 2
  }, {
    "token" : "term3",
    "start_offset" : 12,
    "end_offset" : 17,
    "type" : "<ALPHANUM>",
    "position" : 3
  }, {
    "token" : "term4",
    "start_offset" : 18,
    "end_offset" : 23,
    "type" : "<ALPHANUM>",
    "position" : 4
  }, {
    "token" : "term5",
    "start_offset" : 24,
    "end_offset" : 29,
    "type" : "<ALPHANUM>",
    "position" : 5
  }, {
    "token" : "RWHPSA951000155",
    "start_offset" : 30,
    "end_offset" : 45,
    "type" : "<ALPHANUM>",
    "position" : 6
  }, {
    "token" : "2013",
    "start_offset" : 46,
    "end_offset" : 50,
    "type" : "<NUM>",
    "position" : 7
  }, {
    "token" : "05",
    "start_offset" : 51,
    "end_offset" : 53,
    "type" : "<NUM>",
    "position" : 8
  }, {
    "token" : "27",
    "start_offset" : 54,
    "end_offset" : 56,
    "type" : "<NUM>",
    "position" : 9
  }, {
    "token" : "log",
    "start_offset" : 57,
    "end_offset" : 60,
    "type" : "<ALPHANUM>",
    "position" : 10
  } ]
}
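With the mapping in place, a match query for the product-number part alone should now find the document. A sketch, assuming the `test` index and `my_type` mapping above (not run against a live cluster):

```shell
# Index a document into the mapped field
curl -XPUT 'localhost:9200/test/my_type/1' -d '{
    "test": "term1-term2-term3-term4-term5-RWHPSA951000155.2013-05-27.log"
}'

# Search for just the product-number portion
curl -XGET 'localhost:9200/test/_search?pretty=true' -d '{
    "query": {
        "match": {
            "test": "RWHPSA951000155"
        }
    }
}'
```

Because `my_analyzer` is set on the field, the same dot-replacing analysis is applied at both index time and query time, so RWHPSA951000155 exists as a standalone term in the index.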

Upvotes: 11
