Foo Bar

Reputation: 1892

How do I configure Elasticsearch to find substrings at the beginning OR at the end of a word (but not in the middle)?

I'm starting to learn Elasticsearch and now I am trying to write my first analyzer configuration. What I want to achieve is that substrings are found if they are at the beginning or at the end of a word. If I have the word "stackoverflow" and I search for "stack" I want to find it, and when I search for "flow" I want to find it, but I do not want to find it when searching for "ackov" (in my use case this would not make sense).

I know there is the "Edge n-gram tokenizer", but an analyzer can only have one tokenizer, and the edge n-gram tokenizer can anchor either at the front or at the back of a word (but not both at the same time).

And if I understood correctly, if I applied both versions of the edge n-gram filter (front and back) in the same analyzer, I would find neither, because a token has to pass through both filters, right? "stack" wouldn't be at the end of the word, so the back edge n-gram filter would drop it and the word "stackoverflow" would not be found.

So, how do I configure my analyzer to find substrings either at the beginning or at the end of a word, but not in the middle?

Upvotes: 0

Views: 969

Answers (1)

Val

Reputation: 217474

What can be done is to define two analyzers: one for matching at the start of a string and another for matching at the end of a string. In the index settings below, I named the former prefix_edge_ngram_analyzer and the latter suffix_edge_ngram_analyzer. These two analyzers can then be applied to a multi-field: the text.prefix sub-field uses the former and the text.suffix sub-field uses the latter.

{
  "settings": {
    "analysis": {
      "analyzer": {
        "prefix_edge_ngram_analyzer": {
          "tokenizer": "prefix_edge_ngram_tokenizer",
          "filter": ["lowercase"]
        },
        "suffix_edge_ngram_analyzer": {
          "tokenizer": "keyword",
          "filter" : ["lowercase","reverse","suffix_edge_ngram_filter","reverse"]
        }
      },
      "tokenizer": {
        "prefix_edge_ngram_tokenizer": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "25"
        }
      },
      "filter": {
        "suffix_edge_ngram_filter": {
          "type": "edgeNGram",
          "min_gram": 2,
          "max_gram": 25
        }
      }
    }
  },
  "mappings": {
    "test_type": {
      "properties": {
        "text": {
          "type": "string",
          "fields": {
            "prefix": {
              "type": "string",
              "analyzer": "prefix_edge_ngram_analyzer"
            },
            "suffix": {
              "type": "string",
              "analyzer": "suffix_edge_ngram_analyzer"
            }
          }
        }
      }
    }
  }
}
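To see what these two analyzers actually emit, here is a rough pure-Python model (not the Elasticsearch implementation, just an illustration), assuming the min_gram/max_gram of 2/25 from the settings above:

```python
def prefix_edge_ngrams(text, min_gram=2, max_gram=25):
    """Mimics prefix_edge_ngram_analyzer: lowercase + front edge n-grams."""
    text = text.lower()
    return [text[:n] for n in range(min_gram, min(max_gram, len(text)) + 1)]

def suffix_edge_ngrams(text, min_gram=2, max_gram=25):
    """Mimics suffix_edge_ngram_analyzer: lowercase, reverse, take front
    edge n-grams, reverse each gram back -- i.e. n-grams anchored at the end."""
    return [g[::-1] for g in prefix_edge_ngrams(text[::-1], min_gram, max_gram)]

print(prefix_edge_ngrams("stackoverflow")[:3])  # ['st', 'sta', 'stac']
print(suffix_edge_ngrams("stackoverflow")[:3])  # ['ow', 'low', 'flow']
```

So text.prefix indexes every prefix of the word and text.suffix every suffix, which is why the reverse/edge-ngram/reverse filter chain is needed for the suffix side.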

Then let's say we index the following test document:

PUT test_index/test_type/1
{ "text": "stackoverflow" }

We can then search either by prefix or suffix using the following queries:

# input is "stack" => 1 result
GET test_index/test_type/_search?q=text.prefix:stack OR text.suffix:stack

# input is "flow" => 1 result
GET test_index/test_type/_search?q=text.prefix:flow OR text.suffix:flow

# input is "ackov" => 0 result
GET test_index/test_type/_search?q=text.prefix:ackov OR text.suffix:ackov
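As a simplified model of why these three queries behave that way (ignoring that Elasticsearch also analyzes the query string unless a separate search_analyzer is configured), a document matches if the search term appears among its prefix or suffix grams:

```python
def edge_ngrams(text, min_gram=2, max_gram=25):
    # Front edge n-grams of the lowercased text.
    text = text.lower()
    return [text[:n] for n in range(min_gram, min(max_gram, len(text)) + 1)]

def matches(query, doc):
    # Simplified match: term must be a prefix gram or a suffix gram of doc.
    prefix_grams = edge_ngrams(doc)
    suffix_grams = [g[::-1] for g in edge_ngrams(doc[::-1])]
    return query in prefix_grams or query in suffix_grams

print(matches("stack", "stackoverflow"))  # True  (prefix hit)
print(matches("flow", "stackoverflow"))   # True  (suffix hit)
print(matches("ackov", "stackoverflow"))  # False (middle of the word)
```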

Another way to query with the query DSL:

POST test_index/test_type/_search
{
   "query": {
      "multi_match": {
         "query": "stack",
         "fields": [ "text.*" ]
      }
   }
}

UPDATE

If you already have a string field, you can "upgrade" it to a multi-field and create the two required sub-fields with their analyzers. Here are the steps, in order:

  1. Close your index in order to create the analyzers

    POST test_index/_close
    
  2. Update the index settings

    PUT test_index/_settings
    {
    "analysis": {
      "analyzer": {
        "prefix_edge_ngram_analyzer": {
          "tokenizer": "prefix_edge_ngram_tokenizer",
          "filter": ["lowercase"]
        },
        "suffix_edge_ngram_analyzer": {
          "tokenizer": "keyword",
          "filter" : ["lowercase","reverse","suffix_edge_ngram_filter","reverse"]
        }
      },
      "tokenizer": {
        "prefix_edge_ngram_tokenizer": {
          "type": "edgeNGram",
          "min_gram": "2",
          "max_gram": "25"
        }
      },
      "filter": {
        "suffix_edge_ngram_filter": {
          "type": "edgeNGram",
          "min_gram": 2,
          "max_gram": 25
        }
      }
    }
    }
    
  3. Re-open your index

    POST test_index/_open
    
  4. Finally, update the mapping of your text field

    PUT test_index/_mapping/test_type
    {
      "properties": {
        "text": {
          "type": "string",
          "fields": {
            "prefix": {
              "type": "string",
              "analyzer": "prefix_edge_ngram_analyzer"
            },
            "suffix": {
              "type": "string",
              "analyzer": "suffix_edge_ngram_analyzer"
            }
          }
        }
      }
    }
    
  5. You still need to re-index all your documents in order for the new sub-fields text.prefix and text.suffix to be populated and analyzed.
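For that last step, if your Elasticsearch version has the _update_by_query API (it was added in 2.3; on older versions you would need to scroll over your documents and re-send them via the bulk API), you can re-index the documents in place without changing them, which will populate the new sub-fields:

```
POST test_index/_update_by_query?conflicts=proceed
```

With no script, each document is simply re-run through indexing with the updated mapping.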

Upvotes: 3
