Reputation: 982
Hello Elasticsearch experts!
I have a use case and I am not sure about the best way to go about it.
I have an HTML file which I need to index. This part is easy, as I can configure my custom analyzer and create the index.
However, I have a special requirement: during indexing I need to extract some data into dedicated fields.
This is an extract from the HTML, which has thousands of such lines:
<td>....</td>
<td>...
<p>Great item to truck</p></td>...
<a href="javascript:selectItem('1.a.b.c.1.d.f.11')">1.a.b.c.1.d.f.11</a> ...
plus plenty of garbage and even inline CSS.
My limitations:
My challenge: when the user starts typing 1.a.b.c.1.d.f.11, I must be able to autocomplete it.
Should I create an analyzer that strips everything but the content of the <a> tag? If so, how can I do this?
I would appreciate any comment or hint on what you feel would be the right approach here using Elasticsearch.
Upvotes: 2
Views: 3815
Reputation: 1519
Solution 1
Now, if you want to completely strip out the HTML prior to indexing and store the content as is, you can use the mapper attachment plugin: when you define the mapping, you can set the content_type to "html".
The mapper attachment plugin is useful for many things, especially if you are handling multiple document types, but most notably, I believe just using it to strip out the HTML tags is sufficient (which you cannot do with the html_strip char filter, since that only affects the analyzed tokens, not the stored content).
Just a forewarning though: NONE of the HTML tags will be stored. So if you do need those tags somehow, I would suggest defining another field to store the original content. Another note: you cannot specify multifields for mapper attachment documents, so you would need to store that outside the attachment document. See my working example below.
You'll want to end up with this mapping:
{
  "html5-es" : {
    "aliases" : { },
    "mappings" : {
      "document" : {
        "properties" : {
          "delete" : {
            "type" : "boolean"
          },
          "file" : {
            "type" : "attachment",
            "fields" : {
              "content" : {
                "type" : "string",
                "store" : true,
                "term_vector" : "with_positions_offsets",
                "analyzer" : "autocomplete"
              },
              "author" : {
                "type" : "string",
                "store" : true,
                "term_vector" : "with_positions_offsets"
              },
              "title" : {
                "type" : "string",
                "store" : true,
                "term_vector" : "with_positions_offsets",
                "analyzer" : "autocomplete"
              },
              "name" : {
                "type" : "string"
              },
              "date" : {
                "type" : "date",
                "format" : "strict_date_optional_time||epoch_millis"
              },
              "keywords" : {
                "type" : "string"
              },
              "content_type" : {
                "type" : "string"
              },
              "content_length" : {
                "type" : "integer"
              },
              "language" : {
                "type" : "string"
              }
            }
          },
          "hash_id" : {
            "type" : "string"
          },
          "path" : {
            "type" : "string"
          },
          "raw_content" : {
            "type" : "string",
            "store" : true,
            "term_vector" : "with_positions_offsets",
            "analyzer" : "raw"
          },
          "title" : {
            "type" : "string"
          }
        }
      }
    },
    "settings" : { }, // insert your own settings here
    "warmers" : { }
  }
}
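The settings placeholder above must define the "autocomplete" analyzer the mapping refers to (and the "raw" analyzer used by raw_content). A minimal sketch, assuming an edge_ngram-based autocomplete; the filter name autocomplete_filter and the gram sizes are my own choices, not from the original answer:
"settings" : {
  "analysis" : {
    "filter" : {
      "autocomplete_filter" : {
        "type" : "edge_ngram", // assumption: edge n-grams for prefix-style autocomplete
        "min_gram" : 1,
        "max_gram" : 20
      }
    },
    "analyzer" : {
      "autocomplete" : {
        "type" : "custom",
        "tokenizer" : "standard",
        "filter" : [ "lowercase", "autocomplete_filter" ]
      }
    }
  }
}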
Then in NEST, I assemble the content as follows:
// Base64-encode the HTML file for the attachment field
Attachment attachment = new Attachment();
attachment.Content = Convert.ToBase64String(File.ReadAllBytes("path/to/document"));
attachment.ContentType = "html";

// Keep the original markup in a separate field
Document document = new Document();
document.File = attachment;
document.RawContent = InsertRawContentFromString(originalText);
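On the wire, the NEST code above boils down to an index request of roughly this shape (a sketch: the document ID is arbitrary and the base64 payload is truncated here):
PUT /html5-es/document/1
{
  "file" : {
    "_content" : "PGh0bWwg...", // base64-encoded HTML, truncated
    "_content_type" : "html"
  },
  "raw_content" : "<h1>Topic10</h1>...",
  "delete" : false
}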
I have tested this in Sense; the results are as follows:
"file": {
"_content": "PGh0bWwgeG1sbnM6TWFkQ2FwPSJodHRwOi8vd3d3Lm1hZGNhcHNvZnR3YXJlLmNvbS9TY2hlbWFzL01hZENhcC54c2QiPg0KICA8aGVhZCAvPg0KICA8Ym9keT4NCiAgICA8aDE+VG9waWMxMDwvaDE+DQogICAgPHA+RGVsZXRlIHRoaXMgdGV4dCBhbmQgcmVwbGFjZSBpdCB3aXRoIHlvdXIgb3duIGNvbnRlbnQuIENoZWNrIHlvdXIgbWFpbGJveC48L3A+DQogICAgPHA+wqA8L3A+DQogICAgPHA+YXNkZjwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD4xMDwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD5MYXZlbmRlci48L3A+DQogICAgPHA+wqA8L3A+DQogICAgPHA+MTAvNiAxMjowMzwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD41IDA5PC9wPg0KICAgIDxwPsKgPC9wPg0KICAgIDxwPjExIDQ3PC9wPg0KICAgIDxwPsKgPC9wPg0KICAgIDxwPkhhbGxvd2VlbiBpcyBpbiBPY3RvYmVyLjwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD5qb2c8L3A+DQogIDwvYm9keT4NCjwvaHRtbD4=",
"_content_length": 0,
"_content_type": "html",
"_date": "0001-01-01T00:00:00",
"_title": "Topic10"
},
"delete": false,
"raw_content": "<h1>Topic10</h1><p>Delete this text and replace it with your own content. Check your mailbox.</p><p> </p><p>asdf</p><p> </p><p>10</p><p> </p><p>Lavender.</p><p> </p><p>10/6 12:03</p><p> </p><p>5 09</p><p> </p><p>11 47</p><p> </p><p>Halloween is in October.</p><p> </p><p>jog</p>"
},
"highlight": {
"file.content": [
"\n <em>Topic10</em>\n\n Delete this text and replace it with your own content. Check your mailbox.\n\n \n\n asdf\n\n \n\n 10\n\n \n\n Lavender.\n\n \n\n 10/6 12:03\n\n \n\n 5 09\n\n \n\n 11 47\n\n \n\n Halloween is in October.\n\n \n\n jog\n\n "
]
}
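For reference, a search of roughly this shape reproduces the highlight output above (a sketch; the search term "topic10" is an assumption based on the sample content):
POST /html5-es/document/_search
{
  "query" : {
    "match" : { "file.content" : "topic10" }
  },
  "highlight" : {
    "fields" : {
      "file.content" : { }
    }
  }
}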
Solution 2
You'll need to build an nGram analyzer to INDEX your content, and SEARCH using the standard analyzer:
"analyzer" : {
"standard" : {
"type" : "standard"
},
"autocomplete" : {
"filter" : [ "standard", "lowercase" ],
"char_filter" : [ "html_strip" ],
"type" : "custom",
"tokenizer" : "ngram"
}
Example of this:
Input: "Brown"
NGram analyzer output (with the default ngram tokenizer, i.e. min_gram 1 and max_gram 2): b, br, r, ro, o, ow, w, wn, n
So when you do an autocomplete search, it will match any of those indexed fragments. But it is important to SEARCH (return a page of results) with the standard analyzer only, so that the query itself is not chopped into fragments and does not match just any random one of them.
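To wire the two together, point the field's index analyzer at autocomplete and its search analyzer at standard. A sketch of the relevant field mapping (on ES 2.x the keys are analyzer and search_analyzer; 1.x releases used index_analyzer instead):
"content" : {
  "type" : "string",
  "analyzer" : "autocomplete",    // applied at index time
  "search_analyzer" : "standard"  // applied to query strings
}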
Upvotes: 0
Reputation: 161
Solution 1:
I suggest that you develop a small application that parses the HTML file contents and only indexes the data that you are interested in; in other words, strip all HTML tags and unnecessary data before sending it to Elasticsearch.
Solution 2:
You can make use of the html_strip char filter to strip all HTML tags:
GET /_analyze?tokenizer=keyword&token_filters=lowercase&char_filters=html_strip&text=<td>....</td><td>...<p>Great item to truck</p></td>...<a href="javascript:selectItem('1.a.b.c.1.d.f.11')">1.a.b.c.1.d.f.11</a> ...
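The same call in body form is easier to read (a sketch; these parameter names match pre-5.0 releases, which later renamed them to filter and char_filter):
GET /_analyze
{
  "tokenizer" : "keyword",
  "token_filters" : [ "lowercase" ],
  "char_filters" : [ "html_strip" ],
  "text" : "<td>....</td><td>...<p>Great item to truck</p></td>...<a href=\"javascript:selectItem('1.a.b.c.1.d.f.11')\">1.a.b.c.1.d.f.11</a> ..."
}
Note that html_strip removes the markup but keeps the element text, so 1.a.b.c.1.d.f.11 survives as searchable content.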
Upvotes: 3