Reputation: 135
I'm using this answer to clean an HTML file:
Remove all javascript tags and style tags from html with python and the lxml module
It does a great job of removing all the HTML, script, and style tags. However, if there is no whitespace between two pieces of text, the cleaner doesn't add any. This is a problem for things like menus, where the entries have no spaces between them, so the extracted text comes out as one word because the entries all run together.
Any ideas on how to prevent this, add the spaces, or whatever? Thanks
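For reference, here is a minimal reproduction of what I mean (the menu markup is just an example; the cleaning code follows the linked answer's Cleaner/text_content() approach):
from lxml.html import document_fromstring
from lxml.html.clean import Cleaner
# Two menu entries with no whitespace between the tags.
menu_html = "<ul><li>Home</li><li>About</li></ul>"
cleaner = Cleaner(javascript=True, style=True)
cleaned = cleaner.clean_html(document_fromstring(menu_html))
print(cleaned.text_content())  # prints "HomeAbout" -- the entries run together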
Upvotes: 2
Views: 1634
Reputation: 5914
If you want to solve the same problem, but using bs4 and dropping lxml:
from bs4 import BeautifulSoup
html = "<div>Test</div><div>Test 2</div>"
soup = BeautifulSoup(html, "html.parser")  # explicit parser avoids bs4's "no parser specified" warning
text = soup.get_text(separator=" ")  # join text nodes with a space
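With the sample markup above this gives "Test Test 2"; the separator keeps the two divs from running together.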
Upvotes: 2
Reputation: 5914
A relatively concise approach is
import lxml.html
from lxml import etree
html = "<div>Test</div><div>Test 2</div>"
document = lxml.html.document_fromstring(html)
# Grab every text node in the document and join them with a space.
text = " ".join(etree.XPath("//text()")(document))
(see also https://stackoverflow.com/a/23929354/4240413)
Upvotes: 3
Reputation: 135
This may or may not help anyone in the future, but this worked for me.
from lxml import html as HTML
from lxml.html.clean import Cleaner
import re
html = "<div>Test</div><div>Test 2</div>"
# Insert a space before every closing tag so adjacent text nodes can't run together.
spaced_html = re.sub("</", " </", html)
doc = HTML.document_fromstring(spaced_html)
cleaner = Cleaner()
cleaner.javascript = True  # strip <script> elements and JavaScript attributes
cleaner.style = True  # strip <style> elements and style attributes
doc = cleaner.clean_html(doc)
text = doc.text_content()
# Collapse the runs of spaces the substitution introduced.
text = re.sub(' +', ' ', text)
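With the sample markup, spaced_html becomes "<div>Test </div><div>Test 2 </div>" and the final text is "Test Test 2 " (with a trailing space left over from the last injected gap; a .strip() at the end would tidy that up if it matters).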
The only catch is that it collapses any runs of extra spaces down to a single space. If you need to preserve those, you'll need a different solution, but I didn't, so it works perfectly for me.
Upvotes: 1