user1183556

Iterate through multiple files and append text from HTML using Beautiful Soup

I have a directory of downloaded HTML files (46 of them) and I am attempting to iterate through each of them, read their contents, strip the HTML, and append only the text to a text file. However, I'm unsure where I'm going wrong, as nothing gets written to my text file.

import os
import glob
from bs4 import BeautifulSoup
path = "/"
for infile in glob.glob(os.path.join(path, "*.html")):
        markup = (path)
        soup = BeautifulSoup(markup)
        with open("example.txt", "a") as myfile:
                myfile.write(soup)
                f.close()

-----update-----

I've updated my code as below; however, the text file still doesn't get created.

import os
import glob
from bs4 import BeautifulSoup
path = "/"
for infile in glob.glob(os.path.join(path, "*.html")):
    markup = (infile)
    soup = BeautifulSoup(markup)
    with open("example.txt", "a") as myfile:
        myfile.write(soup)
        myfile.close()

-----update 2-----

Ah, I caught that I had my directory incorrect, so now I have:

import os
import glob
from bs4 import BeautifulSoup

path = "c:\\users\\me\\downloads\\"

for infile in glob.glob(os.path.join(path, "*.html")):
    markup = (infile)
    soup = BeautifulSoup(markup)
    with open("example.txt", "a") as myfile:
        myfile.write(soup)
        myfile.close()

When this is executed, I get this error:

Traceback (most recent call last):
  File "C:\Users\Me\Downloads\bsoup.py", line 11, in <module>
    myfile.write(soup)
TypeError: must be str, not BeautifulSoup

I fixed this last error by changing

myfile.write(soup)

to

myfile.write(soup.get_text())

-----update 3 ----

It's working properly now; here's the working code:

import os
import glob
from bs4 import BeautifulSoup

path = "c:\\users\\me\\downloads\\"

for infile in glob.glob(os.path.join(path, "*.html")):
    with open(infile, "r") as f:
        soup = BeautifulSoup(f.read())
    with open("example.txt", "a") as myfile:
        myfile.write(soup.get_text())
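
For reference, the same loop can also be wrapped in a function with context managers and an explicit parser (a sketch; the function name and the `html.parser` choice are mine, not from the original post):

```python
import glob
import os

from bs4 import BeautifulSoup

def append_visible_text(src_dir, out_path):
    """Append the visible text of every .html file in src_dir to out_path."""
    with open(out_path, "a", encoding="utf-8") as out:
        for infile in glob.glob(os.path.join(src_dir, "*.html")):
            # The context manager closes each HTML file as soon as it is read.
            with open(infile, encoding="utf-8") as f:
                soup = BeautifulSoup(f.read(), "html.parser")
            out.write(soup.get_text())

# e.g. append_visible_text("c:\\users\\me\\downloads\\", "example.txt")
```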

Upvotes: 0

Views: 3124

Answers (2)

poof

Reputation: 69

If you want to use lxml.html directly, here is a modified version of some code I've been using for a project. If you want to grab all the text, just don't filter by tag. There may be a way to do it without iterating, but I don't know of one. It saves the data as Unicode, so you will have to take that into account when opening the file.

import os
import glob

import lxml.html

path = '/'

# Whatever tags you want to pull text from.
visible_text_tags = ['p', 'li', 'td', 'h1', 'h2', 'h3', 'h4',
                     'h5', 'h6', 'a', 'div', 'span']

for infile in glob.glob(os.path.join(path, "*.html")):
    doc = lxml.html.parse(infile)

    file_text = []

    for element in doc.iter():  # Iterate once through the entire document

        try:  # Grab tag name and text (+ tail text)
            tag = element.tag
            text = element.text
            tail = element.tail
        except AttributeError:  # skip nodes that lack these attributes
            continue

        words = None  # text words split to list
        if tail:  # combine text and tail
            text = text + " " + tail if text else tail
        if text:  # lowercase and split to list
            words = text.lower().split()

        if tag in visible_text_tags:
            if words:
                file_text.append(' '.join(words))

    # Open with an explicit encoding and write str, not bytes (Python 3).
    with open('example.txt', 'a', encoding='utf8') as myfile:
        myfile.write(' '.join(file_text))

Upvotes: 0

Moj

Reputation: 6361

Actually, you are not reading the HTML file; this should work:

soup = BeautifulSoup(open(webpage, 'r').read(), 'lxml')
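
In context, the fix might look like this (a sketch; `read_html` is a hypothetical helper name, and `html.parser` stands in for lxml if it isn't installed):

```python
from bs4 import BeautifulSoup

def read_html(webpage):
    # Read the file's contents first; passing just the file name would
    # make BeautifulSoup parse the path string instead of the HTML.
    with open(webpage, "r", encoding="utf-8") as f:
        return BeautifulSoup(f.read(), "html.parser")
```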

Upvotes: 1
