Felix Yan

Reputation: 15259

How to open Excel files fast in Python?

I am currently using pyExcelerator to read Excel files, but it is extremely slow. Since I regularly need to open files larger than 100 MB, loading a single one takes me more than twenty minutes.

The functionality I need is:

And the code I am using now is:

import pyExcelerator
from collections import defaultdict
from types import StringType, UnicodeType  # Python 2 string types

book = pyExcelerator.parse_xls(filepath)
# book[0][1] maps (row, col) tuples to cell values; missing cells default to ''
parsed_dictionary = defaultdict(lambda: '', book[0][1])
number_of_columns = 44
number_of_rows = 500000
result_list = []
for i in range(0, number_of_rows):
    ok = False
    result_list.append([])
    for h in range(0, number_of_columns):
        item = parsed_dictionary[i, h]
        if type(item) is StringType or type(item) is UnicodeType:
            item = item.replace("\t", "").strip()
        result_list[i].append(item)
        if item != '':
            ok = True
    if not ok:  # stop at the first completely empty row
        break

Any suggestions?

Upvotes: 8

Views: 5185

Answers (6)

Guido U. Draheim

Reputation: 3271

For my library/script tabxlsx I have compared read/write times with openpyxl against my own code, which works directly on the XML data inside the Excel zip. One million numbers are stored in a 6 MB xlsx file; converting it to JSON took 40 and 55 seconds respectively. This shows that the problem lies in the Excel format itself, which requires a lot of time for zlib decompression and XML element parsing before you can even convert the numbers to native Python types.

On the other hand, this lets us estimate a lower bound of roughly 5 minutes for reading 100 MB of Excel data, because my code is already close to the minimal code needed to decode the Excel format. You could only get faster with an Excel parser implemented fully as a C module, and even then I think the speedup would be minimal, since CPython's zlib and XML readers are already written in C.
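To make the bottleneck concrete, here is a minimal sketch using only the standard library (the sheet path and namespace are as defined by the OOXML spec; string cells actually hold indices into xl/sharedStrings.xml, which this sketch ignores) that unzips an xlsx and streams the cell elements of the first sheet:

import zipfile
import xml.etree.ElementTree as ET

NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def iter_cells(path):
    # An .xlsx file is a zip archive; each worksheet is one XML document.
    with zipfile.ZipFile(path) as z:               # zlib decompression happens here
        with z.open("xl/worksheets/sheet1.xml") as f:
            # iterparse streams the XML instead of building the whole tree at once
            for _, elem in ET.iterparse(f):        # XML parsing happens here
                if elem.tag == NS + "c":           # "c" is a cell element
                    v = elem.find(NS + "v")        # "v" holds the raw value
                    yield elem.get("r"), (None if v is None else v.text)
                    elem.clear()                   # release memory as we go

Nearly all of the runtime is spent inside ZipFile and iterparse, before any value reaches Python code.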

Here's my script for further testing. It is probably the fastest solution currently available for loading xlsx data into native Python data types: https://pypi.org/project/tabxlsx/

Upvotes: 0

ktr

Reputation: 756

I recently built a library that may be of interest: https://github.com/ktr/sxl. It essentially tries to "stream" Excel files the way Python streams ordinary files, and is therefore very fast when you only need a subset of the data (especially if it is near the beginning of the file). A short sketch of the idea follows.
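This sketch is based on the project's README; the file name is hypothetical and the exact API may differ:

from sxl import Workbook  # pip install sxl

wb = Workbook("big_file.xlsx")     # hypothetical 100 MB workbook
ws = wb.sheets[1]                  # sheets are addressed by 1-based index or name

# Rows are parsed lazily, so grabbing the first few is nearly instant
# even when the file itself is huge.
for row in ws.head(10):
    print(row)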

Upvotes: 0

John Machin

Reputation: 82934

pyExcelerator appears not to be maintained. To write xls files, use xlwt, which is a fork of pyExcelerator with bug fixes and many enhancements. The (very basic) xls reading capability of pyExcelerator was eradicated from xlwt. To read xls files, use xlrd.

If it's taking 20 minutes to load a 100MB xls file, you must be using one or more of: a slow computer, a computer with very little available memory, or an older version of Python.

Neither pyExcelerator nor xlrd read password-protected files.

Here's a link that covers xlrd and xlwt.

Disclaimer: I'm the author of xlrd and maintainer of xlwt.
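For example, reading the questioner's sheet with xlrd takes only a few lines (a minimal sketch; filepath is the 100 MB file from the question):

import xlrd  # pip install xlrd

book = xlrd.open_workbook(filepath)
sheet = book.sheet_by_index(0)

result_list = []
for i in range(sheet.nrows):               # xlrd reports the real row count,
    row = sheet.row_values(i)              # so no guessing at 500000 rows
    if all(v == '' for v in row):          # stop at the first empty row,
        break                              # as in the question's loop
    result_list.append(row)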

Upvotes: 6

Imran

Reputation: 91019

Unrelated to your question: if you're trying to check that none of the columns is an empty string, set ok = True initially and do ok = ok and item != '' in the inner loop instead. Also, you can simply use isinstance(item, basestring) to test whether a variable is a string.

Revised version

for i in range(0, number_of_rows):
    ok = True
    result_list.append([])
    for h in range(0, number_of_columns):
        item = parsed_dictionary[i,h]
        if isinstance(item, basestring):
            item = item.replace("\t","").strip()
        result_list[i].append(item)
        ok = ok and item != ''

    if not ok:
        break

Upvotes: 1

Paul Sasik

Reputation: 81479

You could try pre-allocating the list to its full size in a single statement instead of appending one item at a time (one large memory allocation should be faster than many small ones):

book = pyExcelerator.parse_xls(filepath)
parsed_dictionary = defaultdict(lambda: '', book[0][1])
number_of_columns = 44
number_of_rows = 500000
result_list = [[] for i in range(number_of_rows)]  # note: [] * n would just yield []
for i in range(0, number_of_rows):
    ok = False
    #result_list.append([])
    for h in range(0, number_of_columns):
        item = parsed_dictionary[i,h]
        if type(item) is StringType or type(item) is UnicodeType:
            item = item.replace("\t","").strip()
        result_list[i].append(item)
        if item != '':
            ok = True
    if not ok:
        break

If doing this gives an appreciable performance increase, you could also try preallocating each list item to the number of columns and then assigning by index rather than appending one value at a time. Here's a snippet that creates a 10x10 two-dimensional list in a single statement, initialized to 0:

L = [[0] * 10 for i in range(10)]

So folded into your code, it might work something like this:

book = pyExcelerator.parse_xls(filepath)
parsed_dictionary = defaultdict(lambda: '', book[0][1])
number_of_columns = 44
number_of_rows = 500000
result_list = [[''] * number_of_columns for x in range(number_of_rows)]
for i in range(0, number_of_rows):
    ok = False
    #result_list.append([])
    for h in range(0, number_of_columns):
        item = parsed_dictionary[i,h]
        if type(item) is StringType or type(item) is UnicodeType:
            item = item.replace("\t","").strip()
        result_list[i][h] = item
        if item != '':
            ok = True
    if not ok:
        break

Upvotes: 1

spulec

Reputation: 837

xlrd is pretty good for reading files and xlwt is pretty good for writing. Both are superior to pyExcelerator in my experience.
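For completeness, writing an .xls file with xlwt looks like this (a minimal sketch; the output file name is made up):

import xlwt  # pip install xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet('Sheet1')
ws.write(0, 0, 'label')    # write(row, col, value)
ws.write(0, 1, 42)
wb.save('output.xls')      # xlwt writes the legacy .xls format only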

Upvotes: 3
