Aaron

Reputation: 49

Speed up reading multiple XML files in PHP

I currently have a PHP file that must read hundreds of XML files. I have no choice in how these XML files are constructed; they are created by a third party.

The first XML file is a large index of titles for the rest of the XML files, so I search the first file to get the file names of the others.

I then read each XML file, searching its values for a specific phrase.

This process is really slow. I'm talking 5 1/2 minute runtimes, which is not acceptable for a website; customers won't stay on for that long.

Does anyone know a way to speed my code up, to a maximum runtime of approximately 30 seconds?

Here is a pastebin of my code : http://pastebin.com/HXSSj0Jt

Thanks, sorry for the incomprehensible English...

Upvotes: 0

Views: 1467

Answers (2)

cryo28

Reputation: 1127

First of all, if you have to deal with large XML files for each request to your service, it is wise to download the XMLs once, preprocess them, and cache them locally.
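For example, a bare-bones version of "download once and cache locally" could look like this (the URL, cache path, and TTL are just placeholders):

    <?php
    // Minimal sketch: keep a copy on disk and only re-download
    // after it is older than some TTL. Adjust everything to taste.
    $url   = 'http://example.com/index.xml';
    $cache = '/tmp/index.xml';
    $ttl   = 600; // seconds

    if (!is_file($cache) || time() - filemtime($cache) > $ttl) {
        file_put_contents($cache, file_get_contents($url));
    }

    $xml = simplexml_load_file($cache);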

If you cannot preprocess and cache the XMLs and have to download them for each request (which I don't really believe is the case), you can try to optimize by using XMLReader or some SAX event-based XML parser. The problem with SimpleXML is that it uses DOM underneath. DOM, as the letters stand for, builds a Document Object Model in your PHP process's memory, which takes a lot of time and eats tons of memory. I would go as far as to say that DOM is useless for parsing large XML files.

XMLReader, on the other hand, will let you traverse a large XML file node by node while barely using any memory, with the tradeoff that you cannot issue XPath queries or use any other non-sequential node access patterns.

For details on how to use XMLReader, consult the PHP manual for the XMLReader extension.
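A rough sketch of what that looks like, streaming one file and checking its text nodes for a phrase (the file name and phrase are placeholders):

    <?php
    // Stream a large XML file with XMLReader and look for a phrase.
    $phrase = 'specific phrase';
    $reader = new XMLReader();
    $reader->open('large-file.xml');

    $found = false;
    while ($reader->read()) {
        // Only inspect text nodes; nodes stream past one at a time,
        // so memory use stays flat regardless of file size.
        if ($reader->nodeType === XMLReader::TEXT
            && stripos($reader->value, $phrase) !== false) {
            $found = true;
            break; // stop as soon as a match is found
        }
    }
    $reader->close();

    var_dump($found);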

Upvotes: 1

goat

Reputation: 31813

Your main problem is that you're trying to make hundreds of HTTP downloads to perform the search. Unless you get rid of that restriction, it's only going to go so fast.

If for some reason the files aren't cacheable at all (unlikely), not even some of the time, you can pick up some speed by downloading in parallel. See the curl_multi_*() functions. Alternatively, use wget from the command line with xargs to download in parallel.
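A rough sketch of the curl_multi approach (the URL list is a placeholder):

    <?php
    // Download several URLs in parallel with curl_multi_*.
    $urls = ['http://example.com/a.xml', 'http://example.com/b.xml'];

    $mh = curl_multi_init();
    $handles = [];
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }

    // Run all transfers until every handle has finished.
    do {
        $status = curl_multi_exec($mh, $active);
        if ($active) {
            curl_multi_select($mh); // wait for activity instead of busy-looping
        }
    } while ($active && $status === CURLM_OK);

    $bodies = [];
    foreach ($handles as $url => $ch) {
        $bodies[$url] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);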

The above sounds crazy if you have any kind of traffic, though.

Most likely, the files can be cached for at least a short time. Look at the HTTP headers and see what kind of freshness info their server sends. It might say how long until the file expires, in which case you can save it locally until then. Or, it might give a Last-Modified date or an ETag, in which case you can do conditional GET requests, which should speed things up further.
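As a sketch, a conditional GET with a saved ETag might look like this (the URL and cache paths are placeholders):

    <?php
    // Conditional GET: send If-None-Match with the ETag from last time;
    // on a 304 response, reuse the cached body instead of re-downloading.
    $url      = 'http://example.com/a.xml';
    $etagFile = '/tmp/a.xml.etag';
    $bodyFile = '/tmp/a.xml.body';

    $headers = [];
    if (is_file($etagFile)) {
        $headers[] = 'If-None-Match: ' . trim(file_get_contents($etagFile));
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
    curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $line) use ($etagFile) {
        // Remember the ETag for the next request.
        if (stripos($line, 'ETag:') === 0) {
            file_put_contents($etagFile, trim(substr($line, 5)));
        }
        return strlen($line);
    });

    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($code === 304) {
        $body = file_get_contents($bodyFile); // not modified: reuse cached copy
    } else {
        file_put_contents($bodyFile, $body);  // fresh copy: update the cache
    }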

I would probably set up a local Squid cache and have PHP make these requests through Squid. It'll take care of all the "use the local copy if it's fresh, or conditionally retrieve a new version" logic for you.
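If Squid is running locally, pointing curl at it is just a matter of setting the proxy option (127.0.0.1:3128 below is only the usual default; adjust to however your Squid is configured):

    <?php
    // Route the request through a local Squid instance so it handles caching.
    $ch = curl_init('http://example.com/a.xml');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:3128');
    $xml = curl_exec($ch);
    curl_close($ch);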

If you still want more performance, you can transform the cached files into a more suitable format (e.g., stick the relevant data in a database). Or, if you must stick with the XML format, you can do a string search on the file first, to test whether you should bother parsing that file as XML at all.
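The string-search-first idea could be as simple as this (the cache path and phrase are placeholders):

    <?php
    // Only pay the cost of parsing a file as XML if the raw bytes
    // even contain the phrase we're looking for.
    $phrase = 'specific phrase';

    foreach (glob('/path/to/cache/*.xml') as $file) {
        $raw = file_get_contents($file);
        if (stripos($raw, $phrase) === false) {
            continue; // phrase definitely not here, skip the parse entirely
        }
        $xml = simplexml_load_string($raw);
        // ... inspect $xml properly for the matching element(s) ...
    }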

Upvotes: 1
