jmoon

Reputation: 576

Getting contents of a URL and comparing locally

I'm looking to fetch a URL (it returns just one line of plain text, no HTML) every 3000 seconds. I want to compare the result against previous.txt and, if it's different, call a function and then write the new contents to previous.txt.

If it's the same, do nothing. Can someone point me in the right direction to get started? I haven't used Python much before.

Thanks.

Upvotes: 0

Views: 94

Answers (1)

mhawke

Reputation: 87064

Here are some hints.

Looking at your question tags, you already know that urllib will help. Try urllib2.urlopen() to retrieve the data from the URL (on Python 3 the same function lives at urllib.request.urlopen()).
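A minimal sketch of the retrieval step, using Python 3's urllib.request (the `fetch` helper name is my own, not from the question):

```python
from urllib.request import urlopen  # urllib2.urlopen() on Python 2


def fetch(url):
    """Fetch the plain-text body of `url` and return it as a string."""
    with urlopen(url) as response:
        # The response body is bytes; decode and strip the trailing newline.
        return response.read().decode("utf-8").strip()
```

Since the URL returns a single line of plain text, reading the whole body at once is fine here.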

Comparing the previous and current contents is as simple as reading in the contents of previous.txt and doing a string comparison with whatever you just retrieved from the URL. If the two strings differ, write the newer one to the file. Look at open() and read()/write() for your file I/O needs.
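That read-compare-write step might look like the sketch below; `check_and_update` and its return-value convention are just one way to structure it:

```python
import os


def check_and_update(current, path="previous.txt"):
    """Compare `current` against the saved contents of `path`.

    If they differ (or the file doesn't exist yet), overwrite the file
    and return True so the caller knows to react; otherwise return False.
    """
    previous = None
    if os.path.exists(path):
        with open(path) as f:
            previous = f.read()
    if current != previous:
        with open(path, "w") as f:
            f.write(current)
        return True
    return False
```

Returning a boolean keeps the "call a function if it's different" decision in the caller's hands.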

If your Python process is long-running, performing the task every 3000 seconds can be done using time.sleep() or signal.alarm() (or several other ways).
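Putting the pieces together with time.sleep(), a polling loop could look like this sketch; `poll` and its parameters (including `max_polls`, added purely so the loop can terminate in a test) are hypothetical, and the fetch/react steps are passed in as callables:

```python
import time


def poll(url, fetch, on_change, interval=3000, max_polls=None):
    """Every `interval` seconds, fetch `url` and call `on_change` when
    the contents differ from the previous fetch.

    `max_polls=None` loops forever; a number bounds the loop for testing.
    """
    previous = None
    polls = 0
    while max_polls is None or polls < max_polls:
        current = fetch(url)
        if current != previous:
            on_change(current)
            previous = current
        polls += 1
        time.sleep(interval)
```

Keeping the previous value in memory between iterations (as here) avoids re-reading previous.txt on every poll; the file then only matters for persistence across restarts.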

Upvotes: 1
