ChrisMcJava

Reputation: 2293

Fast way of converting JSON data into a .txt file - Python 2.7

So I am converting JSON data from a URL into a string, then writing it to a text file. This is my current Python script (I'm using Python 2.7.6):

import datetime
import json
import urllib
import time

startTime = time.time()

url = "http://someurl..."
success = False

while (True):
    try:
        txt = urllib.urlopen(url).read()
        print "        -> open URL time: %.3f" % (time.time() - startTime)
        secondTime = time.time()

        textFile = open('data.txt', 'w')
        textFile.write("JSON Data (")
        textFile.write(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
        textFile.write("):\n")
        textFile.write(txt)
        textFile.close()
        print "        -> write file time: %.3f" % (time.time() - secondTime)
        thirdTime = time.time()

        success = True
        break
    except ValueError as valueErr:
        print "Error:", valueErr
    except IOError as ioError:
        print "Error: Internet connection issues."
        break

if (success):
    print "    -> data.txt created()."
    print "    -> Finished."
    print "        -> Total Elapsed Time = %.3f" % (time.time() - startTime), "seconds."
else:
    print "    -> Finished."

and the output is as follows (I am running it in Windows command prompt, not the Python prompt):

'getCryptsyData.py' executing...
        -> open URL time: 4.864
    -> data.txt created().
        -> write file time: 0.005
    -> Finished.
        -> Total Elapsed Time = 4.939 seconds.

My question is: is there any faster way of doing this, e.g. with a different Python script, another scripting language, or in C?

Edit 1: updated code and output to current script I am running.

Upvotes: 0

Views: 8454

Answers (2)

Andrew Ehrlich

Reputation: 287

Since the majority of the time is spent waiting for the external server to respond, you probably can't gain anything by changing your code. Depending on how this code is going to be used, you might be able to improve the overall experience by:

  • If the same files are likely to be requested again with no changes, cache them locally (see the sketch after this list).
  • If the files are available on another server, find a mirror that is closer to you.
  • If the files are predictable in size, you could have another process that copies them locally on an interval.
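Here is a minimal sketch of the first suggestion, assuming a fixed cache path and a freshness window; CACHE_FILE and MAX_AGE are made-up names for illustration, not part of the original script:

import os
import time
import urllib

CACHE_FILE = 'data.txt'   # hypothetical local copy of the response
MAX_AGE = 60              # seconds before the cached copy counts as stale

def fetch(url):
    # Reuse the local copy if it exists and is recent enough; skip the network.
    if os.path.exists(CACHE_FILE) and time.time() - os.path.getmtime(CACHE_FILE) < MAX_AGE:
        with open(CACHE_FILE) as f:
            return f.read()
    # Otherwise hit the server and refresh the cache.
    txt = urllib.urlopen(url).read()
    with open(CACHE_FILE, 'w') as f:
        f.write(txt)
    return txt

With this in place, repeated runs inside the freshness window never touch the server at all, which sidesteps the ~5 second network wait entirely.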

Upvotes: 2

Corey Goldberg

Reputation: 60604

You are already loading the JSON from text. Why not skip that and just write the response text to the file?

Your example could skip the json load/dump and basically be rewritten as:

txt = urllib.urlopen(url).read()
with open('data.txt', 'w') as f:
    f.write(txt)

some style tips:

  • use a context manager ("with" statement) for writing to file.
  • for timing code blocks, check out the timeit module (see the sketch after this list).
  • follow pep8. your camelCased var names hurt my eyes :)
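As a minimal sketch of the timeit tip, this times just the file write in isolation; the 1 MB dummy payload is an assumption standing in for the real response:

import timeit

setup = "txt = 'x' * 1000000"          # fake 1 MB response body
stmt = """
with open('data.txt', 'w') as f:
    f.write(txt)
"""

# Average over 10 runs so one slow disk flush doesn't skew the number.
print timeit.timeit(stmt, setup=setup, number=10) / 10

Timing this way confirms what the question's own output shows: the write takes milliseconds, and the URL fetch dominates the total elapsed time.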

Upvotes: 1
