Furqan Hashim

Reputation: 1318

Data Extraction Using Python

I am interested in extracting historical prices from this link: https://pakstockexchange.com/stock2/index_new.php?section=research&page=show_price_table_new&symbol=KEL

To do so, I am using the following code:

import requests
import pandas as pd
import time as t

t0 = t.time()

symbols = ['HMIM', 'CWSM', 'DSIL', 'RAVT', 'PIBTL', 'PICT', 'PNSC', 'ASL',
           'DSL', 'ISL', 'CSAP', 'MUGHAL', 'DKL', 'ASTL', 'INIL']

header = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
    "X-Requested-With": "XMLHttpRequest"
}

for symbol in symbols:
    r = requests.get('https://pakstockexchange.com/stock2/index_new.php?section=research&page=show_price_table_new&symbol={}'.format(symbol), headers=header)
    dfs = pd.read_html(r.text)   # parse every table on the page
    df = dfs[6]                  # the price table is the seventh table
    df = df.iloc[2:]             # drop the first two header rows
    df.columns = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
    df.set_index('Date', inplace=True)
    df.to_csv('/home/furqan/Desktop/python_data/{}.csv'.format(symbol),
              columns=['Open', 'High', 'Low', 'Close', 'Volume'],
              index_label=['Date'])
    print(symbol)

t1 = t.time()
print('exec time is ', t1 - t0, 'seconds')

The above code extracts the data from the link, converts it into a pandas DataFrame, and saves it to a CSV file.

The problem is that it takes a lot of time and does not scale well as the number of symbols grows. Can anyone suggest a more efficient way to achieve the same result?

Moreover, is there another programming language that would do the same job in less time?

Upvotes: 2

Views: 282

Answers (1)

roganjosh

Reputation: 13175

Normal GET requests with requests are "blocking": one request is sent, one response is received, and only then is it processed. At least some portion of your run time is spent waiting for responses. We can instead send all the requests asynchronously with requests-futures and then collect the responses as they become ready.

That said, I think DSIL is timing out or something similar (I need to look further). While I was able to get a decent speedup with a random selection from symbols, both methods take approx. the same time if DSIL is in the list.

EDIT: Seems I was wrong; it was just an unfortunate coincidence with "DSIL" on multiple occasions. The more symbols you have in symbols, the faster the async method becomes relative to standard requests.

import requests
from requests_futures.sessions import FuturesSession
import time

start_sync = time.time()

symbols =['HMIM','CWSM','RAVT','ASTL','INIL']

header = {
  "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
  "X-Requested-With": "XMLHttpRequest"
}

for symbol in symbols:
    r = requests.get('https://pakstockexchange.com/stock2/index_new.php?section=research&page=show_price_table_new&symbol={}'.format(str(symbol)), headers=header)

end_sync = time.time()

start_async = time.time()
# Setup
session = FuturesSession(max_workers=10)
pooled_requests = []

# Gather request URLs
for symbol in symbols:
    request = 'https://pakstockexchange.com/stock2/index_new.php?section=research&page=show_price_table_new&symbol={}'.format(symbol)
    pooled_requests.append(request)

# Fire the requests
fire_requests = [session.get(url, headers=header) for url in pooled_requests]
responses = [item.result() for item in fire_requests]

end_async = time.time()

print "Synchronous requests took: {}".format(end_sync - start_sync)
print "Async requests took:       {}".format(end_async - start_async)

With the above code I get roughly a 3x speedup in collecting the responses. You can then iterate through the responses list and process each response as normal.
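One refinement worth sketching: the list comprehension above collects results in submission order, so responses that have already arrived wait behind a slower one. The futures returned by FuturesSession are ordinary concurrent.futures futures, so something like the sketch below should let you handle each response as soon as it completes and skip a symbol that hangs (as DSIL appeared to). It reuses session, header, symbols and pooled_requests from above; the 30-second timeout and the error handling are my own assumptions, not part of the original answer.

from concurrent.futures import as_completed

# Re-issue the requests, keeping a mapping from each future back to its symbol
# so results can be matched up in whatever order they finish.
future_to_symbol = {
    session.get(url, headers=header, timeout=30): symbol  # timeout is an assumption
    for symbol, url in zip(symbols, pooled_requests)
}

for future in as_completed(future_to_symbol):
    symbol = future_to_symbol[future]
    try:
        response = future.result()  # raises if the request timed out or failed
    except Exception as exc:
        print("{} failed: {}".format(symbol, exc))
        continue
    # process `response` here, e.g. pd.read_html(response.text) as in the question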

EDIT 2: Going through the responses of the async requests and saving them as you did earlier:

for i, r in enumerate(responses):
    dfs = pd.read_html(r.text)
    df = dfs[6]
    df = df.iloc[2:]
    df.columns = ['Date', 'Open', 'High', 'Low', 'Close', 'Volume']
    df.set_index('Date', inplace=True)
    df.to_csv('/home/furqan/Desktop/python_data/{}.csv'.format(symbols[i]),
              columns=['Open', 'High', 'Low', 'Close', 'Volume'],
              index_label=['Date'])
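
Note that only the network round trips are overlapped here; the pd.read_html parsing and CSV writing in this loop still run one symbol at a time, so with a much larger symbol list the parsing itself may become the next bottleneck.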

Upvotes: 2
