BilNash

Reputation: 73

Python web scraping with requests - got only a small part of the data in the response

I'm trying to get some financial data from this url:

http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225

My code works only for a very small date interval (less than 19 days), but on the website we are allowed to request up to 3 years of data!

My code is as follows:

import requests
import string
import csv
from bs4 import BeautifulSoup


# a simple helper function
def formatIt(s) :
    output = ''
    for i in s :
        if i in string.printable :
            output += i
    return output

# default url
uri = "http://www.casablanca-bourse.com/bourseweb/en/Negociation-History.aspx?Cat=24&IdLink=225"


def get_viewState_and_symVal (symbolName, session) :
    #session = requests.Session()
    r = session.get(uri)
    soup = BeautifulSoup(r.content, "html.parser")  # r.text would work here too
    # let's get the viewstate value
    viewstate_val = soup.find('input', attrs = {"id" : "__VIEWSTATE"})['value']
    # let's get the symbol value
    selectSymb = soup.find('select', attrs = {"name" : "HistoriqueNegociation1$HistValeur1$DDValeur"})
    for i in selectSymb.find_all('option') : 
        if i.text == symbolName :
            symbol_val = i['value']
    # simple sanity check before returning
    try : 
        symbol_val
    except NameError :
        raise NameError ("Symbol name not found!")
    else :
        return (viewstate_val, symbol_val)


def MainFun (symbolName, dateFrom, dateTo) :
    session = requests.Session()
    request1 = get_viewState_and_symVal (symbolName, session)
    viewstate = request1[0]
    symbol = request1[1]
    payload = {
        'TopControl1$ScriptManager1' : r'HistoriqueNegociation1$UpdatePanel1|HistoriqueNegociation1$HistValeur1$Image1',
        '__VIEWSTATE' : viewstate,
        'HistoriqueNegociation1$HistValeur1$DDValeur' : symbol,  
        'HistoriqueNegociation1$HistValeur1$historique' : r'RBSearchDate',
        'HistoriqueNegociation1$HistValeur1$DateTimeControl1$TBCalendar' : dateFrom,
        'HistoriqueNegociation1$HistValeur1$DateTimeControl2$TBCalendar' : dateTo,
        'HistoriqueNegociation1$HistValeur1$DDuree' : r'6',
        'hiddenInputToUpdateATBuffer_CommonToolkitScripts' : r'1',
        'HistoriqueNegociation1$HistValeur1$Image1.x' : r'27',
        'HistoriqueNegociation1$HistValeur1$Image1.y' : r'8'
    }

    request2 = session.post(uri, data = payload)
    soup2 = BeautifulSoup(request2.content, "html.parser")
    ops = soup2.find_all('table', id = "arial11bleu")
    # keep the first result table that has no class attribute
    for i in ops : 
        try :
            i['class']
        except KeyError : 
            rslt = i
            break

    output = []
    for i in rslt.find_all('tr')[1:] :
        temp = []
        for j in i.find_all('td') :
            sani = j.text.strip()
            if sani not in string.whitespace :
                temp.append(formatIt(sani))
        if len(temp) > 0 :
            output.append(temp)

    # "wb" is the Python 2 idiom; on Python 3 use open("output.csv", "w", newline = '')
    with open("output.csv", "wb") as f :
        writer = csv.writer(f, delimiter = ';')
        writer.writerows(output)

    return writer



# working example
MainFun ("ATLANTA", "1/1/2014", "30/01/2014")

# non-working example (date range longer than 19 days)
MainFun ("ATLANTA", "1/1/2014", "30/03/2014")

Upvotes: 1

Views: 771

Answers (2)

BilNash

Reputation: 73

It seems like there is something wrong in my Windows environment. The code works fine in a Debian-based virtual machine and under a Python virtualenv.
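
If it helps narrow things down, a quick check (just a diagnostic sketch, not part of the fix itself) is to print the interpreter and library versions in both the Windows setup and the Debian/virtualenv one and compare them:

import sys
import requests
import bs4

# run this in each environment and compare the output
print(sys.version)
print("requests:", requests.__version__)
print("bs4:", bs4.__version__)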

Upvotes: 2

reynoldsnlp

Reputation: 1210

It may be that the site automatically detects scrapers and blocks you. Try adding a small sleep statement somewhere to give their server some time to breathe. This is generally a polite thing to do while scraping anyway.

from time import sleep
sleep(1) # pauses 1 second
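
In the question's own code, one place the pause could go is between the initial GET (inside get_viewState_and_symVal) and the POST in MainFun. The placement and the one-second delay below are only an illustration, not something this site is known to require:

from time import sleep

def MainFun (symbolName, dateFrom, dateTo) :
    session = requests.Session()
    viewstate, symbol = get_viewState_and_symVal (symbolName, session)
    sleep(1)  # give the server a moment between the GET and the POST
    # ... then build the same payload and POST it exactly as in the question ...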

Upvotes: 2
