divyu garg

Reputation: 41

How to scrape multiple pages with an unchanging URL - Python & BeautifulSoup

I'm trying to scrape this website: https://www.99acres.com

So far I've used BeautifulSoup to extract the data from the website; however, my code only gets me the first page. I was wondering if there's a way to access the other pages, because when I click on the next page the URL does not change, so I cannot simply iterate over a different URL each time.

Below is my code so far:

import csv
import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.99acres.com/search/property/buy/residential-all/hyderabad?search_type=QS&search_location=CP1&lstAcn=CP_R&lstAcnId=1&src=CLUSTER&preference=S&selected_tab=1&city=269&res_com=R&property_type=R&isvoicesearch=N&keyword_suggest=hyderabad%3B&bedroom_num=3&fullSelectedSuggestions=hyderabad&strEntityMap=W3sidHlwZSI6ImNpdHkifSx7IjEiOlsiaHlkZXJhYmFkIiwiQ0lUWV8yNjksIFBSRUZFUkVOQ0VfUywgUkVTQ09NX1IiXX1d&texttypedtillsuggestion=hy&refine_results=Y&Refine_Localities=Refine%20Localities&action=%2Fdo%2Fquicksearch%2Fsearch&suggestion=CITY_269%2C%20PREFERENCE_S%2C%20RESCOM_R&searchform=1&price_min=null&price_max=null')
html = response.text
soup = BeautifulSoup(html, 'html.parser')
rows = []  # renamed from "list" so the built-in is not shadowed

dealer = soup.findAll('div', {'class': 'srpWrap'})

for item in dealer:
    try:
        p = item.contents[1].find_all("div", {"class": "_srpttl srpttl fwn wdthFix480 lf"})[0].text
    except (IndexError, AttributeError):
        p = ''
    try:
        d = item.contents[1].find_all("div", {"class": "lf f13 hm10 mb5"})[0].text
    except (IndexError, AttributeError):
        d = ''

    rows.append([p, d])

# newline='' avoids blank lines in the CSV; writerow (not writerows)
# writes one record at a time, otherwise each string in the row is
# iterated and split into single-character fields
with open('project.txt', 'w', encoding="utf-8", newline='') as file:
    writer = csv.writer(file)
    for row in rows:
        writer.writerow(row)

Upvotes: 2

Views: 1053

Answers (4)

SIM

Reputation: 22440

Try this. It will print the property names from pages 1 to 3.

import requests
from bs4 import BeautifulSoup

# the page number is the only part of the URL that changes
base_url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}"
for url in [base_url.format(i) for i in range(1, 4)]:
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    # listing titles are anchors whose id starts with "desc_"
    for title in soup.select("a[id^=desc_]"):
        print(title.text.strip())

Upvotes: 1

divyu garg

Reputation: 41

Here is the modified code, which is not receiving any data.

import time
import csv
import requests
from bs4 import BeautifulSoup

rows = []  # renamed from "list" so the built-in is not shadowed
for i in range(1, 101):
    time.sleep(2)  # pause between requests
    url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}".format(i)
    response = requests.get(url)
    html = response.text
    soup = BeautifulSoup(html, 'html.parser')

    dealer = soup.findAll('div', {'class': 'srpWrap'})

    for item in dealer:
        try:
            p = item.contents[1].find_all("div", {"class": "_srpttl srpttl fwn wdthFix480 lf"})[0].text
        except (IndexError, AttributeError):
            p = ''
        try:
            d = item.contents[1].find_all("div", {"class": "lf f13 hm10 mb5"})[0].text
        except (IndexError, AttributeError):
            d = ''

        rows.append([p, d])

# write once, after the loop, instead of rewriting the file on every page;
# writerow (not writerows) writes one record at a time
with open('project.txt', 'w', encoding="utf-8", newline='') as file:
    writer = csv.writer(file)
    for row in rows:
        writer.writerow(row)
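One hedged guess at the cause, not confirmed against the site: 99acres may return a stripped-down page to the default python-requests User-Agent. A minimal check with a browser-like header is worth ruling out first:

import requests
from bs4 import BeautifulSoup

# hypothetical diagnostic: fetch one page with a desktop-browser
# User-Agent and count the result containers
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36'}
url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-2"
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
print(len(soup.findAll('div', {'class': 'srpWrap'})))  # non-zero means results arrived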

Upvotes: 0

Pablo Gonzalez Portela

Reputation: 199

Yes, once you go to the subsequent pages the URL gets rewritten. However, the links are there; for example, the third page is https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-3
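Building on that, a minimal sketch of walking the rewritten URLs until the results run out, reusing the srpWrap container from the question; the empty-page stop condition is an assumption, since the total page count isn't known up front:

import requests
from bs4 import BeautifulSoup

page = 1
while True:
    url = "https://www.99acres.com/3-bhk-property-in-hyderabad-ffid-page-{0}".format(page)
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    listings = soup.findAll('div', {'class': 'srpWrap'})
    if not listings:  # assumption: an empty page marks the end of the results
        break
    print("page", page, "->", len(listings), "listings")
    page += 1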

Upvotes: 0

Severin

Reputation: 8588

I have never worked with BeautifulSoup, but here is a general approach: inspect the JSON-formatted response returned by the AJAX request the page makes when loading more results. Here is a sample using curl:

curl 'https://www.99acres.com/do/quicksearch/getresults_ajax' -H 'pragma: no-cache' -H 'origin: https://www.99acres.com' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.8,de;q=0.6,da;q=0.4' -H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36' -H 'content-type: application/x-www-form-urlencoded' -H 'accept: */*' -H 'cache-control: no-cache' -H 'authority: www.99acres.com' -H 'cookie: 99_ab=37; NEW_VISITOR=1; 99_FP_VISITOR_OFFSET=87; 99_suggestor=37; 99NRI=2; PROP_SOURCE=IP; src_city=-1; 99_citypage=-1; sl_prop=0; 99_defsrch=n; RES_COM=RES; kwp_last_action_id_type=2784981911907674%2CSEARCH%2C402278484965075610; 99_city=38; spd=%7B%22P%22%3A%7B%22a%22%3A%22R%22%2C%22b%22%3A%22S%22%2C%22c%22%3A%22R%22%2C%22d%22%3A%22269%22%2C%22j%22%3A%223%22%7D%7D; lsp=P; 99zedoParameters=%7B%22city%22%3A%22269%22%2C%22locality%22%3Anull%2C%22budgetBucket%22%3Anull%2C%22activity%22%3A%22SRP%22%2C%22rescom%22%3A%22RES%22%2C%22preference%22%3A%22BUY%22%2C%22nri%22%3A%22YES%22%7D; GOOGLE_SEARCH_ID=402278484965075610; _sess_id=1oFlv%2B%2FPAnDwWEEZiIGqNUTFrkARButJKqqEYu%2Fcv5WKMZCNYvpc89tievPnYatE28uBWbcd0PTpvCp9k3O20w%3D%3D; newRequirementsByUser=0' -H 'referer: https://www.99acres.com/3-bhk-property-in-hyderabad-ffid?orig_property_type=R&search_type=QS&search_location=CP1&pageid=QS' --data 'src=PAGING&static_search=1&nextbutton=Next%20%BB&page=2&button_next=2&lstAcnId=2784981911907674&encrypted_input=UiB8IFFTIHwgUyB8IzcjICB8IENQMSB8IzQjICB8IDMgIzE1I3wgIHwgMzExODQzMzMsMzExODM5NTUgfCAgfCAyNjkgfCM1IyAgfCBSICM0MCN8ICA%3D&lstAcn=SEARCH&sortby=&is_ajax=1' --compressed

You can then adjust the page parameter in the POST data to fetch each page.
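For reference, a rough Python translation of that curl call using requests; the endpoint and field names are copied from the capture above, but lstAcnId, encrypted_input, and the cookie are session-bound, so treat them as placeholders you must capture yourself (whether the endpoint responds without them is untested):

import requests

url = "https://www.99acres.com/do/quicksearch/getresults_ajax"
headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36",
    "content-type": "application/x-www-form-urlencoded",
}
for page in range(1, 4):
    data = {
        "src": "PAGING",
        "static_search": "1",
        "page": str(page),        # the parameter to vary per page
        "button_next": str(page),
        "is_ajax": "1",
        # lstAcnId, encrypted_input and the cookie are session-bound;
        # capture your own values from the browser's network tab
    }
    response = requests.post(url, headers=headers, data=data)
    print(page, response.status_code, len(response.text))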

Upvotes: 0
