Matthew Cudby

Reputation: 35

BeautifulSoup Steam market web scraping errors

I'm attempting to write a program, using Python and BeautifulSoup4, that looks at the Steam market front page for a certain game (in this case Rust) and takes the name and price of each item. So far I have managed to get this working for the first page (each page only shows 10 items). However, when I change the web address to the second page, I get the exact same output as the first page.

The URL I'm using for the first page is: https://steamcommunity.com/market/search?appid=252490#p1_popular_desc

The second page is: https://steamcommunity.com/market/search?appid=252490#p2_popular_desc

The code is:

import bs4 as bs
import urllib.request

for web_page in range(1, 3):
    print('webpage number is: ' + str(web_page))
    if web_page == 1:
        url = "https://steamcommunity.com/market/search?appid=252490#p1_popular_desc"
        print(url)
        sauce = urllib.request.urlopen(url).read()
        soup = bs.BeautifulSoup(sauce, 'lxml')

    if web_page == 2:
        url = "https://steamcommunity.com/market/search?appid=252490#p2_popular_desc"
        print(url)
        sauce = urllib.request.urlopen(url).read()
        soup = bs.BeautifulSoup(sauce, 'lxml')

    # print the name and price of every listing on the current page
    for div in soup.find_all('a', class_='market_listing_row_link'):
        span = div.find('span', class_='normal_price')
        span2 = div.find('span', class_='market_listing_item_name')
        print(span2.text)
        print(span.text)

I'm not sure what's wrong here; any help would be welcome.

Upvotes: 1

Views: 903

Answers (1)

Cave Man

Reputation: 76

Try this: you will need to install Selenium (pypi.python.org/pypi/selenium) and geckodriver for Firefox. (Happy scripting :>)

# Mossein~King (I'm here to help)
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup

# for testing purposes only
driver = webdriver.Firefox()

url = ''  # put the page you want to scrape here
driver.get(url)

# number of pages you would like to interact with
pages = 2
for x in range(pages):
    pagesource = driver.page_source
    soup = BeautifulSoup(pagesource, 'lxml')
    # do your stuff

    # go to the next page
    # example if the next button is <a class='MosseinKing Is Awesome'>
    driver.find_element(By.XPATH, "//a[@class='MosseinKing Is Awesome']").click()
    # wait 2 seconds for the page to load
    time.sleep(2)

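For what it's worth, the root cause of the original problem is that everything after the # in those URLs is a fragment, which the browser handles locally and is never sent to the server, so both urllib requests fetch the identical first page; the listings are then filled in by JavaScript, which is why a real browser driven by Selenium sees them. If you want to avoid running a browser, a minimal sketch follows that fetches the AJAX endpoint the market page itself calls. Note that the /market/search/render/ URL, its start/count parameters, and the results_html field are assumptions about an unofficial interface, so they may change or be rate limited.

import json
import time
import urllib.request

import bs4 as bs

for page in range(2):  # first two pages, 10 listings each
    # unofficial endpoint the market page polls via AJAX (assumption, not a documented API)
    url = ('https://steamcommunity.com/market/search/render/'
           '?appid=252490&start={}&count=10'.format(page * 10))
    raw = urllib.request.urlopen(url).read().decode('utf-8')
    data = json.loads(raw)
    # 'results_html' holds the same markup the page injects client-side
    soup = bs.BeautifulSoup(data['results_html'], 'lxml')
    for row in soup.find_all('a', class_='market_listing_row_link'):
        name = row.find('span', class_='market_listing_item_name')
        price = row.find('span', class_='normal_price')
        print(name.text)
        print(price.text.strip())
    time.sleep(1)  # be polite between requests

This keeps the same BeautifulSoup selectors as the question, just applied to the HTML fragment the server returns as JSON instead of the static page.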
Upvotes: 1
