exos

Reputation: 13

Scraping .aspx site after click

I am attempting to scrape scheduling data for my squadron from: https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9

I have figured out how to extract the data with BeautifulSoup using:

import urllib2
import bs4 as bs

url = 'https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9'
html = urllib2.urlopen(url).read()
soup = bs.BeautifulSoup(html, 'lxml')
table = soup.find('table')
print(table.text)

However, the table is hidden until a date is selected (if other than the current day) and the 'View Schedule' button is pressed.

How can I modify my code to 'press' the 'View Schedule' button so I can then scrape the data? Bonus points if the code can also choose a date!

I attempted to use:

import bs4 as bs
from selenium import webdriver

driver = webdriver.Chrome("/users/base/Downloads/chromedriver")
driver.get("https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9")
button = driver.find_element_by_id('btnViewSched')
button.click()

which successfully opens Chrome and 'clicks' the button, but I can't scrape the result from this, as the address is unchanged.
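
From what I've read, the post-click page should be available from the driver itself via driver.page_source, even though the URL stays the same, so something like this might work (untested):

soup = bs.BeautifulSoup(driver.page_source, 'lxml')  # DOM after the click
table = soup.find('table')
print(table.text)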

Upvotes: 1

Views: 1352

Answers (3)

Rajat

Reputation: 118

As I read your problem, you need to use Selenium to scrape .aspx pages where input is required.

Read this article; it will help you scrape data from an .aspx page with Selenium.
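
A minimal sketch of that approach, filling in a date before the click (the date box id 'txtDate' is a guess, check the page source for the real one; 'btnViewSched' and 'dgEvents' come from the question and the answers below):

from selenium import webdriver

driver = webdriver.Chrome('/path/to/chromedriver')
driver.get('https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9')

# 'txtDate' is a hypothetical id; inspect the form for the actual one
date_box = driver.find_element_by_id('txtDate')
date_box.clear()
date_box.send_keys('04/23/2019')

driver.find_element_by_id('btnViewSched').click()
print(driver.find_element_by_id('dgEvents').text)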

Upvotes: 1

Sers

Reputation: 12255

On "View Schedule" click, request with same url but with data btnViewSched=View Schedule and tokens are send. Here code, that collect table data in list of maps format:

import requests
from bs4 import BeautifulSoup

headers = {
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/73.0.3683.86 Safari/537.36',
    'DNT': '1',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,'
              'application/signed-exchange;v=b3',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'ru,en-US;q=0.9,en;q=0.8,tr;q=0.7',
}
response = requests.get('https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9', headers=headers)
assert response.ok

page = BeautifulSoup(response.text, "lxml")
# get __VIEWSTATE, __EVENTVALIDATION and __VIEWSTATEGENERATOR for further requests
__VIEWSTATE = page.find("input", attrs={"id": "__VIEWSTATE"}).attrs["value"]
__EVENTVALIDATION = page.find("input", attrs={"id": "__EVENTVALIDATION"}).attrs["value"]
__VIEWSTATEGENERATOR = page.find("input", attrs={"id": "__VIEWSTATEGENERATOR"}).attrs["value"]

# View Schedule click set here
data = {
  '__EVENTTARGET': '',
  '__EVENTARGUMENT': '',
  '__VIEWSTATE': __VIEWSTATE,
  '__VIEWSTATEGENERATOR': __VIEWSTATEGENERATOR,
  '__EVENTVALIDATION': __EVENTVALIDATION,
  'btnViewSched': 'View Schedule',
  'txtNameSearch': ''
}
# request with params
response = requests.post('https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9', headers=headers, data=data)
assert response.ok

page = BeautifulSoup(response.text, "lxml")
# get table headers to map as a keys in result
table_headers = [td.text.strip() for td in page.select("#dgEvents tr:first-child td")]
# get all rows, without table headers
table_rows = page.select("#dgEvents tr:not(:first-child)")

result = []
for row in table_rows:
    table_columns = row.find_all("td")
    # pair each header with its cell to build one dict per row
    row_result = {header: column.text.strip()
                  for header, column in zip(table_headers, table_columns)}
    result.append(row_result)

for r in result:
    print(r)

print("the end")

Example output:

{'TYPE': 'Flight', 'VT': 'VT-9', 'Brief': '07:45', 'EDT': '09:45', 'RTB': '11:15', 'Instructor': 'JARVIS, GRANT M [LT]', 'Student': 'LENNOX, KEVIN I [ENS]', 'Event': 'BI4101', 'Hrs': '1.5', 'Remarks': '2 HR BRIEF MASS BRIEF', 'Location': ''}
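
To also choose a date (the bonus part of the question), the same POST can carry the date form field. The field name below is a guess; read the real one off the form's input elements on the page:

# 'txtDate' is a hypothetical field name; inspect the form's inputs for the real one
data['txtDate'] = '04/23/2019'
response = requests.post('https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9', headers=headers, data=data)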

Upvotes: -1

Alderven
Alderven

Reputation: 8270

You can use pure Selenium to get the schedule:

from selenium import webdriver

driver = webdriver.Chrome('chromedriver.exe')
driver.get("https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9")
button = driver.find_element_by_id('btnViewSched')
button.click()
print(driver.find_element_by_id('dgEvents').text)

Output:

TYPE VT Brief EDT RTB Instructor Student Event Hrs Remarks Location
Flight VT-9 07:45 09:45 11:15 JARVIS, GRANT M [LT] LENNOX, KEVIN I [ENS] BI4101 1.5 2 HR BRIEF MASS BRIEF  
Flight VT-9 07:45 09:45 11:15 MOYNAHAN, WILLIAM P [CDR] FINNERAN, MATTHEW P [1stLt] BI4101 1.5 2 HR BRIEF MASS BRIEF  
Flight VT-9 07:45 12:15 13:45 JARVIS, GRANT M [LT] TAYLOR, ADAM R [1stLt] BI4101 1.5 2 HR BRIEF MASS BRIEF @ 0745 W/ JARVIS MEI OPS  
Flight VT-9 07:45 12:15 13:45 MOYNAHAN, WILLIAM P [CDR] LOW, TRENTON G [ENS] BI4101 1.5 2 HR BRIEF MASS BRIEF @ 0745 W/ MOYNAHAN MEI OPS  
Watch VT-9   00:00 14:00 ANDERSON, LAURA [LT]   ODO (ON CALL) 14.0    
Watch VT-9   00:00 14:00 ANDERSON, LAURA [LT]   ODO (ON CALL) 14.0    
Watch VT-9   00:00 23:59 ANDERSON, LAURA [LT]   ODO (ON CALL) 24.0    
Watch VT-9   00:00 23:59 ANDERSON, LAURA [LT]   ODO (ON CALL) 24.0    
Watch VT-9   07:00 19:00   STUY, JOHN [LTJG] DAY IWO 12.0    
Watch VT-9   19:00 07:00   STRACHAN, ALLYSON [LTJG] IWO 12.0    
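
If the table renders slowly after the postback, an explicit wait avoids reading it too early. A sketch using Selenium's WebDriverWait:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome('chromedriver.exe')
driver.get('https://www.cnatra.navy.mil/scheds/schedule_data.aspx?sq=vt-9')
driver.find_element_by_id('btnViewSched').click()

# wait up to 10 seconds for the schedule table to appear
table = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'dgEvents')))
print(table.text)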

Upvotes: 1
