krishna

Reputation: 405

How do I make the driver navigate to a new page in Selenium with Python?

I am trying to write a script to automate job applications on LinkedIn using Selenium and Python.

The steps are simple:

  1. open the LinkedIn page, enter the id and password, and log in
  2. open https://linkedin.com/jobs, enter the search keyword and location, and click search (directly opening links like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia gets stuck loading, probably due to the lack of some POST information from the previous page)
  3. the click opens the job search page, but this doesn't seem to update the driver, as it still searches on the previous page.
    import selenium
    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait
    from bs4 import BeautifulSoup
    import pandas as pd
    import yaml
    
    driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")
    
    url = "https://linkedin.com/"
    driver.get(url)
    content = driver.page_source
    stream = open("details.yaml", 'r')
    details = yaml.safe_load(stream)
    
    def login():
        username = driver.find_element_by_id("session_key")
        password = driver.find_element_by_id("session_password")
        username.send_keys(details["login_details"]["id"])
        password.send_keys(details["login_details"]["password"])
        driver.find_element_by_class_name("sign-in-form__submit-button").click()
    
    
    def get_experience():
        return "1%C22"
    
    login()
    
    jobs_url = f'https://www.linkedin.com/jobs/'
    driver.get(jobs_url)
    
    keyword = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-keyword-id-ember')]")
    location = driver.find_element_by_xpath("//input[starts-with(@id, 'jobs-search-box-location-id-ember')]")
    keyword.send_keys("python")
    location.send_keys("Australia")
    driver.find_element_by_xpath("//button[normalize-space()='Search']").click()
    
    WebDriverWait(driver, 10)
    
    # content = driver.page_source
    # soup = BeautifulSoup(content)
    # with open("a.html", 'w') as a:
    #     a.write(str(soup))
    
    print(driver.current_url)

driver.current_url returns https://linkedin.com/jobs/ instead of https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia as it should. I have tried printing the page content to a file; it is indeed from the previous jobs page and not from the search page. I have also tried to search for elements from the search page, like the experience filter and the Easy Apply button, but the search results in a not-found error.

I am not sure why this isn't working.

Any ideas? Thanks in advance.

UPDATE

It works if I directly open something like https://www.linkedin.com/jobs/search/?f_AL=True&f_E=2&keywords=python&location=Australia but not https://www.linkedin.com/jobs/search/?f_AL=True&f_E=1%2C2&keywords=python&location=Australia

The difference between these two links is that one takes only one value for the experience level while the other takes two. This means it's probably not a POST-values issue.
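
For reference, %2C is just the URL-encoded comma, so the second link asks for experience levels 1 and 2. A small sketch of building such a URL explicitly with urllib.parse (the parameter values are taken from the links above):

    from urllib.parse import urlencode

    params = {
        "f_AL": "True",      # Easy Apply filter
        "f_E": "1,2",        # experience levels; urlencode turns the comma into %2C
        "keywords": "python",
        "location": "Australia",
    }
    search_url = "https://www.linkedin.com/jobs/search/?" + urlencode(params)
    print(search_url)
    # https://www.linkedin.com/jobs/search/?f_AL=True&f_E=1%2C2&keywords=python&location=Australia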

Upvotes: 1

Views: 629

Answers (1)

Prophet

Reputation: 33361

You are getting and printing the current URL immediately after clicking the search button, before the page has changed with the response received from the server.
This is why it outputs https://linkedin.com/jobs/ instead of something like https://www.linkedin.com/jobs/search/?geoId=101452733&keywords=python&location=Australia.
WebDriverWait(driver, 10) or wait = WebDriverWait(driver, 20) will not cause any kind of delay the way time.sleep(10) does.
wait = WebDriverWait(driver, 20) only instantiates a wait object, an instance of the WebDriverWait class; the wait only happens when you call its until() method with a condition to wait for.
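
A minimal sketch of how that could look in the question's script (the /jobs/search fragment and the 10-second timeout are assumptions based on the URLs and code above); expected_conditions.url_contains makes the script block until the browser has actually navigated:

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver.find_element_by_xpath("//button[normalize-space()='Search']").click()

    # Block for up to 10 seconds until the URL contains the search path,
    # i.e. the results page has actually loaded.
    WebDriverWait(driver, 10).until(EC.url_contains("/jobs/search"))

    print(driver.current_url)

Waiting on a condition such as EC.presence_of_element_located for an element that only exists on the results page would work equally well here.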

Upvotes: 1
