Sandiph Bamrel

Reputation: 147

Why am I unable to request a certain webpage using Python requests?

I can't get this webpage. When I call requests.get(url) it never progresses; I get no HTTP errors, it just hangs as if it were retrying forever.

I have tried using a Session and custom headers, but neither worked for me.

from bs4 import BeautifulSoup as bs
import requests


url = "https://www.gogoanime1.com/watch/hangyakusei-million-arthur-2nd-season/episode/episode-1"
epn = int(input("enter which episode link is it?: "))  # currently unused

# reuse one session with a browser-like User-Agent for every request
newses = requests.Session()
newses.headers.update({'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'})
ssn = newses.get(url)
page = ssn.text

# locate the button block and pull out the "Download" link
soup = bs(page, 'html.parser')
a = soup.find('div', {'class': 'vmn-buttons'})
links = a.find_all('a')

dl = None
for link in links:
    print(link)
    if link.text == "Download":
        print("found")
        dl = link['href']
        break
print(dl)

# this is the request that hangs
bom = newses.get(dl)
print(bom.text)

I want at least a response, but it hangs there all day long. How can I access the page like a real user and scrape its content?
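One way to turn an endless hang into a visible error is to pass a timeout, so a request that never completes raises instead of blocking forever. Below is a minimal stdlib-only sketch (using urllib rather than requests, so it runs without any third-party package); the throwaway local server that accepts a connection but never replies is purely a stand-in for an unresponsive site:

    # A request to a server that never answers will block forever unless a
    # timeout is set. This local server accepts the connection, reads a bit
    # of the request, then holds the socket open without responding.
    import socket
    import threading
    import urllib.request

    def silent_server(sock):
        conn, _ = sock.accept()
        conn.recv(1024)           # read part of the request...
        threading.Event().wait()  # ...then never send a response

    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    threading.Thread(target=silent_server, args=(sock,), daemon=True).start()

    try:
        # timeout=1 makes the blocked read raise after one second
        urllib.request.urlopen(f"http://127.0.0.1:{port}/", timeout=1)
        timed_out = False
    except OSError:  # socket.timeout / URLError are both OSError subclasses
        timed_out = True

    print(timed_out)

With requests the equivalent would be requests.get(url, timeout=...), which raises requests.exceptions.Timeout instead of hanging.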

Upvotes: 0

Views: 132

Answers (1)

Sandiph Bamrel

Reputation: 147

I didn't notice that the link was actually a file, not a webpage to parse. The request took so long because it was downloading the whole file.
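A way to catch this situation early is to inspect the response headers before reading the body: if Content-Type is not HTML, the URL is a file download, so stream it in chunks instead of handing it to BeautifulSoup. This is a hedged stdlib-only sketch; the tiny local server serving fake "video/mp4" bytes is just a stand-in for the real download URL:

    # Inspect Content-Type before reading the body; stream non-HTML
    # responses in chunks so a large file never sits in memory at once.
    import http.server
    import threading
    import urllib.request

    class FileHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"\x00" * 1024  # stand-in for a video file
            self.send_response(200)
            self.send_header("Content-Type", "video/mp4")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # silence per-request logging

    server = http.server.HTTPServer(("127.0.0.1", 0), FileHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/episode-1"

    with urllib.request.urlopen(url, timeout=5) as resp:
        ctype = resp.headers.get("Content-Type", "")
        if "html" in ctype:
            print("parse with BeautifulSoup")
        else:
            total = 0
            while chunk := resp.read(8192):  # read in 8 KiB chunks
                total += len(chunk)
            print(f"downloaded {total} bytes of {ctype}")

With requests, the same idea is requests.get(url, stream=True), checking response.headers["Content-Type"], then iterating response.iter_content(chunk_size=8192).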

Upvotes: 1
