divine_nature

Reputation: 89

get title inside link tag in HTML using beautifulsoup

I am extracting data from https://data.gov.au/dataset?organization=reservebankofaustralia&_groups_limit=0&groups=business and got the output I wanted, but the problem is that the output is truncated: I get Business Support an... and Reserve Bank of Aus...., not the complete text. I want to print the whole text, not "..." everywhere. I replaced lines 9 and 10 in the answer by jezrael (please refer to Fetching content from html and write fetched content in a specific format in CSV) with org = soup.find_all('a', {'class':'nav-item active'})[0].get('title') and groups = soup.find_all('a', {'class':'nav-item active'})[1].get('title'). When I run that separately I get the error: list index out of range. What should I use to extract the complete text? I also tried org = soup.find_all('span', class_="filtered pill"); run separately it returned a string, but it would not run with the whole code.

Upvotes: 1

Views: 1501

Answers (2)

jezrael

Reputation: 862641

The longer labels are stored in the title attribute; the shorter ones only in the element's text. So add a fallback if for each:

import urllib.request
import pandas as pd
from bs4 import BeautifulSoup

dfs = []
# webpage_urls is the list of page URLs built in the linked question
for i in webpage_urls:
    page = urllib.request.urlopen(i)
    soup = BeautifulSoup(page, "lxml")

    lobbying = {}
    #always only 2 active li, so select first by [0]  and second by [1]
    l = soup.find_all('li', class_="nav-item active")

    org = l[0].a.get('title')
    if org == '':
        org = l[0].span.get_text()

    groups = l[1].a.get('title')
    if groups == '':
        groups = l[1].span.get_text()

    data2 = soup.find_all('h3', class_="dataset-heading")
    for element in data2:
        lobbying[element.a.get_text()] = {}
    prefix = "https://data.gov.au"
    for element in data2:
        lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]
        lobbying[element.a.get_text()]["Organisation"] = org
        lobbying[element.a.get_text()]["Group"] = groups

    # build the frame once per page, after the dict is filled
    df = pd.DataFrame.from_dict(lobbying, orient='index') \
           .rename_axis('Titles').reset_index()
    dfs.append(df)

df = pd.concat(dfs, ignore_index=True)
df1 = df.drop_duplicates(subset = 'Titles').reset_index(drop=True)

# strip the trailing facet counts like "(35)"; raw strings and
# regex=True avoid the implicit-regex FutureWarning in newer pandas
df1['Organisation'] = df1['Organisation'].str.replace(r'\(\d+\)', '', regex=True)
df1['Group'] = df1['Group'].str.replace(r'\(\d+\)', '', regex=True)

print(df1.head())

                                              Titles  \
0                                     Banks – Assets   
1  Consolidated Exposures – Immediate and Ultimat...   
2  Foreign Exchange Transactions and Holdings of ...   
3  Finance Companies and General Financiers – Sel...   
4                   Liabilities and Assets – Monthly   

                                                link  \
0           https://data.gov.au/dataset/banks-assets   
1  https://data.gov.au/dataset/consolidated-expos...   
2  https://data.gov.au/dataset/foreign-exchange-t...   
3  https://data.gov.au/dataset/finance-companies-...   
4  https://data.gov.au/dataset/liabilities-and-as...   

                Organisation                            Group  
0  Reserve Bank of Australia  Business Support and Regulation  
1  Reserve Bank of Australia  Business Support and Regulation  
2  Reserve Bank of Australia  Business Support and Regulation  
3  Reserve Bank of Australia  Business Support and Regulation  
4  Reserve Bank of Australia  Business Support and Regulation  
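The count-stripping step at the end can be checked in isolation; a minimal sketch with hypothetical labels carrying the "(n)" suffix the facet links show:

```python
import pandas as pd

# A tiny frame mimicking the scraped output; the counts are made up.
df1 = pd.DataFrame({'Organisation': ['Reserve Bank of Australia (35)'],
                    'Group': ['Business Support and Regulation (12)']})

# Raw-string pattern with regex=True removes the trailing "(n)" plus
# any whitespace before it.
for col in ('Organisation', 'Group'):
    df1[col] = df1[col].str.replace(r'\s*\(\d+\)', '', regex=True)

print(df1.iloc[0]['Organisation'])  # Reserve Bank of Australia
```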

Upvotes: 2

Shashank

Reputation: 1135

I guess you are trying to do this. Each link here has a title attribute, so I simply checked whether the title attribute is present and, if it is, printed it.

There are blank lines because a few links have title="", so you can skip those with a conditional statement and then collect all the remaining titles.

>>> l = soup.find_all('a')
>>> for i in l:
...     if i.has_attr('title'):
...             print(i['title'])
... 
Remove
Remove
Reserve Bank of Australia

Business Support and Regulation













Creative Commons Attribution 3.0 Australia
>>> 
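The conditional filter mentioned above can be sketched like this, using a small hand-written HTML fragment standing in for the data.gov.au page (the markup below is hypothetical, not the real page):

```python
from bs4 import BeautifulSoup

# Three anchors: one with an empty title, one with a real title,
# one with no title attribute at all.
html = '''
<a href="#" title="">Remove</a>
<a href="#" title="Reserve Bank of Australia">RBA</a>
<a href="#">no title</a>
'''
soup = BeautifulSoup(html, 'html.parser')

# Keep only links whose title attribute exists and is non-empty.
titles = [a['title'] for a in soup.find_all('a')
          if a.has_attr('title') and a['title'].strip()]
print(titles)  # ['Reserve Bank of Australia']
```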

Upvotes: 1
