Reputation: 105
I am writing Python code to extract all the URLs from an input file containing Tweet text from Twitter. While doing so, I realized that several of the URLs extracted into the Python list had special characters or punctuation at the end, which prevented me from parsing them further to get the base URL. My question is: how do I identify and remove special characters from the end of every URL in my list?
Current Output:
['https://twitter.com/GVNyqWEu5u', "https://twitter.com/GVNyqWEu5u'", 'https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
Desired Output:
['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
Note that not all elements in the 'Current Output' list have special characters or punctuation at the end. The task is to identify and remove them only from the list elements that have them.
I am using the following regex to extract Twitter URLs from the Tweet text: lst = re.findall(r'(https?://[^\s]+)', text)
Can I remove the special characters / punctuation towards the end of the URL, in this step itself ?
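In case it helps, this is the kind of two-step cleanup I have in mind if it cannot be done within the findall itself (a sketch using string.punctuation; note that this would also strip a legitimate trailing slash):

import string

lst = ['https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
# Strip any run of punctuation characters from the end of each URL
cleaned = [url.rstrip(string.punctuation) for url in lst]
print(cleaned)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']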
Full Code:
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
from socket import timeout
import ssl
import re
import csv

# Skip SSL certificate verification
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

count = 0
file = "Test.CSV"
with open(file, 'r', encoding='utf-8') as f, open('output_themes_1.csv', 'w', newline='', encoding='utf-8') as ofile:
    next(f)  # skip the header row
    reader = csv.reader(f)
    writer = csv.writer(ofile)
    fir = 'S.No.', 'Article_Id', 'Validity', 'Content', 'Geography', 'URL'
    writer.writerow(fir)
    for line in reader:
        count = count + 1
        text = line[5]
        lst = re.findall(r'(https?://[^\s]+)', text)
        if not lst:
            x = count, line[0], 'Empty List', text, line[8], line[6]
            print(x)
            writer.writerow(x)
        else:
            try:
                for url in lst:
                    try:
                        html = urllib.request.urlopen(url, context=ctx, timeout=60).read()
                        #html = urllib.request.urlopen(urllib.parse.quote(url, errors='ignore'), context=ctx).read()
                        soup = BeautifulSoup(html, 'html.parser')
                        title = soup.title.string
                        str_title = str(title)
                        if 'Twitter' in str_title:
                            # Skip Twitter's own pages
                            if len(lst) > 1: break
                            else: continue
                        else:
                            y = count, line[0], 'Parsed', str_title, line[8], url
                            print(y)
                            writer.writerow(y)
                    except UnicodeEncodeError as e:
                        # Retry with non-ASCII characters dropped from the URL
                        b_url = url.encode('ascii', errors='ignore')
                        n_url = b_url.decode("utf-8")
                        try:
                            html = urllib.request.urlopen(n_url, context=ctx, timeout=90).read()
                            soup = BeautifulSoup(html, 'html.parser')
                            title = soup.title.string
                            str_title = str(title)
                            if 'Twitter' in str_title:
                                if len(lst) > 1: break
                                else: continue
                            else:
                                z = count, line[0], 'Parsed_2', str_title, line[8], url
                                print(z)
                                writer.writerow(z)
                        except Exception as e:
                            a = count, line[0], str(e), text, line[8], url
                            print(a)
                            writer.writerow(a)
            except Exception as e:
                b = count, line[0], str(e), text, line[8], url
                print(b)
                writer.writerow(b)
print('Total Rows Analyzed:', count)
Upvotes: 1
Views: 3593
Reputation: 26084
Assuming the special characters occur at the end of the string, you may use:
import re

mydata = ['https://twitter.com/GVNyqWEu5u', "https://twitter.com/GVNyqWEu5u'", 'https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
mydata = [re.sub(r'[^a-zA-Z0-9]+$', '', item) for item in mydata]
print(mydata)
Prints:
['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']
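If you want to do this in the extraction step itself, as you asked, you can apply the same substitution to each match from findall. A sketch, assuming your original pattern; note that stripping trailing non-alphanumerics would also remove a legitimate trailing slash:

import re

text = 'See https://twitter.com/GVNyqWEu5u@# and https://twitter.com/GVNyqWEu5u'
# Extract URLs, then strip any trailing run of non-alphanumeric characters
lst = [re.sub(r'[^a-zA-Z0-9]+$', '', url) for url in re.findall(r'(https?://[^\s]+)', text)]
print(lst)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']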
Upvotes: 1
Reputation: 3669
You could try this -
lst = [re.sub(r'[=" ]+$', '', i) for i in re.findall(r'(https?://[^\s]+)', text)]
You can add more characters to the character class in the sub according to your requirements.
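For instance, to cover the trailing characters in your sample output (the single quote, @, # and double quote), the class might look like this; a sketch, so adjust the set to your data:

import re

text = 'https://twitter.com/GVNyqWEu5u@# https://twitter.com/GVNyqWEu5u"'
lst = [re.sub(r'[=\'"@# ]+$', '', i) for i in re.findall(r'(https?://[^\s]+)', text)]
print(lst)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']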
Upvotes: 0
Reputation: 1574
Assuming your list is called urls:
def remove_special_chars(url, char_list=None):
    if char_list is None:
        # Build your own default list here
        char_list = ['#', '%']
    for character in char_list:
        if url.endswith(character):
            return remove_special_chars(url[:-1], char_list)
    return url

urls = [remove_special_chars(url) for url in urls]
If you want to get rid of a different set of characters, just change the default value or pass a proper list as an argument.
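For your sample data, that could look like this (the character list here is just an illustration):

urls = ['https://twitter.com/GVNyqWEu5u@#', 'https://twitter.com/GVNyqWEu5u"']
urls = [remove_special_chars(url, ["'", '"', '@', '#']) for url in urls]
print(urls)  # ['https://twitter.com/GVNyqWEu5u', 'https://twitter.com/GVNyqWEu5u']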
Upvotes: 0