Reputation: 11
How can I parse every page of https://etherscan.io/token/generic-tokenholders2?a=0x6425c6be902d692ae2db752b3c268afadb099d3b&s=0&p=1 for ETH addresses, and then write them to a .txt file?
Upvotes: 0
Views: 4035
Reputation: 439
Okay, possibly off-topic, but I had a play around with this. (Mainly because I thought I might need something similar in future to grab stuff that Etherscan's APIs don't return...)
The following Python 2 code will grab what you're after. There's a hacky sleep in there to work around what I think is either slow page loading or some rate limiting imposed by Etherscan; I'm not sure which.
Data gets written to a .csv file; a plain text file wouldn't be much fun.
#!/usr/bin/env python
from __future__ import print_function

import csv
import time

import requests
from bs4 import BeautifulSoup

RESULTS = "results.csv"
URL = "https://etherscan.io/token/generic-tokenholders2?a=0x6425c6be902d692ae2db752b3c268afadb099d3b&s=0&p="

def getData(sess, page):
    url = URL + page
    print("Retrieving page", page)
    return BeautifulSoup(sess.get(url).text, 'html.parser')

def getPage(sess, page):
    # Each holders page contains one table; pull the text of every cell.
    table = getData(sess, str(int(page))).find('table')
    return [[X.text.strip() for X in row.find_all('td')]
            for row in table.find_all('tr')]

def main():
    sess = requests.Session()
    # Python 2: the csv module wants the file opened in binary mode.
    with open(RESULTS, 'wb') as f:
        wr = csv.writer(f, quoting=csv.QUOTE_ALL)
        wr.writerow(map(str, "Rank Address Quantity Percentage".split()))

        page = 0
        while True:
            page += 1
            data = getPage(sess, page)
            # Even pages that don't contain the data we're
            # after still contain a table.
            if len(data) < 4:
                break
            for row in data:
                wr.writerow(row)
            time.sleep(1)

if __name__ == "__main__":
    main()
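Since the question asked for a .txt file, you could always pull just the Address column back out of the finished CSV afterwards. A minimal Python 3 sketch, assuming the four-column layout written above (the file names are just examples):

```python
import csv

def addresses_to_txt(csv_path, txt_path):
    """Copy the Address column of the results CSV into a plain text file."""
    with open(csv_path, newline='') as src, open(txt_path, 'w') as dst:
        reader = csv.reader(src)
        next(reader)  # skip the header row
        for row in reader:
            if len(row) >= 2:
                dst.write(row[1] + '\n')  # Address is the second column
```

Call it as `addresses_to_txt("results.csv", "addresses.txt")` once the scrape has finished.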
I'm sure it's not the best Python in the world.
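If all you actually want is the addresses themselves rather than the whole table, a simple regex over the raw page source is another option, since an Ethereum address is just "0x" followed by 40 hex characters. A minimal sketch (the sample HTML below is made up for illustration):

```python
import re

# Ethereum addresses: "0x" followed by exactly 40 hex characters.
ETH_ADDR = re.compile(r'0x[0-9a-fA-F]{40}')

def extract_addresses(html):
    """Return the unique addresses found in a chunk of HTML, in page order."""
    seen = []
    for addr in ETH_ADDR.findall(html):
        if addr not in seen:
            seen.append(addr)
    return seen

sample = ('<td><a href="?a=0x6425c6be902d692ae2db752b3c268afadb099d3b">'
          '0x6425c6be902d692ae2db752b3c268afadb099d3b</a></td>')
print(extract_addresses(sample))
```

Note this will also match the token contract address and anything else on the page that looks like an address, so the table-scraping approach above is more precise.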
Upvotes: 2