Reputation: 99
I am working with a large dataset that iteratively fetches n child URLs for a given parent URL.
I initially used Excel to record the data (really just to test that my code works), but found that approach impractical because the output data is huge.
For example, I have two sets of data:
amazon.com: ['a','b','c','d','e']
a: ['k','j','e','f']
Here amazon.com is the parent URL and the list holds its child URLs; a then becomes a parent URL with its own child URLs. What I actually need is a dataframe like:
            a  b  c  d  e  k  j  f
amazon.com  1  1  1  1  1
a                       1  1  1  1
where 1 indicates, for example, that a is a child of amazon.com.
The problem is that I won't have the data laid out as shown above; it is obtained dynamically as I crawl the website.
So the flow would be:
1. Open a website URL.
2. Record that URL (the parent URL - this is where we get the URL).
3. Record all the URLs present on the page (the child URLs corresponding to that parent - these populate the list/dictionary and hence the dataframe).
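A minimal sketch of that flow, assuming a hypothetical `fetch_children` function standing in for whatever the real crawler does; the dictionary is filled in as each page is visited, and the 0/1 dataframe is built once crawling is done:

```python
import pandas as pd

def fetch_children(url):
    # Hypothetical stand-in for the real crawler; returns the child
    # URLs found on the page at `url`.
    sample = {'amazon.com': ['a', 'b', 'c', 'd', 'e'],
              'a': ['k', 'j', 'e', 'f']}
    return sample.get(url, [])

links = {}                       # parent URL -> list of child URLs
to_visit = ['amazon.com', 'a']   # queue of URLs to crawl
for url in to_visit:
    links[url] = fetch_children(url)   # record parent and its children

# Columns are the sorted union of all child URLs, so there are
# no duplicate column headers.
cols = sorted({c for children in links.values() for c in children})
df = pd.DataFrame(0, index=list(links), columns=cols)
for parent, children in links.items():
    df.loc[parent, children] = 1   # mark each child of this parent
```

In a real crawl, `to_visit` would be extended with the newly found child URLs (and already-visited URLs skipped); the dataframe construction at the end stays the same.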
As can be seen, there are no duplicate column headers.
Can someone help me out with this?
Upvotes: 0
Views: 1722
Reputation: 1446
Hope this helps:
import pandas as pd

xx = {
    'amazon.com': ['a', 'b', 'c', 'd', 'e'],
    'a': ['k', 'j', 'e', 'f']
}

# Collect the unique child URLs, sorted, to use as column headers.
all_vals = sorted({item for items in xx.values() for item in items})

df = pd.DataFrame(index=xx.keys(), columns=all_vals)

def is_exist(idx, col):
    # 1 if `col` is a child URL of parent `idx`, else 0
    return int(col in xx[idx])

for idx in df.index:
    for col in df.columns:
        df.loc[idx, col] = is_exist(idx, col)
df
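The nested loops can also be replaced with a more vectorized sketch: build one Series of 1s per parent and let pandas align the columns automatically (missing cells come out as NaN, which we then fill with 0):

```python
import pandas as pd

xx = {
    'amazon.com': ['a', 'b', 'c', 'd', 'e'],
    'a': ['k', 'j', 'e', 'f']
}

# One Series of 1s per parent, keyed by its child URLs; the DataFrame
# constructor takes the union of all indices, and .T puts parents on rows.
df = pd.DataFrame({k: pd.Series(1, index=v) for k, v in xx.items()}).T
df = df.reindex(sorted(df.columns), axis=1).fillna(0).astype(int)
```

This avoids the cell-by-cell loop, which matters once the crawl produces thousands of parent/child pairs.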
Upvotes: 2