Reputation: 414
I'm trying to generate a dictionary from a pandas dataframe. Specifically, I need to:
1. Take the first x columns and use the data points in each of their rows, together, as keys.
2. Build a dictionary where the remaining data points in each row become that key's values, collected as a list.
Let's use this sample dataframe for the sake of simplicity.
import pandas as pd

df = pd.DataFrame([
    {'c1': 'a1', 'c2': 110, 'c3': 'xyz', 'c4': 24},
    {'c1': 'b2', 'c2': 100, 'c3': 'jdf', 'c4': 15},
    {'c1': 'a1', 'c2': 110, 'c3': 'kjl', 'c4': 125},
    {'c1': 'b2', 'c2': 100, 'c3': 'abc', 'c4': 71},
])
   c1   c2   c3   c4
0  a1  110  xyz   24
1  b2  100  jdf   15
2  a1  110  kjl  125
3  b2  100  abc   71
new_dict = some code
new_dict
{('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}
I've tried many things, including building a list of key tuples and assigning those unique tuples as keys of a new dictionary (with empty lists as values), but I can't then populate the values.
I'm able to compile a dictionary with a single column as the key, and everything else as needed, like this:
test_dict = {}
for index, row in df.iterrows():
    if row['c1'] in test_dict:
        test_dict[row['c1']].append([row['c2'], row['c3'], row['c4']])
    else:
        test_dict[row['c1']] = []
        test_dict[row['c1']].append([row['c2'], row['c3'], row['c4']])
But I can't make the jump to combining multiple columns as the key.
Upvotes: 1
Views: 2250
Reputation: 61910
Assuming the following DataFrame:
import pandas as pd

df = pd.DataFrame([
    {'c1': 'a1', 'c2': 110, 'c3': 'xyz', 'c4': 24},
    {'c1': 'b2', 'c2': 100, 'c3': 'jdf', 'c4': 15},
    {'c1': 'a1', 'c2': 110, 'c3': 'kjl', 'c4': 125},
    {'c1': 'b2', 'c2': 100, 'c3': 'abc', 'c4': 71},
])
You could groupby, aggregate, and then convert to a dictionary (to_dict):
groups = df.groupby(['c1', 'c2']).apply(lambda x: x[['c3', 'c4']].values.tolist()).to_dict()
print(groups)
Output
{('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}
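If you prefer to avoid apply, the same dictionary can also be built by iterating the groups directly. A minimal sketch, assuming the sample df above:

# Sketch: build the dict from the groups themselves instead of using apply
groups = {
    key: grp[['c3', 'c4']].values.tolist()
    for key, grp in df.groupby(['c1', 'c2'])
}
print(groups)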
Upvotes: 1
Reputation: 3591
What you seem to want, generalized so that the first x columns form the key, can be done with:
x = 2
key_cols = list(df.columns[:x])
value_cols = df.columns[x:]
new_dict = df.groupby(key_cols).apply(lambda sub_df: sub_df[value_cols].values.tolist()).to_dict()
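As a usage sketch with the sample DataFrame above, setting x = 1 should reproduce the single-column dictionary from the question, with scalar keys instead of tuples:

# Usage sketch: x = 1 gives a single-column key, everything else as values
x = 1
key_cols = list(df.columns[:x])
value_cols = df.columns[x:]
new_dict = df.groupby(key_cols).apply(lambda sub_df: sub_df[value_cols].values.tolist()).to_dict()
print(new_dict)
# {'a1': [[110, 'xyz', 24], [110, 'kjl', 125]], 'b2': [[100, 'jdf', 15], [100, 'abc', 71]]}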
Upvotes: 0
Reputation: 150745
You can try this groupby:
(df.groupby(['c1', 'c2'])
   .apply(lambda x: x[['c3', 'c4']].values)
   .to_dict()
)
Output:
{('a1', 110): array([['xyz', 24],
       ['kjl', 125]], dtype=object),
 ('b2', 100): array([['jdf', 15],
       ['abc', 71]], dtype=object)}
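Note that .values leaves NumPy arrays as the dictionary values. If you want plain nested lists, as in the question's expected output, appending .tolist() is enough (a minimal tweak, matching the first answer above):

# Sketch: add .tolist() to get plain Python lists instead of arrays
(df.groupby(['c1', 'c2'])
   .apply(lambda x: x[['c3', 'c4']].values.tolist())
   .to_dict()
)
# {('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}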
Upvotes: 0
Reputation: 1907
test_dict = {}
for index, row in df.iterrows():
    if (row['c1'], row['c2']) in test_dict:
        test_dict[(row['c1'], row['c2'])].append([row['c3'], row['c4']])
    else:
        test_dict[(row['c1'], row['c2'])] = [[row['c3'], row['c4']]]
test_dict
{('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}
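As a side note, the if/else bookkeeping can be collapsed with dict.setdefault (or collections.defaultdict); a minimal sketch of the same loop:

# Sketch: setdefault creates the empty list on first sight of a key
test_dict = {}
for index, row in df.iterrows():
    test_dict.setdefault((row['c1'], row['c2']), []).append([row['c3'], row['c4']])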
Upvotes: 0