Reputation: 509
I would like to match the values in one column against the other columns of a dataframe. Below is an example:
        date  tableNameFrom  tableNameJoin  attributeName
1  29-03-2019  film           language       [film.languageId, language.languageID]
2  30-03-2019  inventory      rental         [invetory.inventoryId, rental.filmId]
In the example above, I would like to match tableNameFrom and tableNameJoin with the entries in attributeName, while keeping the date column. Here is the desired output:
date tableName attributeName
1 29-03-2019 film languageId
2 29-03-2019 language languageID
3 30-03-2019 inventory inventoryId
4 30-03-2019 rental filmId
Any idea what I should do? Thank you.
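For anyone who wants to reproduce this, the sample frame can be built like so (assuming attributeName holds Python lists of strings):

```python
import pandas as pd

# Reconstruction of the sample frame; note that "invetory" is a typo
# carried over verbatim from the original data
df = pd.DataFrame({
    'date': ['29-03-2019', '30-03-2019'],
    'tableNameFrom': ['film', 'inventory'],
    'tableNameJoin': ['language', 'rental'],
    'attributeName': [['film.languageId', 'language.languageID'],
                      ['invetory.inventoryId', 'rental.filmId']],
})
```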
Upvotes: 0
Views: 109
Reputation: 42916
This is basically a melt problem, but with an intermediate step to clean up your attributeName column:
# Keep only the attribute part of each "table.attribute" entry,
# then expand the result into two new columns
df = pd.concat([df, df.attributeName.apply(lambda x: ' '.join(i.split('.')[1] for i in x)).str.split(expand=True)], axis=1)\
.drop('attributeName', axis=1)\
.rename({0:'attribute1', 1:'attribute2'}, axis=1)
Which gives us the following dataframe:
print(df)
         date tableNameFrom tableNameJoin   attribute1  attribute2
0  29-03-2019          film      language   languageId  languageID
1  30-03-2019     inventory        rental  inventoryId      filmId
Then we can use melt twice to get the desired output:
df1 = df.melt(id_vars='date', value_vars=['tableNameFrom', 'tableNameJoin'], value_name='tableName').drop('variable', axis=1)
df2 = df.melt(id_vars='date', value_vars=['attribute1', 'attribute2'], value_name='attributeName').drop(['variable', 'date'], axis=1)
df = pd.concat([df1, df2], axis=1)
Which gives us the final output:
print(df)
         date  tableName attributeName
0  29-03-2019       film    languageId
1  30-03-2019  inventory   inventoryId
2  29-03-2019   language    languageID
3  30-03-2019     rental        filmId
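For completeness, here is a self-contained sketch of this melt approach (the sample frame is rebuilt inline, with attributeName as a list of strings; the split is written as a per-element comprehension so each entry keeps its attribute part):

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['29-03-2019', '30-03-2019'],
    'tableNameFrom': ['film', 'inventory'],
    'tableNameJoin': ['language', 'rental'],
    'attributeName': [['film.languageId', 'language.languageID'],
                      ['invetory.inventoryId', 'rental.filmId']],
})

# Step 1: keep only the attribute part of each "table.attribute" entry
# and expand into two columns
attrs = (df.attributeName
           .apply(lambda x: ' '.join(i.split('.')[1] for i in x))
           .str.split(expand=True)
           .rename({0: 'attribute1', 1: 'attribute2'}, axis=1))
df = pd.concat([df.drop('attributeName', axis=1), attrs], axis=1)

# Step 2: melt twice -- once for the table names, once for the attributes --
# and glue the two long frames back together
df1 = df.melt(id_vars='date', value_vars=['tableNameFrom', 'tableNameJoin'],
              value_name='tableName').drop('variable', axis=1)
df2 = df.melt(id_vars='date', value_vars=['attribute1', 'attribute2'],
              value_name='attributeName').drop(['variable', 'date'], axis=1)
out = pd.concat([df1, df2], axis=1)
```

The two melts produce rows in the same order (all tableNameFrom/attribute1 rows first, then tableNameJoin/attribute2), which is what makes the positional concat safe here.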
Upvotes: 1
Reputation: 12406
Here is one possible approach that avoids using apply.
Raw DataFrame
date tableNameFrom tableNameJoin attributeName
0 29-03-2019 film language [film.languageId, language.languageID]
1 30-03-2019 inventory rental [invetory.inventoryId, rental.filmId]
Step 1 - (1) Split attributeName into 2 separate columns using , as the separator, (2) remove the unwanted square brackets ([ and ]), (3) drop the unwanted columns
# 1
df[['tableName','attributeName2']] = df['attributeName'].str.split(',', expand=True)
# 2
df['tableName'] = df['tableName'].str.strip('[')
df['attributeName2'] = df['attributeName2'].str.strip(']')
# 3
df.drop(['attributeName','tableNameFrom','tableNameJoin'], axis=1, inplace=True)
print(df)
date tableName attributeName2
0 29-03-2019 film.languageId language.languageID
1 30-03-2019 invetory.inventoryId rental.filmId
Step 2 - Finally, use this SO post to split the rows from the columns tableName and attributeName2 into separate columns
df_match = (df.set_index(['date'])
.stack()
.str.split('.', expand=True)
.stack()
.unstack(-1)
.reset_index(-1, drop=True)
.reset_index()
)
df_match.columns = ['date','tableName','attributeName']
print(df_match)
date tableName attributeName
0 29-03-2019 film languageId
1 29-03-2019 language languageID
2 30-03-2019 invetory inventoryId
3 30-03-2019 rental filmId
Details
.set_index(['date']) - set the column that is to be kept as the index of the DataFrame
.stack() - stack the rows
.str.split('.', expand=True) - call str.split on the period (.) to split those cells into separate columns
.stack() - get rid of NULL values, so call stack again
.unstack(-1) - unstack on the last level
.reset_index(-1, drop=True) - eliminate the superfluous last level using reset_index(-1)
1 Here is the output before the .unstack(-1) step, and an explanation of why we need to use -1 inside unstack():
df_intermediate = (df.set_index(['date'])
.stack()
.str.split('.', expand=True)
.stack()
)
print(df_intermediate)
date
29-03-2019 tableName 0 film
1 languageId
attributeName2 0 language
1 languageID
30-03-2019 tableName 0 invetory
1 inventoryId
attributeName2 0 rental
1 filmId
dtype: object
The level holding the 0/1 positions is the innermost (last) level of this MultiIndex, which is why we pass -1 to .unstack(-1).
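A minimal illustration of that last step, using a hand-built three-level MultiIndex shaped like the intermediate output above:

```python
import pandas as pd

# Three index levels: date, original column name, position within the split.
# unstack(-1) pivots the innermost level (the 0/1 positions) into columns.
s = pd.Series(
    ['film', 'languageId', 'language', 'languageID'],
    index=pd.MultiIndex.from_product(
        [['29-03-2019'], ['tableName', 'attributeName2'], [0, 1]]),
)
wide = s.unstack(-1)  # equivalent to s.unstack(2) for this 3-level index
```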
Upvotes: 1
Reputation: 2161
df.drop(["tableNameFrom", "tableNameJoin"], axis=1, inplace=True)  # axis=1 to drop columns, not rows
df2 = pd.DataFrame()
for row in df.itertuples():
    for i in range(2):
        table, attribute = row[2][i].split('.')
        df2 = df2.append(pd.Series([row[1], table, attribute]), ignore_index=True)
df2.columns = ["date", "tableName", "tableAttribute"]
df2
I don't see why you need the middle two columns, so I dropped them.
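Note that DataFrame.append was removed in pandas 2.0; the same loop still works if you collect the rows in a plain list and build the frame once at the end (also faster, since each append copies the whole frame). A sketch with the sample data rebuilt inline:

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['29-03-2019', '30-03-2019'],
    'attributeName': [['film.languageId', 'language.languageID'],
                      ['invetory.inventoryId', 'rental.filmId']],
})

# Collect one tuple per "table.attribute" entry, then build df2 in one go
rows = []
for row in df.itertuples(index=False):
    for entry in row.attributeName:
        table, attribute = entry.split('.')
        rows.append((row.date, table, attribute))

df2 = pd.DataFrame(rows, columns=['date', 'tableName', 'tableAttribute'])
```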
Upvotes: 1
Reputation: 150745
This may not be what you want, but yields the expected output for the given dataframe:
(df.set_index('date').attributeName.apply(pd.Series).stack()
.reset_index().drop('level_1', axis=1).set_index('date')[0]
.str.split('.').apply(pd.Series)
)
It ignores tableNameFrom and tableNameJoin and assumes that these are given in attributeName.
+----+------------+----------+-------------+
| | date | 0 | 1 |
|----+------------+----------+-------------|
| 0 | 29-03-2019 | film | languageId |
| 1 | 29-03-2019 | language | languageID |
| 2 | 30-03-2019 | invetory | inventoryId |
| 3 | 30-03-2019 | rental | filmId |
+----+------------+----------+-------------+
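On pandas 0.25+, explode gets to the same place without apply(pd.Series), and a final rename recovers the column names from the question. This is a sketch of that alternative, not the chain above:

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['29-03-2019', '30-03-2019'],
    'attributeName': [['film.languageId', 'language.languageID'],
                      ['invetory.inventoryId', 'rental.filmId']],
})

# explode turns each list element into its own row, keeping the date;
# str.split('.', expand=True) then yields integer columns 0 and 1,
# which we rename to match the desired output
res = (df.explode('attributeName')
         .set_index('date')['attributeName']
         .str.split('.', expand=True)
         .rename(columns={0: 'tableName', 1: 'attributeName'})
         .reset_index())
```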
Upvotes: 2