Reputation: 1991
I have a dataframe as below:
   User_Id                               MARKED_CONTENT_AS_FAVOURITE  RATE_CONTENT  SEARCH  VIEWED_CELEBRITY  VIEWED_MOVIE  VIEWED_TVSHOW
1  6916484f-b7bd-431a-818d-d1a63ff7c717  0                            0             1       0                 4             0
2  9fbb7702-5209-46c8-b7c8-2c3d03550b56  2                            2             1       1                 20            3
3  cb1fc554-8566-4c9f-a3ca-f64be302d65e  0                            0             1       1                 0             0
Now I have a Series with just one value, and I want to perform a sort of vectorized operation with it:
USER_CHECKED_IN_CONTENT
3 0
I want to append this column to the dataframe, as follows.
   User_Id                               MARKED_CONTENT_AS_FAVOURITE  RATE_CONTENT  SEARCH  VIEWED_CELEBRITY  VIEWED_MOVIE  VIEWED_TVSHOW  USER_CHECKED_IN_CONTENT
1  6916484f-b7bd-431a-818d-d1a63ff7c717  0                            0             1       0                 4             0              0
2  9fbb7702-5209-46c8-b7c8-2c3d03550b56  2                            2             1       1                 20            3              0
3  cb1fc554-8566-4c9f-a3ca-f64be302d65e  0                            0             1       1                 0             0              0
But when I use
pivot_activity.append(subset[[x for x in list(subset) if x not in list(pivot_activity)]])
it gives the output below:
   MARKED_CONTENT_AS_FAVOURITE  RATE_CONTENT  SEARCH  USER_CHECKED_IN_CONTENT  User_Id                               VIEWED_CELEBRITY  VIEWED_MOVIE
0  0.0                          0.0           1.0     NaN                      6916484f-b7bd-431a-818d-d1a63ff7c717  0.0               4.0
1  2.0                          2.0           1.0     NaN                      9fbb7702-5209-46c8-b7c8-2c3d03550b56  1.0               20.0
2  0.0                          0.0           1.0     NaN                      cb1fc554-8566-4c9f-a3ca-f64be302d65e  1.0               0.0
2  NaN                          NaN           NaN     0.0                      NaN                                   NaN               NaN
Upvotes: 1
Views: 69
Reputation: 1016
I think this is what you are going for. A merge does not seem to be what you want, since it would need some key to match on and you have a one-to-many scenario with differing keys. This assigns the single value from the subset dataframe's one-row, one-column selection to the whole column, extracting it with .iloc[0] so pandas broadcasts the scalar instead of aligning on the index:
pivot_activity['USER_CHECKED_IN_CONTENT'] = subset['USER_CHECKED_IN_CONTENT'].iloc[0]
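A minimal runnable sketch of the index-alignment pitfall, using toy stand-ins for the frames in the question (only a couple of columns are kept for brevity):

import pandas as pd

# Toy versions of the frames from the question.
pivot_activity = pd.DataFrame(
    {'User_Id': ['6916484f-b7bd-431a-818d-d1a63ff7c717',
                 '9fbb7702-5209-46c8-b7c8-2c3d03550b56',
                 'cb1fc554-8566-4c9f-a3ca-f64be302d65e'],
     'VIEWED_MOVIE': [4, 20, 0]},
    index=[1, 2, 3])
subset = pd.DataFrame({'USER_CHECKED_IN_CONTENT': [0]}, index=[3])

# .iloc[0] extracts the scalar, so it is broadcast to every row; assigning
# the one-row Series directly would align on the index and leave NaN in
# rows 1 and 2.
pivot_activity['USER_CHECKED_IN_CONTENT'] = subset['USER_CHECKED_IN_CONTENT'].iloc[0]
print(pivot_activity)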
Upvotes: 0
Reputation: 21729
I think you can use concat here. Let's say you have:
df - bigger data frame
df1 - smaller data frame
You can do:
pd.concat([df, df1], axis=1).fillna(0)
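A minimal sketch, assuming df and df1 are small stand-ins for the frames in the question (the column names here are illustrative):

import pandas as pd

df = pd.DataFrame({'SEARCH': [1, 1, 1], 'VIEWED_MOVIE': [4, 20, 0]}, index=[1, 2, 3])
df1 = pd.DataFrame({'USER_CHECKED_IN_CONTENT': [0]}, index=[3])

# axis=1 concatenates column-wise and aligns on the index; rows of df with
# no match in df1 get NaN in the new column, which fillna(0) replaces with 0.
result = pd.concat([df, df1], axis=1).fillna(0)
print(result)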
Upvotes: 0
Reputation: 1217
You can easily add a new column to an existing dataframe "df" as below:
df['USER_CHECKED_IN_CONTENT'] = 0 # df is your existing dataframe
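Assigning a scalar broadcasts it to every row, so this gives the desired all-zero column directly. It only works because the single value in the Series happens to be 0; if that value could differ, it would have to be read out of the Series first (as in the .iloc[0] approach above).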
Upvotes: 1