Reputation: 19
In my data frame there are two columns and they are of object dtype.
| index | COLOUR | CATEGORY |
|---|---|---|
| 0 | blue | iphone11 |
| 1 | black | iphone12 |
iphone_sales_sum['COLOUR'] = iphone_sales_sum['COLOUR'].astype(str)
iphone_sales_sum['CATEGORY'] = iphone_sales_sum['CATEGORY'].astype(str)
I need to create a new column from both of the above columns.
iphone_sales_sum["colour_category"] = iphone_sales_sum.apply(lambda x: f"{x['COLOUR']} /{x['CATEGORY']}", axis=1)
But I am getting noisy values in the derived column:
iphone 12\nName: 0, dtype: object/... etc
The lambda function doesn't raise any error, and this exact code worked perfectly in another scenario.
Upvotes: 0
Views: 45
Reputation: 94
When I reproduce your code:
import pandas as pd
iphone_sales_sum = pd.DataFrame(
    [[1, "blue", "iphone 12"], [2, "black", "iphone12"], [3, "sad3", "jkl3"]],
    columns=["index", "COLOUR", "CATEGORY"],
)
iphone_sales_sum['COLOUR'] = iphone_sales_sum['COLOUR'].astype(str)
iphone_sales_sum['CATEGORY'] = iphone_sales_sum['CATEGORY'].astype(str)
iphone_sales_sum['colour_category'] = iphone_sales_sum.apply(lambda x: f"{x['COLOUR']} / {x['CATEGORY']}", axis=1)
I get:
colour_category
blue / iphone 12
black / iphone12
sad3 / jkl3
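The "\nName: 0, dtype: object" part of your noisy output is the string form of an entire pandas Series, so it looks like x['CATEGORY'] (or x['COLOUR']) is returning a Series rather than a single value in your real data, for example if the frame has duplicate column labels. As a minimal sketch, assuming both columns really are plain strings after the astype(str) casts, vectorized concatenation avoids the row-wise apply altogether:

# Vectorized string concatenation; assumes COLOUR and CATEGORY are plain string columns
iphone_sales_sum['colour_category'] = (
    iphone_sales_sum['COLOUR'] + ' / ' + iphone_sales_sum['CATEGORY']
)

This is usually faster than apply(..., axis=1) for simple string joins.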
Upvotes: 1