Sadek

Reputation: 127

Merging two dataframes having the same number of columns

I'm looking for a way to merge two dataframes, df1 and df2, side by side without any join condition, knowing that df1 and df2 have the same length. For example:

df1:
+--------+
|Index   |
+--------+
|       0|
|       1|
|       2|
|       3|
|       4|
|       5|
+--------+

df2:
+--------+
|Value   |
+--------+
|       a|
|       b|
|       c|
|       d|
|       e|
|       f|
+--------+

The result must be:

+--------+---------+
|Index   | Value   |
+--------+---------+
|       0|        a|
|       1|        b|
|       2|        c|
|       3|        d|
|       4|        e|
|       5|        f|
+--------+---------+

Thank you

Upvotes: 1

Views: 653

Answers (3)

RufusVS

Reputation: 4127

I guess this isn't the same as pandas? I would have thought you could simply say:

import pandas as pd

# assumes df1 and df2 are pandas DataFrames sharing the default index
df_new = pd.DataFrame()
df_new['Index'] = df1['Index']
df_new['Value'] = df2['Value']

Mind you, it has been a while since I've used pandas.
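If pandas is an option, the same positional stitch can be done in one call with `pd.concat` along the column axis. A minimal runnable sketch using the question's sample data (pandas only, not Spark):

```python
import pandas as pd

# Sample frames mirroring the question; both carry the default RangeIndex,
# so axis=1 concatenation aligns rows purely by position.
df1 = pd.DataFrame({"Index": [0, 1, 2, 3, 4, 5]})
df2 = pd.DataFrame({"Value": ["a", "b", "c", "d", "e", "f"]})

df_new = pd.concat([df1, df2], axis=1)
print(df_new)
```

Note that `pd.concat` aligns on the index, not raw position, so this assumes neither frame has been reindexed (e.g. by filtering) beforehand.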

Upvotes: 0

Sadek

Reputation: 127

Here is the solution proposed by @dsk and @anky:

from pyspark.sql import functions as F
from pyspark.sql.window import Window as W

# Assign the same positional row number to both dataframes, then join on it.
rnum = F.row_number().over(W.orderBy(F.lit(0)))
Df1 = df1.withColumn('rn_no', rnum)
Df2 = df2.withColumn('rn_no', rnum)
DF = Df1.join(Df2, 'rn_no', 'left')
DF = DF.drop('rn_no')

Upvotes: 1

dsk

Reputation: 2003

As you have the same number of rows in both dataframes:

from pyspark.sql import functions as F
from pyspark.sql.window import Window as W

# row_number() requires an ordered window (partitionBy alone raises an
# error); ordering each frame by its own column assumes that sort order
# matches the intended row pairing.
_w1 = W.orderBy('index')
_w2 = W.orderBy('value')

Df1 = df1.withColumn('rn_no', F.row_number().over(_w1))
Df2 = df2.withColumn('rn_no', F.row_number().over(_w2))

Df_final = Df1.join(Df2, 'rn_no', 'left')
Df_final = Df_final.drop('rn_no')

Upvotes: 1
