eran

Reputation: 15136

Pyspark dataframe.limit is slow

I am working with a large dataset, but I only want to play around with a small part of it. Every operation takes a long time, so I want to look at just the head or a limited subset of the dataframe.

So, for example, I call a UDF (user defined function) to add a column, but I only care to do so on the first, say, 10 rows.

import pyspark.sql.functions as F
from pyspark.sql.types import IntegerType
sum_cols = F.udf(lambda x: x[0] + x[1], IntegerType())
df_with_sum = df.limit(10).withColumn('C', sum_cols(F.array('A', 'B')))

However, this still takes the same long time it would take if I did not use limit.

Upvotes: 4

Views: 4006

Answers (2)

Chandan Ray

Reputation: 2091

limit will first try to get the required data from a single partition. If it does not find all the rows in that partition, it will fetch the remaining rows from the next partition.

So please check how many partitions you have by using df.rdd.getNumPartitions()

To prove this, I would suggest you first coalesce your df to one partition and then do a limit. You can see that this time the limit is faster, as it is filtering data from a single partition.
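A minimal sketch of this check and experiment (assuming an existing DataFrame named df and an active SparkSession):

df.rdd.getNumPartitions()              # how many partitions the data is spread over

df_limited = df.coalesce(1).limit(10)  # collapse to one partition, then take the limit
df_limited.show()                      # limit now only has to read that single partition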

Upvotes: 3

Ali Yesilli

Reputation: 2200

If you only want to work with the first 10 rows, I think it is better to create a new df from them and cache it:

df2 = df.limit(10).cache()
df_with_sum = df2.withColumn('C',sum_cols(F.array('A','B')))
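Note that cache() is lazy, so the 10 rows are only materialized when an action runs on df2. A rough sketch of forcing that once so later operations reuse the cached subset (same df2 and df_with_sum as above):

df2.count()         # action that materializes and caches the 10 rows
df_with_sum.show()  # subsequent actions reuse the cached subset instead of rescanning df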

Upvotes: 3
