DroppingOff

Reputation: 331

PySpark - iterate rows of a DataFrame

I need to iterate rows of a pyspark.sql.dataframe.DataFrame.

In pandas I have done this in the past with iterrows(), but I need to find something similar for PySpark without falling back to pandas.
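For reference, the pandas pattern I am trying to replicate (a minimal sketch with a toy frame):

    import pandas as pd

    pdf = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
    for index, row in pdf.iterrows():
        print(index, row["a"], row["b"])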

If I do for row in myDF: it iterates over the columns.

Thanks

Upvotes: 1

Views: 10231

Answers (1)

aopki

Reputation: 310

You can use the select method to operate on your dataframe with a user-defined function (UDF), something like this:

    from pyspark.sql import functions as F
    from pyspark.sql.types import StringType

    columns = myDF.columns  # all column names of the frame
    my_udf = F.udf(lambda data: "do whatever you want here", StringType())
    myDF.select(*[my_udf(F.col(c)) for c in columns])

Then, inside the select, you can choose what to do with each column.
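As a side note, separate from the select approach above: if you only need plain row-by-row iteration on the driver, PySpark's own collect() (for small frames) or toLocalIterator() hand you Row objects directly; a minimal sketch, assuming the rows can be pulled back to the driver:

    # streams rows to the driver one partition at a time
    for row in myDF.toLocalIterator():
        print(row)  # each row is a pyspark.sql.Row; fields via row["colName"] or row.colName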

Upvotes: 2
