Mike

Reputation: 790

How to: Pyspark dataframe persist usage and reading-back

I'm quite new to PySpark, and I'm getting the following error:

Py4JJavaError: An error occurred while calling o517.showString, and I've read that it is due to a lack of memory:
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

So, I've been reading that a workaround for this situation is to use df.persist() and then read the persisted df back, so I would like to know how to use persist correctly and how to read the persisted DataFrame back.
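If I understand correctly, the idea is roughly something like this (just a sketch of what I think it means, not sure if it's right; df is a placeholder for my DataFrame):

    df.persist()   # mark df for caching (MEMORY_AND_DISK is the default for DataFrames)
    df.count()     # run an action once so the cache is actually materialized
    df.show()      # later actions should now read from the cache instead of recomputing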

I'm really new to this, so please try to explain as clearly as you can.

I'm running on a local machine (8 GB RAM), using Jupyter notebooks (Anaconda); Windows 7; Java 8; Python 3.7.1; PySpark 2.4.3.

Upvotes: 8

Views: 32516

Answers (1)

DataWrangler

Reputation: 2165

Spark is a lazily evaluated framework, so none of the transformations (e.g. join) are executed until you call an action.

So you can go ahead with what you have done:

    from pyspark import StorageLevel

    # Build up the joins (transformations only; nothing runs yet)
    for col in columns:
        df_AA = df_AA.join(df_B, df_AA[col] == 'some_value', 'outer')

    # Cache the result; partitions that don't fit in memory spill to disk
    df_AA.persist(StorageLevel.MEMORY_AND_DISK)
    df_AA.show()  # show() is an action: it triggers the joins and fills the cache

There are multiple persist options (storage levels) available; choosing MEMORY_AND_DISK will spill the data that cannot be held in memory to disk.
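For reference, here is a minimal sketch of a few commonly used storage levels and of releasing the cache once you no longer need it (the choice of DISK_ONLY below is only illustrative; pick whatever fits your memory budget):

    from pyspark import StorageLevel

    # MEMORY_ONLY      - keep data in memory only; evicted partitions are recomputed
    # MEMORY_AND_DISK  - spill partitions that do not fit in memory to disk
    # DISK_ONLY        - keep the cached partitions on disk only

    df_AA = df_AA.persist(StorageLevel.DISK_ONLY)  # illustrative choice of level
    df_AA.count()      # an action materializes the cache
    df_AA.unpersist()  # release the cached data when you are done with it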

Also, GC errors could be the result of too little driver memory being allocated for the Spark application.
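If that is the case, one way to raise it on a local setup like yours is to set spark.driver.memory before the session (and therefore the JVM) is created. A minimal sketch, assuming the SparkSession has not been started yet in the notebook (the app name and the 4g value are just examples):

    from pyspark.sql import SparkSession

    # spark.driver.memory only takes effect if it is set before the JVM starts,
    # i.e. before the first SparkSession/SparkContext is created in the notebook.
    spark = (SparkSession.builder
             .master("local[*]")
             .appName("persist-example")           # example app name
             .config("spark.driver.memory", "4g")  # raise from the 1g default
             .getOrCreate())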

Upvotes: 17
