Aviral Srivastava

Reputation: 4582

Unable to save a CSV file using PySpark Dataframe on AWS EMR

I want to save a CSV file with gzip compression. The code runs successfully, but it fails silently: I see no file at the path provided.

I tried reading the file that was supposedly saved successfully, but running file -i <path_to_the_file> gives me 'No such file found'.

My code for writing the CSV file is:

>>> df
DataFrame[id: int, name: string, alignment: string, gender: string, eyecolor: string, race: string, haircolor: string, publisher: string, skincolor: string, height: int, weight: int, _paseena_row_number_: bigint, _paseena_timestamp_: timestamp, _paseena_commit_id_: string]
>>> df.write.csv('check_csv_post_so.csv')
>>>

Now, when I check, no such file exists.

I would suspect some quirk of the distributed filesystem, but the catch is that I have worked with Spark on other machines and never hit this issue.

I expect the file to be present or the code to fail and show errors.

Upvotes: 2

Views: 1929

Answers (1)

gorros

Reputation: 1461

I think the file is being stored on HDFS. Try saving the file with an explicit file:// or s3:// prefix, or use hdfs dfs -ls to check whether the file is there.
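The point above is that a path with no scheme is resolved against Spark's default filesystem (fs.defaultFS), which on EMR is HDFS, so the output never appears on the local disk you inspect with file -i. A minimal sketch of that resolution logic, in plain Python with no Spark required (the storage_scheme helper and the "hdfs" default are my own illustration, not Spark's actual API):

```python
from urllib.parse import urlparse

def storage_scheme(path, default="hdfs"):
    """Return the filesystem scheme a Hadoop-style path resolves to.

    A path without a scheme (like 'check_csv_post_so.csv') falls back
    to the cluster default (fs.defaultFS), which on EMR is HDFS --
    so the CSV lands on HDFS, not on the local filesystem.
    """
    scheme = urlparse(path).scheme
    return scheme if scheme else default

# The bare path from the question resolves to the cluster default:
print(storage_scheme("check_csv_post_so.csv"))                 # hdfs
# Explicit schemes make the destination unambiguous:
print(storage_scheme("file:///tmp/check_csv_post_so.csv"))     # file
print(storage_scheme("s3://my-bucket/check_csv_post_so.csv"))  # s3
```

So, assuming a hypothetical local target path, the write call would become something like df.write.csv('file:///tmp/check_csv_post_so.csv', compression='gzip'), which also adds the gzip compression the question asked for; alternatively, keep the bare path and look for the output with hdfs dfs -ls.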

Upvotes: 1
