Reputation: 7245
I am using databricks and I am reading .csv file from a bucket.
MOUNT_NAME = "myBucket"
ALL_FILE_NAMES = [i.name for i in dbutils.fs.ls("/mnt/%s/" % MOUNT_NAME)]
dfAll = spark.read.format('csv').option("header", "true").schema(schema).load(["/mnt/%s/%s" % (MOUNT_NAME, FILENAME) for FILENAME in ALL_FILE_NAMES])
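For context, the mount itself was created along these lines (the access keys below are placeholders, not my actual configuration):
ACCESS_KEY = "<aws-access-key>"
SECRET_KEY = "<aws-secret-key>"
ENCODED_SECRET_KEY = SECRET_KEY.replace("/", "%2F")  # escape any "/" in the secret
dbutils.fs.mount("s3a://%s:%s@%s" % (ACCESS_KEY, ENCODED_SECRET_KEY, "myBucket"), "/mnt/%s" % MOUNT_NAME)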
At the same time, I would like to write a table back to that bucket. This is my attempt:
myTable.write.format('com.databricks.spark.csv').save('myBucket/')
Upvotes: 0
Views: 3395
Reputation: 12768
The snippets below show how to save a DataFrame as CSV on DBFS and S3; coalescing to a single partition first is what produces a single output file.
myTable.write.save("s3n://my-bucket/my_path/", format="csv")
OR
# DBFS (CSV)
df.write.save('/FileStore/parquet/game_stats.csv', format='csv')
# S3 (CSV)
df.coalesce(1).write.format("com.databricks.spark.csv") \
    .option("header", "true").save("s3a://my_bucket/game_stats.csv")
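To write back through the mount from the question, the path needs the /mnt/ prefix (a bare 'myBucket/' resolves against the DBFS root, not the mounted bucket); on Spark 2.x+ the built-in "csv" source can replace com.databricks.spark.csv. A minimal sketch, assuming the mount name from the question (the output folder name is a placeholder):
# Mounted bucket (CSV) -- Spark writes a directory containing one part file
myTable.coalesce(1).write.format("csv") \
    .option("header", "true") \
    .mode("overwrite") \
    .save("/mnt/myBucket/myTable_out/")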
Upvotes: 2