Reputation: 143
So I've figured out how to find the latest file using Python. Now I'm wondering if I can do the same with PySpark. Currently I specify a path, but I'd like PySpark to pick up the most recently modified file.
Current code looks like this:
df = spark.read.csv("Path://to/file", header=True, inferSchema=True)
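For context, the plain-Python approach I mean is something along these lines (just a sketch; the directory and pattern are placeholders):

import glob
import os

# Pick the most recently modified CSV in a local directory
files = glob.glob("/path/to/dir/*.csv")
latest_file = max(files, key=os.path.getmtime)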
Thanks in advance for your help.
Upvotes: 4
Views: 8340
Reputation: 10092
I adapted the code for using the Hadoop FileSystem API from PySpark from this answer: Pyspark: get list of files/directories on HDFS path
URI = sc._gateway.jvm.java.net.URI
Path = sc._gateway.jvm.org.apache.hadoop.fs.Path
FileSystem = sc._gateway.jvm.org.apache.hadoop.fs.FileSystem
Configuration = sc._gateway.jvm.org.apache.hadoop.conf.Configuration

# FileSystem.get picks the right implementation (S3, HDFS, ...) from the
# URI scheme; "s3://your-bucket" is a placeholder for your actual location
fs = FileSystem.get(URI("s3://your-bucket"), Configuration())
files = fs.listStatus(Path("Path://to/file"))
# You can also filter for directory here
file_status = [(file.getPath().toString(), file.getModificationTime()) for file in files]
file_status.sort(key=lambda tup: tup[1], reverse=True)
most_recently_updated = file_status[0][0]
# options must come before csv(), which returns a DataFrame
spark.read.option(...).csv(most_recently_updated)
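If you'd rather reuse the configuration Spark is already running with (so any credentials set through spark.hadoop.* options are picked up), here's a minimal sketch of the same idea, assuming the usual sc/spark variables; the path is still a placeholder:

conf = sc._jsc.hadoopConfiguration()
path = sc._gateway.jvm.org.apache.hadoop.fs.Path("Path://to/file")
fs = path.getFileSystem(conf)  # resolves the FileSystem from the path's scheme
statuses = fs.listStatus(path)
# max() avoids sorting the whole listing when only the newest file is needed
latest = max(statuses, key=lambda s: s.getModificationTime())
df = spark.read.csv(latest.getPath().toString(), header=True, inferSchema=True)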
Upvotes: 6