Dick McManus

Reputation: 789

Using Spark Scala in EMR to get S3 Object size (folder, files)

I am trying to get the folder size for some S3 folders with Scala from the command line on my EMR cluster.

I have JSON data stored as GZ files in S3. I find I can count the number of JSON records within my files:

spark.read.json("s3://mybucket/subfolder/subsubfolder/").count

But now I need to know how many GB of data that accounts for.

I can find options to get the size of individual files, but not of a whole folder.

Upvotes: 4

Views: 2689

Answers (1)

Ram Ghadiyaram

Reputation: 29227

I can find options to get the size of individual files, but not of a whole folder.

Solution:

Option 1:

Get S3 access via FileSystem:

    // ipPath is your input path string, e.g. "s3://mybucket/subfolder/subsubfolder/"
    val fs = FileSystem.get(new URI(ipPath), spark.sparkContext.hadoopConfiguration)

Note:

1) new URI is important, otherwise it will connect to the Hadoop file system path instead of the S3 file system (object store :-)) path. By using new URI you are supplying the scheme, s3://, here.

2) org.apache.commons.io.FileUtils.byteCountToDisplaySize will render file sizes in GB, MB, etc.; a quick example follows.
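
For illustration, here is a quick spark-shell check of how byteCountToDisplaySize formats byte counts; the sample values below are arbitrary, not taken from the question:

    import org.apache.commons.io.FileUtils

    // arbitrary sample byte counts, purely illustrative
    println(FileUtils.byteCountToDisplaySize(512L))                    // 512 bytes
    println(FileUtils.byteCountToDisplaySize(5L * 1024 * 1024))        // 5 MB
    println(FileUtils.byteCountToDisplaySize(3L * 1024 * 1024 * 1024)) // 3 GB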

    import java.io.{FileNotFoundException, IOException}

    /**
      * Recursively print file sizes under the given path.
      *
      * @param filePath root path to scan
      * @param fs       file system the path belongs to
      * @return list of file paths with a size greater than zero
      */
    @throws[FileNotFoundException]
    @throws[IOException]
    def getDisplaysizesOfS3Files(filePath: org.apache.hadoop.fs.Path, fs: org.apache.hadoop.fs.FileSystem): scala.collection.mutable.ListBuffer[String] = {
      val fileList = new scala.collection.mutable.ListBuffer[String]
      val fileStatus = fs.listStatus(filePath)
      for (fileStat <- fileStatus) {
        println(s"file path name: ${fileStat.getPath.toString} length is ${fileStat.getLen}")
        if (fileStat.isDirectory) fileList ++= getDisplaysizesOfS3Files(fileStat.getPath, fs)
        else if (fileStat.getLen > 0 && !fileStat.getPath.toString.isEmpty) {
          fileList += fileStat.getPath.toString
          val size = fileStat.getLen
          val display = org.apache.commons.io.FileUtils.byteCountToDisplaySize(size)
          println("Name    = " + fileStat.getPath.getName)
          println("Size    = " + size)
          println("Display = " + display)
        } else if (fileStat.getLen == 0) {
          println(" zero length file \n " + fileStat)
        }
      }
      fileList
    }
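
For example, calling it from the spark-shell could look like this; the bucket path is just the one from the question, and FileSystem/URI are the same classes used above:

    import java.net.URI
    import org.apache.hadoop.fs.{FileSystem, Path}

    val ipPath = "s3://mybucket/subfolder/subsubfolder/"
    val fs = FileSystem.get(new URI(ipPath), spark.sparkContext.hadoopConfiguration)

    // prints every file it visits and returns the non-empty file paths
    val files = getDisplaysizesOfS3Files(new Path(ipPath), fs)
    println(s"found ${files.size} non-empty files")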

Based on your requirements, you can modify the code, for example to sum up the sizes of all the files; a sketch follows.
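
A minimal sketch of that summation, assuming the standard Hadoop FileSystem.listFiles(path, recursive) API; the path is again the one from the question:

    import java.net.URI
    import org.apache.hadoop.fs.{FileSystem, Path}

    val path = "s3://mybucket/subfolder/subsubfolder/"
    val fs = FileSystem.get(new URI(path), spark.sparkContext.hadoopConfiguration)

    // walk every file under the path recursively and add up the lengths
    var totalBytes = 0L
    val files = fs.listFiles(new Path(path), true)
    while (files.hasNext) {
      totalBytes += files.next().getLen
    }

    println("Total bytes = " + totalBytes)
    println("Display     = " + org.apache.commons.io.FileUtils.byteCountToDisplaySize(totalBytes))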

Option 2: Simple and crisp, using getContentSummary

    import org.apache.spark.sql.SparkSession

    implicit val spark = SparkSession.builder().appName("ObjectSummary").getOrCreate()

    /**
      * Prints the total size of all objects under the given path.
      *
      * @param path  path to summarize
      * @param spark [[org.apache.spark.sql.SparkSession]]
      */
    def getDisplaysizesOfS3Files(path: String)(implicit spark: org.apache.spark.sql.SparkSession): Unit = {
      val filePath = new org.apache.hadoop.fs.Path(path)
      val fileSystem = filePath.getFileSystem(spark.sparkContext.hadoopConfiguration)
      val size = fileSystem.getContentSummary(filePath).getLength
      val display = org.apache.commons.io.FileUtils.byteCountToDisplaySize(size)
      println("path    = " + path)
      println("Size    = " + size)
      println("Display = " + display)
    }
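
Usage from the spark-shell, with the path from the question, is then a one-liner; the sizes printed will of course depend on your data:

    getDisplaysizesOfS3Files("s3://mybucket/subfolder/subsubfolder/")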

Note: Either option above will also work for local, HDFS, or S3 paths.

Upvotes: 5
