Jerome tan

Reputation: 155

Split one big parquet file into multiple parquet files by a key

I'd like to split a big parquet file into multiple parquet files in different folders in HDFS, so that I can build a partitioned table (whether Hive/Drill/Spark SQL) on top of it.

Data example:

+-----+------+
|model|  num1|
+-----+------+
|  V80| 195.0|
|  V80| 750.0|
|  V80| 101.0|
|  V80|   0.0|
|  V80|   0.0|
|  V80| 720.0|
|  V80|1360.0|
|  V80| 162.0|
|  V80| 150.0|
|  V90| 450.0|
|  V90| 189.0|
|  V90| 400.0|
|  V90| 120.0|
|  V90|  20.3|
|  V90|   0.0|
|  V90|  84.0|
|  V90| 555.0|
|  V90|   0.0|
|  V90|   9.0|
|  V90|  75.6|
+-----+------+

The resulting folder structure should be grouped by the "model" field:

+
|
+-----model=V80
|       | 
|       +----- XXX.parquet
+-----model=V90
|       | 
|       +----- XXX.parquet

I tried a script like this:

def main(args: Array[String]): Unit = {
  val conf = new SparkConf()
  case class Infos(name: String, name1: String)
  val sc = new SparkContext(conf)
  val sqlContext = new org.apache.spark.sql.SQLContext(sc)
  val rdd = sqlContext.read.load("hdfs://nameservice1/user/hive/warehouse/a_e550_parquet").select("model", "num1").limit(10000)

  val tmpRDD = rdd.map { item => (item(0), Infos(item.getString(0), item.getString(1))) }.groupByKey()

  for (item <- tmpRDD) {
    import sqlContext.implicits._
    val df = item._2.toSeq.toDF()
    df.write.mode(SaveMode.Overwrite).parquet("hdfs://nameservice1/tmp/model=" + item._1)
  }
}

It just threw a null pointer exception.

Upvotes: 4

Views: 13128

Answers (1)

Jegan

Reputation: 1751

You should use partitionBy on the DataFrameWriter (i.e. on df.write); you do not need groupByKey. Something like the below should give you what you want.

val df = sqlContext.read.parquet("hdfs://nameservice1/user/hive/warehouse/a_e550_parquet").select("model", "num1").limit(10000)
df.write.partitionBy("model").mode(SaveMode.Overwrite).parquet("hdfs://nameservice1/tmp")
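
As a follow-up, a minimal sketch (assuming the output path hdfs://nameservice1/tmp used above): when you read that directory back, Spark's partition discovery picks up the model=... subdirectories and exposes them as a "model" column, so the layout also works for a Hive or Drill table pointed at the same location.

// Read the partitioned output back; the model=... subdirectories
// are discovered automatically and exposed as a "model" column.
val partitioned = sqlContext.read.parquet("hdfs://nameservice1/tmp")
partitioned.filter("model = 'V80'").show()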

Upvotes: 5
