pijing

Reputation: 81

How can I use Apache Flink to read a Parquet file in HDFS?

I can only find TextInputFormat and CsvInputFormat. So how can I use Apache Flink to read a Parquet file in HDFS?

Upvotes: 4

Views: 3129

Answers (1)

pijing

Reputation: 81

OK, I have found a way to read Parquet files in HDFS through Apache Flink.

  1. Add the dependencies below to your pom.xml:

    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-hadoop-compatibility_2.11</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-avro</artifactId>
      <version>1.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.parquet</groupId>
      <artifactId>parquet-avro</artifactId>
      <version>1.10.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>3.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>3.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.2.1</version>
    </dependency>
    
  2. Create an .avsc file to define the Avro schema. For example:

    {"namespace": "com.flinklearn.models",
     "type": "record",
     "name": "AvroTamAlert",
     "fields": [
        {"name": "raw_data", "type": ["string","null"]}
     ]
    }
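
As an optional sanity check before code generation (not part of the original steps), the schema file can be parsed with Avro's Schema.Parser; a minimal sketch, assuming the schema above is saved as alert.avsc:

    import java.io.File
    import org.apache.avro.Schema

    // Parse alert.avsc to catch JSON or schema errors before running
    // the avro-tools codegen in the next step.
    object SchemaCheck {
      def main(args: Array[String]): Unit = {
        val schema = new Schema.Parser().parse(new File("alert.avsc"))
        println(schema.getFullName) // com.flinklearn.models.AvroTamAlert
        println(schema.getFields)   // includes the raw_data union field
      }
    }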
  3. Run "java -jar D:\avro-tools-1.8.2.jar compile schema alert.avsc ." to generate the Java class, then copy the generated AvroTamAlert.java into your project.

  4. Use AvroParquetInputFormat to read the Parquet files from HDFS:

import com.flinklearn.models.AvroTamAlert
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.hadoop.mapreduce.HadoopInputFormat
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.parquet.avro.AvroParquetInputFormat

class Main {
    def startApp(): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment

        val job = Job.getInstance()

        // Wrap Parquet's AvroParquetInputFormat in Flink's Hadoop compatibility
        // wrapper. The key type is Void because Parquet records have no key.
        val dIf = new HadoopInputFormat[Void, AvroTamAlert](
            new AvroParquetInputFormat[AvroTamAlert](), classOf[Void], classOf[AvroTamAlert], job)
        FileInputFormat.addInputPath(job, new Path("/user/hive/warehouse/testpath"))

        val dataset = env.createInput(dIf)

        // count() is an action that triggers execution on its own; calling
        // env.execute() afterwards would fail with "No new data sinks have
        // been defined" in the batch API, so it is omitted here.
        println(dataset.count())
    }
}

object Main {
    def main(args: Array[String]): Unit = {
        new Main().startApp()
    }
}
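
The HadoopInputFormat wrapper yields (Void, AvroTamAlert) tuples, so the record itself is the second element. As a follow-up, here is a minimal sketch of pulling the raw_data field out of the DataSet, assuming the Avro-generated class exposes the usual getRawData accessor for the raw_data field (and remembering that the ["string","null"] union means the value may be null):

// Continuing from `dataset` above: extract raw_data from each record.
// Option(...) guards against nulls from the ["string","null"] union.
val rawData = dataset.map(pair => Option(pair._2.getRawData).map(_.toString).getOrElse(""))

// Print a small sample; like count(), print() triggers its own execution.
rawData.first(10).print()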

Upvotes: 2
