arunK

Reputation: 418

Reading BigQuery using the Spark BigQuery connector

I want to read a BigQuery table using the Spark BigQuery connector and pass partition information to it. This works, but it reads the full table. I want to filter the data based on a partition value. How can I do that? I don't want to read the full table and then apply a filter on the Spark Dataset; I want to pass the partition information at read time. Is that even possible?

    Dataset<Row> testDS = session.read().format("bigquery")
            .option("table", <TABLE>)
            //.option("partition", <PARTITION>)   // something like this is what I'd like to pass
            .option("project", <PROJECT_ID>)
            .option("parentProject", <PROJECT_ID>)
            .load();

Upvotes: 1

Views: 933

Answers (1)

arunK

Reputation: 418

Passing the partition value through the filter option works: .option("filter", "_PARTITIONTIME = '2020-11-23 13:00:00'")
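For context, here is a minimal sketch of how that filter option fits into the original read. The table and project placeholders and the SparkSession setup are assumptions, not part of the original; on an ingestion-time partitioned table the _PARTITIONTIME pseudo-column is the target of the filter, which the connector pushes down so Spark never scans the whole table.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class BigQueryPartitionRead {
        public static void main(String[] args) {
            SparkSession session = SparkSession.builder()
                    .appName("bigquery-partition-read")
                    .getOrCreate();

            // Read only the partition for 2020-11-23 13:00:00 by pushing the
            // predicate down through the connector's "filter" option.
            Dataset<Row> testDS = session.read().format("bigquery")
                    .option("table", "<PROJECT_ID>.<DATASET>.<TABLE>")  // placeholder reference
                    .option("project", "<PROJECT_ID>")
                    .option("parentProject", "<PROJECT_ID>")
                    .option("filter", "_PARTITIONTIME = '2020-11-23 13:00:00'")
                    .load();

            testDS.show();
        }
    }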

Upvotes: 2
