Gergely

Reputation: 113

Per-table write limits when using the BigQuery Storage Write API

I'm considering a solution that relies on loading streamed data from Kafka into BigQuery tables (mostly one table per Kafka topic).

I'm trying to figure out whether I might run into limits or quota issues when writing a large number of rows.

On the Storage Write API page it says:

You can use the Storage Write API to stream records into BigQuery in real time or to batch process an arbitrarily large number of records and commit them in a single atomic operation.

However, on the BigQuery "Quotas and limits" page I read:

Your project can make up to 1,500 table modifications per table per day, whether the modification appends data, updates data, or truncates the table.

It's not clear to me whether this applies to the Storage Write API.

Upvotes: 0

Views: 428

Answers (1)

Gumaz

Reputation: 264

That 1,500 table modifications limit does not apply to data appended to a table via the Storage Write API. A BigQuery table can receive up to 1,500 table modifications per day, where table modifications include DELETE, INSERT, MERGE, TRUNCATE TABLE, and UPDATE statements. If a table reaches this limit, further modifications made through load jobs might fail.

What that limit is really about is:

includes the combined total of all load jobs, copy jobs, and query jobs that append to or overwrite a destination table

And those limits only apply when you load data into BigQuery using the Google Cloud console, the bq command-line tool, or the load-type jobs.insert API method.

For details of what "load jobs" means, read here.
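For illustration, here is a minimal sketch of the kind of operation that does count toward that limit: a load job submitted through the google-cloud-bigquery Python client. The bucket URI and table name are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Each executed load job like this one counts as one table modification
# toward the 1,500 modifications per table per day limit.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/events/part-0001.json",  # placeholder GCS URI
    "my-project.my_dataset.my_table",        # placeholder destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
```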

tl;dr: appends made through the Storage Write API do not count as "table modifications"
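To make the distinction concrete, here is a minimal sketch of appending rows through the Storage Write API's default stream with the Python client (google-cloud-bigquery-storage). The record_pb2 module is an assumption: a protobuf message compiled with protoc whose fields mirror the destination table's schema, and the project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery_storage_v1
from google.cloud.bigquery_storage_v1 import types, writer
from google.protobuf import descriptor_pb2

# Assumption: record_pb2 is generated with protoc from a .proto message
# (here called Record) whose fields mirror the destination table's schema.
import record_pb2


def append_to_default_stream(project_id, dataset_id, table_id, rows):
    write_client = bigquery_storage_v1.BigQueryWriteClient()
    parent = write_client.table_path(project_id, dataset_id, table_id)
    # Appends to the default stream are committed as they arrive and do not
    # count toward the 1,500 table modifications per day limit.
    stream_name = f"{parent}/_default"

    # The first request on the connection carries the writer schema.
    request_template = types.AppendRowsRequest()
    request_template.write_stream = stream_name

    proto_schema = types.ProtoSchema()
    proto_descriptor = descriptor_pb2.DescriptorProto()
    record_pb2.Record.DESCRIPTOR.CopyToProto(proto_descriptor)
    proto_schema.proto_descriptor = proto_descriptor
    proto_data = types.AppendRowsRequest.ProtoData()
    proto_data.writer_schema = proto_schema
    request_template.proto_rows = proto_data

    append_rows_stream = writer.AppendRowsStream(write_client, request_template)

    # Serialize the rows (e.g. records consumed from a Kafka topic) and send
    # them in a single append request.
    proto_rows = types.ProtoRows()
    for row in rows:
        proto_rows.serialized_rows.append(record_pb2.Record(**row).SerializeToString())

    request = types.AppendRowsRequest()
    request_data = types.AppendRowsRequest.ProtoData()
    request_data.rows = proto_rows
    request.proto_rows = request_data

    append_rows_stream.send(request).result()  # wait for the append to be acknowledged
    append_rows_stream.close()
```

Using the default stream avoids creating and committing explicit write streams per batch, which maps naturally onto continuously consuming a Kafka topic and appending into one table per topic.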

Upvotes: 1
