Reputation: 1091
Is there a method in the BigQuery API that allows you to set the destination table for a query? I found one in the REST API, but not for programming languages like Ruby.
If there is an example for another language, maybe I can try to do the same in Ruby.
Upvotes: 0
Views: 1747
Reputation: 3640
You can query into a destination table with a single command:
bigquery = Google::Cloud::Bigquery.new(...)
dataset = bigquery.dataset('my_dataset')
job = bigquery.query_job("SELECT * FROM source_table",
                         table: dataset.table('destination_table'),
                         write: 'truncate',
                         create: 'needed')
job.wait_until_done!
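Once the job finishes you can check it for errors and read the rows straight from the destination table. A minimal follow-up sketch, assuming the same gem and the same dataset/table objects as above:
# Surface any job error, then read back the rows that were written.
raise job.error["message"] if job.failed?

destination = dataset.table('destination_table')
destination.data.each do |row|
  puts row.inspect
end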
Upvotes: 0
Reputation: 173190
I wasn't sure if this is exactly what you were asking, but it looks like it is :o)
Ruby API Reference Documentation for the Google BigQuery API Client Library.
You can find examples for all supported clients in BigQuery Client Libraries.
Upvotes: 1
Reputation: 14791
You need to set the destination table via the API. Either of these example snippets should be easy to port to the Ruby client and should be enough to get you going:
Java
QueryJobConfiguration jobConfiguration =
    QueryJobConfiguration.newBuilder("SELECT * FROM ...")
        .setAllowLargeResults(true)
        .setUseLegacySql(false)
        .setDryRun(dryRun)
        .setDestinationTable(TableId.of("projectId", "dataset", "table"))
        .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
        .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
        .setPriority(QueryJobConfiguration.Priority.BATCH)
        .build();
Python
from google.cloud import bigquery
client = bigquery.Client()
query = """\
SELECT firstname + ' ' + last_name AS full_name,
FLOOR(DATEDIFF(CURRENT_DATE(), birth_date) / 365) AS age
FROM dataset_name.persons
"""
dataset = client.dataset('dataset_name')
table = dataset.table(name='person_ages')
job = client.run_async_query('fullname-age-query-job', query)
job.destination = table               # write the results into person_ages
job.write_disposition = 'truncate'    # overwrite any existing rows
job.begin()
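For the Ruby client the same settings map onto keyword arguments of query_job. A rough port of the Python snippet above, assuming the google-cloud-bigquery gem; the dataset and table names are just the placeholders from the example, and (as in the first answer) the destination table lookup assumes the table already exists:
require 'google/cloud/bigquery'

bigquery = Google::Cloud::Bigquery.new
dataset = bigquery.dataset('dataset_name')

# Run the query as a batch job, writing the results into person_ages,
# creating the table if needed and overwriting any existing rows.
job = bigquery.query_job("SELECT * FROM dataset_name.persons",
                         table: dataset.table('person_ages'),
                         create: 'needed',
                         write: 'truncate',
                         priority: 'BATCH')

job.wait_until_done!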
Upvotes: 1