Reputation: 115
I am currently trying to insert a row into a BQ dataset.
I want to use the Client.insert_rows_json() function described in this documentation.
I have everything up and running for a simple schema without any field of type 'Record'. However, when I add a field of type 'Record' to my schema, I don't know how to pass it to insert_rows_json().
My Schema: Schema in BQ
My Code:
from google.cloud import bigquery

# `credentials` is defined earlier (omitted here)
client = bigquery.Client(
    credentials=credentials,
    project=credentials.project_id,
)

dataset_ref = client.dataset('channel_data')
table_ref = dataset_ref.table('test')
table = client.get_table(table_ref)  # API call

rows_to_insert = [{"test1": "a", "test2": "b", "test3": "c", "record": {"1": "d", "2": "e"}}, ]
errors = client.insert_rows_json(table, rows_to_insert)  # API request
assert errors == []
I tried many different versions of rows_to_insert:
rows_to_insert = [{"test1": "a", "test2": "b", "test3": "c", "record": ["d", "e"]}, ]
rows_to_insert = [{"test1": "a", "test2": "b", "test3": "c", "record.1": "d", "record.2": "e"}, ]
None of them seems to work, and I can't find any information online on how to do this. Has anyone ever done this before?
It is important that the row is inserted as JSON, since sometimes some of the values are missing. I know you can put everything into a list and pass it like this:
rows_to_insert = [("a", "b", "c", ["d", "e"])]
but that is not an option for me.
Upvotes: 3
Views: 16236
Reputation: 115
The right way to insert them is as follows:
rows_to_insert = [{"test1":"a","test2":"b","test3":"c","record":{"r1":"d","r2":"e"}}]
The record field itself takes a dictionary as its value, with the record's subfields as the keys.
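For reference, here is a minimal sketch of the full insert with a nested field, assuming the RECORD column is named record and has two STRING subfields r1 and r2 (the project id and subfield names below are placeholders, not taken from the question's actual schema):

from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

# "your-project" is a placeholder; the RECORD column "record" is assumed
# to have STRING subfields "r1" and "r2"
table = client.get_table("your-project.channel_data.test")

rows_to_insert = [
    # the nested record is just another dict keyed by the subfield names
    {"test1": "a", "test2": "b", "test3": "c", "record": {"r1": "d", "r2": "e"}},
    # because the payload is JSON, missing values can simply be omitted
    {"test1": "x", "record": {"r1": "y"}},
]

errors = client.insert_rows_json(table, rows_to_insert)
assert errors == []

Because insert_rows_json() takes plain dictionaries, rows with missing optional values (including a missing record) can be sent without any placeholder entries.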
Upvotes: 4