Reputation: 701
I am trying to export the results of a SELECT statement to another table for permanent storage. But whenever the new table is created, it has no schema. When I try to query that result table, this error is shown:
Table project-id.dataset_name.temp_table does not have a schema.
Here is my code to export the result of the SELECT statement to a temporary table:
import time
import logging

from google.cloud import bigquery
from google.cloud.bigquery import Table
from google.oauth2.service_account import Credentials


def query_to_table():
    service_account_info = {}  # account info
    credentials = Credentials.from_service_account_info(
        service_account_info)
    client = bigquery.Client(
        project=service_account_info.get("project_id"),
        credentials=credentials)

    query = """
        SELECT
          a,
          b
        FROM `project.dataset.table`
        WHERE a NOT IN ('error', 'warning')
    """

    destination_dataset = client.dataset("abc_123")  # this is another dataset
    destination_table = destination_dataset.table("temp_table")  # destination table

    # Drop the destination table if it already exists, then re-create it
    try:
        client.get_table(destination_table)
        client.delete_table(destination_table)
    except Exception as e:
        # Some logging
        pass
    client.create_table(Table(destination_table))

    # Execute the job and save to table
    job_config = bigquery.QueryJobConfig()
    job_config.allow_large_results = True
    job_config.use_legacy_sql = False
    job_config.destination = destination_table
    job_config.dry_run = True
    query_job = client.query(query, job_config=job_config)

    # Wait till the job is done
    while not query_job.done():
        time.sleep(1)

    logging.info(f"Processed {query_job.total_bytes_processed} bytes.")
    return destination_table
Where is the mistake? Has anything changed in the API on the Google Cloud side? This script was working a month ago.
Please help.
Upvotes: 0
Views: 446
Reputation: 701
Damn! I just figured it out: it was because I set dry_run to True.
According to this answer: https://stackoverflow.com/a/28355802/4494547, if dry_run is set to True, BigQuery only evaluates the query without actually running the job, so nothing is ever written to the destination table.
Took me 5 hours busting my head. :(
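For anyone hitting the same thing, here is a minimal sketch of how the job setup can look once dry_run is dropped. It uses the same hypothetical project/dataset/table names as in my question; the write_disposition setting and the query_job.result() call are additions of mine, not something the original code had:

import logging

from google.cloud import bigquery
from google.oauth2.service_account import Credentials


def query_to_table():
    service_account_info = {}  # account info
    credentials = Credentials.from_service_account_info(service_account_info)
    client = bigquery.Client(
        project=service_account_info.get("project_id"),
        credentials=credentials)

    query = """
        SELECT a, b
        FROM `project.dataset.table`
        WHERE a NOT IN ('error', 'warning')
    """

    destination_table = client.dataset("abc_123").table("temp_table")

    job_config = bigquery.QueryJobConfig()
    job_config.allow_large_results = True
    job_config.use_legacy_sql = False
    job_config.destination = destination_table
    # Overwrite the destination table if it exists; the query job defines the schema.
    job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
    # No dry_run here -- the job actually executes and writes the results.

    query_job = client.query(query, job_config=job_config)
    query_job.result()  # blocks until the job is done

    logging.info(f"Processed {query_job.total_bytes_processed} bytes.")
    return destination_table

With WRITE_TRUNCATE the query job itself creates (or overwrites) the destination table with the right schema, so the manual get/delete/create steps from my original script shouldn't be needed at all.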
Upvotes: 1