Reputation: 142
I'm using google-cloud-bigquery version 1.5.0, and the stack trace below shows a crash inside the SDK code. Is there any workaround?
[2018-09-12 21:18:01,265] {base_task_runner.py:98} INFO - Subtask: query_job.result() # Waits for the query to finish
...
[2018-09-12 21:18:01,315] {base_task_runner.py:98} INFO - Subtask: raise exceptions.from_http_response(response)
[2018-09-12 21:18:01,316] {base_task_runner.py:98} INFO - Subtask: google.api_core.exceptions.BadRequest: 400 GET https://www.googleapis.com/bigquery/v2/projects/fsql-production/queries/[my job id]?maxResults=0: Cannot return an invalid timestamp value of 1534808046936000000 microseconds relative to the Unix epoch. The range of valid timestamp values is [0001-01-1 00:00:00, 9999-12-31 23:59:59.999999]; error in writing field request_started_at
Upvotes: 0
Views: 203
Reputation: 33705
This indicates an error in your query, not in the client code. The error is:
Cannot return an invalid timestamp value of 1534808046936000000 microseconds relative to the Unix epoch. The range of valid timestamp values is [0001-01-1 00:00:00, 9999-12-31 23:59:59.999999]; error in writing field request_started_at
It sounds like you have a field/column named request_started_at that is scaled incorrectly; 1534808046936000000 should probably be 1534808046936000 (the stored value appears to be in nanoseconds rather than microseconds, i.e. off by a factor of 1,000). There is some material related to this in the guide on migrating to standard SQL. If all the values of this column are scaled incorrectly, you can do something like this to fix them:
CREATE OR REPLACE TABLE dataset.table AS
SELECT * REPLACE (
  TIMESTAMP_MICROS(DIV(UNIX_MICROS(request_started_at), 1000))
    AS request_started_at
)
FROM dataset.table
This replaces the values in the column after scaling them down by a factor of 1000.
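As a sanity check (a standalone sketch that doesn't touch BigQuery), you can confirm the scaling factor in plain Python: dividing the value from the error message by 1,000 and interpreting the result as microseconds since the Unix epoch yields a plausible timestamp in August 2018, whereas the uncorrected value would land tens of thousands of years beyond BigQuery's 9999-12-31 limit.

```python
from datetime import datetime, timezone

bad = 1534808046936000000   # value from the error, stored as "microseconds"
fixed = bad // 1000         # scaled down by 1,000: true microseconds

# Interpreted as microseconds since the Unix epoch, the corrected
# value decodes to a sensible timestamp in August 2018. The original
# value would correspond to roughly the year 50,000, far outside the
# valid TIMESTAMP range, which is why BigQuery raises the 400 error.
ts = datetime.fromtimestamp(fixed / 1_000_000, tz=timezone.utc)
print(ts)  # 2018-08-20 23:54:06.936000+00:00
```

If the check confirms the factor-of-1,000 error, the CREATE OR REPLACE TABLE statement above applies the same correction to the whole column.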
Upvotes: 1