Reputation: 2385
I have a Python ML process which connects to BigQuery using a local JSON file that the env variable GOOGLE_APPLICATION_CREDENTIALS points to (the file contains my keys supplied by Google, see authentication getting-started).
When running it locally, it works great.
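For context, the failure described below can be sketched roughly like this: Application Default Credentials first reads the GOOGLE_APPLICATION_CREDENTIALS env variable and errors out if the file it points to is missing. This is a simplified sketch of that first step only; the real `google.auth.default()` also falls back to gcloud config and the metadata server, and `adc_key_path` is a hypothetical helper name:

```python
import os

def adc_key_path():
    # Simplified sketch of the first step of Application Default Credentials:
    # read GOOGLE_APPLICATION_CREDENTIALS and fail if the file is absent.
    path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if path is not None and not os.path.isfile(path):
        raise FileNotFoundError(f"File: {path} was not found.")
    return path
```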
I'm now looking to deploy my model through Google's ML Engine, specifically using the shell command gcloud ml-engine jobs submit training.
However, after I ran my process and looked at the logs in console.cloud.google.com/logs/viewer, I saw that gcloud can't access BigQuery and I'm getting the following error:
google.auth.exceptions.DefaultCredentialsError: File:
/Users/yehoshaphatschellekens/Desktop/google_cloud_xgboost/....-.....json was not found.
Currently I don't think that gcloud ml-engine jobs submit training takes the JSON file with it (I thought gcloud had access to BigQuery automatically; I guess not).
One possible workaround is to save my personal .json into my Python dependencies in the other sub-package folder (see packaging-trainer) and import it.
Is this solution feasible / safe?
Is there any other workaround to this issue?
Upvotes: 3
Views: 205
Reputation: 2385
What I eventually did was upload the JSON to a Google Cloud Storage bucket and then download it into my project each time I launch the ML Engine training process:
os.system('gsutil cp gs://secured_bucket.json .')
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "......json"
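The two lines above can be wrapped in a small helper with error checking. A sketch, not the author's exact code: `fetch_credentials` and its arguments are hypothetical names, gsutil is assumed to be on PATH on the worker, and the `copier` argument is only injected so the function can be exercised without gsutil:

```python
import os
import subprocess

def fetch_credentials(gs_uri, local_path, copier=None):
    """Copy a service-account key locally and point ADC at it."""
    if copier is None:
        # subprocess.check_call raises if the copy fails,
        # unlike bare os.system, which silently ignores errors.
        copier = lambda src, dst: subprocess.check_call(["gsutil", "cp", src, dst])
    copier(gs_uri, local_path)
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.path.abspath(local_path)
    return os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
```

The main advantage over os.system is that a failed copy raises immediately instead of leaving GOOGLE_APPLICATION_CREDENTIALS pointing at a file that was never downloaded.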
Upvotes: 3
Reputation: 712
The path should be absolute and with backslashes on Windows:
GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\[FILE_NAME].json"
Set it this way in your Python code (use a raw string so the backslashes are not treated as escape sequences):
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r"C:\PATH.JSON"
Example with the Google Translate API here.
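On the escaping point: in an ordinary Python string literal, backslashes start escape sequences (in Python 3, "C:\Users" is even a syntax error, because \U expects eight hex digits), so a raw string or forward slashes is the safe way to write a Windows path. A small sketch; the path itself is a placeholder:

```python
import os

# Raw string: backslashes are kept literally (placeholder path)
raw = r"C:\Users\me\Downloads\key.json"
# Forward slashes also work with most Windows APIs and avoid escaping entirely
fwd = "C:/Users/me/Downloads/key.json"

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = raw
```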
Upvotes: 2