grasshopper

Reputation: 1

google.api_core.exceptions.Forbidden: 403 POST, Access Denied: User does not have bigquery.jobs.create permission in project xc526656e7cecfc24p-tp

I am running a BigQuery load job (configured with a `LoadJobConfig`) in a Kubeflow component in a Vertex AI pipeline, using the google.cloud.bigquery library. The objective is to load data from GCS into BigQuery, but I am faced with this error:

google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/xc526656e7cecfc24p-tp/jobs?prettyPrint=false: Access Denied: Project xc526656e7cecfc24p-tp: User does not have bigquery.jobs.create permission in project xc526656e7cecfc24p-tp.

I am not sure where this project number "xc526656e7cecfc24p-tp" comes from. The service account and my individual account both have the permissions needed to execute "bigquery.jobs.create", but we can't work out where this project number is coming from. Can anyone guide me on where I can check this project number? I have checked the IAM page; it doesn't list this project number.
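For reference, when `bigquery.Client()` is constructed without a `project` argument, the client infers one from its environment (the `GOOGLE_CLOUD_PROJECT` variable or the project attached to the active credentials), which may differ from the project you expect. A minimal sketch of inspecting and pinning the project; the project ID is a placeholder, and `AnonymousCredentials` stands in here for the real Application Default Credentials:

    from google.auth.credentials import AnonymousCredentials
    from google.cloud import bigquery

    # With no `project` argument the client guesses from the environment and
    # credentials; passing it explicitly removes the guesswork.
    client = bigquery.Client(project="my-known-project-id",
                             credentials=AnonymousCredentials())
    print(client.project)  # -> my-known-project-id

Printing `client.project` inside the component would show which project the load jobs are actually being created in.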

Below is the script we ran:

from kfp.dsl import component

@component(packages_to_install=['google-cloud-bigquery==3.17.2', 'kfp==2.4.0'],
           base_image='python:3.9')
def raw_data_load(path: str, table_id: str, fieldDelimiter: str = ';',
                  encoding_type: str = 'UTF-8', file_schema: list = None,
                  enforce_schema: bool = False):
    """Load raw data from a CSV file into a BigQuery table.

    Args:
        path (str): The Google Cloud Storage (GCS) file path of the CSV file.
        table_id (str): The BigQuery table ID in the format 'project_id.dataset_id.table_id'.
        fieldDelimiter (str): The field delimiter for the CSV file.
        encoding_type (str): The encoding type of the CSV file.
        file_schema (list): The schema of the CSV file represented as a list of BigQuery schema fields.
        enforce_schema (bool): Flag to enforce the provided schema.

    Returns:
        None
    """
    from google.cloud import bigquery

    client = bigquery.Client()

    if enforce_schema:
        job_config = bigquery.LoadJobConfig(
            skip_leading_rows=1,
            schema=file_schema,
            field_delimiter=fieldDelimiter,
            encoding=encoding_type,
            source_format=bigquery.SourceFormat.CSV,
        )
    else:
        job_config = bigquery.LoadJobConfig(
            skip_leading_rows=1,
            autodetect=True,
            field_delimiter=fieldDelimiter,
            encoding=encoding_type,
            source_format=bigquery.SourceFormat.CSV,
        )

    load_job = client.load_table_from_uri(path, table_id, job_config=job_config)

    # Waits for the job to complete
    load_job.result()
    # Prints the result
    print(f"Loaded {load_job.output_rows} rows into {table_id}")
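Note that `client.load_table_from_uri` also accepts an explicit `project` argument, which controls the project the job is created in independently of the client's default. A hedged sketch; the project ID, bucket path, and table name below are hypothetical, and `AnonymousCredentials` stands in for real credentials:

    from google.auth.credentials import AnonymousCredentials
    from google.cloud import bigquery

    client = bigquery.Client(project="my-billing-project",  # hypothetical ID
                             credentials=AnonymousCredentials())

    job_config = bigquery.LoadJobConfig(
        skip_leading_rows=1,
        autodetect=True,
        field_delimiter=";",
        encoding="UTF-8",
        source_format=bigquery.SourceFormat.CSV,
    )

    # The `project` argument overrides the client's default for this one job
    # (API call commented out since the names above are placeholders):
    # load_job = client.load_table_from_uri(
    #     "gs://my-bucket/data.csv",              # hypothetical GCS path
    #     "my-data-project.my_dataset.my_table",  # hypothetical table ID
    #     job_config=job_config,
    #     project="my-billing-project",
    # )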

Please let us know where we can check this project number.

Upvotes: 0

Views: 579

Answers (0)
