Shikha

Reputation: 361

How to skip rows of a CSV file in the BigQuery load API

I am trying to load CSV data from a Cloud Storage bucket into a BigQuery table using the BigQuery API. My code is:

def load_data_from_gcs(dataset_name, table_name, source):
    bigquery_client = bigquery.Client()
    dataset = bigquery_client.dataset(dataset_name)
    table = dataset.table(table_name)
    job_name = str(uuid.uuid4())

    job = bigquery_client.load_table_from_storage(
        job_name, table, source)
    job.sourceFormat = 'CSV'
    job.fieldDelimiter = ','
    job.skipLeadingRows = 2

    job.begin()
    job.result()  # Wait for job to complete

    print('Loaded {} rows into {}:{}.'.format(
        job.output_rows, dataset_name, table_name))

    wait_for_job(job)

It is giving me this error:

400 CSV table encountered too many errors, giving up. Rows: 1; errors: 1.

This error occurs because the first two rows of my CSV file contain header information that is not supposed to be loaded. I have set job.skipLeadingRows = 2, but it is not skipping the first 2 rows. Is there some other syntax for setting skipped rows?

Please help with this.

Upvotes: 3

Views: 3644

Answers (1)

Graham Polley

Reputation: 14791

You're spelling it wrong (using camelCase instead of underscores). It's skip_leading_rows, not skipLeadingRows. The same goes for field_delimiter and source_format.
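
Here's the corrected snippet as a minimal sketch, assuming the same older google-cloud-bigquery client the question uses (the one that exposes load_table_from_storage); only the attribute names change:

    import uuid
    from google.cloud import bigquery

    def load_data_from_gcs(dataset_name, table_name, source):
        bigquery_client = bigquery.Client()
        dataset = bigquery_client.dataset(dataset_name)
        table = dataset.table(table_name)
        job_name = str(uuid.uuid4())

        job = bigquery_client.load_table_from_storage(
            job_name, table, source)
        # snake_case attribute names, matching the Python client's properties
        job.source_format = 'CSV'
        job.field_delimiter = ','
        job.skip_leading_rows = 2  # skip the two header rows

        job.begin()
        job.result()  # Wait for job to complete

        print('Loaded {} rows into {}:{}.'.format(
            job.output_rows, dataset_name, table_name))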

Check out the Python sources here.

Upvotes: 6
