S Anu

Reputation: 13

Import data from MongoDB on GCE to BigQuery

My task is to import data from a MongoDB collection hosted on GCE into BigQuery. I tried the following. Since BigQuery does not accept the '$' symbol in field names, I ran the command below to remove the $oid field:

mongo test --quiet \
  --eval "db.trial.find({}, {_id: 0})
                  .forEach(function(doc) {
                     print(JSON.stringify(doc)); });" \
  > trial_noid.json

But while importing the resulting file, I get an error that says:

parse error: premature EOF (error code: invalid)

Is there a way to avoid these steps and transfer the data directly to BigQuery from MongoDB hosted on GCE?

Upvotes: 1

Views: 2033

Answers (2)

Qorbani

Reputation: 5905

In my opinion, the best practice is to build your own extractor. That can be done in the language of your choice, and you can extract to either CSV or JSON.
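
For example, here is a minimal extractor sketch in Python (not part of the original answer; it assumes the pymongo driver and reuses the sample database, status collection, and statuses replica set introduced later in this answer, while the output file name is made up). It streams the collection into newline-delimited JSON and converts the BSON types that BigQuery rejects:

import json
from datetime import datetime

from bson import ObjectId
from pymongo import MongoClient


def bq_safe(value):
    # Convert BSON-specific values into BigQuery-friendly JSON values.
    if isinstance(value, ObjectId):
        return str(value)
    if isinstance(value, datetime):
        return value.isoformat()
    raise TypeError("Cannot serialize type: %s" % type(value))


# Hypothetical connection string; adjust hosts and replica set to your cluster.
client = MongoClient("mongodb://db-01:27017,db-02:27017,db-03:27017/?replicaSet=statuses")
collection = client["sample"]["status"]

with open("sample.ndjson", "w") as out:
    for doc in collection.find():
        doc["_id"] = str(doc["_id"])  # plain string instead of an $oid sub-document
        out.write(json.dumps(doc, default=bq_safe) + "\n")  # one object per line

The resulting file can then be gzipped, copied to GCS, and loaded with bq load in the same way as the mongoexport output below.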

But if you are looking for a fast way, and your data is not huge and can fit on a single server, then I recommend using mongoexport to extract to JSON. Let's assume you have a simple document structure such as the one below:

{
    "_id" : "tdfMXH0En5of2rZXSQ2wpzVhZ",
    "statuses" : [ 
        {
            "status" : "dc9e5511-466c-4146-888a-574918cc2534",
            "score" : 53.24388894
        }
    ],
    "stored_at" : ISODate("2017-04-12T07:04:23.545Z")
}

Then you need to define your BigQuery Schema (mongodb_schema.json) such as:

$ cat > mongodb_schema.json <<EOF
[
    { "name":"_id", "type": "STRING" },
    { "name":"stored_at", "type": "record", "fields": [
        { "name":"date", "type": "STRING" }
    ]},
    { "name":"statuses", "type": "record", "mode": "repeated", "fields": [
        { "name":"status", "type": "STRING" },
        { "name":"score", "type": "FLOAT" }
    ]}
]
EOF

Now the fun part starts :-) extracting the data as JSON from your MongoDB. Let's assume you have a cluster with a replica set named statuses, your database is sample, and your collection is status.

mongoexport \
    --host statuses/db-01:27017,db-02:27017,db-03:27017 \
    -vv \
    --db "sample" \
    --collection "status" \
    --type "json" \
    --limit 100000 \
    --out ~/sample.json

As you can see above, I limit the output to 100k records, because I recommend you run a sample extract and load it into BigQuery before doing it for all your data. After running the above command you should have your sample data in sample.json, BUT there is a $date field which will cause an error when loading into BigQuery. To fix that, we can use sed to replace it with a plain field name:

# Fix Date field to make it compatible with BQ
sed -i 's/"\$date"/"date"/g' sample.json

Now you can compress the file, upload it to Google Cloud Storage (GCS), and then load it into BigQuery using the following commands:

# Compress for faster load
gzip sample.json

# Move to GCloud
gsutil mv ./sample.json.gz gs://your-bucket/sample/sample.json.gz

# Load to BQ
bq load \
    --source_format=NEWLINE_DELIMITED_JSON \
    --max_bad_records=999999 \
    --ignore_unknown_values=true \
    --encoding=UTF-8 \
    --replace \
    "YOUR_DATASET.mongodb_sample" \
    "gs://your-bucket/sample/*.json.gz" \
    "mongodb_schema.json"

If everything went okay, go back, remove --limit 100000 from the mongoexport command, and re-run the above commands to load everything instead of the 100k sample.

With this solution you can import your data into BigQuery with the same hierarchy, but if you want to flatten your data, the alternative solution below works better.

ALTERNATIVE SOLUTION:

If you want more flexibility and performance is not a concern, then you can use the mongo CLI tool as well. This way you can write your extract logic in JavaScript, execute it against your data, and then send the output to BigQuery. Here is what I did for the same process, but using JavaScript to output CSV so I could load it into BigQuery more easily:

# Export Logic in JavaScript
cat > export-csv.js <<EOF
var size = 100000;   // records per batch
var maxCount = 1;    // number of batches; increase to export more than the sample
for (var x = 0; x < maxCount; x = x + 1) {
    var recToSkip = x * size;
    db.status.find().skip(recToSkip).limit(size).forEach(function(record) {
        // One CSV row per entry in the statuses array, prefixed by _id and stored_at
        var row = record._id + "," + record.stored_at.toISOString();
        record.statuses.forEach(function (l) {
            print(row + "," + l.status + "," + l.score);
        });
    });
}
EOF

# Execute on Mongo CLI
_MONGO_HOSTS="db-01:27017,db-02:27017,db-03:27017/sample?replicaSet=statuses"
mongo --quiet \
    "${_MONGO_HOSTS}" \
    export-csv.js \
    | split -l 500000 --filter='gzip > $FILE.csv.gz' - sample_

# Load all Splitted Files to Google Cloud Storage
gsutil -m mv ./sample_* gs://your-bucket/sample/

# Load files to BigQuery
bq load \
    --source_format=CSV \
    --max_bad_records=999999 \
    --ignore_unknown_values=true \
    --encoding=UTF-8 \
    --replace \
    "YOUR_DATASET.mongodb_sample" \
    "gs://your-bucket/sample/sample_*.csv.gz" \
    "ID,StoredDate:DATETIME,Status,Score:FLOAT"

TIP: In the above script I used a small trick: piping the output to split so it is broken into multiple files with the sample_ prefix. During the split it also gzips the output, so it is easier to upload to GCS.

Upvotes: 2

DoiT International

Reputation: 2435

When using NEWLINE_DELIMITED_JSON to import data into BigQuery, one JSON object, including any nested/repeated fields, must appear on each line.

The issue with your input file appears to be that each JSON object is split across multiple lines; collapsing each object onto a single line will resolve this error.

Requiring this format allows BigQuery to split the file and process it in parallel, without worrying that a split boundary will put one part of a JSON object in one chunk and the rest in the next.
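
As a rough illustration of the collapse suggested above (a Python sketch that is not part of this answer; the script name to_ndjson.py is made up), you can re-serialize the export so that every object lands on exactly one line:

import json
import sys

# Read a stream of (possibly pretty-printed) JSON objects from stdin and
# print each one as a single compact line, i.e. NEWLINE_DELIMITED_JSON.
decoder = json.JSONDecoder()
text = sys.stdin.read()
pos = 0
while pos < len(text):
    while pos < len(text) and text[pos].isspace():
        pos += 1  # skip whitespace between objects
    if pos >= len(text):
        break
    obj, pos = decoder.raw_decode(text, pos)
    print(json.dumps(obj))  # json.dumps never emits raw newlines

Piping the output of the mongo --eval export from the question through this script (mongo ... | python to_ndjson.py > trial_noid.json) should give bq load a file it can parse with --source_format=NEWLINE_DELIMITED_JSON.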

Upvotes: 1
