Soheil

Reputation: 5354

How to save batch of data in Parse Cloud Code?

In my cloud code, I would like to update all of my records (around 50k) with new data. But I noticed that my job fails even though I follow the 1000-record limit. I get a "success/error was not called" error for this job. Any idea how I can resolve this?

Parse.Cloud.job("hello", function(request, response) {
Parse.Cloud.useMasterKey();  
var results = [];
var limit = 1000;

var saveUpdatedQueries = function(queries) {
    console.log("updating records " + queries.length);

    Parse.Object.saveAll(queries,{
        success:function(lists){
        console.log("lists ok "+lists.length);

        if (!results.length) {
            response.success("finished");
            return;
        }

        updatingRecords(lists.length);

        },error: function(reason){
            console.log("error");
        }
    });
}

var updatingRecords = function(skip) {
    var tempRecords = [];

    if (skip) {
        results = results.slice(skip);
    }

    console.log("skip: " + skip + " Results length: "+ results.length);

    for (var i = 0; i < results.length; i++) {
        var today = new Date();
        var newObject = results[i];
        newObject.set('newCulumn', today);
        tempRecords.push(newObject);

        if (i === results.length - 1 || tempRecords.length === limit) {
            break;
        };
    };

    saveUpdatedQueries(tempRecords);
}

var processCallback = function(res) {
    results = results.concat(res);
    if (res.length === limit) {
        process(res[res.length - 1].id);
        return;
    }

    updatingRecords(0);
}

var process = function(skip) {
    var query = new Parse.Query(Parse.Installation);

    if (skip) {
        query.greaterThan("objectId", skip);
    }

    query.limit(limit);
    query.ascending("objectId");
    query.find().then(function querySuccess(res) {
    processCallback(res);

    }, function queryFailed(reason) {
        if (reason.code == 155 || reason.code == 141) { // exceeded parse timout
            console.log("time out error");
            process(skip);
        } else {
            response.error("query unsuccessful, length of result " + results.length + ", error:" + reason.code + " " + reason.message);
        }
    });
}

process(false);

});

Upvotes: 32

Views: 1942

Answers (2)

Naween Banuka

Reputation: 106

Basically, in cloud architectures the request timeout is around 60 seconds, but you are trying to insert thousands of records in one transaction, which takes more than 60 seconds; that's why your request always fails. Roughly: 50k records in batches of 1000 means 50 sequential saveAll calls, so even a little over a second per batch already blows past that window.

There are better ways to insert a large number of records:

  1. Task Queues
  2. Cron or scheduled task

I think a task queue is the better fit for your problem. Watch this video to get a good overview of task queues:

Task queue & cron jobs

Upvotes: 1

TinkerTenorSoftwareGuy

Reputation: 797

Workaround: You could schedule a cron job that works in batches of an acceptably low number of records, limited by whatever your hosting service allows. For example, if you can only process 10 requests every minute, first request all the IDs that need to be updated, then split them into chunks that the server will accept and process within the time limit. It's just a workaround.
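A minimal sketch of that workaround, assuming a scheduled job that walks through Installations by objectId and a hypothetical JobProgress class that remembers how far the previous run got:

// Sketch only: one scheduled run handles one chunk, resuming where the
// previous run stopped; "JobProgress" is a hypothetical single-row class.
Parse.Cloud.job("stampInstallationsChunk", function(request, response) {
    Parse.Cloud.useMasterKey();

    var progressQuery = new Parse.Query("JobProgress");
    progressQuery.first().then(function(progress) {
        var query = new Parse.Query(Parse.Installation);
        query.ascending("objectId");
        query.limit(1000);
        if (progress && progress.get("lastObjectId")) {
            query.greaterThan("objectId", progress.get("lastObjectId"));
        }

        return query.find().then(function(installations) {
            if (!installations.length) {
                return "all records processed";
            }
            installations.forEach(function(installation) {
                installation.set("newColumn", new Date());
            });
            return Parse.Object.saveAll(installations).then(function() {
                // Remember the last processed id so the next run continues from here.
                progress = progress || new Parse.Object("JobProgress");
                progress.set("lastObjectId", installations[installations.length - 1].id);
                return progress.save();
            }).then(function() {
                return "processed " + installations.length + " records";
            });
        });
    }).then(function(message) {
        response.success(message);
    }, function(error) {
        response.error(error.message);
    });
});

Each run finishes well inside the request timeout, and repeated scheduled runs eventually cover all 50k records.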

Long-Term: A better solution would be to design your app to request as little data as possible from the server, rather than forcing the server to do all the heavy lifting. This also allows your business logic to be exposed through a convenient public API, rather than sitting as a hidden process on your server.
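For instance, rather than a job that sweeps the whole table, the stamping could sit behind a Cloud function that clients call for just the object they touch; the function name and column below are purely illustrative:

// Sketch only: a public Cloud function that stamps a single installation on
// demand, so the server never has to rewrite the whole table at once.
Parse.Cloud.define("touchInstallation", function(request, response) {
    Parse.Cloud.useMasterKey();

    var query = new Parse.Query(Parse.Installation);
    query.get(request.params.installationId).then(function(installation) {
        installation.set("newColumn", new Date());
        return installation.save();
    }).then(function() {
        response.success("updated");
    }, function(error) {
        response.error(error.message);
    });
});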

Upvotes: 0
