igorpavlov

Reputation: 3626

Dealing with NodeJS asynchronous behavior

Using NodeJS with MongoDB+Mongoose.

First of all, I know the advantages of async, non-blocking code, so I do work with callbacks. But eventually I ran into the following problem.

Let's say I have a function which can be called by a user at any time, and it is possible that a super "lightning-speed" user calls it twice almost at the same time.

function do_something_with_user(user_id){
    User.findOne({_id:user_id}).exec(function(err,user){ // FIND QUERY
        // Do a lot of different stuff with user
        // I just cannot update user with a single query
        // I might need here to execute any other MongoDB queries
        // So this code is a set of queries-callbacks
        user.save() // SAVE QUERY
    })
}

Of course it executes like this: FIND QUERY, FIND QUERY, SAVE QUERY, SAVE QUERY
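This interleaving can be reproduced without MongoDB at all. Here is a minimal simulation in plain Node (an in-memory store with hypothetical names stands in for the database), showing the resulting lost update:

```javascript
// In-memory stand-in for MongoDB (hypothetical names, for illustration only).
const db = { counter: 0 };

function findUser() {
  // The read completes on a later tick, like a real query callback.
  return new Promise(resolve => setImmediate(() => resolve({ counter: db.counter })));
}

function saveUser(user) {
  return new Promise(resolve => setImmediate(() => {
    db.counter = user.counter;
    resolve();
  }));
}

async function do_something_with_user() {
  const user = await findUser();   // FIND QUERY
  user.counter += 1;               // "a lot of different stuff"
  await saveUser(user);            // SAVE QUERY
}

// Two "lightning-speed" calls: both reads see counter = 0,
// both writes store 1 -- one increment is lost.
Promise.all([do_something_with_user(), do_something_with_user()])
  .then(() => console.log(db.counter)); // logs 1, not 2
```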

This totally breaks the logic of the app (it should be FIND QUERY, SAVE QUERY, FIND QUERY, SAVE QUERY). So I decided to prevent the asynchronous behavior by "locking" the whole function for a particular user (the code inside the function is still async).

var lock_function_for_user = {}

function do_something_with_user(user_id){
    if(!lock_function_for_user[user_id]){
        lock_function_for_user[user_id] = true
        User.findOne({_id:user_id}).exec(function(err,user){
            // Same code as above
            user.save(function(){
                lock_function_for_user[user_id] = false
            })
        })
    } else {
        setTimeout(function(){
            do_something_with_user(user_id)
    },100) // assuming the average function execution time is about 100ms
    }
}

So, my question is: is this good practice, a good hack, or a bad hack? If it is a bad hack, please suggest another solution. In particular, I doubt this solution will work when we scale out and launch more than one NodeJS process.

Upvotes: 2

Views: 532

Answers (2)

Gabriel Llamas

Reputation: 18427

This is very bad practice: you should never use timers to control the flow of your code.

The problem here is called atomicity. If you need to do find-save, find-save, then you need to pack these operations together somehow (a transaction). How you do that depends on the software you use: in Redis you have the MULTI and EXEC commands; in MongoDB you have findAndModify(). Another solution is to use a unique index: when you try to save the same field twice, you get an error. Use the attributes "index: true" and "unique: true" in the SchemaType in Mongoose:

var schema = mongoose.Schema({
    myField: { type: String, index: true, unique: true, required: true },
});
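To illustrate why a single atomic operation fixes the interleaving, here is a sketch against an in-memory stand-in (hypothetical names, not the Mongoose API): the read-modify-write is applied as one indivisible step on the "server" side, which is what findAndModify() or an update with $inc gives you, so two concurrent calls have no gap to interleave into:

```javascript
const db = { counter: 0 };

// Stand-in for a server-side atomic update (what findAndModify or an
// update with { $inc: { counter: 1 } } would do): the read-modify-write
// happens as one indivisible step, so there is no gap to interleave into.
function atomicIncrement() {
  return new Promise(resolve => setImmediate(() => {
    db.counter += 1;
    resolve(db.counter);
  }));
}

Promise.all([atomicIncrement(), atomicIncrement()])
  .then(() => console.log(db.counter)); // logs 2: nothing lost
```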

This is what you need: MongoDB - Isolate Sequence of Operations - Perform Two Phase Commits. But take into account that MongoDB might not be the best choice if you need to do a lot of transactions.

Upvotes: 2

thejh

Reputation: 45568

You don't want to waste RAM, so replace

lock_function_for_user[user_id] = false

with

delete lock_function_for_user[user_id]

Apart from that: you could just be optimistic and retry if a conflict happens. Leave out the locking and make sure the DB notices when something goes wrong (and retry in that case). Which way is better depends, of course, on how often such conflicts actually happen.
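A sketch of that optimistic approach, again with a hypothetical in-memory store: keep a version field, make the save conditional on the version still matching (Mongoose's built-in __v versioning works along these lines), and retry on conflict:

```javascript
// Hypothetical in-memory store with a version field, mimicking
// optimistic concurrency control.
const db = { counter: 0, version: 0 };

function find() {
  return new Promise(resolve => setImmediate(() => resolve({ ...db })));
}

// The save only succeeds if the version we read is still the current one;
// otherwise the DB notices that something went wrong and reports a conflict.
function conditionalSave(doc) {
  return new Promise((resolve, reject) => setImmediate(() => {
    if (db.version !== doc.version) return reject(new Error('conflict'));
    db.counter = doc.counter;
    db.version += 1;
    resolve();
  }));
}

async function incrementWithRetry() {
  for (;;) {
    const doc = await find();
    doc.counter += 1;
    try {
      await conditionalSave(doc);
      return;
    } catch (err) {
      // Conflict: someone else saved first. Re-read and retry.
    }
  }
}

// Both calls read version 0; the first save wins, the second one
// conflicts, re-reads and retries -- no update is lost.
Promise.all([incrementWithRetry(), incrementWithRetry()])
  .then(() => console.log(db.counter)); // logs 2
```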

Upvotes: 0
