Stef Kors

Reputation: 310

Node.js Page request slow with complex mongodb call, how to make faster?

I'm loading an "archive" page, which consists of searching through a mongodb collection and showing a number of documents on the page. However, when doing this, the server call takes a while. Any suggestions for making it faster? I think the slowness comes from this line:

Publication.find().limit(perPage).skip(perPage * page).sort('-date').exec(function (err, _publications) {

full page request:

app.get('/archive', function (req, res) {

  function timeConverter(UNIX_timestamp){
    var a = new Date(UNIX_timestamp);
    var months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
    var year = a.getFullYear();
    var month = months[a.getMonth()];
    var date = a.getDate();
    var time = date + ' ' + month + ' ' + year;
    return time;
  }

  var perPage = 6
  pageParam = req.query['page']
  if (pageParam == null) {
    pageParam = 0
  }
  var page = Math.max(0, pageParam)

  // find all publications
  Publication.find().limit(perPage).skip(perPage * page).sort('-date').exec(function (err, _publications) {
    if (err) return console.error(err)

    for (id in _publications) { // convert date to text
      _publications[id].date = timeConverter( Number(_publications[id].date) )
    }

    Publication.find().limit(perPage).skip(perPage * (page + 1) ).count({},function(err, count) { // check if it's last page
      if (err) return console.error(err)

      if (count == 0) {
        nextPage = false
      } else {
        nextPage = page + 1
      }

      res.render(__dirname + '/../source/views/archive', {
        publications: _publications,
        nextPage: nextPage,
        prevPage: page - 1
      })

    })

    console.log('serving archive')
  })

})

Upvotes: 0

Views: 28

Answers (1)

Muhammad Usman

Reputation: 10148

Doing .limit(perPage).skip(perPage * page) will affect your response time. This is not considered a best approach, as mongo will first scan all of the previous documents in the specified collection and then skip them.

A better solution is to fetch the documents whose _id is greater than the last one sent in the previous response. Something like

Publication.find({'_id': {'$gt': req.params.last_id}}, {}, { limit: perPage })

Here last_id is the _id of the last document from the previous page, and this query will return all (or the specified number of) documents after that id.

Moreover, mongodb automatically indexes its generated _id field, so it is always faster to search with it.
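To illustrate the idea, here is a minimal sketch of this keyset ("range-based") pagination over a plain in-memory array standing in for the Publication collection. The numeric _id values and the nextPageAfter helper are hypothetical; with Mongoose the filtering step would instead be the `{'_id': {'$gt': lastId}}` query shown above, served by the _id index rather than a linear filter.

```javascript
// Stand-in for the Publication collection, already sorted by _id
// (MongoDB's ObjectIds are roughly insertion-ordered).
const publications = [
  { _id: 1, title: 'a' },
  { _id: 2, title: 'b' },
  { _id: 3, title: 'c' },
  { _id: 4, title: 'd' },
  { _id: 5, title: 'e' },
];

const perPage = 2;

// Rough equivalent of:
//   Publication.find({ _id: { $gt: lastId } }).limit(perPage)
// The client sends back the last _id it saw; the server starts
// directly after it instead of skipping perPage * page documents.
function nextPageAfter(lastId) {
  return publications
    .filter((doc) => doc._id > lastId)
    .slice(0, perPage);
}

const firstPage = nextPageAfter(0);
const lastId = firstPage[firstPage.length - 1]._id;
const secondPage = nextPageAfter(lastId);
console.log(secondPage.map((doc) => doc._id)); // [3, 4]
```

Note the trade-off: this approach gives "next page" links keyed by the last seen _id, rather than jumping to an arbitrary page number as skip allows.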

The main cause of slowness in your approach is the use of skip:

The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound

Read more here
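As a side note, the second query in your route (the extra find().count() used to check whether this is the last page) can also be avoided with a common trick: request one document more than the page size, and if the extra one comes back, a next page exists. A minimal sketch, using a plain array in place of the query results (the getPage helper is hypothetical):

```javascript
const perPage = 2;
const docs = [10, 20, 30, 40, 50]; // stand-in for the sorted query results

// Rough equivalent of:
//   .skip(perPage * page).limit(perPage + 1)
// Fetch one extra document; its presence tells us whether a
// next page exists, with no separate count query.
function getPage(all, page) {
  const slice = all.slice(perPage * page, perPage * page + perPage + 1);
  const hasNext = slice.length > perPage;
  return { items: slice.slice(0, perPage), hasNext };
}

console.log(getPage(docs, 0)); // { items: [10, 20], hasNext: true }
console.log(getPage(docs, 2)); // { items: [50], hasNext: false }
```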

Thanks

Upvotes: 1
