Pavan Kumar

Reputation: 1891

Waypoint matching query

We have a collection as follows. Each document represents a trip by a driver; the loc property contains the waypoints and the time property contains the times corresponding to those waypoints. For example, in Trip A the driver would be at GeoLocation tripA.loc.coordinates[0] at time tripA.time[0].

{
    tripId : "Trip A",
    time : [
        "2015-03-08T04:47:43.589Z",
        "2015-03-08T04:48:43.589Z",
        "2015-03-08T04:49:43.589Z",
        "2015-03-08T04:50:43.589Z",
    ],
    loc: {
        type: "MultiPoint",
        coordinates: [
            [ -73.9580, 40.8003 ],
            [ -73.9498, 40.7968 ],
            [ -73.9737, 40.7648 ],
            [ -73.9814, 40.7681 ]
        ]
    }
}
{
    tripId : "Trip B",
    time : [
        "2015-03-08T04:47:43.589Z",
        "2015-03-08T04:48:43.589Z",
        "2015-03-08T04:49:43.589Z",
        "2015-03-08T04:50:43.589Z",
    ],
    loc: {
        type: "MultiPoint",
        coordinates: [
            [ -72.9580, 41.8003 ],
            [ -72.9498, 41.7968 ],
            [ -72.9737, 41.7648 ],
            [ -72.9814, 41.7681 ]
        ]
    }
}

We would like to query for trips which start near (within 1 km of) location [long1,lat1] around time t (±10 minutes) and end at [long2,lat2].

Is there a simple and efficient way to formulate the above query for MongoDB or Elasticsearch?

If so, could you please give the query to do so, in either MongoDB or Elasticsearch? (MongoDB preferred.)

Upvotes: 1

Views: 273

Answers (1)

Neil Lunn

Reputation: 151112

This did start as a comment but was clearly getting way too long. So it's a long explanation of the limitations and the approach.

The bottom line of what you are asking to achieve here is effectively a "union query", which is generally defined as two separate queries where the end result is the "set intersection" of their results. More briefly: the selected "trips" from your "origin" query must match results found in your "destination" query.

In general database terms we refer to such a "union" operation as a "join", or at least a condition where the selection of one set of criteria "and" the selection of another must both be satisfied under a common grouping identifier.

The base point in MongoDB terms, and I believe the same applies to Elasticsearch indexes, is that neither datastore supports the notion of a "join" in any way within a single direct query.

There is another MongoDB limitation here given your proposed or existing modelling: even with items specified in "array" terms, there is no way to implement an "and" condition within a geospatial search on the coordinates. Moreover, because you chose to model the locations as a GeoJSON "MultiPoint", the query cannot "choose" which element of that object to match "nearest" to; "all points" are considered when determining the nearest match, as the sketch below illustrates.
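
As a quick illustration of that point, here is a minimal sketch, assuming the original documents live in a collection called "trips" with a 2dsphere index on "loc":

// With the "MultiPoint" model, a geospatial query can only ask whether ANY
// waypoint of a trip is near the given point; there is no way to restrict
// the match to the first (origin) coordinate alone.
db.trips.createIndex({ "loc": "2dsphere" })
db.trips.find({
    "loc": {
        "$nearSphere": {
            "$geometry": { "type": "Point", "coordinates": [ -73.9580, 40.8003 ] },
            "$maxDistance": 1000
        }
    }
})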

Your explanation of the intent is quite clear. The "origin" is represented as the "first" element of what are essentially "two parallel arrays" in your document structure, with a "location" and a "time" recorded for each progressive "waypoint" in the "trip", and the "destination" is naturally the last element of each array, the data points being "paired" by position.

I see the logic in thinking that this is a good way to store things, but it does not follow the allowed query patterns of either of the storage solutions you mention here.

As I mentioned already, this is indeed a "union" in intent, so while I see the thinking that led to the design, it would be better to store things like this:

{
    "tripId" : "Trip A",
    "time" : ISODate("2015-03-08T04:47:43.589Z"),
    "loc": {
        "type": "Point",
        "coordinates": [ -73.9580, 40.8003 ]
    },
    "seq": 0
},
{
    "tripId" : "Trip A",
    "time" : ISODate("2015-03-08T04:48:43.589Z"),
    "loc": {
        "type": "Point",
        "coordinates": [ -73.9498, 40.7968 ]
    },
    "seq": 1
},
{
    "tripId" : "Trip A",
    "time" : ISODate("2015-03-08T04:49:43.589Z"),
    "loc": {
        "type": "Point",
        "coordinates": [ -73.9737, 40.7648 ]
    },
    "seq": 2
},
{
    "tripId" : "Trip A",
    "time" : ISODate("2015-03-08T04:50:43.589Z"),
    "loc": {
        "type": "Point",
        "coordinates": [ -73.9814, 40.7681 ]
    },
    "seq": 3,
    "isEnd": true
}

In the example, I'm just inserting those documents into a collection called "geojunk", and then creating a 2dsphere index on the "loc" field:

db.geojunk.ensureIndex({ "loc": "2dsphere" })
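
For reference, the load step mentioned above might look like this in the mongo shell. This is only a sketch; the "Trip A" documents alone are shown, and "Trip B" would be split up in the same way:

db.geojunk.insert([
    { "tripId": "Trip A", "time": ISODate("2015-03-08T04:47:43.589Z"),
      "loc": { "type": "Point", "coordinates": [ -73.9580, 40.8003 ] }, "seq": 0 },
    { "tripId": "Trip A", "time": ISODate("2015-03-08T04:48:43.589Z"),
      "loc": { "type": "Point", "coordinates": [ -73.9498, 40.7968 ] }, "seq": 1 },
    { "tripId": "Trip A", "time": ISODate("2015-03-08T04:49:43.589Z"),
      "loc": { "type": "Point", "coordinates": [ -73.9737, 40.7648 ] }, "seq": 2 },
    { "tripId": "Trip A", "time": ISODate("2015-03-08T04:50:43.589Z"),
      "loc": { "type": "Point", "coordinates": [ -73.9814, 40.7681 ] }, "seq": 3,
      "isEnd": true }
])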

The processing of this is then done with "two" .aggregate() queries. The reason for .aggregate() is that you want to match the "first" document "per trip" in each case, which represents the nearest waypoint for each trip found by the queries. Then you basically want to "merge" these results into some kind of "hash" structure keyed by "tripId".

The end logic says that if both an "origin" and a "destination" matched your query conditions for a given "trip", then that is a valid result for your overall query.
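
To make that concrete before the full listing, a minimal sketch of the merge in plain JavaScript could look like the following, where originResults and destResults are hypothetical arrays holding the $group output of each query:

// Merge the two result sets into a plain object keyed by trip id,
// then keep only the trips that appear in both sets.
var byTrip = {};

originResults.forEach(function(doc) {
  byTrip[doc._id] = { "origin": doc };
});

destResults.forEach(function(doc) {
  if (byTrip.hasOwnProperty(doc._id)) byTrip[doc._id].destination = doc;
});

var matchedTrips = Object.keys(byTrip).filter(function(tripId) {
  return byTrip[tripId].destination !== undefined;
});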

The code I give here is an arbitrary nodejs implementation, mostly because it's a good base for issuing the queries in "parallel" for best performance, and also because I'm choosing to use nedb as an example of the "hash" with a little more "Mongo-like" syntax:

var async = require('async'),
    MongoClient = require("mongodb").MongoClient,
    DataStore = require('nedb');


// Common stream upsert handler
function streamProcess(stream,store,callback) {

  stream.on("data",function(data) {
    // Clean "_id" to keep nedb happy
    data.trip = data._id;
    delete data._id;


    // Upsert to store
    store.update(
      { "trip": data.trip },
      {
        "$push": {
          "time": data.time,
          "loc": data.loc
        }
      },
      { "upsert": true },
      function(err,num) {
        if (err) callback(err);
      }
    );

  });

  stream.on("err",callback)

  stream.on("end",callback);

}

MongoClient.connect('mongodb://localhost/test',function(err,db) {
  if (err) throw err;

  db.collection('geojunk',function(err,collection) {
    if (err) throw err;

    var store = new DataStore();

    // Parallel execution
    async.parallel(
      [
        // Match origin trips
        function(callback) {
          var stream = collection.aggregate(
            [
              { "$geoNear": {
                "near": {
                  "type": "Point",
                  "coordinates": [ -73.9580, 40.8003 ],
                },
                "query": {
                  "time": {
                    "$gte": new Date("2015-03-08T04:40:00.000Z"),
                    "$lte": new Date("2015-03-08T04:50:00.000Z")
                  },
                  "seq": 0
                },
                "maxDistance": 1000,
                "distanceField": "distance",
                "spherical": true
              }},
              { "$group": {
                "_id": "$tripId",
                "time": { "$first": "$time" },
                "loc": { "$first": "$loc" }
              }}
            ],
            { "cursor": { "batchSize": 1 } }
          );
          streamProcess(stream,store,callback);
        },

        // Match destination trips
        function(callback) {
          var stream = collection.aggregate(
            [
              { "$geoNear": {
                "near": {
                  "type": "Point",
                  "coordinates": [ -73.9814, 40.7681 ]
                },
                "query": { "isEnd": true },
                "maxDistance": 1000,
                "distanceField": "distance",
                "spherical": true
              }},
              { "$group": {
                "_id": "$tripId",
                "time": { "$first": "$time" },
                "loc": { "$first": "$loc" }
              }}
            ],
            { "cursor": { "batchSize": 25 } }
          );
          streamProcess(stream,store,callback);
        }

      ],
      function(err) {
        if (err) throw err;

        // Just documents that matched origin and destination
        store.find({ "loc": { "$size": 2 }},{ "_id": 0 },function(err,result) {
          if (err) throw err;
          console.log( JSON.stringify( result, undefined, 2 ) );
          db.close();
        });
      }
    );

  });

});

On the sample data as I listed it, this will return:

[
  {
    "trip": "Trip A",
    "time": [
      "2015-03-08T04:47:43.589Z",
      "2015-03-08T04:50:43.589Z"
    ],
    "loc": [
      {
        "type": "Point",
        "coordinates": [
          -73.958,
          40.8003
        ]
      },
      {
        "type": "Point",
        "coordinates": [
          -73.9814,
          40.7681
        ]
      }
    ]
  }
]

So it found the origin and destination that were nearest to the queried locations, with the "origin" also falling within the required time window, and the destination being a waypoint defined as an end point, i.e. "isEnd".

So the $geoNear operation does the matching, with the returned results being the documents nearest to the given point that also meet the other conditions. The $group stage is required because other documents in the same trip could "possibly" match the conditions, so it's just a way of making sure. The $first operator ensures that the already "sorted" results contain only one result per "trip". If you are really "sure" that this will not happen under your conditions, then you could just use a standard $nearSphere query outside of aggregation instead, as sketched below. So I'm playing it safe here.
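
For reference only, that non-aggregation alternative for the "origin" side might look something like this, assuming at most one waypoint per trip can satisfy the conditions:

db.geojunk.find({
    "seq": 0,
    "time": {
        "$gte": ISODate("2015-03-08T04:40:00Z"),
        "$lte": ISODate("2015-03-08T04:50:00Z")
    },
    "loc": {
        "$nearSphere": {
            "$geometry": { "type": "Point", "coordinates": [ -73.9580, 40.8003 ] },
            "$maxDistance": 1000
        }
    }
})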

One thing to note here is that even though "nedb" does support persisting its output to disk, the data in this example is still accumulated in memory. If you are expecting large results, then rather than this kind of "hash table" implementation you would need to output, in a similar fashion to what is shown, to another MongoDB collection and retrieve the matching results from there.

That doesn't change the overall logic though, and it is yet another reason to use "nedb" for the demonstration, since you would "upsert" to documents in the results collection in the same way.
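
As a rough sketch of that variation, the upsert inside streamProcess could target a real MongoDB collection instead of the nedb store. The "tripmatches" collection name here is hypothetical, and the surrounding streamProcess logic stays the same:

// Sketch only: replace the nedb store.update() call with an upsert into a
// results collection (hypothetical name "tripmatches").
var matches = db.collection('tripmatches');

matches.update(
  { "trip": data.trip },
  {
    "$push": {
      "time": data.time,
      "loc": data.loc
    }
  },
  { "upsert": true },
  function(err,num) {
    if (err) callback(err);
  }
);

// The final selection then becomes a query on that collection:
// db.collection('tripmatches').find({ "loc": { "$size": 2 } })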

Upvotes: 1
