user3884301

runtime optimization of a matching algorithm

I wrote the following matching algorithm, but of course I am getting long runtimes...

Does anybody have an idea how to make this matching faster (by changing the code or by changing the algorithm)?

for (i = 0; i < AnzEntity; i++) {
  for (j = 0; j < 8; j++) {
    if (Entity[i].GID[j] > 0) {
      for (k = 0; k < AnzGrid; k++) {
        if (Entity[i].Module == Grid[k].Module && Entity[i].GID[j] == Grid[k].ID) {
          Entity[i].GIDCoord[j][0] = Grid[k].XYZ[0];
          Entity[i].GIDCoord[j][1] = Grid[k].XYZ[1];
          Entity[i].GIDCoord[j][2] = Grid[k].XYZ[2];
          break;  /* match found; 'continue' here would keep scanning the rest of the grid */
        }
      }
    }
  }
}

Upvotes: 1

Views: 66

Answers (2)

user3793679

A very general question... for which one can give only a very general answer.

All faster search algorithms come down to divide and conquer. There is a whole family of searches which start by sorting the data to be searched, so that you can progressively halve (or better) the number of things you are searching through (e.g. binary search over sorted lists, trees of all kinds, ...). There is a whole family of searches which use some property of each value to cut the search down to some (small) subset of the data (hashing). There are searches which cache recent results, which can work in some cases (e.g. bring-to-front lists). Which of these is suitable depends on the data.

The big thing to look at, however, is whether the data being searched changes, and if so how often. If the data does not change, then you can hit it with a big hammer once and crunch out a simple structure to search. If the data changes all the time, then you need a more complicated structure so that changes are not prohibitively expensive while search speed is maintained. Depending on the circumstances, the trade-off will vary.
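
To make the "progressively halve" idea concrete, here is a minimal binary-search sketch in C over a sorted int array; the function name and signature are illustrative only, not anything from the question:

/* Binary search over a sorted int array: returns the index of 'key',
   or -1 if it is not present. Each probe halves the remaining range. */
static int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* written this way to avoid overflow */
        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}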

Upvotes: 1

user1196549

You are exhaustively comparing every Entity[i] (with a positive GID[j]) against every Grid[k]. This implies up to 8 · AnzEntity · AnzGrid comparisons.

Instead, you can sort the Entity and Grid elements in increasing lexicographical order (by ID value, with Module value breaking ties). You should duplicate every Entity once per positive Entity.GID[j], so an entity can appear up to eight times in the sorted list.

Exploiting the sorted order, the number of comparisons drops to 8 · AnzEntity + AnzGrid.

Taking the sorts into account, O(N·M) turns into O(N log N + M log M).
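
A minimal sketch of this sort-and-merge approach in C, assuming struct declarations along these lines (the question does not show them, so the field types are guesses):

#include <stdlib.h>

/* Assumed declarations; the real structs are not shown in the question. */
typedef struct { int Module; int ID; double XYZ[3]; } GridT;
typedef struct { int Module; int GID[8]; double GIDCoord[8][3]; } EntityT;

/* One reference per positive GID[j]: this is the "duplication" step. */
typedef struct { int Module, GID, i, j; } Ref;

static int cmp_ref(const void *a, const void *b) {
    const Ref *x = a, *y = b;
    if (x->GID != y->GID) return x->GID < y->GID ? -1 : 1;
    return (x->Module > y->Module) - (x->Module < y->Module);
}

static int cmp_grid(const void *a, const void *b) {
    const GridT *x = a, *y = b;
    if (x->ID != y->ID) return x->ID < y->ID ? -1 : 1;
    return (x->Module > y->Module) - (x->Module < y->Module);
}

void match_sorted(EntityT *Entity, int AnzEntity, GridT *Grid, int AnzGrid) {
    Ref *ref = malloc((size_t)AnzEntity * 8 * sizeof *ref);
    size_t nref = 0, r = 0, k = 0;
    for (int i = 0; i < AnzEntity; i++)
        for (int j = 0; j < 8; j++)
            if (Entity[i].GID[j] > 0) {
                ref[nref].Module = Entity[i].Module;
                ref[nref].GID = Entity[i].GID[j];
                ref[nref].i = i;
                ref[nref].j = j;
                nref++;
            }
    qsort(ref, nref, sizeof *ref, cmp_ref);
    qsort(Grid, (size_t)AnzGrid, sizeof *Grid, cmp_grid);  /* reorders Grid in place */
    /* Single merge pass: at most 8*AnzEntity + AnzGrid steps. */
    while (r < nref && k < (size_t)AnzGrid) {
        if (ref[r].GID != Grid[k].ID
                ? ref[r].GID < Grid[k].ID
                : ref[r].Module < Grid[k].Module) {
            r++;                      /* no grid point for this reference */
        } else if (ref[r].GID != Grid[k].ID || ref[r].Module != Grid[k].Module) {
            k++;                      /* grid point not referenced */
        } else {
            EntityT *e = &Entity[ref[r].i];
            e->GIDCoord[ref[r].j][0] = Grid[k].XYZ[0];
            e->GIDCoord[ref[r].j][1] = Grid[k].XYZ[1];
            e->GIDCoord[ref[r].j][2] = Grid[k].XYZ[2];
            r++;                      /* keep k: other refs may hit the same point */
        }
    }
    free(ref);
}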

ALTERNATIVE:

Another option is to enter the items of one of the two arrays (Entity or Grid) in a hash table, using the ID/Module pair as the key, and use the hash table for fast lookups. This should result in behavior close to linear, O(N + M).
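
A minimal sketch of the hash-table variant, again with assumed struct declarations; the open-addressing layout and the hash constants are arbitrary choices for illustration, not something prescribed by the answer:

#include <stdlib.h>

/* Assumed declarations; the real structs are not shown in the question. */
typedef struct { int Module; int ID; double XYZ[3]; } GridT;
typedef struct { int Module; int GID[8]; double GIDCoord[8][3]; } EntityT;

typedef struct { int Module, ID, used; const double *xyz; } Slot;

/* Mix both key halves; the multipliers are arbitrary odd constants. */
static size_t hash_pair(int module, int id, size_t mask) {
    size_t h = (size_t)module * 0x9E3779B9u ^ (size_t)id * 0x85EBCA6Bu;
    return h & mask;
}

void match_hashed(EntityT *Entity, int AnzEntity, const GridT *Grid, int AnzGrid) {
    /* Power-of-two capacity >= 2*AnzGrid keeps probe chains short. */
    size_t cap = 1;
    while (cap < (size_t)AnzGrid * 2) cap <<= 1;
    Slot *tab = calloc(cap, sizeof *tab);

    for (int k = 0; k < AnzGrid; k++) {           /* build: O(AnzGrid) */
        size_t h = hash_pair(Grid[k].Module, Grid[k].ID, cap - 1);
        while (tab[h].used) h = (h + 1) & (cap - 1);   /* linear probing */
        tab[h].Module = Grid[k].Module;
        tab[h].ID = Grid[k].ID;
        tab[h].xyz = Grid[k].XYZ;
        tab[h].used = 1;
    }

    for (int i = 0; i < AnzEntity; i++)           /* probe: O(8*AnzEntity) */
        for (int j = 0; j < 8; j++) {
            if (Entity[i].GID[j] <= 0) continue;
            size_t h = hash_pair(Entity[i].Module, Entity[i].GID[j], cap - 1);
            while (tab[h].used) {                 /* empty slot means "absent" */
                if (tab[h].Module == Entity[i].Module
                        && tab[h].ID == Entity[i].GID[j]) {
                    Entity[i].GIDCoord[j][0] = tab[h].xyz[0];
                    Entity[i].GIDCoord[j][1] = tab[h].xyz[1];
                    Entity[i].GIDCoord[j][2] = tab[h].xyz[2];
                    break;
                }
                h = (h + 1) & (cap - 1);
            }
        }
    free(tab);
}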

Upvotes: 0
