Reputation: 31
I am a university student stuck on an assignment for my Advanced Algorithms class.
The task in simple terms:
I'm given an array of 2D points. For each point, I need to display the number of other points that have greater X and Y coordinates. The Y coordinates are all distinct.
The hint given for this assignment is "merge sort". After a bit of Googling I saw a few things about inversion counting, but I couldn't apply it to 2D points.
Instead, I built a different algorithm:
Merge sort by the y coordinate. Each time an element is added during the merge, backtrack through the already-added elements and increase the count of each one that has a lower x coordinate.
It works fine and gives me the right output, but this backtracking method is quite inefficient and takes too long as n gets large. It's not much better than doing it with 2 nested loops. Unfortunately, this was the only thing I could think of that was built around a merge sort.
I'm looking for ideas that would speed up my code. Thanks in advance.
Here is my algorithm:
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int x;
    int y;
    int count;
    int isAtRightSide; // Only used during mergeAndCount
} Point;

void mergeAndCount(Point *arr[], int l, int m, int r, int minXDiff) {
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;
    Point *L[n1];
    Point *R[n2];
    for (i = 0; i < n1; i++) {
        L[i] = arr[l + i];
        L[i]->isAtRightSide = 0;
    }
    for (j = 0; j < n2; j++) {
        R[j] = arr[m + 1 + j];
        R[j]->isAtRightSide = 1;
    }
    // ==========
    i = 0;
    j = 0;
    k = l;
    while (i < n1 || j < n2) {
        if (i < n1 && j < n2) {
            if (L[i]->y <= R[j]->y) {
                arr[k] = L[i++];
            } else {
                arr[k] = R[j++];
            }
        } else if (i < n1) {
            arr[k] = L[i++];
        } else {
            arr[k] = R[j++];
        }
        for (int o = k - 1; o >= l; o--) {
            if (
                arr[o]->x + minXDiff <= arr[k]->x &&
                arr[o]->isAtRightSide != arr[k]->isAtRightSide // Same side already has it counted
            ) {
                arr[o]->count++;
            }
        }
        k++;
    }
}

void mergeSortAndCount(Point *arr[], int l, int r, int minXDiff) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSortAndCount(arr, l, m, minXDiff);
        mergeSortAndCount(arr, m + 1, r, minXDiff);
        mergeAndCount(arr, l, m, r, minXDiff);
    }
}

int main() {
    int n, minXDiff;
    scanf("%d %d", &n, &minXDiff);
    Point **points = malloc(n * sizeof(Point*));
    Point **temp = malloc(n * sizeof(Point*));
    for (int i = 0; i < n; i++) {
        points[i] = temp[i] = malloc(sizeof(Point));
        points[i]->count = 0;
        scanf("%d %d", &points[i]->x, &points[i]->y);
    }
    // ====================
    mergeSortAndCount(temp, 0, n - 1, minXDiff);
    for (int i = 0; i < n; i++) {
        printf("%d\n", points[i]->count);
    }
    // ====================
    for (int i = 0; i < n; i++) {
        free(points[i]);
    }
    free(points);
    free(temp);
    return 0;
}
I'd also appreciate constructive criticism of what I've written here.
Upvotes: 3
Views: 143
Reputation: 23945
One way to achieve this is to realize that a solution only needs the results already seen during the iteration. Why? Because if we sort by Y and walk the points in descending order, no point later in the iteration can satisfy the condition for the current point (no point ahead in the list has a greater Y).
Now the question becomes: of the Xs we have seen so far, how many are greater than the current one? That calls for a data structure that can both answer this efficiently and be updated as we go.
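A rough sketch of that idea in C, assuming a Fenwick (binary indexed) tree over the ranks of the x coordinates as the data structure; the names here (Pt, count_dominating, fen_add, fen_sum) are purely illustrative, and it counts strictly greater x, ignoring the minXDiff from your code:
#include <stdlib.h>

typedef struct { int x, y, idx; } Pt;

static int cmp_y_desc(const void *a, const void *b) {
    const Pt *p = a, *q = b;
    return (q->y > p->y) - (q->y < p->y);   /* larger y first */
}
static int cmp_x_asc(const void *a, const void *b) {
    const Pt *p = a, *q = b;
    return (p->x > q->x) - (p->x < q->x);
}

/* Fenwick tree over x ranks 1..n: fen_add marks a seen x rank,
   fen_sum(pos) counts seen ranks <= pos. */
static void fen_add(int *t, int n, int pos) {
    for (; pos <= n; pos += pos & -pos) t[pos]++;
}
static int fen_sum(const int *t, int pos) {
    int s = 0;
    for (; pos > 0; pos -= pos & -pos) s += t[pos];
    return s;
}

/* counts[i] = number of points j with pts[j].x > pts[i].x and pts[j].y > pts[i].y */
void count_dominating(const Pt *pts, int n, int *counts) {
    Pt *byY = malloc(n * sizeof *byY);
    Pt *byX = malloc(n * sizeof *byX);
    int *tree = calloc(n + 1, sizeof *tree);
    for (int i = 0; i < n; i++) {
        byY[i] = pts[i];
        byY[i].idx = i;          /* remember original position */
        byX[i] = pts[i];
    }
    qsort(byY, n, sizeof *byY, cmp_y_desc);
    qsort(byX, n, sizeof *byX, cmp_x_asc);

    for (int i = 0; i < n; i++) {
        /* rank = how many x values are <= this point's x (binary search in byX) */
        int lo = 0, hi = n - 1, rank = 0;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (byX[mid].x <= byY[i].x) { rank = mid + 1; lo = mid + 1; }
            else hi = mid - 1;
        }
        /* the i points seen so far all have a greater y; drop those whose
           x rank is <= ours to keep only the strictly-greater-x ones */
        counts[byY[i].idx] = i - fen_sum(tree, rank);
        fen_add(tree, n, rank);
    }
    free(byY); free(byX); free(tree);
}
Each point's answer ends up in counts[original index]. Your minXDiff could probably be folded in by binary-searching for x + minXDiff - 1 instead of x when computing the threshold rank, though I haven't tried that.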
Upvotes: 0
Reputation: 46497
Here is an explanation of the hint.
Sort by y first.
Mergesort by x, breaking ties by taking them from the second list. Every time you merge an element from the first list, what is the efficient way to find how many points you just found with higher x and y?
Note that the condition that y is distinct will matter.
Upvotes: 3