Reputation: 508
I am writing a library (with an explainer) that builds a specialised binary search tree at compile time. Lookup times are optimal when the data are sorted by decreasing weight. That sorting is currently left to the caller; I want to provide it for them.
The tree is actually stored as an array: the root at a[0], level 1 at a[1] … a[2], level 2 at a[3] … a[6], &c. Since the tree is complete, there are no gaps. The last level need not be full width, but it is filled from the left.
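For concreteness, this is the standard implicit array layout of a complete binary tree in breadth-first order. A minimal sketch of the index arithmetic (the function names here are my own, not from the library in question):

```rust
// Implicit array layout of a complete binary tree (BFS order):
// the children of node i live at 2*i + 1 and 2*i + 2, and
// level l occupies indices 2^l - 1 ..= 2^(l+1) - 2.
const fn children(i: usize) -> (usize, usize) {
    (2 * i + 1, 2 * i + 2)
}

const fn level_start(l: u32) -> usize {
    (1usize << l) - 1
}
```

Both functions are `const fn`, so they are usable from the compile-time code the question is about.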
Rust has highly optimised sort functions that combine various strategies depending on the data. Alas, they are not currently callable from compile-time code, so I need to roll my own. This might actually be fine, as I've found that my situation allows cutting corners.
The data need not be sorted fully, nor does the sorting need to be stable. It suffices to have the biggest weight (my criterion) at the root of the tree, the next biggest two on level 1, the next four on level 2, &c. So essentially I want to sort my data into buckets whose capacity doubles at every level. Which cheap (ideally in both time and space) sorting algorithm could best be tweaked this way? I guess it would still be O(n log n), but hopefully with far fewer swaps.
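To illustrate the target order, here is a runtime sketch of the bucket idea using repeated selection at the level boundaries (the function name is mine; `select_nth_unstable_by` is a real standard-library method, but it is not usable in `const` context, so this only demonstrates the invariant, not the compile-time implementation):

```rust
/// Partially sorts `a` descending into doubling "level" buckets:
/// a[0] is the max, a[1..3] holds the next two, a[3..7] the next
/// four, &c. Order *within* a bucket is arbitrary.
fn bucket_by_levels(a: &mut [u32]) {
    let mut start = 0; // first index of the current level
    let mut width = 1; // capacity of the current level
    while start + width < a.len() {
        // Partition the tail so that the `width` largest remaining
        // elements land in a[start..start + width].
        a[start..].select_nth_unstable_by(width, |x, y| y.cmp(x));
        start += width;
        width *= 2;
    }
}
```

Each selection is expected O(k) in the length k of the remaining tail, and the tail halves-ish per level, so this kind of scheme is cheaper than a full sort; a compile-time version would need a hand-rolled, `const`-compatible partition in place of the library call.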
Upvotes: 0
Views: 27