kuzzooroo

Reputation: 7408

Are Fibonacci heaps or Brodal queues used in practice anywhere?

Are Fibonacci heaps used in practice anywhere? I've looked around on SO and found answers to related questions (see below) but nothing that actually quite answers the question.

  1. There are good implementations of Fibonacci heaps out there, including in standard libraries such as Boost C++. The fact that these libraries contain Fibonacci heaps suggests to me that they must be useful somewhere (a minimal sketch of the Boost interface appears after this list).
  2. We know that certain conditions need to be met for a Fibonacci heap to be faster in practice: "to benefit from Fibonacci heaps in practice, you have to use them in an application where decrease_keys are incredibly frequent"; "For the Fibonacci Heap to really shine, you need either of the following cases: a) Expensive comparisons: Fib Heaps minimize the number of comparisons required to organize the data. b) The majority of operations is updateKey/insert/delete. As Fibonacci Heaps 'group' the updates together until the next extractMin, the larger the 'batch', the more efficient it gets."
  3. There is a data structure called a "Brodal Queue" which I'm not sure I'd heard of before that seems to have time complexity behaviors at least as good as Fibonacci heaps. Here's a nice table with a comparison of time complexities for various operations for different varieties of heaps.
  4. On a question about whether there are any applications of Fibonacci or binomial heaps, answerers only gave examples of binomial heaps.
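
For concreteness, here is a minimal sketch of what point 1's Boost interface looks like. It uses boost::heap::fibonacci_heap with its default (max-heap) ordering, so raising a key toward the top is the max-heap analogue of decrease-key; the values are made up purely for illustration.

```cpp
#include <boost/heap/fibonacci_heap.hpp>
#include <iostream>

int main() {
    // Boost heaps order elements so that top() is the "largest" element
    // under the comparator (a max-heap by default).
    boost::heap::fibonacci_heap<int> heap;

    // push() returns a handle that can later be used to change the key.
    auto h = heap.push(10);
    heap.push(25);
    heap.push(7);

    // Moving a key toward the top of the heap is the operation a Fibonacci
    // heap makes cheap (amortized O(1)); in a max-heap that is increase().
    heap.increase(h, 40);

    std::cout << heap.top() << '\n';  // 40
    heap.pop();
    std::cout << heap.top() << '\n';  // 25
}
```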

Upvotes: 32

Views: 8291

Answers (2)

fwip

Reputation: 415

Fibonacci heaps see use in Velvet, a bioinformatics short-read assembler.

See the section "Complexity and Scaling Issues" of the paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2336801/

Extract, emphasis mine:

The main bottleneck, in terms of time and memory, is the graph construction. The initial graph of the Streptococcus reads needs 2.0 Gb of RAM. The scale of the problem will eventually require memory persistent data structures for whole-genome assemblies. Admittedly, access times would be greater, but the amount of storable data would be virtually unlimited. The time complexity of Tour Bus depends primarily on the number N of nodes in the graph, which itself depends on the read coverage, error rate, and number of repeats. Idury and Waterman (1995) estimated N but neglected to consider repeats. The search itself is based on the Dijkstra algorithm, which has a time complexity of O(N log N) when implemented with a Fibonacci heap (Gross and Yellen 2004). The cost of individual path comparisons and the corresponding graph modifications can be limited by a length cutoff.
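
To make the connection concrete, here is a rough sketch (not Velvet's actual code) of the Dijkstra-with-Fibonacci-heap pattern the extract refers to, written against Boost.Heap. The adjacency-list representation and the function name are illustrative assumptions, and edge weights are assumed non-negative.

```cpp
#include <boost/heap/fibonacci_heap.hpp>
#include <limits>
#include <utility>
#include <vector>

// Distance/node pairs ordered so that top() is the entry with the smallest distance.
struct Entry {
    long long dist;
    int node;
    bool operator<(const Entry& other) const { return dist > other.dist; }
};

std::vector<long long> dijkstra(const std::vector<std::vector<std::pair<int, long long>>>& adj,
                                int source) {
    const long long INF = std::numeric_limits<long long>::max();
    using Heap = boost::heap::fibonacci_heap<Entry>;

    std::vector<long long> dist(adj.size(), INF);
    std::vector<Heap::handle_type> handle(adj.size());
    Heap heap;

    dist[source] = 0;
    for (int v = 0; v < static_cast<int>(adj.size()); ++v)
        handle[v] = heap.push({dist[v], v});

    while (!heap.empty()) {
        Entry top = heap.top();
        heap.pop();
        if (top.dist == INF) break;              // remaining nodes are unreachable
        for (auto [to, w] : adj[top.node]) {
            if (top.dist + w < dist[to]) {       // shorter path found: decrease-key
                dist[to] = top.dist + w;
                heap.update(handle[to], {dist[to], to});
            }
        }
    }
    return dist;
}
```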

Upvotes: 4

templatetypedef

Reputation: 372814

To the best of my knowledge, there are no major applications that actually use Fibonacci heaps or Brodal queues.

Fibonacci heaps were initially designed to satisfy a theoretical rather than a practical need: to speed up Dijkstra's shortest paths algorithm asymptotically. The Brodal queue (and the related functional data structure) was similarly designed to meet theoretical guarantees, specifically to answer a longstanding open question about whether it was possible to match the time bounds of a Fibonacci heap with worst-case rather than amortized guarantees. In that sense, these data structures were developed not to meet practical needs, but to push forward our theoretical understanding of the limits of algorithmic efficiency. To the best of my knowledge, there are at present no algorithms in which a Brodal queue would actually be a better choice than a Fibonacci heap.

As other answers have noted, the constant factors hidden in a Fibonacci heap or Brodal queue are very high. They need a lot of pointers wired into lots of complicated linked lists and, accordingly, have absolutely terrible locality of reference, especially compared to a standard binary heap. This means that they're likely to perform worse in practice because of caching effects unless you have an algorithm that performs a colossally large number of decrease-key operations. There are some cases where this comes up (the linked answers mention a few of them), but treat them as highly specialized circumstances rather than common use cases. If you're working on huge graphs, it's more common to improve efficiency with other techniques, such as approximation algorithms for the problem at hand, better heuristics, or algorithms that exploit specific properties of the underlying data.
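
For contrast, here is a rough sketch of what is usually done in practice instead: Dijkstra's algorithm with a plain binary heap (std::priority_queue) and "lazy deletion", where an improved distance is simply pushed as a new entry and stale entries are skipped when popped. This avoids decrease-key entirely and tends to win because of its much better locality of reference. The graph representation is an illustrative assumption, as above.

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

std::vector<long long> dijkstra_binary_heap(
        const std::vector<std::vector<std::pair<int, long long>>>& adj, int source) {
    const long long INF = std::numeric_limits<long long>::max();
    using Entry = std::pair<long long, int>;  // (distance, node)
    std::vector<long long> dist(adj.size(), INF);
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> pq;

    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) continue;          // stale entry: skip it instead of decrease-key
        for (auto [to, w] : adj[u]) {
            if (d + w < dist[to]) {
                dist[to] = d + w;
                pq.push({dist[to], to});     // re-insert rather than update in place
            }
        }
    }
    return dist;
}
```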

Hope this helps!

Upvotes: 36
