Are Fibonacci heaps or Brodal queues used in practice anywhere?

There are 2 answers

templatetypedef (best answer)

To the best of my knowledge, there are no major applications that actually use Fibonacci heaps or Brodal queues.

Fibonacci heaps were initially designed to satisfy a theoretical rather than a practical need: to speed up Dijkstra's shortest-paths algorithm asymptotically. The Brodal queue (and the related functional data structure) was similarly designed to meet theoretical guarantees, specifically to settle a longstanding open question: whether it was possible to match the time bounds of a Fibonacci heap with worst-case rather than amortized guarantees. In that sense, these data structures were developed not to meet practical needs, but to push forward our theoretical understanding of the limits of algorithmic efficiency. To the best of my knowledge, there are no known algorithms in which a Brodal queue would actually be preferable to a Fibonacci heap.
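To put numbers on that asymptotic claim: on a graph with V vertices and E edges, Dijkstra's algorithm performs O(V) extract-min operations and up to O(E) decrease-key operations. A binary heap pays O(log V) for each of those, while a Fibonacci heap makes decrease-key amortized O(1), so the totals work out to:

    binary heap:    O((V + E) log V)
    Fibonacci heap: O(E + V log V)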

As other answers have noted, the constant factors hidden in a Fibonacci heap or Brodal queue are very high. They need a lot of pointers wired into several complicated linked lists and, accordingly, have terrible locality of reference, especially compared to a standard binary heap packed into an array. This means they're likely to perform worse in practice due to caching effects, unless your algorithm performs a colossally large number of decrease-key operations. There are some cases where this comes up (the linked answers mention a few of them), but treat them as highly specialized circumstances rather than common use cases. If you're working on huge graphs, it's more common to improve efficiency with other techniques, such as approximation algorithms for the problem at hand, better heuristics, or algorithms that exploit specific properties of the underlying data.
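In fact, practical Dijkstra implementations usually sidestep decrease-key altogether. Here's a minimal sketch of that workaround in Python, using the standard library's heapq (a plain binary heap with no decrease-key operation at all); the toy graph encoding is just for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with a binary heap.

    graph: dict mapping each node to a list of (neighbor, weight) pairs.
    Instead of decrease-key, we push a duplicate entry whenever a node's
    distance improves and skip stale entries on extraction ("lazy deletion").
    """
    dist = {source: 0}
    heap = [(0, source)]  # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; a shorter path to u was already settled
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))  # duplicate push, not decrease-key
    return dist

# Example: distances from 'a' in a small directed graph.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

The duplicate entries cost some extra heap space, but the contiguous array layout and simple operations of the binary heap tend to win on real hardware anyway.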

Hope this helps!

fwip

Fibonacci heaps see use in Velvet, a bioinformatics short-read assembler.

See the section "Complexity and Scaling Issues" of the paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2336801/

Extract, emphasis mine:

The main bottleneck, in terms of time and memory, is the graph construction. The initial graph of the Streptococcus reads needs 2.0 Gb of RAM. The scale of the problem will eventually require memory persistent data structures for whole-genome assemblies. Admittedly, access times would be greater, but the amount of storable data would be virtually unlimited. The time complexity of Tour Bus depends primarily on the number N of nodes in the graph, which itself depends on the read coverage, error rate, and number of repeats. Idury and Waterman (1995) estimated N but neglected to consider repeats. The search itself is based on the Dijkstra algorithm, which has a time complexity of **O(N log N) when implemented with a Fibonacci heap** (Gross and Yellen 2004). The cost of individual path comparisons and the corresponding graph modifications can be limited by a length cutoff.