They appear quite naturally in the fermion-qubit mappings we looked at; I worked the data structure out at the time and only then found out about Fenwick’s work.
So I guess I also agree with the title :)
[1] https://wiki.haskell.org/wikiupload/e/e9/Typeclassopedia.pdf
1
2 or 3 (let's assume 2)
4 or 5 (let's assume 5)
10 or 11
Remember these are all adjacent. So the nodes near the root stay in cache (regardless of whether a path through them in particular was taken), and the other nodes are in cache if you've recently looked up a similar key. There won't be any further improvement from using a B-tree, which only scatters the memory further. (If anything, you might consider using a higher-base Eytzinger tree for SIMD reasons, but I've never actually seen anyone do this.)
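A minimal sketch of the index walk above (in Python, with names of my own choosing): the tree lives in a 1-indexed flat array where node `i`'s children sit at `2*i` and `2*i + 1`, so a search starting at index 1 visits exactly the kind of path listed above (1, then 2 or 3, then 4 or 5, ...), and the first few levels share cache lines.

```python
def eytzinger(sorted_vals):
    """Lay out sorted values in BFS order of a balanced BST (1-indexed)."""
    out = [None] * (len(sorted_vals) + 1)
    it = iter(sorted_vals)

    def fill(i):
        # In-order traversal of the implicit tree assigns sorted values
        # so that the BFS-order array satisfies the BST property.
        if i < len(out):
            fill(2 * i)
            out[i] = next(it)
            fill(2 * i + 1)

    fill(1)
    return out

def eytzinger_search(a, key):
    """Return True if key is in a (a[0] is unused padding)."""
    n = len(a) - 1
    i = 1
    while i <= n:
        if a[i] == key:
            return True
        # Children of node i sit at adjacent indices 2*i and 2*i + 1.
        i = 2 * i + (key > a[i])
    return False
```

Just a sketch under those layout assumptions; a cache-conscious version would also align the array to cache-line boundaries.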
That said, as far as I remember, I did not know about Ryabko. So while it might have been a misattribution, it's far from a clear one.
Did he publish it, btw? If so, do you have a reference?
Fenwick trees seem to use much less memory while not being quite as general-purpose as skip lists.
That's quite a big part of the difference, I think.
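To make the memory comparison concrete, here's a minimal Fenwick tree sketch (names are my own): the entire structure is one flat array of n+1 integers, with no per-node allocations or forward pointers of the kind a skip list needs.

```python
class Fenwick:
    """Fenwick / binary indexed tree over positions 1..n (prefix sums)."""

    def __init__(self, n):
        self.tree = [0] * (n + 1)  # the whole data structure: one array

    def add(self, i, delta):
        """Add delta at position i (1-based)."""
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i  # jump to the next node whose range covers i

    def prefix_sum(self, i):
        """Sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # drop the lowest set bit
        return s
```

Both operations are O(log n), and the `i & -i` bit trick is what makes the indexing work without any explicit tree structure.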
> Fenwick trees, also known as binary indexed trees
Every time I read something like this, I'm reminded of https://willcrichton.net/notes/naming-conventions-that-need-... . "Fenwick tree" makes it seem like some unknown strange entity, while "binary indexed tree" immediately makes it a lot more accessible and gives you a better handle on the concept too.
Short, memorable names will always dominate precise, descriptive ones.
Finding short, memorable, precise and descriptive names is why "naming things is hard".
Or, in English: if you have a bunch of graph nodes and edges, and lay them out in straight lines horizontally and vertically in some fashion, with lines between them representing the edges, Fenwick trees tell you how many of the lines that you draw run into each other [2].
This is normally an expensive thing to compute (the alternative is to run a standard line-intersection check for every pair of edges, or something like this). However, it turns out Fenwick trees directly encode and give you the answer.
So this is what ELK and Dagre and friends use to compute the number of edges that cross between layers as they do layout.
[1] see "Simple and Efficient Bilayer Cross Counting"
[2] In general, the graph is more visually pleasing to most people if you minimize the number of lines that cross. So most layout algorithms spend time rearranging the nodes to minimize edge crossings. It's NP-hard to make it optimal, so heuristics are used.
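The counting trick above can be sketched as follows (Python, my own naming; the approach is in the spirit of the bilayer cross counting paper in [1]): two edges (u1, v1) and (u2, v2) between the layers cross exactly when u1 < u2 but v1 > v2, so you sweep the edges in upper-layer order and use a Fenwick tree over lower-layer positions to count, for each edge, how many earlier edges land further right below.

```python
def count_crossings(edges, lower_size):
    """Count crossing pairs among edges (u, v) between two ordered layers.

    u and v are 0-based positions in the upper and lower layer.
    """
    tree = [0] * (lower_size + 1)  # Fenwick tree over lower positions

    def add(i):
        while i <= lower_size:
            tree[i] += 1
            i += i & -i

    def prefix(i):  # how many inserted lower endpoints are <= i (1-based)
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    crossings = 0
    seen = 0
    # Sorting by (u, v) means edges sharing an upper endpoint are never
    # counted against each other (their v's arrive in increasing order).
    for _, v in sorted(edges):
        crossings += seen - prefix(v + 1)  # earlier edges with endpoint > v
        add(v + 1)
        seen += 1
    return crossings
```

This is O(|E| log n) instead of the quadratic pairwise check, which is why layout engines can afford to re-evaluate it while shuffling nodes around.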
It's not trivial to come up with descriptive names, but the difficulty is often overstated because people underestimate its importance. The names don't have to be perfectly descriptive; we don't need twenty-syllable names like organic chemical compounds, but every step taken towards making a name descriptive helps a lot. The reality is that names get chosen basically arbitrarily in journals or textbooks, with hardly any thought given to how they will affect generations of students, working professionals, and even future researchers in remembering the concept and recalling it among the hundreds or thousands of other things they've learnt.