The buffers have the same capacity at each level of the tree. It doesn't have to be that way, and you can find different strategies in the literature, but that's the way we do it.
The degree of each internal node is way lower, because we want to use the space in internal nodes for buffers more than for keys. In a 16TB tree, we would probably have a tree of height 6 or 7.
What we restrict is the size of a whole internal node, so that any one buffer in it can get much more full than the others if it needs to. We want to pick a node size that makes sense for your media. 4MB is a good compromise for spinning disks between their typical bandwidth and seek time, and it's large enough to give the garbage collector a rest on SSDs.
OK, I'm starting to get the concept. Here is how I'm understanding the whole thing:
Assume a node size that can hold 1M records in a buffer plus 32 links (that's roughly 16 MiB, more than what you suggest, but it simplifies the explanation).
Assume further that to operate on a node, you read it entirely into memory, modify it, sort it, then write it entirely back to disk, and that this takes about 1 second. This is pessimistic, as there are surely more efficient ways to maintain sorted 1M-record nodes.
Now consider completely random insertions of 16-byte records, and assume that the keys split completely uniformly among the subtrees.
1. Every 1M insertions, you need to spill from level-1 (root buffer) to level-2, which generates about 32 IOs
2. Additionally, every 32M insertions, you need to spill all 32 level-2 buffers to level-3, which generates 32 × 32 = 1024 IOs
n. Every 32^(n-1) M insertions, you need an additional 32^n IOs to spill from level-n to level-(n+1)
So, all in all, to insert 32^n M records, you need a total of (n+1) × 32^(n+1) IOs, which means 32(n+1) IOs per 1M insertions.
For example, in a 16TiB data store, i.e. with n=8, you need 288 IOs per million insertions, or about 3,500 random insertions/second at 1s/random IO (1,000,000 / 288 ≈ 3,470).
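The per-level arithmetic above can be sanity-checked in a few lines of Python. This is just the cost model from this thread (with the spill into the leaves counted as level n+1), not real code:

```python
# Cost model from the example above: fanout-32 tree, 1M-record buffers,
# 32^n million records inserted. A spill out of level k happens every
# 32^(k-1) M insertions and costs 32^k IOs; spilling into the leaves
# is counted as level n+1.
def total_ios(fanout, n):
    inserts_m = fanout ** n  # total insertions, in millions
    return sum((inserts_m // fanout ** (k - 1)) * fanout ** k
               for k in range(1, n + 2))

ios_per_million = total_ios(32, 8) // 32 ** 8
print(ios_per_million)  # 288, i.e. 32 * (8 + 1)
```

Each level contributes exactly inserts_m × fanout IOs, which is where the (n+1) × 32^(n+1) total comes from.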
In comparison, a B-tree with a fanout of 1024 and a block size of 16KiB would generate about 4 random IOs per insertion. These IOs would be dominated by seek time (~10ms), so you could get only 25 insertions per second.
On the other hand, a point query in this B-tree would take 4 IOs instead of 8 random IOs in the fractal tree. So, that's the tradeoff: a much better insertion rate vs. slightly worse query time.
Note that a small fanout (about the square root of the equivalent B-tree fanout) is important. If you take 1024 as the fanout in the fractal tree, the tree has half the height, but the number of IOs per 1M insertions jumps to 1024 × 5 = 5120 instead of 32 × 9 = 288.
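Using the closed form from the derivation above (cost per 1M insertions is fanout × (n + 1), where fanout^n million records fill the tree), the two fanouts compare like this. Just a sketch of the model, nothing more:

```python
import math

# Per-1M-insertion cost from the derivation above: fanout * (n + 1),
# where fanout^n million records fill the tree.
def cost_per_million(fanout, total_records_m):
    n = round(math.log(total_records_m, fanout))
    return fanout * (n + 1)

records_m = 32 ** 8  # ~16 TiB of 16-byte records, in millions
print(cost_per_million(32, records_m))    # 288  (n = 8)
print(cost_per_million(1024, records_m))  # 5120 (n = 4, half the height)
```

So halving the height by squaring the fanout costs roughly 18x more IOs per insertion in this model.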
Flushing one buffer one level down costs O(1) I/Os. It moves B elements, so flushing one element one level down costs O(1/B) amortized I/Os. The tree has fanout O(1), so it has height O(log N). Therefore, each element must be flushed O(log N) times before it reaches its target leaf. So the cost to get one element down to its leaf is O((log N)/B) amortized I/Os.
Compare this with a B-tree, which costs O(log_B N) = O((log N)/(log B)) per insertion.
I've only skimmed your math, which attacks the problem from a different direction than I would, but it seems sound.
The point query tradeoff is there in the theory, you're right, but it assumes a cold cache. In practice, most people have enough RAM for all their internal nodes, whether they're using a B-tree or a fractal tree, so either way it's typically 1 I/O per random point query in either data structure.
Now, what is the recommended layout for the node buffers (the 4MiB buffers)? I've read about Packed Memory Arrays, but don't quite get the complete idea. Can we really do better than read, merge, write?
(A pity there's no preview button on this forum...)
I should add that, to reduce the cost of a flush by a pretty decent constant number of I/Os, you keep one buffer per child in each node. Basically, you want to do the sorting work on arrival into the node, not during the flush.
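To illustrate, here's a minimal sketch of a node with per-child buffers. All the names are made up, and a real implementation would serialize these buffers into the on-disk node; the point is only that routing happens on arrival, so a flush just streams one child's buffer down:

```python
import bisect

class BufferedNode:
    """Internal node: pivot keys plus one message buffer per child.

    Incoming records are routed to the right child's buffer on
    arrival, so flushing needs no sorting pass at flush time.
    """
    def __init__(self, pivots, children):
        self.pivots = pivots                   # sorted keys separating children
        self.children = children
        self.buffers = [[] for _ in children]  # one buffer per child

    def insert(self, key, value):
        i = bisect.bisect_right(self.pivots, key)  # route on arrival
        self.buffers[i].append((key, value))

    def flush_fullest(self):
        # Flush only the fullest child's buffer one level down.
        i = max(range(len(self.buffers)), key=lambda j: len(self.buffers[j]))
        batch, self.buffers[i] = self.buffers[i], []
        for key, value in batch:
            self.children[i].insert(key, value)

class Leaf:
    """Toy leaf, just for illustration."""
    def __init__(self):
        self.records = {}
    def insert(self, key, value):
        self.records[key] = value
```

For example, a node with pivot [10] over two leaves sends keys below 10 into buffer 0 and the rest into buffer 1, and flush_fullest() empties whichever buffer grew largest.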