> *"The image here appears to be upside down (or rather rotated 180)"*
Yeah, something weird happened with the image rendering; I'll fix that.
> *"It's not clear from the readme physically where the data is stored, nor where in the storage process the 'congestion' is coming from."*
The data is *fully in-memory*, distributed across dynamically growing shards (think of an adaptive hashtable that resizes itself). There’s no external storage layer like RocksDB or disk persistence—this is meant to be *pure cache-speed KV storage.*
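If it helps to picture it, here's a minimal sketch of the idea in Go (illustrative only; the names and the per-shard `RWMutex` are my simplification, not the actual code):

```go
package kv

import (
	"hash/fnv"
	"sync"
)

// shard is one independently locked slice of the keyspace.
type shard struct {
	mu   sync.RWMutex
	data map[string][]byte
}

// Store spreads keys across shards so writers hitting different
// shards never contend on the same lock.
type Store struct {
	shards []*shard
}

// NewStore creates a store with n shards.
func NewStore(n int) *Store {
	s := &Store{shards: make([]*shard, n)}
	for i := range s.shards {
		s.shards[i] = &shard{data: make(map[string][]byte)}
	}
	return s
}

// shardFor hashes the key to pick its shard.
func (s *Store) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return s.shards[h.Sum32()%uint32(len(s.shards))]
}

// Set stores val under key, locking only that key's shard.
func (s *Store) Set(key string, val []byte) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	sh.data[key] = val
	sh.mu.Unlock()
}

// Get reads key under a shared lock, so reads on the same shard
// don't block each other.
func (s *Store) Get(key string) ([]byte, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	v, ok := sh.data[key]
	sh.mu.RUnlock()
	return v, ok
}
```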
Congestion happens when a shard starts getting too many keys relative to the rest of the system. The engine constantly tracks *contention per shard*, and when it crosses a threshold, we trigger an upgrade (new shards are added and existing keys are redistributed). Migration is *zero-downtime*, but at very high write rates there's a brief window where some writes are directed to the old store while the new one warms up.
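Roughly, the flow looks like this (a hedged sketch building on the `Store` above; `needsUpgrade` and `MigratingStore` are hypothetical names, and the real engine tracks contention rather than just key counts):

```go
package kv

import "sync/atomic"

// needsUpgrade is a crude stand-in for the per-shard contention tracking
// described above: here it just checks keys-per-shard against a threshold.
func (s *Store) needsUpgrade(maxKeysPerShard int) bool {
	for _, sh := range s.shards {
		sh.mu.RLock()
		n := len(sh.data)
		sh.mu.RUnlock()
		if n > maxKeysPerShard {
			return true
		}
	}
	return false
}

// MigratingStore illustrates the cutover: the old table keeps absorbing
// traffic while the new, larger one warms up, then one atomic pointer
// swap makes the new table live.
type MigratingStore struct {
	active  atomic.Pointer[Store] // currently serving reads and writes
	warming atomic.Pointer[Store] // larger table filling in the background, nil when no migration runs
}

// Set writes to the active (old) table and, during a migration, mirrors
// the write into the warming table so it stays consistent.
func (m *MigratingStore) Set(key string, val []byte) {
	m.active.Load().Set(key, val)
	if w := m.warming.Load(); w != nil {
		w.Set(key, val)
	}
}

// Get always reads from the active table.
func (m *MigratingStore) Get(key string) ([]byte, bool) {
	return m.active.Load().Get(key)
}

// finishMigration runs once the background copy has caught up: the warm
// table becomes active in a single swap, so there's no read downtime.
func (m *MigratingStore) finishMigration() {
	if w := m.warming.Swap(nil); w != nil {
		m.active.Store(w)
	}
}
```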
> *"I'm surprised there's no range scan."*
Yeah, that's an intentional design choice: this is meant to be a *high-speed cache*, closer to Redis than a full database like RocksDB or BigTable. Range queries would need an ordered structure (e.g., skip lists or B-trees), which adds overhead. But I'm definitely considering implementing *prefix scans* (e.g., `SCAN user:*` style queries), since that'd be useful for a lot of real-world use cases (rough sketch below).
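A prefix scan doesn't need ordering at all; it's just a walk over every shard, which is why it's much cheaper to add than real range scans. Something like this (illustrative only, building on the sketches above):

```go
package kv

import "strings"

// PrefixScan returns every key starting with prefix. With hash sharding
// there's no key order to exploit, so this visits every shard (O(total
// keys)): fine for an occasional `SCAN user:*`, but it's also why true
// range scans would need an ordered structure instead.
func (s *Store) PrefixScan(prefix string) []string {
	var keys []string
	for _, sh := range s.shards {
		sh.mu.RLock()
		for k := range sh.data {
			if strings.HasPrefix(k, prefix) {
				keys = append(keys, k)
			}
		}
		sh.mu.RUnlock()
	}
	return keys
}
```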