
196 points ashvardanian | 8 comments
mgaunard No.46287778
In practice you should always normalize your Unicode data; then all you need is memcmp plus a boundary check.
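
A minimal sketch of that approach, assuming utf8proc for NFC normalization (any normalizer works; a real version would also check that matches land on grapheme boundaries):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <utf8proc.h>

    /* Normalize both sides to the same form (NFC here); then a plain
       byte search (memcmp under the hood) is a correct equality test. */
    bool contains_normalized(const char *haystack, const char *needle) {
        utf8proc_uint8_t *h = utf8proc_NFC((const utf8proc_uint8_t *)haystack);
        utf8proc_uint8_t *n = utf8proc_NFC((const utf8proc_uint8_t *)needle);
        bool found = h && n && strstr((const char *)h, (const char *)n) != NULL;
        free(h);
        free(n);
        return found;
    }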

Interestingly enough, this library doesn’t provide grapheme-cluster tokenization or boundary checking, which is one of the most useful primitives for this.

replies(2): >>46287938 #>>46287993 #
1. stingraycharles No.46287938
That’s not practical in many situations, as the normalization alone may very well be more expensive than the search.

If you’re in control of all data representations in your entire stack, then yes, of course. But that’s hardly ever the case, and different tradeoffs are made at different times (e.g. storage in UTF-8 for space efficiency, but an in-memory representation in UTF-32 for speed).

replies(1): >>46288010 #
2. mgaunard No.46288010
That doesn't make sense; the search is doing on-the-fly normalization as part of its algorithm, so it cannot be faster than normalization alone.
replies(3): >>46288133 #>>46288181 #>>46288218 #
3. stingraycharles No.46288133
It can, because of how CPUs work with registers, hot code paths, and caches.

First normalizing everything and then comparing normalized versions isn’t as fast.

And it also enables “stopping early” once a match has been found or ruled out; you may not actually have to convert everything.

replies(1): >>46288760 #
4. ashvardanian No.46288181
I get why it sounds that way, but it’s not actually true.

StringZilla added full Unicode case folding in an earlier release, and has had a state-of-the-art exact case-sensitive substring search for years. However, fully folding the entire haystack is significantly slower than the new case-insensitive search path.

The key point is that you don’t need to fully normalize the haystack to correctly answer most substring queries. The search algorithm can rule out the vast majority of positions using cheap, SIMD-friendly probes and only apply fold logic on a very small subset of candidates.

I go into the details in the “Ideation & Challenges in Substring Search” section of the article.
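
To make the shape of that concrete, here’s a scalar filter-then-verify sketch (an illustration, not StringZilla’s actual kernel, with ASCII folding standing in for full Unicode folding). Cheap probes on the needle’s first and last bytes reject most positions; the folded comparison runs only on the survivors:

    #include <ctype.h>
    #include <stddef.h>

    /* Expensive path: byte-wise folded comparison, run only on candidates. */
    static int equals_folded(const char *a, const char *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (tolower((unsigned char)a[i]) != tolower((unsigned char)b[i]))
                return 0;
        return 1;
    }

    const char *find_ci(const char *hay, size_t hay_len,
                        const char *ndl, size_t ndl_len) {
        if (ndl_len == 0 || ndl_len > hay_len) return NULL;
        int first = tolower((unsigned char)ndl[0]);
        int last  = tolower((unsigned char)ndl[ndl_len - 1]);
        for (size_t i = 0; i + ndl_len <= hay_len; i++) {
            /* Cheap probe; a SIMD version checks 32..64 positions at once. */
            if (tolower((unsigned char)hay[i]) != first ||
                tolower((unsigned char)hay[i + ndl_len - 1]) != last)
                continue;
            if (equals_folded(hay + i, ndl, ndl_len)) return hay + i;
        }
        return NULL;
    }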

5. Const-me No.46288218
> it cannot be faster than normalization alone

Modern processors are generally computing stuff way faster than they can load and store bytes from main memory.

The code which does on-the-fly normalization only needs to normalize a small window. If you’re careful, you can even keep that window in registers, which have single-cycle access latency and ridiculously high throughput, on the order of 500 GB/sec. Even if you have to store and reload, on-the-fly normalization handles tiny windows that fit in the in-core L1D cache. L1D access costs ~5 cycles of latency at similarly high throughput, because many modern processors can load two 64-byte vectors and store one vector every cycle.
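
For a sense of what keeping the window in registers looks like, here is a sketch (an illustration assuming x86 with AVX2, with ASCII case folding standing in for normalization) that folds 32 bytes without the data ever leaving a YMM register:

    #include <immintrin.h>

    /* Fold upper-case ASCII to lower-case, 32 bytes at a time, entirely
       in one YMM register: no loads or stores inside the transformation. */
    static __m256i fold_ascii_avx2(__m256i v) {
        /* Bytes strictly between 'A'-1 and 'Z'+1 are upper-case letters. */
        __m256i gt = _mm256_cmpgt_epi8(v, _mm256_set1_epi8('A' - 1));
        __m256i lt = _mm256_cmpgt_epi8(_mm256_set1_epi8('Z' + 1), v);
        __m256i is_upper = _mm256_and_si256(gt, lt);
        /* Setting bit 0x20 flips an upper-case ASCII letter to lower-case. */
        return _mm256_or_si256(v, _mm256_and_si256(is_upper,
                                                   _mm256_set1_epi8(0x20)));
    }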

replies(1): >>46288792 #
6. mgaunard No.46288760{3}
Running more code per unit of data does not make the code hotter or reduce the register pressure, quite the opposite...
replies(1): >>46288861 #
7. mgaunard No.46288792{3}
The author published the bandwidth of their algorithm; it’s one fifth of typical memory bandwidth. (Obviously it’s not possible to go faster than memory for this benchmark, since we’re assuming the data is not in cache.)
8. stingraycharles No.46288861{4}
You’re misunderstanding: you convert to 32 bits once and reuse that same register the whole time.

You’re running the exact same code, but you’re more efficient in the sense of “I immediately use the data for comparison after converting it,” which means it’s likely still in a register or L1 cache.
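
In code, the point looks something like this (a sketch restricted to a single ASCII-letter needle for brevity): the needle is folded once outside the loop, and each haystack byte is folded immediately before the compare, so the converted value is still in a register when it’s used:

    #include <stddef.h>

    /* Count case-insensitive occurrences of an ASCII letter. */
    static size_t count_ci(const char *hay, size_t n, char needle) {
        unsigned char target = (unsigned char)needle | 0x20u; /* folded once */
        size_t hits = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned char c = (unsigned char)hay[i] | 0x20u; /* fold, use now */
            hits += (c == target);
        }
        return hits;
    }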