norir No.44562212
Lexing being the major performance bottleneck in a compiler is a great problem to have.
Phil_Latio No.44568294
You have a point, but getting the easy part (the lexer/tokens) "right" is something to strive for, because it also influences the memory and performance characteristics of the later stages in the pipeline. So thinking (and writing) about this stuff makes sense.

Example: in the blog post, a single token uses 32 bytes, plus 8 bytes for the pointer indirection in each AST node. That's a lot of memory, cache-line misses, and indirection.
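
As a rough illustration (not the blog post's actual layout, and the struct and field names here are made up): a token can often be shrunk to a kind plus a span into the source buffer, and AST nodes can refer to tokens by a 4-byte index into one flat token array instead of holding an 8-byte pointer. Sizes assume typical alignment on a 64-bit target.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical compact token: instead of a 32-byte token plus an 8-byte
     * pointer per AST node, store only a kind and a span into the source
     * buffer, and keep all tokens in one contiguous array. The lexeme text
     * is recovered from the source when needed. */
    typedef struct {
        uint32_t offset; /* byte offset of the lexeme in the source buffer */
        uint32_t length; /* lexeme length in bytes */
        uint8_t  kind;   /* token kind (identifier, number, punctuation, ...) */
    } Token;             /* 12 bytes with 4-byte alignment, vs. 32 + 8 */

    int main(void) {
        printf("sizeof(Token) = %zu bytes\n", sizeof(Token));
        return 0;
    }

With that layout an AST node stores a uint32_t token index, so tokens stay densely packed and sequential passes over them are cache-friendly.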