
279 points | matthewolfe | 1 comment

TokenDagger is a drop-in replacement for OpenAI's tiktoken (the tokenizer behind Llama 3, Mistral, GPT-3.*, etc.). It's written in C++17 with thin Python bindings, keeps the exact same BPE vocab and special-token rules, and focuses on raw speed.
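Keeping the "exact same BPE vocab rules" means reproducing tiktoken's merge behavior: start from raw bytes and repeatedly merge the adjacent pair with the lowest rank. A minimal Python sketch of that loop, with a made-up toy vocab (the real implementation is C++ and operates on the full rank table):

```python
# Illustration only: tiny BPE encoder with a toy vocab/rank table.
# TokenDagger/tiktoken do this in native code over real rank tables.
def bpe_encode(text: str, ranks: dict[bytes, int]) -> list[int]:
    # Start from individual UTF-8 bytes, then repeatedly merge the
    # adjacent pair with the lowest (highest-priority) rank.
    parts = [bytes([b]) for b in text.encode("utf-8")]
    while True:
        best = None
        for i in range(len(parts) - 1):
            r = ranks.get(parts[i] + parts[i + 1])
            if r is not None and (best is None or r < best[1]):
                best = (i, r)
        if best is None:
            break  # no mergeable pair left
        i = best[0]
        parts = parts[:i] + [parts[i] + parts[i + 1]] + parts[i + 2:]
    return [ranks[p] for p in parts]

# Toy vocab: two single bytes plus one merge rule.
ranks = {b"h": 0, b"i": 1, b"hi": 2}
print(bpe_encode("hihi", ranks))  # → [2, 2]
```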

I'm teaching myself LLM internals by re-implementing the stack from first principles. Profiling tiktoken's Python/Rust implementation showed a lot of time spent on regex matching. Most of my perf gains come from (a) using a faster JIT-compiled regex engine and (b) simplifying the algorithm to forgo regex matching for special tokens entirely.
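One plausible shape of the special-token simplification: since special tokens are fixed literal strings, a plain substring scan can split them out without compiling them into a regex alternation. A hedged sketch (the function name and tuple format here are illustrative, not TokenDagger's API):

```python
# Sketch: split text on literal special tokens via str.find instead of
# a regex alternation. Hypothetical helper, not TokenDagger's real API.
def split_on_special(text: str, specials: list[str]):
    out, i = [], 0
    while i < len(text):
        # Earliest occurrence of any special token at or after i.
        hits = [(text.find(s, i), s) for s in specials]
        hits = [(p, s) for p, s in hits if p != -1]
        if not hits:
            out.append(("text", text[i:]))
            break
        p, s = min(hits)
        if p > i:
            out.append(("text", text[i:p]))  # ordinary text before it
        out.append(("special", s))
        i = p + len(s)
    return out

print(split_on_special("a<|eot|>b", ["<|eot|>"]))
# → [('text', 'a'), ('special', '<|eot|>'), ('text', 'b')]
```

The "text" spans would then go through the normal BPE path, while "special" spans map directly to their reserved token IDs.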

Benchmarking code is included. Notable results:

- 4x faster code-sample tokenization on a single thread.
- 2-3x higher throughput when tested on a 1 GB natural-language text file.
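The throughput numbers above come from the repo's own benchmarks; a standalone harness for that kind of measurement looks roughly like this (the dummy whitespace tokenizer stands in for a real encoder):

```python
# Rough single-thread throughput benchmark: time a tokenize callable
# over a text buffer and report MB/s. Standalone sketch, not the
# repo's benchmarking code.
import time

def throughput_mb_s(tokenize, text: str, repeats: int = 3) -> float:
    data_mb = len(text.encode("utf-8")) / 1e6
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        tokenize(text)
        best = min(best, time.perf_counter() - t0)  # best-of-N timing
    return data_mb / best

# Dummy tokenizer: whitespace split stands in for real BPE encoding.
rate = throughput_mb_s(lambda s: s.split(), "hello world " * 100_000)
print(f"{rate:.1f} MB/s")
```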

chrismustcode (No.44422818):
There's something beautiful about creating a drop-in replacement for something that improves performance substantially.

ScyllaDB comes to mind

matthewolfe (No.44422833):
Agreed. I figured nobody would use it otherwise.
pvg (No.44422898):
To be fair, many people have token stabbing needs.