Not to take away from the nice writeup, but for anyone not getting far enough into the writeup, this is essentially taking https://github.com/ScalingIntelligence/KernelBench and seeing if it can generate Metal kernels in addition to the CUDA kernels the benchmark is written for. The dataset was released in November 2024, it looks like, with a paper on arXiv in February and a bunch of discussion at the time[1], so worth keeping likelihood of inclusion in training data in mind when comparing models.
The different levels are interesting. Level 1 and 3 are successfully (5-shot) translated to Metal kernels by gpt5 97% and 88% of the time , but in both cases, the majority of generated kernels are slower than the reference compiled pytorch versions. The speculation about more simple op fusion opportunities in the Level 2 kernels vs the very simple Level 1 kernels and the complex architecture Level 3 kernels seems plausible. From the KernelBench paper, it looks like Level 2 kernels were mostly automatically generated from randomly picking operators and then getting an LLM to generate a kernel combining them, while Level 1 were mostly hand written and Level 3 came from well-known ML architectures.
The swarm part seemed a bit of a stretch. They fired off requests to 8 different models to do the translation, and the "supervisor" benchmarked the returned kernels and picked the fastest one. Technically a swarm, I guess, but feels like we're devaluing the term :)
The correctness testing used made my eye twitch a bit:
> We tested the generated kernel's output against the default implementation's output on 100 random inputs. We set a 0.01 tolerance for both relative and absolute. Let a be the generated kernel output, and b be the reference kernel output. Outputs were considered equal if for every element in the output, absolute(a - b) ≤ (atol + rtol absolute(b)) held true.*
For a numerical kernel, this seems way too loose, but turns out those bounds come straight from KernelBench, which only tested for correctness on 5 random inputs by default in their harness, not the 100 they used here. KernelBench mentions the clear tradeoff they get between how strictly they define correctness and kernel performance, but for Level 1 kernels in particular, which are really just single operations, it seems like the bounds should be multiple orders of magnitude smaller to ensure robust translation. For instance, the all 0s "optimization" mentioned in the writeup allowing for trivially translating the kernel looks like it's due to those loose tolerances[2] and KernelBench was looking to make the evaluation more robust.
[1] Like https://metr.org/blog/2025-02-14-measuring-automated-kernel-...
[2] https://github.com/ScalingIntelligence/KernelBench/pull/25