
566 points PaulHoule | 3 comments
mike_hearn No.44490340
A good chance to bring up something I've been flagging to colleagues for a while now: with LLM agents we are very quickly going to become even more CPU-bottlenecked on testing than we already are, and every team I know of was bottlenecked on CI speed even before LLMs. There's no point having an agent that can write code 100x faster than a human if every change takes an hour to test.

Maybe I've just got unlucky in the past, but in most projects I've worked on a lot of developer time was wasted waiting for PRs to go green. Many runs end up bottlenecked on I/O or worker availability, so changes can sit in queues for hours, or they flake out and everything has to start again.

As they get better, coding agents are going to be assigned simple tickets that they turn into green PRs, with the model reacting to test failures and fixing them as it goes. This will make the CI bottleneck even worse.
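
Concretely, the loop I'm picturing is something like this (hand-wavy sketch, not any particular product; ask_model and apply_patch are hypothetical stand-ins for the agent's LLM call and its workspace edit, and ./ci.sh is a made-up test entry point):

    # Sketch of an agent iterating against CI until the PR goes green.
    import subprocess

    def ask_model(prompt: str) -> str: ...     # hypothetical LLM call
    def apply_patch(patch: str) -> None: ...   # hypothetical workspace edit

    def run_ci() -> tuple[bool, str]:
        r = subprocess.run(["./ci.sh"], capture_output=True, text=True)
        return r.returncode == 0, r.stdout + r.stderr

    def work_ticket(ticket: str, max_attempts: int = 5) -> bool:
        patch = ask_model(f"Implement: {ticket}")
        for _ in range(max_attempts):
            apply_patch(patch)
            green, log = run_ci()              # every retry burns a full CI run
            if green:
                return True                    # hand off for human review
            patch = ask_model(f"CI failed:\n{log}\nRevise the patch.")
        return False

Every retry in that loop is another full CI run, multiplied across however many agents you have in flight.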

It feels like there's a lot of low-hanging fruit in most projects' testing setups, but for some reason I've seen nearly no progress here for years. It feels like we kinda collectively got used to the idea that CI services are slow and expensive, then stopped trying to improve things. If anything, CI got a lot slower over time as people tried to make builds fully hermetic (so no inter-run caching) and moved them from on-prem dedicated hardware to expensive cloud VMs with slow I/O, which haven't got much faster over the years.

Mercury is crazy fast and, in a few quick tests I did, created good and correct code. How will we make test execution keep up with it?

kccqzy No.44490652
> Maybe I've just got unlucky in the past, but in most projects I worked on a lot of developer time was wasted on waiting for PRs to go green.

I don't understand this. Developer time is so much more expensive than machine time. Do companies not just double their CI workers after hearing people complain? It's just a throw-more-resources problem. When I was at Google, it was somewhat common for me to debug non-deterministic bugs such as a missing synchronization or fence causing flakiness, and it was common to just launch 10,000 copies of the same test on 10,000 machines to find perhaps a single-digit number of failures. My current employer has a clunkier implementation of the same thing (no UI), but there's also a single command to launch 1,000 test workers to run all tests from your own checkout. The goal is to finish testing a 1M loc codebase in no more than five minutes so that you get quick feedback on your changes.
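
For flavor, here is roughly what that workflow looks like as a toy local script (the real thing fans out to separate machines with proper scheduling; ./run_test.sh and the counts here are made up):

    # Toy flake hunt: run one test many times in parallel and count the
    # rare failures. ./run_test.sh is a hypothetical test entry point.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    RUNS = 10_000      # copies to launch
    WORKERS = 200      # local parallelism; the real thing uses one machine per copy

    def run_once(_: int) -> bool:
        return subprocess.run(["./run_test.sh"], capture_output=True).returncode == 0

    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        passed = list(pool.map(run_once, range(RUNS)))

    print(f"{passed.count(False)} failures out of {RUNS} runs")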

> make builds fully hermetic (so no inter-run caching)

These are orthogonal. You want maximally deterministic CI steps so that you can make builds fully hermetic and cache every single thing.
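
The whole point of determinism is that the hash of a step's declared inputs becomes a valid cache key. A toy sketch of that idea (the paths and build command are made up):

    # Toy content-addressed cache for a build step: if outputs depend only
    # on declared inputs, hashing the inputs gives a safe cache key.
    import hashlib, pathlib, subprocess

    CACHE = pathlib.Path("/tmp/build-cache")
    CACHE.mkdir(exist_ok=True)

    def cache_key(cmd: list[str], inputs: list[str]) -> str:
        h = hashlib.sha256(" ".join(cmd).encode())
        for p in sorted(inputs):
            h.update(pathlib.Path(p).read_bytes())
        return h.hexdigest()

    def build_step(cmd: list[str], inputs: list[str], output: str) -> None:
        entry = CACHE / cache_key(cmd, inputs)
        if entry.exists():                       # hit: reuse the prior output
            pathlib.Path(output).write_bytes(entry.read_bytes())
            return
        subprocess.run(cmd, check=True)          # miss: run the real build
        entry.write_bytes(pathlib.Path(output).read_bytes())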

mike_hearn No.44490764
I was also at Google for years. Places like that are not even close to representative. They can afford to just-throw-more-resources: they get bulk discounts on hardware and they pay top dollar for engineers.

In the more common scenarios that represent 95% of the software industry, CI budgets are fixed, clusters are sized to be busy most of the time, and you cannot simply launch 10,000 copies of the same test on 10,000 machines. Even so, these CI clusters can easily burn through the equivalent of several SWE salaries.

> These are orthogonal. You want maximum deterministic CI steps so that you make builds fully hermetic and cache every single thing.

Again, that's how companies like Google do it. In normal companies, build caching isn't always perfectly reliable, and if CI runs suffer flakes due to caching then eventually some engineer is gonna get mad and convince someone else to turn the caching off. Blaze goes to extreme lengths to ensure this doesn't happen, and Google spends extreme sums of money on helping it do that (e.g. porting third party libraries to use Blaze instead of their own build system).

Companies without money-printing machines sacrifice caching to get determinism, and everything ends up slow.

kridsdale1 No.44492797
I’m at Google today and, even with all the resources, I am absolutely most bottlenecked by presubmit TAP runs and human review latency. Making CLs in the editor takes me a few hours. Getting them into the system takes days and sometimes weeks.
1. to23iho34324 No.44497595
Indeed. You'd think Google would test for how well people cope with boredom, rather than running bait-and-switch interviews that make it seem like you'll be solving l33tcode every evening.
2. ozim No.44497989
You think people work on a single issue at a time?

Maybe at Google they can afford that; where I worked, at one point I was on 2 or 3 projects, switching between issues. Of course all the projects were the same tech and mostly the same setup, but the business logic and tasks were different.

If I have to wait 2-3 hours, I have code to review and bug fixes in different places to implement. Even on a single project, if you wait 2 hours until your code lands in the test env and have nothing else to do, someone is mismanaging the process.

3. to23iho34324 No.44505908
Dude, I've worked at Google.

It's an overpaid, overglorified, boring job (obv. outside all the 'cool' research-y projects).