566 points by PaulHoule | 20 comments
mike_hearn ◴[] No.44490340[source]
A good chance to bring up something I've been flagging to colleagues for a while now: with LLM agents we are very quickly going to become even more CPU-bottlenecked on testing than we are today, and every team I know of was bottlenecked on CI speed even before LLMs. There's no point having an agent that can write code 100x faster than a human if every change takes an hour to test.

Maybe I've just been unlucky in the past, but in most projects I've worked on, a lot of developer time was wasted waiting for PRs to go green. Many runs end up bottlenecked on I/O or the availability of workers, so changes can sit in queues for hours, or they flake out and everything has to start again.

As they get better, coding agents are going to be assigned simple tickets that they turn into green PRs, with the model reacting to test failures and fixing them as it goes. This will make the CI bottleneck even worse.

It feels like there's a lot of low-hanging fruit in most projects' testing setups, but for some reason I've seen nearly no progress here for years. It feels like we collectively got used to the idea that CI services are slow and expensive, then stopped trying to improve things. If anything, CI has got a lot slower over time as people tried to make builds fully hermetic (so no inter-run caching) and moved them from on-prem dedicated hardware to expensive cloud VMs with slow I/O, which haven't got much faster over time.

Mercury is crazy fast and, in a few quick tests I did, produced good, correct code. How will we make test execution keep up with it?

replies(28): >>44490408 #>>44490637 #>>44490652 #>>44490785 #>>44491195 #>>44491421 #>>44491483 #>>44491551 #>>44491898 #>>44492096 #>>44492183 #>>44492230 #>>44492386 #>>44492525 #>>44493236 #>>44493262 #>>44493392 #>>44493568 #>>44493577 #>>44495068 #>>44495946 #>>44496321 #>>44496534 #>>44497037 #>>44497707 #>>44498689 #>>44502041 #>>44504650 #
1. TechDebtDevin ◴[] No.44490408[source]
An LLM making a quick edit of under 100 lines? Sure. Asking an LLM to rubber-duck your code? Sure. But integrating an LLM into your CI is going to end up costing you hundreds of hours of productivity on any large project. That, or you'll spend half the time you should be spending learning to write your own code on dialing in context sizes and prompt accuracy.

I really, really don't understand the hubris around LLM tooling, and I don't see it catching on outside of personal projects and small web apps. These things don't handle complex systems well at all; you would have to put a gun in my mouth to get me to let one of these things work on an important repo of mine without any supervision. And if I'm supervising the LLM, I might as well do it myself, because I'm going to end up redoing 50% of its work anyway.

replies(4): >>44490540 #>>44490612 #>>44490651 #>>44491513 #
2. mike_hearn ◴[] No.44490540[source]
I've used Claude with a large, mature codebase and it did fine. Not for every possible task, but for many.

Probably, Mercury isn't as good at coding as Claude is. But even if it's not, there are lots of small tasks LLMs can do without needing senior-engineer-level skills: adding test coverage, fixing low-priority bugs, adding nice animations to the UI, etc. Stuff that maybe isn't critical, so if a PR turns up and it's DOA you just close it, but which otherwise works.

Note that many projects already use this approach with bots like Renovate. Such bots also consume a ton of CI time, but it's generally worth it.

replies(2): >>44490641 #>>44490758 #
3. DSingularity ◴[] No.44490612[source]
He is simply observing that if PR numbers and launch rates increase dramatically, CI costs will become untenable.
4. flir ◴[] No.44490641[source]
Don't want to put words in the parent commenter's mouth, but I think the key word is "unsupervised". Claude doesn't know what it doesn't know, and will keep going round the loop until the tests go green, or until the heat death of the universe.
replies(1): >>44490658 #
5. kraftman ◴[] No.44490651[source]
I keep seeing this argument over and over again, and I have to wonder, at what point do you accept that maybe LLMs are useful? Like how many people need to say that they find it makes them more productive before you'll shift your perspective?
replies(5): >>44490744 #>>44490784 #>>44490992 #>>44492429 #>>44493343 #
6. mike_hearn ◴[] No.44490658{3}[source]
Yes, but you can just impose timeouts to solve that. If it's unsupervised, the only cost is computation.
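
For instance, a minimal sketch of that kind of guardrail in Python; the `agent` command and its flags here are hypothetical stand-ins for whatever tool actually drives the loop:

    import subprocess

    # Cap an unsupervised agent run: kill it after 30 minutes of wall-clock
    # time, and cap the number of fix-the-tests iterations it may attempt.
    # The "agent" CLI and its flags are placeholders, not a real tool.
    try:
        subprocess.run(
            ["agent", "fix", "--max-iterations", "10"],
            timeout=30 * 60,  # seconds
            check=True,
        )
    except subprocess.TimeoutExpired:
        print("Agent hit the wall-clock limit; abandoning the run.")
    except subprocess.CalledProcessError as e:
        print(f"Agent gave up or failed (exit code {e.returncode}).")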
7. candiddevmike ◴[] No.44490744[source]
People say they are more productive using Visual Basic, but that will never shift my perspective on it.

Code is a liability. Code you didn't write is a ticking time bomb.

8. airstrike ◴[] No.44490758[source]
IMHO LLMs are notoriously bad at test coverage. They usually hard-code a value to make the test pass, since they lack the reasoning required to understand why the test exists, or the concept of an assertion, really.
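
A sketch of the failure mode, using a made-up function; the expected value is just copied from running the implementation once, so the test can only ever agree with the code, bugs included:

    from datetime import date

    # Illustrative function under test.
    def business_days_between(start: date, end: date) -> int:
        days = (end - start).days
        return days - 2 * (days // 7)  # crude weekday count; ignores partial weeks

    # The kind of "coverage" this produces: the expected value was hard-coded
    # from the implementation's own output, so the assertion verifies nothing
    # about whether the logic is actually right.
    def test_business_days_between():
        assert business_days_between(date(2024, 1, 1), date(2024, 1, 15)) == 10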
replies(1): >>44491371 #
9. psychoslave ◴[] No.44490784[source]
It's a tool, and it depends what you need to do. If it fits someone's needs and makes them more productive, or even simply makes the activity more enjoyable, good.

Just because two people are each fixing something to a wall doesn't mean the same tool will do for both. Gum, pushpin, nail, screw, bolts?

The parent comment did mention they use LLMs successfully in small side projects.

10. dragonwriter ◴[] No.44490992[source]
> I keep seeing this argument over and over again, and I have to wonder, at what point do you accept that maybe LLMs are useful?

The post you are responding to literally acknowledges that LLMs are useful in certain roles in coding in the first sentence.

> Like how many people need to say that they find it makes them more productive before you'll shift your perspective?

Argumentum ad populum is not a good way of establishing fact claims beyond the fact of a belief being popular.

replies(1): >>44493539 #
11. wrs ◴[] No.44491371{3}[source]
I don’t know, Claude is very good at writing that utterly useless kind of unit test where every dependency is mocked out and the test is just the inverted dual of the original code. 100% coverage, nothing tested.
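
A sketch of the pattern, with a made-up `OrderService`; every collaborator is mocked, so the assertions just restate the implementation line by line:

    from unittest.mock import MagicMock

    # Illustrative class under test: all real behaviour lives in its dependencies.
    class OrderService:
        def __init__(self, repo, mailer):
            self.repo = repo
            self.mailer = mailer

        def process(self, order_id):
            ok = self.repo.save(order_id)
            self.mailer.send(order_id)
            return ok

    def test_process_order():
        repo = MagicMock()
        mailer = MagicMock()
        repo.save.return_value = True

        service = OrderService(repo=repo, mailer=mailer)
        result = service.process(order_id=42)

        # The "inverted dual": each assertion mirrors one line of process().
        repo.save.assert_called_once_with(42)
        mailer.send.assert_called_once_with(42)
        assert result is True  # merely echoes the mocked return value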
replies(2): >>44492545 #>>44495049 #
12. blitzar ◴[] No.44491513[source]
Do the opposite: integrate your CI into your LLM.

Make it run the tests after it changes your code, and either confirm it didn't break anything or go back and try again.
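
A rough sketch of that loop, assuming pytest as the test runner; the `ask_agent_to_fix` hook is a placeholder for whatever coding-agent API you'd call:

    import subprocess

    MAX_FIX_ATTEMPTS = 5  # bound the loop so a stuck agent can't spin forever

    def run_tests() -> tuple[bool, str]:
        # Run the project's test suite; a non-zero exit code means failures.
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout

    def ask_agent_to_fix(test_output: str) -> None:
        raise NotImplementedError  # feed the failing output back to your agent here

    for attempt in range(MAX_FIX_ATTEMPTS + 1):
        ok, output = run_tests()
        if ok:
            print(f"Green after {attempt} fix attempt(s).")
            break
        if attempt < MAX_FIX_ATTEMPTS:
            ask_agent_to_fix(output)
    else:
        print("Still red after the attempt budget; hand back to a human.")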

13. ninetyninenine ◴[] No.44492429[source]
They say it's only effective for personal projects, but there's literally evidence of LLMs being used for exactly what he says they can't be used for. Actual physical evidence.

It's self-delusion. And the pace of AI is so fast that he may not be aware of how quickly LLMs are being integrated into our coding environments. A year ago what he said could have been somewhat true, but right now it's clearly not true at all.

14. conradkay ◴[] No.44492545{4}[source]
Yeah, and that's even worse, because there's no easy metric you can have the agent work towards and get feedback on.

I'm not that into "prompt engineering" but tests seem like a big opportunity for improvement. Maybe something like (but much more thorough):

1. "Create a document describing all real-world actions which could lead to the code being used. List all methods/code which gets called before it (in order) along with their exact parameters and return value. Enumerate all potential edge cases and errors that could occur and if it ends up influencing this task. After that, write a high-level overview of what need to occur in this implementation. Don't make it top down where you think about what functions/classes/abstractions which are created, just the raw steps that will need to occur" 2. Have it write the tests 3. Have it write the code

Maybe TDD ends up worse, but I suspect an initial plan that's somewhat close to code makes that not the case.

Writing the initial doc yourself would definitely be better, but I suspect that writing just one really good one, then giving it as an example in each subsequent prompt, captures a lot of the improvement.
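
As a sketch, the three steps might be wired up like this; `complete` is a stand-in for a real LLM client call, and the prompt text is abbreviated from step 1 above:

    # Hypothetical wiring for the three-step flow; nothing here is a real API.
    def complete(prompt: str) -> str:
        raise NotImplementedError  # swap in your LLM client of choice

    PLAN_PROMPT = (
        "Create a document describing all real-world actions that could lead "
        "to this code being used, the calls made before it, edge cases, and a "
        "high-level overview of the raw steps required:\n{task}"
    )

    def plan_tests_then_code(task: str, example_doc: str) -> tuple[str, str]:
        # Step 1: a raw-steps planning doc, seeded with one good hand-written example.
        doc = complete("Example of a good plan:\n" + example_doc + "\n\n"
                       + PLAN_PROMPT.format(task=task))
        # Step 2: tests derived from the plan, before any implementation exists.
        tests = complete("Given this plan, write the tests:\n" + doc)
        # Step 3: code written against both the plan and the tests.
        code = complete("Plan:\n" + doc + "\n\nTests:\n" + tests
                        + "\n\nNow write the code.")
        return tests, code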

replies(1): >>44497462 #
15. MangoToupe ◴[] No.44493343[source]
> at what point do you accept that maybe LLMs are useful?

LLMs are useful, just not for every task and price point.

16. kraftman ◴[] No.44493539{3}[source]
...and my comment clearly isn't talking about that, but about the suggestion that it's useless to write code with an LLM because you'll end up rewriting 50% of it.

If everyone has an opinion different to mine, I don't instantly change my opinion, but I do try to investigate the source of the difference, to find out what I'm missing or what they are missing.

The polarisation between people that find LLMs useful or not is very similar to the polarisation between people that find automated testing useful or not, and I have a suspicion they have the same underlying cause.

replies(1): >>44494103 #
17. nwienert ◴[] No.44494103{4}[source]
You seem to think everyone shares your view. Around me I see a lot of people acknowledging they are useful to a degree, but also clearly finding limits in a wide array of cases: they really struggle with logical code, architectural decisions, reusing the right code patterns, larger-scale changes that aren't copy-paste, etc.

So far what I see is that if I provide lots of context and clear instructions for a mostly non-logical area of code, I can speed myself up by about 20-40%, but that only works on about 30-50% of the problems I solve day to day at my day job.

So basically, it's roughly a 20% improvement in my productivity, because I spend most of my time on the difficult things it can't do anyway.

Meanwhile these companies are raising billion dollar seed rounds and telling us that all programming will be done by AI by next year.

replies(1): >>44497459 #
18. astrange ◴[] No.44495049{4}[source]
This is why unit tests are the least useful kind of test and regression tests are the most useful.

I think unit tests are best written /before/ the real code and thrown out after. Of course, that's extremely situational.

19. girvo ◴[] No.44497459{5}[source]
> Meanwhile these companies are raising billion dollar seed rounds and telling us that all programming will be done by AI by next year.

Which is the same thing they said last year, and it hasn't panned out. But surely this time it'll be right...

20. girvo ◴[] No.44497462{5}[source]
I've not gone into it yet, but I think BDD would fit reasonably well with agents for generating tests that aren't entirely useless.