688 points dheerajvs | 62 comments
simonw ◴[] No.44523442[source]
Here's the full paper, which has a lot of details missing from the summary linked above: https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf

My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.

This study had 16 participants, with a mix of previous exposure to AI tools - 56% of them had never used Cursor before, and the study was mainly about Cursor.

They then had those 16 participants work on issues (about 15 each), where each issue was randomly assigned a "you can use AI" vs. "you can't use AI" rule.

So each developer worked on a mix of AI-tasks and no-AI-tasks during the study.

A quarter of the participants saw increased performance, 3/4 saw reduced performance.

One of the top performers for AI was also someone with the most previous Cursor experience. The paper acknowledges that here:

> However, we see positive speedup for the one developer who has more than 50 hours of Cursor experience, so it's plausible that there is a high skill ceiling for using Cursor, such that developers with significant experience see positive speedup.

My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is steep enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.

replies(33): >>44523608 #>>44523638 #>>44523720 #>>44523749 #>>44523765 #>>44523923 #>>44524005 #>>44524033 #>>44524181 #>>44524199 #>>44524515 #>>44524530 #>>44524566 #>>44524631 #>>44524931 #>>44525142 #>>44525453 #>>44525579 #>>44525605 #>>44525830 #>>44525887 #>>44526005 #>>44526996 #>>44527368 #>>44527465 #>>44527935 #>>44528181 #>>44528209 #>>44529009 #>>44529698 #>>44530056 #>>44530500 #>>44532151 #
grey-area ◴[] No.44524005[source]
Well, there are two possible interpretations here of 75% of participants (all of whom had some experience using LLMs) being slower using generative AI:

1. LLMs have a very steep and long learning curve, as you posit (though note the points from the paper authors in the other reply).

2. Current LLMs just are not as good as they are sold to be as programming assistants, and people consistently predict and self-report in the wrong direction on how useful they are.

replies(6): >>44524525 #>>44524552 #>>44525186 #>>44525216 #>>44525303 #>>44526981 #
1. steveklabnik ◴[] No.44524552[source]
> Current LLMs

One thing that happened here is that they aren't using current LLMs:

> Most issues were completed in February and March 2025, before models like Claude 4 Opus or Gemini 2.5 Pro were released.

That doesn't mean this study is bad! In fact, I'd be very curious to see it done again, but with newer models, to see if that has an impact.

replies(1): >>44524740 #
2. blibble ◴[] No.44524740[source]
> One thing that happened here is that they aren't using current LLMs

I've been hearing this for 2 years now

the previous model retroactively becomes total dogshit the moment a new one is released

convenient, isn't it?

replies(10): >>44524758 #>>44524891 #>>44524893 #>>44524975 #>>44525030 #>>44525035 #>>44526195 #>>44526545 #>>44526712 #>>44535270 #
3. simonw ◴[] No.44524758[source]
The previous model retroactively becomes not as good as the best available models. I don't think that's a huge surprise.
replies(2): >>44524856 #>>44525150 #
4. cwillu ◴[] No.44524856{3}[source]
The surprise is the implication that the crossover between net-negative and net-positive impact happened to fall within the last 4 months, given that the initial release was 2 years ago and there has been enough public attention for a study to be funded and completed.

Yes, it might make a difference, but it is a little tiresome that there's always a “this is based on a model that is x months old!” comment, because it will always be true: an academic study does not get funded, executed, written up, and published in less time.

replies(1): >>44525066 #
5. pdabbadabba ◴[] No.44524891[source]
Maybe it's convenient. But isn't it also just a fact that some of the models available today are better than the ones available five months ago?
replies(2): >>44524999 #>>44525074 #
6. steveklabnik ◴[] No.44524893[source]
Sorry, that’s not my take. I didn’t think these tools were useful until the latest set of models, that is, they crossed the threshold of usefulness to me.

Even then though, “technology gets better over time” shouldn’t be surprising, as it’s pretty common.

replies(2): >>44525041 #>>44525078 #
7. jstummbillig ◴[] No.44524975[source]
Convenient for whom, and for what...? There is nothing tangible to gain from your believing or not believing that someone else does (or does not) get a productivity boost from AI. This is not a religion and it's not crypto. An AI user's net worth is not tied to anyone else's use of, or stance on, AI (if anything, it's the opposite).

More generally, the phenomenon here is quite simply explained and not surprising: new things improve, quickly. That does not mean a given thing is good or valuable, but this is how new tech gets introduced every single time, and it readily explains changing sentiment.

replies(3): >>44525177 #>>44525199 #>>44525836 #
8. bryanrasmussen ◴[] No.44524999{3}[source]
Sure, but after having spent some time trying to get anything useful (programmatically) out of previous models and not getting anything, how much time should one spend once a new one is announced?

Sure, you may end up missing out on a good thing and having to come late to the party. But come early to the party too many times, finding the beer watered down and the food full of grubs, and you're apt to be cynical the next time a party announcement comes your way.

replies(1): >>44525321 #
9. cfst ◴[] No.44525030[source]
The current batch of models, specifically Claude Sonnet and Opus 4, are the first I've used that have actually been more helpful than annoying on the large mixed-language codebases I work in. I suspect that dividing line differs greatly between developers and applications.
10. nalllar ◴[] No.44525035[source]
If you treat internet comments and discussions as an amorphous blob of people, you'll see a constant trickle of the view that models are useful now and were useless before.

If you pay attention to who says it, you'll find that people have different personal thresholds for finding LLMs useful, not that any given person (like steveklabnik above) keeps flip-flopping on their view.

This is a variant on the goomba fallacy: https://englishinprogress.net/gen-z-slang/goomba-fallacy-exp...

11. mattmanser ◴[] No.44525041{3}[source]
Do you really see a massive jump?

For context, I've been using AI (a mix of OpenAI + Claude), mainly for bashing out quick React stuff, for over a year now. For anything else it's generally rubbish and slower than working without it. Though I still use it to rubber duck, so I'm still seeing the level of quality for backend.

I'd say they're only marginally better today than they were even 2 years ago.

Every time a new model comes out you get a bunch of people raving about how great the new one is, and I honestly can't really tell the difference. The only real change is that reasoning models actually slowed everything down, but now I can see the reasoning. That's only useful because I often spot it leaving important stuff out of the final answer.

replies(5): >>44525090 #>>44525193 #>>44525866 #>>44526601 #>>44531993 #
12. Ntrails ◴[] No.44525066{4}[source]
Some of it is just that (probably different) people said the same damn things 6 months ago.

"No, the 2.8 release is the first good one. It massively improves workflows"

Then, 6 months later, the study comes out.

"Ah man, 2.8 was useless, 3.0 really crossed the threshold on value add"

At some point, you roll your eyes and assume it is just snake oil sales

replies(2): >>44525328 #>>44525336 #
13. Terr_ ◴[] No.44525074{3}[source]
That's not the issue. Their complaint is that proponents keep revising what ought to be fixed goalposts... Well, fixed unless you believe unassisted human developers are also getting dramatically better at their jobs every year.

Like the boy who cried wolf, it'll eventually be true with enough time... But we should stop giving them the benefit of the doubt.

_____

Jan 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."

Feb 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."

Mar 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."

Apr 2025: [Ad nauseam, you get the idea]

replies(1): >>44525557 #
14. ipaddr ◴[] No.44525078{3}[source]
Wait until the next set. You will find the previous ones weren't useful after all.
replies(1): >>44525215 #
15. hombre_fatal ◴[] No.44525090{4}[source]
I see a massive jump every time.

Just two years ago, this failed.

> Me: What language is this: "esto está escrito en inglés"

> LLM: English

(The sentence is Spanish for "this is written in English", so the correct answer is Spanish.)

Gemini and Opus have solved questions that took me weeks to solve myself. And I'll feed some complex code into each new iteration, and it will catch a race condition I missed even with testing and line-by-line scrutiny.

Consider how many more years of experience you need as a software engineer to catch hard race conditions just from reading code, compared to someone who couldn't do it after trying 100 times. We already take this for granted since we see it as "it caught it or it didn't", but these are massive jumps in capability.
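
To make "hard race condition" concrete, here is a stock illustration of the check-then-act class of bug, the kind that reads fine line by line but fails under concurrency (my example, not from the thread):

```python
import threading

balance = 100

def withdraw(amount: int) -> bool:
    """Buggy: the check and the update are not atomic."""
    global balance
    if balance >= amount:  # threads A and B can both pass this check...
        balance -= amount  # ...and both debit, driving the balance negative
        return True
    return False

# The fix: make check-then-act a single atomic step under a lock.
lock = threading.Lock()

def withdraw_safe(amount: int) -> bool:
    global balance
    with lock:  # only one thread checks and updates at a time
        if balance >= amount:
            balance -= amount
            return True
        return False
```

Spotting this in two adjacent lines is easy; spotting the same pattern spread across modules and callbacks is what takes years of experience.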

16. foobarqux ◴[] No.44525150{3}[source]
That's not the argument being made, though. The argument is that it does "work" now, with the implication that it didn't quite work before; except that the same people say the same thing at every model release, including at the release of the previous one, which is now acknowledged to be seriously flawed; and they will say it at the next one, at which point the current models will similarly be acknowledged to be not only less performant than the future models, but inherently flawed.

Of course it's possible that at some point you get to a model that really works, irrespective of the history of false claims from the zealots, but it does mean you should take their comments with a grain of salt.

replies(1): >>44525369 #
17. card_zero ◴[] No.44525177{3}[source]
I saw that edit. Indeed you can't predict that rejecting a new thing is part of a routine of being wrong. It's true that "it's strange and new, therefore I hate it" is a very human (and adorable) instinct, but sometimes it's reasonable.
replies(2): >>44525559 #>>44530847 #
18. steveklabnik ◴[] No.44525193{4}[source]
Yes. In January I would have told you AI tools are bullshit. Today I’m on the $200/month Claude Max plan.

As with anything, your mileage may vary: I'm not here to tell anyone who thinks they still suck that their experience is invalid, but to me it's been a pretty big swing.

replies(2): >>44525395 #>>44526058 #
19. grey-area ◴[] No.44525199{3}[source]
Honestly, the hype cycle feels very much like crypto, and just like crypto, prominent VCs have a lot of money riding on the outcome.
replies(2): >>44525236 #>>44525632 #
20. steveklabnik ◴[] No.44525215{4}[source]
This makes no sense to me. I’m well aware that I’m getting value today, that’s not going to change in the future: it’s already happened.

Sure they may get even more useful in the future but that doesn’t change my present.

21. steveklabnik ◴[] No.44525236{4}[source]
I agree with you, and I think that’s coloring a lot of people’s perceptions. I am not a crypto fan but am an LLM fan.

Every hype cycle feels like this, and some of them are nonsense and some of them are real. We’ll see.

22. Terr_ ◴[] No.44525321{4}[source]
Plus it's not even possible to miss the metaphorical party: If it gets going, it will be quite obvious long before it peaks.

(Unless one believes the most grandiose prophecies of a technological-singularity apocalypse, that is.)

23. Filligree ◴[] No.44525328{5}[source]
Or you accept that different people have different skill levels, workflows and goals, and therefore the AIs reach usability at different times.
replies(1): >>44530511 #
24. steveklabnik ◴[] No.44525336{5}[source]
There’s a lot of confounding factors here. For example, you could point to any of these things in the last ~8 months as being significant changes:

* the release of agentic workflow tools

* the release of MCPs

* the release of new models, Claude 4 and Gemini 2.5 in particular

* subagents

* asynchronous agents

All or any of these could have made for a big or small impact. For example, I’m big on agentic tools, skeptical of MCPs, and don’t think we yet understand subagents. That’s different from those who, for example, think MCPs are the future.

> At some point, you roll your eyes and assume it is just snake oil sales

No, you have to realize you’re talking to a population of people, and not necessarily the same person. Opinions are going to vary, they’re not literally the same person each time.

There are surely snake oil salesman, but you can’t buy anything from me.

replies(1): >>44534117 #
25. steveklabnik ◴[] No.44525369{4}[source]
> That's not the argument being made though, which is that it does "work" now and implying that actually it didn't quite work before

Right.

> except that that is the same thing the same people say for every model release,

I did not say that, no.

I am sure you can find someone who is in a Groundhog Day about this, but it’s just simpler than that: as tools improve, more people find them useful than before. You’re not talking to the same people, you are talking to new people each time who now have had their threshold crossed.

replies(1): >>44525598 #
26. Uehreka ◴[] No.44525395{5}[source]
> In January I would have told you AI tools are bullshit. Today I’m on the $200/month Claude Max plan.

Same. For me the turning point was VS Code’s Copilot Agent mode in April. That changed everything about how I work, though it had a lot of drawbacks due to its glitches (many of these were fixed within 6 or so weeks).

When Claude Sonnet 4 came out in May, I could immediately tell it was a step-function increase in capability. It was the first time an AI, faced with ambiguous and complicated situations, would be willing to answer a question with a definitive and confident “No”.

After a few weeks, it became clear that VS Code’s interface and usage limits were becoming the bottleneck. I went to my boss, bullet points in hand, and easily got approval for the Claude Max $200 plan. Boom, another step-function increase.

We’re living in an incredibly exciting time to be a skilled developer. I understand the need to stay skeptical and measure the real benefits, but I feel like a lot of people are getting caught up in the culture war aspect and are missing out on something truly wonderful.

27. pdabbadabba ◴[] No.44525557{4}[source]
Fair enough. For what it's worth, I've always thought that the more reasonable claim is that AI tools make poor-to-average developers more productive, not necessarily expert developers.
replies(1): >>44526668 #
28. jstummbillig ◴[] No.44525559{4}[source]
"I saw that edit" lol
replies(1): >>44525611 #
29. blibble ◴[] No.44525598{5}[source]
> You’re not talking to the same people, you are talking to new people each time who now have had their threshold crossed.

no, it's the same names, again and again

replies(1): >>44525880 #
30. card_zero ◴[] No.44525611{5}[source]
Sorry, just happened to. Slightly rude of me.
replies(1): >>44525716 #
31. jstummbillig ◴[] No.44525632{4}[source]
Of course, lots of hype, but my point is that the reason why is very different, and it matters: as an early bitcoin adopter, making you believe in bitcoin is super important to my net worth (and your not believing in bitcoin makes me look like an idiot and lose a lot of money).

In contrast, what do I care if you believe in code generation AI? If you do, you are probably driving up pricing. I mean, I am sure that there are people that care very much, but there is little inherent value for me in you doing so, as long as the people who are building the AI are making enough profit to keep it running.

With regards to the VCs, well, how many VCs are there in the world? How many of the people who have something good to say about AI are likely VCs? I might be off by an order of magnitude, but even then it would really not be driving the discussion.

replies(1): >>44525865 #
32. jstummbillig ◴[] No.44525716{6}[source]
Ah, you do you. It's just a fairly kindergarten thing to point out and not something I was actively trying to hide. Whatever it was.

Generally, I do a couple of edits for clarity after posting and reading again. Sometimes that involves removing something that I feel could have been said better. If it does not work, I will just delete the comment. Whatever it was must not have been a super huge deal (to me).

replies(1): >>44527940 #
33. leshow ◴[] No.44525836{3}[source]
I think you're missing the broader context. There are a lot of people very invested in the maximalist outcome, which does create pressure for people to be boosters. You don't need a digital token for that to happen. There's a social media aspect as well that creates a feedback loop around claims.

We're in a hype cycle, and it means we should be extra critical when evaluating the tech so we don't get taken in by exaggerated claims.

replies(1): >>44526326 #
34. leshow ◴[] No.44525865{5}[source]
I don't find that a compelling argument, lots of people get taken in by hype cycles even when they don't profit directly from it.
35. simonw ◴[] No.44525866{4}[source]
The massive jump in the last six months is that the new set of "reasoning" models got really good at reasoning about when to call tools, and were accompanied by a flurry of tools-in-loop coding agents: Claude Code, OpenAI Codex, Cursor in Agent mode, etc.

An LLM that can test the code it is writing and then iterate to fix the bugs turns out to be a huge step forward from LLMs that just write code without trying to then exercise it.
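
The shape of that loop, sketched minimally (my illustration; `call_model` and `apply_patch` are hypothetical stand-ins for a real model API and a file editor, not any particular product's internals):

```python
import subprocess

def call_model(transcript: list[str]) -> str:
    """Hypothetical LLM call: returns a patch to apply, or "DONE"."""
    raise NotImplementedError  # stand-in for a real model API

def apply_patch(patch: str) -> None:
    """Hypothetical: write the model's proposed edits to the working tree."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    """Run the test suite and capture its output for the model to read."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(task: str, max_steps: int = 10) -> bool:
    """Write code, run the tests, feed the failures back, repeat."""
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        action = call_model(transcript)
        if action == "DONE":
            return True
        apply_patch(action)
        passed, output = run_tests()
        # The difference from older assistants: the model sees the real
        # test results and gets another try, instead of emitting code
        # once and hoping for the best.
        transcript.append("All tests pass." if passed
                          else f"Test failures:\n{output}")
    return False
```

The interesting part is the feedback edge: the model's next attempt is conditioned on real tool output rather than on its own guess about whether the code works.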

36. simonw ◴[] No.44525880{6}[source]
Got receipts?

That sounds like a claim you could back up with a little bit of time spent using Hacker News search or similar.

(I might try to get a tool like o3 to run those searches for me.)

replies(1): >>44526026 #
37. blibble ◴[] No.44526026{7}[source]
try asking it what sealioning is
replies(1): >>44527616 #
38. mattmanser ◴[] No.44526058{5}[source]
Ok, I'll have to try it out then. I've got a side project that's 3/4 finished, and I'll let it loose on it.

So are you using Claude Code via the max plan, Cursor, or what?

I think I'd definitely hit AI news exhaustion and was viewing people raving about this agentic stuff as yet more AI fanboys. I'd just continued using the AI separately, as setting up a new IDE seemed like too much work for the fractional gains I'd been seeing.

replies(3): >>44526143 #>>44528857 #>>44536836 #
39. steveklabnik ◴[] No.44526143{6}[source]
I had a bad time with Cursor. I use Claude Code inside of VS Code. You don't necessarily need Max, but you can spend a lot of money very quickly on API tokens, so I'd recommend anyone trying it start with the $20/month plan; no need to spend a ton of money just to try something out.

There is a skill gap, like, I think of it like vim: at first it slows you down, but then as you learn it, you end up speeding up. So you may also find that it doesn't really vibe with the way you work, even if I am having a good time with it. I know people who are great engineers who still don't like this stuff, just like I know ones that do too.

replies(1): >>44527419 #
40. bix6 ◴[] No.44526195[source]
Everything actually got better. Look at the image generation improvements as an easily visible benchmark.

I do not program for my day job, and I vibe coded two different web projects: one in twenty minutes as a test, with Cloudflare deployment, having never used Cloudflare before, and one in a week over vacation (and then fixed a deep Safari bug two weeks later by hammering the LLM). These tools massively raise the capabilities of sub-average people like me and decrease the time/brain requirements significantly.

I had to make a little update to reset the KV store on Cloudflare, and the LLM did it in 20 seconds after getting the syntax wrong twice. I'd have spent at least a few minutes looking it up otherwise.

41. jstummbillig ◴[] No.44526326{4}[source]
I mostly don't agree. Yes, there is always social pressure with these things, and we are in a hype cycle, but the people "buying in" are simply not doing much at all. They are mostly consumers, waiting for the next model, which they have no control over or stake in creating (by and large).

The people not buying into the hype, on the other hand, are actually the ones with a very good reason to be invested, because if they turn out to be wrong they might face some very uncomfortable adjustments in the job landscape, and in the value of a lot of the skills they worked so hard to gain.

As always, be wary of any claims, but the incentive structure here is very much the reverse of crypto, and I don't think that's widely appreciated.

42. Aeolun ◴[] No.44526545[source]
It’s true though? Previous models could do well in specifically created settings. You can throw practically everything at Opus, and it’ll work mostly fine.
43. vidarh ◴[] No.44526601{4}[source]
I've gone from asking the tools how to do things and cut-and-pasting the (often small) bits that'd be helpful, via using assistants where I'd review every decision and often have to start over, to now often starting an assistant with broad permissions and just reviewing the diff later, after it has made the changes pass the test suite, run a linter and fixed every issue it brought up, and written a draft commit message.

The jump has been massive.

44. bluefirebrand ◴[] No.44526668{5}[source]
Personally I don't want poor-average developers to be more productive, I want them to be more expert
replies(2): >>44527605 #>>44527673 #
45. mwigdahl ◴[] No.44526712[source]
I've been a proponent for a long time, so I certainly fit this at least partially. However, the combination of Claude Code and the Claude 4 models has pushed the response to my demos of AI coding at my org from "hey, that's kind of cool" to "Wow, can you get me an API key please?"

It's been a very noticeable uptick in power, and although there have been some nice increases with past model releases, this has been both the largest and the one that has unlocked the most real value since I've been following the tech.

replies(1): >>44526752 #
46. achierius ◴[] No.44526752{3}[source]
Is that really the case vs. 3.7? For me that was the threshold, and since then the improvements have been nice but not as significant.
replies(1): >>44526968 #
47. mwigdahl ◴[] No.44526968{4}[source]
I would agree with you that the jump from Sonnet 3.7 to Sonnet 4 feels notable but not shocking. Opus 4 is considerably better, and Opus 4 combined with the Claude Code harness is what really unlocks the value for me.
48. mh- ◴[] No.44527419{7}[source]
Worth noting for the folks asking: there's an official Claude Code extension for VS Code now [0]. I haven't tried it personally, but that's mostly because I mainly use the terminal and vim.

[0]: https://marketplace.visualstudio.com/items?itemName=anthropi...

replies(1): >>44527989 #
49. Terr_ ◴[] No.44527605{6}[source]
"Compared to last quarter, we've shipped 40% more spaghetti-code!"
50. maxbond ◴[] No.44527616{8}[source]
You've no obligation to answer, no one is entitled to your time, but it's a reasonable request. It's not sealioning to respectfully ask for directly relevant evidence that takes about 10-15m to get.
51. pdabbadabba ◴[] No.44527673{6}[source]
Sure. But what would you suppose the ratio is between expert, average, and mediocre coders in the average organization? I think a small minority would be in the first category, and I don't see a technology on the horizon that will change that except for LLMs, which seem like they could make mediocre coders both more productive and able to produce higher-quality output.
replies(1): >>44528610 #
52. maxbond ◴[] No.44527940{7}[source]
FYI there's a "delay" setting in your profile that allows you to make your comment invisible for up to ten minutes.
53. steveklabnik ◴[] No.44527989{8}[source]
Yes, it’s not necessary but it is convenient for viewing diffs in Code’s diff view. The terminal is a fine way to interact with it though.
54. bluefirebrand ◴[] No.44528610{7}[source]
They definitely aren't producing higher quality output imo, but definitely producing low quality output faster

That's not a tradeoff that I like

replies(1): >>44532838 #
55. 8note ◴[] No.44528857{6}[source]
I'd say that's not going to be the best use for it, unless what you really want is to first document everything about the project in detail.

I'm using Claude + VS Code's Cline extension for the most part, and where it tends to excel is helping you write documentation, and then using that documentation to write reasonable code.

If you're 3/4 of the way done, a lot of the docs it wants in order to work well are going to be missing, and so a lot of your intentions about why you did or didn't make certain choices will be missing. If you've got good docs, make sure to feed those in as context.

The agentic tool on its own is still kinda meh if you only try to write code directly with it. It's definitely better than the non-agentic stuff, but if you start by getting it to document things, and to ask you questions about what it should know in order to make the change, it's pretty good.

Even if you don't get perfect code, or it spins in a feedback loop where it's lost the plot, the questions it asks can be super handy in terms of code patterns you haven't thought about that apply to your code, and things that would usually be undefined behaviour.

My raving is that I get to leave behind useful docs in my code packages, and my team members get access to and actually use those docs, without the usual discoverability problems. And I get those docs for... somewhat slower than I could have written the code myself, but much, much faster than if I also had to write those docs.

56. rsynnott ◴[] No.44530511{6}[source]
The complication is that, as noted in the above paper, _people are bad at self-reporting on whether the magic robot works for them_. Just because someone _believes_ they are more effective using LLMs is not particularly strong evidence that they actually are.
57. saturneria ◴[] No.44530847{4}[source]
It is an even more human reaction when the new strange thing directly threatens to upend and massively change the industry that puts food on your table.

The steam-powered loom was not good for the Luddites either. Good for society at large in the long term, but all the negative points that a 40-year-old knitter in 1810 could have made against the steam-powered loom would have been perfectly reasonable and accurate, judged from that individual's perspective.

58. otabdeveloper4 ◴[] No.44531993{4}[source]
> but now I see its reasoning

It's not showing its reasoning. "Reasoning" models are trained to output more tokens, in the hope that more tokens mean fewer hallucinations.

It's just a marketing trick, and there is no evidence this sort of fake "reasoning" actually gives any benefit.

59. pdabbadabba ◴[] No.44532838{8}[source]
That's the study I'm really interested in: does AI use improve the output of lower-skilled developers (as opposed to experts)? My intuition points in the opposite direction: I think AI would improve their work. But I'm not aware of any hard data that would help answer this question.
60. Ntrails ◴[] No.44534117{6}[source]
> you have to realize you’re talking to a population of people, and not necessarily the same person. Opinions are going to vary, they’re not literally the same person each time.

I pointed this out in my post for a reason. I get it. But even granting that a different person is saying the same thing every time a new release comes out, the effect on my prior is the same.

61. itsoktocry ◴[] No.44535270[source]
>the previous model retroactively becomes total dogshit the moment a new one is released

Keep writing your code manually, nobody cares.

62. jpc0 ◴[] No.44536836{6}[source]
Take this with a massive grain of salt, but here is my recent experience with Google's Gemini CLI (we pay for Google products but not others internally; I can't change that decision).

I asked it to implement two biquad filters: a high-pass filter and a high-shelf filter. For context: using the Gemini web app, it would spit out the exact code I need, with the interfaces I require, in one shot, because this is truly trivial C++ code to write.
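
(For scale, here is roughly what "trivial" looks like: a cookbook biquad high-pass, sketched in Python rather than the C++ in question, using the standard RBJ Audio EQ Cookbook coefficient formulas; the high-shelf variant differs only in its coefficients. This is my illustration, not the commenter's code.)

```python
import math

def highpass_coeffs(fs: float, f0: float, q: float = 0.7071):
    """RBJ cookbook high-pass biquad, normalized so that a0 == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha
    b = [(1 + cw) / 2 / a0, -(1 + cw) / a0, (1 + cw) / 2 / a0]
    a = [1.0, -2 * cw / a0, (1 - alpha) / a0]
    return b, a

def biquad(samples, b, a):
    """Direct Form I: five multiply-adds and four state variables per sample."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x  # shift input history
        y2, y1 = y1, y  # shift output history
        out.append(y)
    return out
```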

15 million tokens and an hour and a half later, I had a project that could not build, the filters were not implemented, and my trust in AI agentic workflows was broken.

It cost me nothing; I just reset the repo, and I was watching YouTube videos for that hour and a half.

Your mileage may vary, and I'm very sure that if this were Go or TypeScript it might have done significantly better, but even compared to the exact same model in a chat interface, my experience was horrible.

I'm sticking with the slightly "worse" experience of using the chat interface, which does give me significant improvements in productivity, versus letting the agent burn money and time and not produce working code.