
152 points GavinAnderegg | 1 comment
nico ◴[] No.44460596[source]
The article reads almost like an ad for o3 and spending a lot of money on LLM APIs

In my experience, o4-mini-high is good enough, even just through the chat interface

Cursor et al. can be comfier because they have direct access to the files. But when working on a sufficiently large/old/complex code base, the main limitation is the human in the loop and managing context, so things end up evening out. Not only that, but a lot of the time it’s just easier/better to manually feed things to ChatGPT/Claude - that way you get to more carefully curate and understand the tasks and the changes
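To make that concrete, here’s a rough sketch of the kind of manual curation I mean - plain Python, with the file paths and task made up for illustration:

    # Pick the handful of files that actually matter, concatenate them with
    # headers, and paste the result into the chat interface by hand.
    from pathlib import Path

    FILES = [
        "services/billing/invoice.py",    # hypothetical paths
        "services/billing/tax_rules.py",
    ]
    TASK = "Explain where the per-line-item tax is actually computed."

    chunks = [TASK, ""]
    for name in FILES:
        chunks.append(f"--- {name} ---")
        chunks.append(Path(name).read_text())

    print("\n".join(chunks))  # pipe to the clipboard, then paste into ChatGPT/Claude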

I still haven’t seen a convincing real-life scenario, on a larger in-production code base, in which agents autonomously write most of the code

If anyone has a video/demo, would love to see it

replies(1): >>44460664 #
cma ◴[] No.44460664[source]
It's faster than me at drilling through all the layers of abstraction in a large codebase to answer questions about how something is implemented and where the actual calculation or functionality happens, so for that alone it's much more useful than a web chat interface.
replies(2): >>44460761 #>>44462617 #
1. jpc0 ◴[] No.44462617[source]
I know it can be difficult to run software locally in some cases, but at this point I feel it is probably quicker to implement tracing or improve the local development flow.
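For example (a rough sketch, Python assumed, nothing from any particular codebase): logging every call while exercising the feature makes the chain through the abstraction layers visible without asking an LLM at all:

    import sys

    def tracer(frame, event, arg):
        # Print one line per function call: file, line number, function name.
        if event == "call":
            code = frame.f_code
            print(f"{code.co_filename}:{frame.f_lineno} {code.co_name}()")
        return tracer

    def run_traced(fn, *args, **kwargs):
        # Install the tracer only for the call under investigation.
        sys.settrace(tracer)
        try:
            return fn(*args, **kwargs)
        finally:
            sys.settrace(None)

    # run_traced(checkout, cart)  # hypothetical entry point for the feature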

A breakpoint in a debugger is much, much quicker than feeding the AI all the context it needs and then confirming it didn’t miss some flow in some highly abstract code.
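As a concrete (made-up) Python example, the whole investigation is one breakpoint() away:

    def compute_total(items):     # hypothetical function under suspicion
        breakpoint()              # drops into pdb when this code path is hit
        return sum(i.price * i.qty for i in items)

    # At the pdb prompt: `w` prints the full call stack through the layers,
    # `u`/`d` move between frames, `s` steps into the next call.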