390 points meetpateltech | 3 comments
1. asdev No.44008592
Is the point of this to actually assign tasks to an AI to complete end to end? Every task I do with AI requires at least some hand-holding, sometimes reprompting, etc. So I don't see why I would want to run tasks in parallel; I don't think it would increase throughput. Curious if others have better experiences with this.
replies(2): >>44010580, >>44011402
2. nmca No.44010580
With a bad AI it is pointless; with a good AI it is powerful.

codex-1 has been quite good in my experience.

3. masterj No.44011402
The example use cases in the videos are pretty compelling and much smaller in scope.

“Here’s an error reported to the oncall. Give fixing it a try.” (Could be useful even if it fails.)

“Refactor this small piece I noticed while doing something else.” Small-scoped stuff that likely wouldn’t get done otherwise.

I wouldn’t ask LLMs for full features in a real codebase, but these examples seem within the scope of what they might be able to accomplish end-to-end.