
511 points meetpateltech | 4 comments
1. asdev No.44008592
Is the point of this to actually assign tasks to an AI to complete end to end? Every task I do with AI requires at least some hand-holding, sometimes reprompting, etc., so I don't see why I would want to run tasks in parallel; I don't think it would increase throughput. Curious if others have better experiences with this.
replies(3): >>44010580 #>>44011402 #>>44012173 #
2. nmca No.44010580
With a bad AI it is pointless; with a good AI it is powerful.

codex-1 has been quite good in my experience

3. masterj No.44011402
The example use cases in the videos are pretty compelling and much smaller in scope.

“Here’s an error reported to the oncall. Give fixing it a try.” (Could be useful even if it fails.)

“Refactor this small piece I noticed while doing something else.” Small-scoped stuff that likely wouldn’t get done otherwise.

I wouldn’t ask LLMs for full features in a real codebase, but these examples seem within the scope of what they might be able to accomplish end-to-end.

4. sagarpatil No.44012173
I am working with a third-party API (Exa.ai) and hacked together a Python script. I ran remote agents to do these tasks simultaneously (augment.new; I’m not affiliated, I have early access):

Agent 1: write tests, make sure all the tests pass.

Agent 2: convert the Python script to FastAPI (rough sketch below).

Agent 3: create a frontend based on the FastAPI endpoints.

I get a PR, check the code, see if it works, and then merge to main. All three PRs worked flawlessly (the frontend wasn’t pretty).
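
To give a sense of what the Agent 2 step amounts to, here is a minimal sketch of wrapping an existing script function in a FastAPI endpoint. The run_search helper and the request fields are hypothetical stand-ins for whatever the original script and its Exa.ai call actually do, not the real API.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()

    class SearchRequest(BaseModel):
        query: str
        num_results: int = 5

    def run_search(query: str, num_results: int) -> list[dict]:
        # Hypothetical stand-in for the original script's logic
        # (e.g. the call out to the Exa.ai API).
        raise NotImplementedError

    @app.post("/search")
    def search(req: SearchRequest):
        try:
            return {"results": run_search(req.query, req.num_results)}
        except NotImplementedError:
            raise HTTPException(status_code=501, detail="search not wired up yet")

Run it with uvicorn and the Agent 3 frontend only has to POST JSON to /search, which is part of why this kind of conversion is a reasonable end-to-end task for an agent.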