
502 points by alazsengul | 2 comments
rwyinuse No.44564598
I don't see a justification for the high valuations of companies that aim to build an "AI software engineer". If something like Devin really succeeds, then anyone can use the product itself to build a competing AI engineer. There's no moat; it's just another LLM-wrapper SaaS.
replies(6): >>44564705 >>44564802 >>44564806 >>44564808 >>44564819 >>44564831
ar_lan No.44564705
This is my exact takeaway too, and I'm always surprised it isn't mentioned more often. If AI is truly groundbreaking, shouldn't it be able to re-implement itself? Which, to me, would imply that every AI company is not only full of software devs cannibalizing themselves, but is itself doing the same.
replies(1): >>44575077
1. SJC_Hacker No.44575077
This is my watershed test for true AGI: it should be able to create a smarter version of itself.

Last I checked, feeding an LLM's output back into its own training data produces a progressively worse LLM. (Note I'm not talking about distillation, which trains a smaller model at the cost of some accuracy; I mean retraining at an equal or greater parameter count.)
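
A toy sketch of that degradation, under the usual "model collapse" framing (finite-sample refitting loses tail diversity). A 1-D Gaussian stands in for the model here, so this is an illustration of the statistical effect, not an LLM experiment:

    # Toy model-collapse loop: fit a Gaussian to samples drawn from the
    # previous generation's fit. With the MLE variance estimate, each
    # generation shrinks the variance by (n-1)/n in expectation, so the
    # "model" progressively loses diversity.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                # samples per "training run"
    mu, sigma = 0.0, 1.0   # generation 0: the real data distribution

    for gen in range(1, 201):
        synthetic = rng.normal(mu, sigma, n)  # sample from current model
        mu = synthetic.mean()                 # refit on model-generated data
        sigma = synthetic.std()               # biased MLE std (ddof=0)
        if gen % 50 == 0:
            print(f"generation {gen:3d}: sigma = {sigma:.4f}")

Run long enough, sigma drifts toward zero; the analogous claim for LLMs is that recursive training washes out the tails of the data distribution.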

replies(1): >>44575278
2. fragmede No.44575278
If the LLM is given the code for its own training run and can improve that code, does that count? It seems like a safe bet that we're already there; the only bottleneck is the latency of training runs.
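
A minimal sketch of the loop being described, with everything hedged: propose_patch, train, and evaluate below are hypothetical stubs, not any real API, and the random score just stands in for a held-out benchmark:

    # Hypothetical recursive self-improvement loop (all names are
    # illustrative stubs): the model proposes a patch to its own training
    # code, a candidate is trained with it, and the patch is kept only if
    # the candidate scores better on a held-out eval.
    import random

    def propose_patch(training_code: str) -> str:
        # Stub: here the current model would rewrite its own training code.
        return training_code + "\n# model-proposed tweak"

    def train(training_code: str) -> object:
        # Stub: a full training run -- the latency bottleneck noted above.
        return object()

    def evaluate(model: object) -> float:
        # Stub: a held-out benchmark score.
        return random.random()

    training_code = "# baseline training script"
    best_score = evaluate(train(training_code))
    for step in range(10):
        candidate = propose_patch(training_code)
        score = evaluate(train(candidate))
        if score > best_score:  # keep only verified improvements
            training_code, best_score = candidate, score

The "keep only if it scores higher" gate is what would make this a hill climb rather than the degenerate feedback loop described upthread.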