    502 points by alazsengul | 14 comments
    1. rwyinuse No.44564598
    I don't see a justification for the high valuations of companies that aim to build an "AI software engineer". If something like Devin really succeeds, then anyone can use their product to simply build their own competing AI engineer. There's no moat; it's just another LLM-wrapper SaaS.
    replies(6): >>44564705 #>>44564802 #>>44564806 #>>44564808 #>>44564819 #>>44564831 #
    2. ar_lan No.44564705
    This is my exact takeaway too, and I'm always surprised it doesn't get mentioned more often. If AI is truly groundbreaking, shouldn't AI be able to re-implement itself? Which, to me, would imply that every AI company is not only full of software devs cannibalizing themselves, but is itself being cannibalized as well.
    replies(1): >>44575077 #
    3. adamoshadjivas No.44564802
    I don't see this. The AI software engineer that succeeds may do so on the strength of a very complicated architecture derived from novel research. You can't replicate that just by hiring more human engineers; it takes time, effort, and elite hiring. Plus enterprise support, etc.

    Devin etc. will give you, let's say, 10x more engineering power, but not necessarily elite engineering.

    4. UltraSane No.44564806
    This is true for LLMs themselves. If a new LLM is really better than all the others, then it can be used to help improve them.
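    The best-known version of this is distillation: use the stronger model's output distribution as the training target for another model. A minimal sketch with toy tensors (made-up shapes, not any lab's actual pipeline):

        import torch
        import torch.nn.functional as F

        # Tiny linear "LMs" over a 10-token vocab; the teacher stands in
        # for the stronger model, the student for the one being improved.
        torch.manual_seed(0)
        teacher = torch.nn.Linear(16, 10)
        student = torch.nn.Linear(16, 10)
        opt = torch.optim.SGD(student.parameters(), lr=0.5)

        x = torch.randn(256, 16)  # stand-in for input representations
        with torch.no_grad():
            target = F.log_softmax(teacher(x), dim=-1)  # teacher's distribution

        for step in range(300):
            log_p = F.log_softmax(student(x), dim=-1)
            loss = F.kl_div(log_p, target, log_target=True, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"final KL to teacher: {loss.item():.4f}")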
    replies(1): >>44575106 #
    5. alfalfasprout No.44564808
    Yep. The reality is that folks building these types of companies are trying to get acquired as quickly as possible, before the house of cards falls. This has led to a huge speculative rush of acquisitions, driven by a fear of missing out later.

    The technology is nowhere close to what they're hoping for and incremental progress isn't getting us there.

    If we get true AGI agents, anyone can also build a multi-billion-dollar tech company on the cheap.

    replies(1): >>44565007 #
    6. taejavu No.44564819
    There are any number of tools that already make that promise. Turns out it’s still hard to complete projects and bring them to market.
    7. swyx No.44564831
    I advise you not to take marketing lines too literally, and not to be so casually dismissive as a result. You will miss a lot of good investments and startups this way and, worse, be lulled into a false sense of comfort and security.
    8. 4dm1r4lg3n3r4l No.44565007
    > If we get true AGI agents, anyone can also build a multi-billion-dollar tech company on the cheap.

    That's not how the economy works...

    replies(1): >>44565377 #
    9. geor9e No.44565377
    You're right: AGI would be unfathomable. It would be more productive than a quadrillion Earths populated entirely by MIT valedictorians who have each just drunk two espressos. "Multi-billion dollar" would be a silly valuation.
    replies(1): >>44574389 #
    10. metalliqaz No.44574389
    I can't tell if you're joking or serious.
    replies(1): >>44601060 #
    11. SJC_Hacker No.44575077
    This is my watershed for true AGI. It should be able to create a smarter version of itself.

    Last I checked, feeding the output of an LLM back into its training data leads to a progressively worse LLM. (Note I'm not talking about distillation, which trains a smaller model at the cost of some accuracy; I mean a model with an equal or greater number of parameters.)
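    A toy sketch of that failure mode (my own illustration, nobody's actual training setup: the "model" is just an empirical token distribution, retrained each generation solely on samples from the previous generation):

        import numpy as np

        # Rare tokens that fail to appear in a generation's sample are
        # gone for good, so diversity only ever shrinks.
        rng = np.random.default_rng(0)
        vocab = 50
        probs = np.full(vocab, 1.0 / vocab)  # gen 0: uniform "real" data

        for gen in range(1, 101):
            sample = rng.choice(vocab, size=200, p=probs)  # model output
            counts = np.bincount(sample, minlength=vocab)
            probs = counts / counts.sum()                  # retrain on it
            if gen % 20 == 0:
                alive = np.count_nonzero(probs)
                print(f"gen {gen:3d}: {alive} of {vocab} tokens survive")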

    replies(1): >>44575278 #
    12. SJC_Hacker No.44575106
    Is it? Last I checked, when you trained an LLM on another's output, at best you got the same performance as the original, and more likely you significantly degraded its usefulness. (I'm not talking about distillation, where that tradeoff is accepted in return for a smaller, more efficient parameter set.)
    13. fragmede No.44575278
    If the LLM is given the code for its training and is able to improve that, does that count? Because it seems like a safe bet that we're already there; the only problem is the latency of training runs.
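    A minimal sketch of that loop (everything here is a hypothetical stand-in: the "training code" is reduced to a single learning rate on a toy gradient-descent task, and propose() is a random tweak where the real thing would be the model patching its own code):

        import random

        def train_and_eval(lr: float) -> float:
            """Run 20 gradient-descent steps minimizing (x - 3)^2; return final loss."""
            x = 0.0
            for _ in range(20):
                x -= lr * 2 * (x - 3)  # gradient of (x - 3)^2
            return (x - 3) ** 2

        def propose(lr: float) -> float:
            # Stand-in for "the model edits its own training code".
            return lr * random.choice([0.5, 0.9, 1.1, 2.0])

        random.seed(0)
        lr, best = 0.01, train_and_eval(0.01)
        for _ in range(50):
            cand = propose(lr)
            loss = train_and_eval(cand)
            if loss < best:  # keep only verified improvements
                lr, best = cand, loss
        print(f"lr={lr:.4f}, final loss={best:.2e}")

    The loop only counts as self-improvement if the evaluation signal is trustworthy; that, plus the latency of each run, seems like the real bottleneck.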
    14. geor9e No.44601060
    AGI is a distant dream in sci-fi novels. Don't confuse it with today's AI technology.