    334 points mooreds | 12 comments
    raspasov ◴[] No.44485275[source]
    Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

    - trying to sell something

    - high on their own stories

    - high on exogenous compounds

    - all of the above

    LLMs are good at language. They are OK summarizers of text by design but not good at logic. Very poor at spatial reasoning and as a result poor at connecting concepts together.

    Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

    The usual result is a pop-science-level article but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

    replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
    1. andyfilms1 ◴[] No.44486182[source]
    Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life with the wealth they've accrued.

    I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.

    replies(6): >>44486258 #>>44486478 #>>44486521 #>>44486523 #>>44486564 #>>44486743 #
    2. sleepybrett ◴[] No.44486258[source]
    Every few weeks I give LLMs a chance to code something for me.

    Friday I laid out a problem very cleanly: take this datastructure and transform it into this other datastructure in Terraform, with examples of the data in both formats.

    After the seventh round of back and forth, where it kept giving me code that would not compile or code that produced a totally different datastructure no matter how many extra examples and clarifications I gave it, I gave up. I gave the problem to a junior and they came back with the answer in about an hour.
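
    For context, the kind of reshaping I mean looks roughly like this (made-up names and data, not the real structures, which I can't share):

        # Hypothetical example only: turn a list of objects into a map keyed by name,
        # the sort of list-to-map reshaping Terraform for-expressions are meant for.
        locals {
          instances = [
            { name = "web", size = "small" },
            { name = "db",  size = "large" },
          ]

          # list of objects -> map of name => size
          instance_sizes = { for i in local.instances : i.name => i.size }
        }

        output "instance_sizes" {
          value = local.instance_sizes
        }

    The real transform was a lot messier, but it's the same class of problem.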

    Next time an AI bro tells you that AI can 'replace your juniors' tell him to go to hell.

    3. theossuary ◴[] No.44486478[source]
    I don't think anyone is being laid off because of AI. People are being laid off because the market is bad for a myriad of reasons, and companies are blaming AI because it helps them deflect worry that might lower their stock price.

    Companies say "we've laid people off because we're using AI," but what they mean is "we had to lay people off, and we're hoping we can make up for them with AI."

    replies(1): >>44486538 #
    4. hn_throwaway_99 ◴[] No.44486521[source]
    > Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good or better as humans at these jobs.

    I don't think the "implying the AI is as good or better as humans" part is correct. While they may not be saying it loudly, I think most folks making these decisions around AI and staffing are quite clear that AI is not as good as human workers.

    They do, however, think that in many cases it is "good enough". Just look at like 90%+ of the physical goods we buy these days. Most of them are almost designed to fall apart after a few years. I think it's almost exactly analogous to the situation with the Luddites (which is often falsely remembered as the Luddites being "anti-technology", when in reality they were just "pro-not-starving-to-death"). In that case, new mechanized looms greatly threatened the livelihood of skilled weavers. The quality of the fabric from these looms tended to be much worse than that of the skilled weavers. But it was still "good enough" for most people such that most consumers preferred the worse but much cheaper cloth.

    It's the same thing with AI. It's not that execs think it's "as good as humans", it's that if AI costs X to do something, and the human costs 50X (which is a fair differential I think), execs think people will be willing to put up with a lot shittier quality if they can be delivered something much more cheaply.

    One final note - in some cases people clearly do prefer the quality of AI. There was an article on HN recently discussing that folks preferred Waymo taxis, even though they're more expensive.

    replies(1): >>44488063 #
    5. unscaled ◴[] No.44486523[source]
    Some employees can be replaced by AI. That part is true. It's not revolutionary (at least not yet) — it's pretty much the same as other post-industrial technologies that have automated some types of work in the past. It also takes time for industries to adapt to these changes. Replacing workers couldn't possibly happen in one year, even if our AI models were far more capable than they are in practice.

    I'm afraid that what we're seeing instead are layoffs that are purely oriented at the stock market. As long as layoffs and talk about AI are seen as a positive signal for investors and as long as corporate leadership is judged by the direction the stock price goes, we will see layoffs (as well as separate hiring sprees for "AI Engineers").

    It's a telltale sign that we're seeing a large number of layoffs in the tech sector. It is true that tech companies are poised to adopt AI more quickly than others, but that doesn't seem to be what's happening. What seems to be happening is that tech companies had been overhiring throughout the decade leading up to the end of COVID-19. At that time hiring was a positive signal — now firing is.

    I don't think these massive layoffs are good for tech companies in the long term, but since they mostly affect things that don't touch direct revenue-generating operations, they won't hurt in the near term, and by the time a company starts feeling the pain, the cause will be too far in the past to be remembered.

    replies(1): >>44487071 #
    6. hn_throwaway_99 ◴[] No.44486538[source]
    > I don't think anyone is being laid off because of AI.

    I think that's demonstrably false. While many business leaders may be overstating it, there are some pretty clear-cut cases of people losing their jobs to AI. Here are 2 articles from the Washington Post from 2 years ago:

    https://archive.vn/C5syl "ChatGPT took their jobs. Now they walk dogs and fix air conditioners."

    https://archive.vn/cFWmX "ChatGPT provided better customer service than his staff. He fired them."

    7. deepsun ◴[] No.44486564[source]
    The wave of layoffs started a couple of years before the AI craze (ChatGPT).
    8. visarga ◴[] No.44486743[source]
    > Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life

    One thing that doesn't get mentioned is accountability: AI fundamentally cannot be held accountable. Like the genie from the lamp, it will grant you the 3 wishes, but you bear the consequences.

    So what can we do when the tasks are critically important, like deciding on an investment or spending much time and resources on a pursuit? We still need the managers. We need humans for all tasks of consequence where risks are taken. Not because humans are smarter, but because we have skin.

    Even on the other side, that of goals, desires, choosing problems to be solved - AI has nothing to say. It has no desires of its own. It needs humans to expose the problem space inside which AI could generate value. It generates no value of its own.

    This second observation means AI value will not concentrate in the hands of a few, but instead will be widespread. It's no different from Linux: yes, it has a high initial development cost, but then it generates value in the application layer, which is as distributed as it gets. Each human using Linux exposes their own problems to the software to get help, and value is distributed across all problem contexts.

    I have come to think that generating the opportunity for AI to provide value, and then incurring the outcomes, good or bad, of that work, are fundamentally human and distributed across society.

    9. aydyn ◴[] No.44487071[source]
    > Some employees can be replaced by AI.

    Yes, but let's not pretend there aren't also a lot of middle and even upper managers who could be replaced by AI.

    Of course they won't be because they are the ones making the decisions.

    replies(1): >>44489562 #
    10. raspasov ◴[] No.44488063[source]
    It's not surprising that people like Waymos even though they are a bit more expensive. For a few more dollars you get:

    - arguably a very nice, clean car

    - same, ahem, Driver and driving style

    With the basic UberX it’s a crapshoot. Good drivers, wild drivers, open windows, no air-con. UberX Comfort is better but there’s still a range.

    11. weatherlite ◴[] No.44489562{3}[source]
    > Of course they won't be because they are the ones making the decisions.

    That's not accurate at all

    https://www.businessinsider.com/microsoft-amazon-google-embr...

    replies(1): >>44493663 #
    12. aydyn ◴[] No.44493663{4}[source]
    I stand corrected.