333 points mooreds | 1 comments | | HN request time: 0.199s | source
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
andyfilms1 ◴[] No.44486182[source]
Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too--so if the AI really is so good, surely they should step aside and go live a peaceful life with the wealth they've accrued.

I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.

replies(6): >>44486258 #>>44486478 #>>44486521 #>>44486523 #>>44486564 #>>44486743 #
1. deepsun ◴[] No.44486564[source]
The wave of layoffs started a couple of years before the AI craze (ChatGPT).