
579 points paulpauper | 1 comment | HN request time: 0.224s | source
dimal ◴[] No.43604252[source]
It seems like the models are getting more reliable at the things they could always do, but they’re not showing any ability to move past that goalpost. Whereas in the past they could occasionally write very solid code but often returned nonsense, the nonsense is now getting adequately filtered by so-called “reasoning” — yet I see no indication that they can do software design.

> how the hell is it going to develop metrics for assessing the impact of AIs when they're doing things like managing companies or developing public policy?

Why on earth do people want AI to do either of these things? As if our society isn’t fucked enough with an untouchable oligarchy already managing companies and developing public policy, we want the oligarchy’s AI to do this, so policy can get even more out of touch with the needs of common people? This should never come to pass. It’s like people read a pile of 90s cyberpunk dystopian novels and decided, “Yeah, let’s do that.” I think it’ll fail, but I don’t understand why anyone with less than 10 billion in assets would want this.

replies(1): >>43607399 #
1. voidhorse ◴[] No.43607399[source]
> Why on earth do people want AI to do either of these things?

This is the really important question, and the only answer I can come up with is that people have been fed a consistent diet of propaganda for decades, centered on a message that ultimately boils down to a justification of oligarchy and the concentration of wealth. That, plus the consumer-focused facade, makes people think LLMs are technology for them — they aren’t. As soon as these things get good enough, business owners aren’t going to expect workers to use them to be more productive; they’re just going to fire workers and/or use the tooling as another mechanism to let wages stagnate.