440 points pseudolus | 2 comments
jimmont No.45059611
Organizations are choosing to eliminate workers rather than amplify them with AI because they'd rather own 100% of diminished capacity than share the proceeds of greatly increased capacity. That's the rent extraction model consuming its own productive infrastructure.

The Stanford study documents organizations systematically choosing inferior economic strategies because their rent-extraction frameworks cannot conceptualize workers as productive assets to amplify. This reveals that these organizations are economic rent-seekers that happen to have productive workers, not production companies that happen to extract rents. When forced to choose between preserving rent extraction structures and maximizing value creation, they preserve extraction even at the cost of destroying productive capacity.

So what comes next?
replies(11): >>45059745 #>>45059774 #>>45059838 #>>45059845 #>>45059868 #>>45059973 #>>45060115 #>>45060131 #>>45060415 #>>45061533 #>>45062067 #
lurk2 No.45059973
ChatGPT (might have) made a few superfluous email jobs obsolete and the people responding to this comment are acting like we’re standing on the threshold of Terminator 3.
replies(2): >>45060114 #>>45061718 #
1. sjw987 No.45061718
Implying "superfluous email jobs" aren't a significant portion of the international job market. Most people who work in offices fit under this definition.
replies(1): >>45063393 #
2. lurk2 No.45063393
> Most people that work in offices fit under this definition.

Not at all. The majority of office jobs can't be automated by current-generation LLMs, because the jobs themselves serve either creative or supervisory functions. Generative AI might be able to fill creative functions one day, but the whole point of a supervisory role is to verify the status of inputs and outputs. Many of these roles already have legal moats around them (e.g. you can't have an LLM sign financial statements), but even if we assume that regulations would change, the technical problem of creating supervisory "AI" hasn't been solved; even if it were, implementation wouldn't be trivial.