
120 points | lsharkey602 | 1 comment
ttul No.44423514
I run a mature software company that is being driven for profit (we are out of the fantastic-future phase and solidly in the “make money” phase). Even with all the pressure to cut costs and increase automation, the most valuable use of LLMs is making our software developers more effective, producing the feature improvements customers want so that they renew and upgrade. And to the extent that we are cutting costs, we are using AI to help us write code that lets us use infrastructure more efficiently (because infrastructure is the bulk of our costs).

But this is a software company. I think out in the “real world,” there are some low-hanging-fruit wins where AI replaces extremely routine boilerplate jobs that never required much human intelligence in the first place. But even then, I’d say the general drift is that the humans who were doing those low-level jobs have a chance to step up into work requiring higher-level intelligence, where humans can really shine. And companies are competing not just by getting rid of salaries, but by providing much better service, because they can afford to have more higher-tier people on the payroll. And by higher-tier, I don’t necessarily mean more expensive. It can be the same people who were doing the low-level jobs; they can now spend their human-level intelligence on more interesting and challenging work.

replies(4): >>44423568 #>>44423776 #>>44423805 #>>44424131 #
gruez No.44424131
>I’d say the general drift is that the humans who were doing those low-level jobs have a chance to step up into work requiring higher-level intelligence, where humans can really shine. And companies are competing not just by getting rid of salaries, but by providing much better service, because they can afford to have more higher-tier people on the payroll. And by higher-tier, I don’t necessarily mean more expensive. It can be the same people who were doing the low-level jobs; they can now spend their human-level intelligence on more interesting and challenging work.

That was the narrative last year (i.e., that low performers have the most to gain from AI, and therefore AI would reduce inequality), but new evidence seems to be pointing in the opposite direction: https://archive.is/tBcXE

>More recent findings have cast doubt on this vision, however. They instead suggest a future in which high-flyers fly still higher—and the rest are left behind. In complex tasks such as research and management, new evidence indicates that high performers are best positioned to work with AI (see table). Evaluating the output of models requires expertise and good judgment. Rather than narrowing disparities, AI is likely to widen workforce divides, much like past technological revolutions.

replies(1): >>44424791 #
1. benreesman No.44424791
I think my personal anecdote supports this observation, with the treatment group being "me in the zone" and the control group "me not in the zone".

When I'm pulling out all the stops, leaving nothing for the swim back, the really powerful (and expensive!) agents are like any of the other all-out measures: cut all distractions, work 7 days a week, medicate the ADHD, manage the environment ruthlessly, attempt something slightly past my abilities every day. In that zone, the truly massive frontier behemoths are that last 5-20% that makes things at the margin possible.

But in any other zone it's way too easy to get into "hi agent plz do my job today I'm not up for it" mode, which is just asking to have some papier-mâché, plausible-if-you-squint, net-liability thing pop out and kind of slide above the "no fucking way" bar with a half-life until collapse of a week or maybe a month.

These are power-user tools for monomaniacal overachievers and Graeberism detectors for everyone else (in the "who am I today" sense, not the bucketing-people-forever sense).