
120 points lsharkey602 | 17 comments
1. ttul ◴[] No.44423514[source]
I run a mature software company that is being driven for profit (we are out of the fantastic future phase and solidly in the “make money” phase). Even with all the pressure to cut costs and increase automation, the most valuable use of LLMs is to make the software developers work more effectively, producing the feature improvements that customers want so that we can ensure customers will renew and upgrade. And to the extent that we are cutting costs, we are using AI to help us write code that lets us use infrastructure more efficiently (because infrastructure is the bulk of our costs).

But this is a software company. I think out in the “real world,” there are some low-hanging-fruit wins where AI replaces extremely routine boilerplate jobs that never required a lot of human intelligence in the first place. But even then, I’d say that the general drift is that the humans who were doing those low-level jobs have a chance to step up into jobs requiring higher-level intelligence where humans have a chance to really shine. And companies are competing not by just getting rid of salaries, but by providing much better service by being able to afford to have more higher-tier people on the payroll. And by higher-tier, I don’t necessarily mean more expensive. It can be the same people that were doing the low-level jobs; they just now can spend their human-level intelligence doing more interesting and challenging work.

replies(4): >>44423568 #>>44423776 #>>44423805 #>>44424131 #
2. 486sx33 ◴[] No.44423568[source]
So basically compressing the pay scale even further …
replies(1): >>44423597 #
3. eru ◴[] No.44423597[source]
Well, many people complain about pay inequality. Compressing scales is the opposite of that, so should be welcomed?
replies(2): >>44423683 #>>44423727 #
4. landl0rd ◴[] No.44423683{3}[source]
Most of those people aren’t working in highly-paid disciplines like high tech. Generally those disciplines necessarily have wider spreads. I am perfectly fine with this.

If I suddenly have to think really hard at my job all day, and do terribly when I’m off my game, and still get paid the same or less, I will be left pretty bitter.

replies(1): >>44423826 #
5. spookie ◴[] No.44423727{3}[source]
the compression is happening only to those still hired, though
replies(1): >>44424099 #
6. knowitnone ◴[] No.44423776[source]
There is plenty of automation still to be done. The last company I was with claimed to be a "tech company", which they kind of are, but their internal tech stack was junk and automation was just as bad (at least in the unit I was with). AI certainly won't do anything about that unless a person tells it exactly what and how to automate.
7. throwawaysleep ◴[] No.44423805[source]
> the most valuable use of LLMs is to make the software developers work more effectively

Which means you should need fewer of them, no?

> It can be the same people that were doing the low-level jobs; they just now can spend their human-level intelligence doing more interesting and challenging work.

Why were you using capable humans on lower level work in the first place? Wouldn't you use cheaper and less skilled workers (entry level) for that work?

replies(2): >>44423870 #>>44424460 #
8. eru ◴[] No.44423826{4}[source]
Wouldn't AI mean you have to think less hard than before?
replies(1): >>44424232 #
9. brigandish ◴[] No.44423870[source]
Has the improved effectiveness of computers or software led you to need fewer of them?
10. gruez ◴[] No.44424131[source]
>I’d say that the general drift is that the humans who were doing those low-level jobs have a chance to step up into jobs requiring higher-level intelligence where humans have a chance to really shine. And companies are competing not by just getting rid of salaries, but by providing much better service by being able to afford to have more higher-tier people on the payroll. And by higher-tier, I don’t necessarily mean more expensive. It can be the same people that were doing the low-level jobs; they just now can spend their human-level intelligence doing more interesting and challenging work.

That was the narrative last year (ie. that low performers have the most to gain from AI, and therefore AI would reduce inequality), but new evidence seems to be pointing in the opposite direction: https://archive.is/tBcXE

>More recent findings have cast doubt on this vision, however. They instead suggest a future in which high-flyers fly still higher—and the rest are left behind. In complex tasks such as research and management, new evidence indicates that high performers are best positioned to work with AI (see table). Evaluating the output of models requires expertise and good judgment. Rather than narrowing disparities, AI is likely to widen workforce divides, much like past technological revolutions.

replies(1): >>44424791 #
11. bluefirebrand ◴[] No.44424232{5}[source]
No, solving problems yourself is easier than understanding solutions that AI serves to you
replies(2): >>44424699 #>>44424803 #
12. ativzzz ◴[] No.44424460[source]
> Which means you should need fewer of them, no?

I've never worked at a company that didn't have an endless backlog of work that needs to be done. In theory, AI should enable devs to churn through that work slightly faster, but at the same time, AI will also allow PMs/work creators to create even more work to do.

I don't think AI fundamentally changes companies' hiring strategies for knowledge workers. If a company wants to cheap out and do the same amount of work with fewer workers, then it's leaving space for its competitors to come in and edge it out.

13. malfist ◴[] No.44424699{6}[source]
Yeah, but I could just not understand the AI solution and run with whatever it gives me. No effort there. If something doesn't work, I can just tell the AI to fix it.

(Not a serious suggestion, but I do see this in the wild a lot)

replies(1): >>44425825 #
14. benreesman ◴[] No.44424791[source]
I think my personal anecdote supports this observation with the treatment group being "me in the zone" and control group "me not in the zone".

When I'm pulling out all the stops, leaving nothing for the swim back, the really powerful (and expensive!) agents are like any of the other all-out measures: cut all distractions, work seven days a week, medicate the ADHD, manage the environment ruthlessly, attempt something slightly past my abilities every day. In that zone the truly massive frontier behemoths are that last 5-20% that makes things at the margin possible.

But in any other zone it's way too easy to slip into "hi agent plz do my job today I'm not up for it" mode, which is just asking to have some papier-mâché, plausible-if-you-squint, net-liability thing pop out and slide just above the "no fucking way" bar, with a half-life until collapse of a week or maybe a month.

These are power user tools for monomaniacal overachievers and Graeberism detectors for everyone else (in the "who am I today" sense, not bucketing people forever sense).

15. eru ◴[] No.44424803{6}[source]
I'm not so sure.

An analogy: any idiot can take a calculus class today, but it took Leibniz and Newton to come up with it in the first place. (And even those geniuses didn't do it properly: it took until the likes of Karl Weierstrass and friends to put analysis on a firm footing.)

replies(1): >>44433951 #
16. bluefirebrand ◴[] No.44425825{7}[source]
Yes, I do unfortunately see this in the wild a lot as well

Maybe I'm the one who is ultimately a sucker, because I take too much pride in my work to do this

I always thought that the quality of my work and my effort would be tied to my reputation, but I don't think the world works that way unless you are very well known somehow

17. sublinear ◴[] No.44433951{7}[source]
AI generated code in all but the most trivial cases is never production ready.

In the real world, the resulting code that correctly and efficiently solves a problem ends up being unique enough such that an AI wouldn't learn much even if you fed this result back in. It's just going to average away all the important parts that made it correct.

If real-world code weren't so sparse and time-consuming to develop, it wouldn't be so valuable in the first place. Not to mention that the maintenance is what you really care about and where the real costs are. Application code doesn't stand still.