
440 points by pseudolus | 1 comment | source
muldvarp ◴[] No.45052736[source]
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".

Was good while it lasted though.

replies(15): >>45052803 #>>45052830 #>>45052911 #>>45052938 #>>45053022 #>>45053037 #>>45056787 #>>45056886 #>>45057129 #>>45057182 #>>45057448 #>>45057657 #>>45057837 #>>45058585 #>>45063626 #
grim_io ◴[] No.45052911[source]
Maybe it's just the nature of being early adopters.

Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.

In the meantime, memes aside, I'm not too worried about being completely automated away.

These models are extremely unreliable when unsupervised.

It doesn't feel like that will change fundamentally with just incrementally better training.

replies(2): >>45053115 #>>45053192 #
ACCount37 ◴[] No.45053115[source]
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
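To make the compounding concrete, here's a rough sketch in Python (the 5% figure and the step counts are made up, purely to illustrate the arithmetic):

    # Stacking many small "5% better" steps compounds exponentially.
    rate = 1.05
    for steps in (10, 20, 50, 100):
        print(steps, round(rate ** steps, 1))
    # -> 10: 1.6x, 20: 2.7x, 50: 11.5x, 100: 131.5x
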
replies(3): >>45053218 #>>45056876 #>>45057099 #
cjs_ac ◴[] No.45053218[source]
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
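For what it's worth, that generational-loss effect is easy to reproduce. A minimal sketch with Pillow (the filenames and quality setting are arbitrary):

    # Re-encode the same JPEG over and over; every save is lossy,
    # so compression artifacts accumulate across generations.
    from PIL import Image

    img = Image.open("original.jpg")
    for _ in range(50):
        img.save("recompressed.jpg", quality=70)  # lossy re-encode
        img = Image.open("recompressed.jpg")
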
replies(1): >>45053339 #
ACCount37 ◴[] No.45053339[source]
By all accounts, yes.

"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.

The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.

replies(2): >>45053405 #>>45056905 #
grim_io ◴[] No.45053405[source]
The JPEG compression argument is still valid.

It's lossy compression at the core.

replies(2): >>45054053 #>>45056817 #
ACCount37 ◴[] No.45054053[source]
I don't think it is.

Sure, you can view an LLM as a lossy compression of its dataset. But people who make the comparison are usually trying to imply a fundamental deficiency or a performance ceiling, or to link it to information theory. And frankly, I don't see a lot of those "hardcore information theory applied to modern ML" discussions around.

The "fundamental deficiency/performance ceiling" argument I don't buy at all.

We already know that LLMs use high-level abstractions to process data - very much unlike traditional compression algorithms. And we already know how to use techniques like RL to teach a model tricks its dataset doesn't contain - which is where an awful lot of the recent performance improvement is coming from.

replies(1): >>45054194 #
grim_io ◴[] No.45054194[source]
Sure, you can upscale a badly compressed JPEG using AI into something better looking.

Often the results will be great.

Sometimes the hallucinated details won't match expectations.

I think this applies fundamentally to all LLM applications.

replies(1): >>45054739 #
muldvarp ◴[] No.45054739[source]
And if you get that "sometimes" down to "rarely" and then "very rarely", you can replace a lot of expensive and inflexible humans with cheap and infinitely flexible computers.

That's pretty much what we're experiencing currently. Two years ago code generation by LLMs was usually horrible. Now it's generally pretty good.

replies(2): >>45055160 #>>45056908 #
grim_io ◴[] No.45055160[source]
I think you are selling yourself short if you believe you can be replaced by a next token predictor :)
replies(3): >>45055343 #>>45056920 #>>45061169 #
ACCount37 ◴[] No.45055343[source]
I think humans who think they can't be replaced by a next token predictor think too highly of themselves.

LLMs show it plain and clear: there's no magic in human intelligence. Abstract thinking is nothing but fancy computation. It can be implemented in math and executed on a GPU.

replies(3): >>45056912 #>>45057287 #>>45058681 #
anthem2025 ◴[] No.45056912[source]
LLMs have no ability to reason whatsoever.

They do have the ability to fool people and exacerbate or cause mental problems.

replies(1): >>45061156 #
muldvarp ◴[] No.45061156{3}[source]
LLMs are actually pretty good at reasoning. They don't need to be perfect; humans aren't either.