
440 points pseudolus | 2 comments
muldvarp ◴[] No.45052736[source]
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".

Was good while it lasted though.

grim_io ◴[] No.45052911[source]
Maybe it's just the nature of being early adopters.

Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.

In the meantime, memes aside, I'm not too worried about being completely automated away.

These models are extremely unreliable when unsupervised.

It doesn't feel like that will change fundamentally with just incrementally better training.

ACCount37 ◴[] No.45053115[source]
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
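
A rough back-of-the-envelope sketch of that compounding argument, with made-up numbers (the 5% figure is just the one from the comment above, not a measured rate):

    # Illustrative only: repeated small multiplicative gains compound.
    baseline = 1.0
    step = 1.05  # each generation is "5% better" on some capability metric

    for generation in range(1, 21):
        baseline *= step
        if generation % 5 == 0:
            print(f"after {generation:2d} generations: {baseline:.2f}x the original")

    # Prints roughly 1.28x, 1.63x, 2.08x, 2.65x -- small steps, but the
    # growth is exponential in the number of steps.
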
cjs_ac ◴[] No.45053218[source]
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
ACCount37 ◴[] No.45053339[source]
By all accounts, yes.

"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.

The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.

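To make "vaguely exponential growth" in task length concrete, here is a purely illustrative sketch; the starting horizon and doubling time below are hypothetical parameters, not figures from any benchmark:

    # Hypothetical numbers chosen only to illustrate what exponential growth
    # in "ability to complete long tasks" would look like.
    horizon_minutes = 30.0   # assumed current task length an agent can finish
    doubling_months = 7      # assumed doubling time

    for year in range(1, 6):
        months = 12 * year
        projected = horizon_minutes * 2 ** (months / doubling_months)
        print(f"year {year}: ~{projected / 60:.1f} hour tasks")
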
grim_io ◴[] No.45053405[source]
The jpeg compression argument is still valid.

It's lossy compression at the core.

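A quick sketch of that generation-loss intuition, using Pillow to re-encode an image over and over ("photo.jpg" is a placeholder path):

    # Repeatedly re-encode an image as JPEG and watch detail degrade.
    # Note: with identical settings the loss largely converges after a few
    # passes; varying the quality between passes (as below) makes it compound.
    import io
    import random
    from PIL import Image  # pip install Pillow

    img = Image.open("photo.jpg").convert("RGB")
    for generation in range(25):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=random.randint(60, 90))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")

    img.save("generation_25.jpg", quality=95)
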
elif ◴[] No.45056817{4}[source]
In 2025 you can add quality to JPEGs. Your phone does it and you don't even notice. So the metaphor holds up, in that AI is rapidly changing the fundamentals of how technology functions, beyond our capacity to anticipate or keep up.
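The kind of learned upscaling being described is single-image super-resolution. A minimal sketch using OpenCV's contrib dnn_superres module, assuming you have downloaded a pretrained EDSR model file separately (the file and image paths are placeholders):

    # Learned super-resolution "adds" plausible detail that plain
    # interpolation can't. Requires opencv-contrib-python and a pretrained
    # EDSR_x4.pb model obtained separately.
    import cv2

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x4.pb")   # placeholder path to the model file
    sr.setModel("edsr", 4)       # 4x upscaling

    low_res = cv2.imread("small_photo.jpg")
    upscaled = sr.upsample(low_res)
    cv2.imwrite("upscaled_4x.jpg", upscaled)
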
lm28469 ◴[] No.45057345{5}[source]
> add quality to jpegs

Define "quality", you can make an image subjectively more visually pleasing but you can't recover data that wasn't there in the first place

oldpersonintx2 ◴[] No.45058287{6}[source]
You can if you know what to fill in from other sources.

Like the grille of a car: if we know the make and year, we can add detail at each zoom level by filling in from external sources.

hcs ◴[] No.45059162[source]
This is an especially bad example: a nice shiny grille is going to be strongly reflecting things that aren't already part of the image (and that likely aren't covered well by adjacent pixels, due to the angle doubling of reflection).