440 points pseudolus | 2 comments
muldvarp ◴[] No.45052736[source]
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".

Was good while it lasted though.

replies(15): >>45052803 #>>45052830 #>>45052911 #>>45052938 #>>45053022 #>>45053037 #>>45056787 #>>45056886 #>>45057129 #>>45057182 #>>45057448 #>>45057657 #>>45057837 #>>45058585 #>>45063626 #
grim_io ◴[] No.45052911[source]
Maybe it's just the nature of being early adopters.

Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.

In the meantime, memes aside, I'm not too worried about being completely automated away.

These models are extremely unreliable when unsupervised.

It doesn't feel like that will change fundamentally with just incrementally better training.

replies(2): >>45053115 #>>45053192 #
ACCount37 ◴[] No.45053115[source]
Does it have to? Stack enough "it's 5% better" on top of each other and the exponent will crush you.
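
(For a rough sense of scale, and not a claim about any particular model: 1.05^15 ≈ 2.1 and 1.05^30 ≈ 4.3, so fifteen compounding "5% better" steps roughly double capability and thirty roughly quadruple it.)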
replies(3): >>45053218 #>>45056876 #>>45057099 #
cjs_ac ◴[] No.45053218[source]
Are LLMs stackable? If they keep misunderstanding each other, it'll look more like successive applications of JPEG compression.
replies(1): >>45053339 #
ACCount37 ◴[] No.45053339[source]
By all accounts, yes.

"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.

The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.

replies(2): >>45053405 #>>45056905 #
1. Aloisius ◴[] No.45056905[source]
Given the number of academic papers about it, model collapse is a popular idea among the people who know a lot about AI as well.

Model collapse has been demonstrated when models are trained recursively, largely or entirely on their own output. Given that most training data is still generated or edited by humans, or is deliberately curated synthetic data, I'm not entirely certain why one would expect to see evidence of model collapse happening right now. But dismissing it as something that can't happen in the real world seems a bit premature.
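
As a toy illustration of that recursive setup (a minimal sketch, not the procedure from any particular paper): fit a simple distribution to some data, sample the next generation's "training data" from the fit, refit, and repeat with no fresh data. The fitted spread decays toward zero and the mean wanders off.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                        # samples per generation
    mu, sigma = 0.0, 1.0           # the "real" data distribution

    data = rng.normal(mu, sigma, n)
    for gen in range(1, 1001):
        # "train" generation g: fit a Gaussian to the available data
        mu_hat, sigma_hat = data.mean(), data.std()
        # generation g+1 sees only generation g's output, no fresh data
        data = rng.normal(mu_hat, sigma_hat, n)
        if gen % 200 == 0:
            print(f"gen {gen:4d}: mean={mu_hat:+.3f}  std={sigma_hat:.3f}")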

replies(1): >>45057343 #
2. ACCount37 ◴[] No.45057343[source]
We've found the conditions under which model collapse happens more slowly or fails to happen altogether. Basically all of them are met in real-world datasets. I do not expect that to change.
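
To illustrate one such condition with the same kind of toy sketch (the 50% fresh-data share is an arbitrary choice for the example, not a measured figure): if every generation's training set also includes a fixed fraction of samples drawn from the real distribution, the fitted parameters stop collapsing and hover near the true values.

    import numpy as np

    rng = np.random.default_rng(0)
    n, real_frac = 100, 0.5        # half of each generation's data stays "human"
    mu, sigma = 0.0, 1.0           # the "real" data distribution

    data = rng.normal(mu, sigma, n)
    for gen in range(1, 1001):
        mu_hat, sigma_hat = data.mean(), data.std()
        n_real = int(n * real_frac)
        data = np.concatenate([
            rng.normal(mu, sigma, n_real),              # fresh real data
            rng.normal(mu_hat, sigma_hat, n - n_real),  # model-generated data
        ])
        if gen % 200 == 0:
            print(f"gen {gen:4d}: mean={mu_hat:+.3f}  std={sigma_hat:.3f}")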