Was good while it lasted though.
Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.
In the meantime, memes aside, I'm not too worried about being completely automated away.
These models are extremely unreliable when unsupervised.
It doesn't feel like that will change fundamentally with just incrementally better training.
"Model collapse" is a popular idea among the people who know nothing about AI, but it doesn't seem to be happening in real world. Dataset quality estimation shows no data quality drop over time, despite the estimates of "AI contamination" trickling up over time. Some data quality estimates show weak inverse effects (dataset quality is rising over time a little?), which is a mindfuck.
The performance of frontier AI systems also keeps improving, which is entirely expected. So does price-performance. One of the most "automation-relevant" performance metrics is "ability to complete long tasks", and that shows vaguely exponential growth.
It's lossy compression at the core.
Define "quality", you can make an image subjectively more visually pleasing but you can't recover data that wasn't there in the first place
Like, the grille of a car: if we know the make and year, we can add detail with each zoom by filling it in from external sources.
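To make that concrete, here's a minimal sketch (plain numpy, names and numbers are mine, not from any real upscaler) showing why downsampling is many-to-one: two different high-res patterns collapse to the exact same low-res pixel, so an "enhance" step has to invent the missing detail from priors rather than recover it.

```python
import numpy as np

def downsample_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block into one pixel (simple lossy downsampling)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Two distinct 2x2 "grille" patterns...
patch_a = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
patch_b = np.array([[1.0, 0.0],
                    [0.0, 1.0]])

# ...map to the identical low-res pixel, so no upscaler can tell
# which original it came from; it can only guess a plausible one.
print(downsample_2x(patch_a))  # [[0.5]]
print(downsample_2x(patch_b))  # [[0.5]]
```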