625 points | lukebennett | 1 comment
1. Bjorkbat ◴[] No.42142943[source]
It's kind of, I don't know, "weird" observing all these news outlets reporting that essentially every up-and-coming model has underperformed expectations, while the employees at these labs haven't changed their tune in the slightest.

There are a number of possible reasons why. The most likely is that they've found other ways to get improvements out of AI models, so diminishing returns on training aren't that much of a problem. Or maybe the leakers are lying, but I highly doubt that, considering the track record of news outlets reporting accurate leaked information.

Still, it's interesting how basically every frontier lab created a model that didn't live up to expectations, and every employee at these labs on Twitter has continued to vague-post and hype as if nothing ever happened.

It's honestly hard to tell whether they really know something we don't, or whether they have an irrational exuberance for AGI bordering on the cult-like, such that they will never be able to mentally process, let alone admit, that something might be wrong.