I don't know; AI is still just a speck outside of the HN and SV info-bubbles.
Still early on the road to mass adoption, like the smartphone or the internet once were; mostly nerds playing with it.
> Still early on the road to mass adoption, like the smartphone or the internet once were; mostly nerds playing with it.
Rather: outside of the HN and SV bubbles, the A"I"s, and the fact that one can fall for this kind of hype and dupery, are commonly ridiculed.
Then again, I remember when people here were convinced that crypto was going to change the world, democratize money, end fiat currency, and that was just the start! Programs of enormous complexity and freedom would run on the blockchain; games and, hell, even societies would be built on the chain.
A lot of people here are easily blinded by promises of big money coming their way, and there's money in loudly falling for successive hype storms.
I'm a paying ChatGPT user, but in my personal circles I don't know anyone who isn't a developer who is also one.
Maybe I'm an exception.
edit: I guess 400M global users, more than the entire US population of ~330M, isn't implausible for such a widely used product; out of a global population of about 8 billion, that's still only around 5%.
But social media like Instagram or FB had network effects going for them that made their growth faster,
which is maybe why OpenAI is exploring that idea, I don't know.
I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.
As for spammy applications, hasn't this always been the case, now made worse by the cheapness of generating plausible data?
I think ghost applicants existed before AI: consulting companies would pool people to try to get someone placed in a high-paying job and then do consultancy/outsourcing work underneath. Many such cases before the advent of AI.
AI just accelerates this, no?
What makes AI fundamentally different from smartphones or the internet? Will it change the world? Probably; it already has.
Will it end the world as we know it? Probably not.
Really it does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain.
To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution and then phrasing the solution correctly while explaining the steps it took.
And that's not even something that huge 62B LLMs with a notepad chain of thought (like o3, GPT-4.1, or Claude 3.7) can really do properly.
Further, it has to be able to operate at the sub-token level. Say, what happens if I run together truncated versions of words or sentences? Even a chimpanzee can handle that (in sign language).
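You can get a rough feel for the sub-token problem yourself; here's a minimal sketch using OpenAI's open-source tiktoken tokenizer (the mangled strings are invented for illustration):

    # How a BPE tokenizer carves up run-together, truncated words.
    # Requires: pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

    for text in ["understand this", "undrstnd ths", "undrstndths"]:
        token_ids = enc.encode(text)
        pieces = [enc.decode([tid]) for tid in token_ids]
        print(f"{text!r} -> {pieces}")

The mangled variants get split into fragments the model has rarely seen as units, which is presumably part of why character-level manipulation trips LLMs up.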
It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4-year-old can do.
Prediction alone is not indicative of understanding. Pasting answers together like Lego is also not indicative of understanding. (Afterwards, ask it how it felt about the task, and to spot and explain some patterns in a picture of clouds.)
LLMs still hallucinate and make simple mistakes.
And the progress seems to be in the benchmarks only.
https://www.instagram.com/reel/DE0lldzTHyw/
These may be satire, but I feel like they capture what's happening. It's more than Google.
> And the progress seems to be in the benchmarks only.
This seems mostly wrong given people's reactions to e.g. o3, which was released today. Either way, progress having stalled for the last year doesn't seem like a big deal considering how much progress there was over the previous 15-20 years.
How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can and can't be automated.
Like, if you have a process A->B, automating A might be fine as long as a human does B, and vice versa, but automating both might not be.
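A back-of-the-envelope sketch of that compounding effect (the per-step accuracy is a made-up number, purely illustrative):

    # Hypothetical: each automated step succeeds 95% of the time, independently.
    step_accuracy = 0.95
    for n_steps in (1, 2, 5, 10):
        end_to_end = step_accuracy ** n_steps  # errors compound across dependent steps
        print(f"{n_steps:>2} chained steps -> {end_to_end:.1%} end-to-end success")

At a hypothetical 95% per step, ten dependent steps already drop to roughly 60% end-to-end, and that's assuming errors are independent; correlated failures would be worse.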
If you explain a concept to a child, you check for understanding by seeing whether the output they produce checks out against your understanding of the concept. You don't peer into their brain to see if there are neurons and consciousness happening.
This is an example I saw 2 days ago without even searching. Here ChatGPT is telling someone that it independently ran a benchmark on its MacBook: https://pbs.twimg.com/media/Goq-D9macAApuHy?format=jpg
I'm reasonably sure ChatGPT doesn't have a MacBook and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition.
I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok.