
206 points | baylearn | 1 comment
empiko ◴[] No.44471933[source]
Observe what the AI companies are doing, not what they are saying. If they expected to achieve AGI soon, their behaviour would be completely different. Why bother developing chatbots or doing sales when you will be operating AGI in a few short years? Surely all resources should go towards that goal, since it is supposed to usher humanity into a new prosperous age (somehow).
replies(10): >>44471988 #>>44471991 #>>44472148 #>>44472874 #>>44473259 #>>44473640 #>>44474131 #>>44475570 #>>44476315 #>>44482884 #
imiric ◴[] No.44473259[source]
Related to your point: if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

This is the main point that proves to me that these companies are mostly selling us snake oil. Yes, there is a great deal of utility from even the current technology. It can detect patterns in data that no human could; that alone can be revolutionary in some fields. It can generate data that mimics anything humans have produced, and certain permutations of that can be insightful. It can produce fascinating images, audio, and video. Some of these capabilities raise safety concerns, particularly in the wrong hands, and important questions that society needs to address. These hurdles are surmountable, but they require focusing on the reality of what these tools can do, instead of on whatever a group of serial tech entrepreneurs looking for the next cashout opportunity tell us they can do.

The constant anthropomorphization of this technology is dishonest at best, and harmful and dangerous at worst.

replies(4): >>44473413 #>>44474036 #>>44474147 #>>44474204 #
richk449 ◴[] No.44474147[source]
> if these tools are close to having super-human intelligence, and they make humans so much more productive, why aren't we seeing improvements at a much faster rate than we are now? Why aren't inherent problems like hallucination already solved, or at least less of an issue? Surely the smartest researchers and engineers money can buy would be dogfooding, no?

Hallucination does seem to be much less of an issue now. I hardly even hear about it - like it just faded away.

As far as I can tell, smart engineers are using AI tools, particularly people doing coding, but even those in non-coding roles.

The criticism feels about three years out of date.

replies(10): >>44474186 #>>44474349 #>>44474366 #>>44474767 #>>44475291 #>>44475424 #>>44475442 #>>44475678 #>>44476445 #>>44476449 #
nunez ◴[] No.44475442[source]
The few times I've used Google to search for something (Kagi is amazing!), its Gemini Assistant at the top has fabricated something insanely wrong.

A few days ago, I asked free ChatGPT to tell me the head brewer of a small brewery in Corpus Christi. It told me that the brewery didn't exist, which it did (we were heading there a few minutes later), and after re-prompting it, it gave me some phone number that it found in a business filing. (ChatGPT has been using web search for RAG for some time now.)

Hallucinations are still a massive problem IMO.

replies(2): >>44475546 #>>44479105 #
seanhunter ◴[] No.44479105[source]
The Google AI clippy thing at the top of search has to be one of the most pointless, ill-advised and brand-damaging stunts they could have done. Because compute is expensive at scale (even for them), it's running a small model, so the suggestions are pretty terrible. That leads people who don't understand what's happening to think their AI is just bad in general.

That’s not the case in my experience. Gemini is almost as good as Claude for most of the things I try.

That said, for queries that don't use agentic search or RAG, hallucination is as bad a problem as ever, and it won't improve, because hallucination is all these models do. In Karpathy’s phrase, they “dream text”. Agentic search, RAG, and similar techniques disguise the issue because they stuff the model's context with real results, so there is less scope for it to go noticeably off the rails. But it's still very visible if you ask for references or links: many, most, or sometimes all of them will be hallucinations, depending on the prompt.
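
For concreteness, here is a minimal sketch of what that context stuffing looks like (Python; the search_web and call_llm helpers are hypothetical placeholders standing in for a real search API and model client, not any vendor's actual interface). Retrieved snippets are pasted into the prompt and the model is told to answer only from them:

    # Minimal RAG-style prompt assembly: retrieved snippets are pasted into
    # the prompt so the model answers from them rather than from memory alone.
    from typing import List

    def search_web(query: str, k: int = 3) -> List[str]:
        # Hypothetical retrieval step; a real system would call a search API
        # or a vector store here.
        return [f"[stub snippet {i} for: {query}]" for i in range(1, k + 1)]

    def build_prompt(question: str, snippets: List[str]) -> str:
        # Stuff the retrieved text into the context and constrain the answer.
        context = "\n\n".join(f"Source {i}: {s}" for i, s in enumerate(snippets, 1))
        return (
            "Answer the question using ONLY the sources below. "
            "If the sources do not contain the answer, say you don't know.\n\n"
            f"{context}\n\nQuestion: {question}\nAnswer:"
        )

    def call_llm(prompt: str) -> str:
        # Hypothetical model call; swap in a real client here.
        return "[model output would appear here]"

    if __name__ == "__main__":
        question = "Who is the head brewer at the brewery in Corpus Christi?"
        print(call_llm(build_prompt(question, search_web(question))))

Nothing in this setup stops the model from inventing a citation or an answer the snippets don't support; it just makes going off the rails less likely and more conspicuous, which is exactly why the failures resurface when you ask for references and links.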