Throwing out dozens of articles, social media posts, and even videos. Hallucinations really don't matter at that scale, and enough of the content is already generating enough views to make it a somewhat viable strategy.
I guess humans are worthless as well since they are notoriously unreliable. Or maybe it just means that artificial intelligence is more realistic than we want to admit, since it mimics humans exactly as we are, deficiencies and all.
This is kind of like the self-driving car debate. We don't want to allow self-driving cars until we can guarantee that they have a zero percent failure rate.
Meanwhile we continue to rely on human drivers, which leads to around 40,000 deaths per year in America alone, all because we refuse to accept a failure rate of even one accident from a self-driving car.
Similarly I think people will be ok with other AI if it performs well.
I believe that is so far off the mark for a couple of reasons:
1) It's possible to work around hallucinations in a more cost effective way than relying on humans to always be correct.
2) There are many use cases where hallucinations aren't such a bad thing (or are even a good thing), and we've never had a system as powerful as LLMs to build for them.
There are absolutely very large use cases for LLMs, and they will be pretty disruptive. But they will also create net new value that wasn't possible before.
I say that as someone who thinks we have enough technology as it is and don't need any more.
I kind of like the Chipotle approach. If I have a problem with my order, it just refunds me instantly and sometimes gives me an add-on for free.
Honestly I only use LLMs for one thing - I give one a set of TS definitions and user input, and ask it to fit the input to those schemas if it can, and not to force anything if it isn't 100% confident.
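A minimal sketch of that approach, assuming a hypothetical LLM call; the schema, the `buildPrompt` helper, and the `parseReply` validator are illustrative names, not a real API. The key idea is giving the model an explicit escape hatch and validating its reply so nothing "forced" ever reaches downstream code.

```typescript
// Sketch of the schema-fitting approach: ship the TS schema to the model
// as source text, let it decline, and validate whatever comes back.

// The target schema, also sent to the model as plain TS source.
interface ContactRequest {
  name: string;
  email: string;
  urgency: "low" | "medium" | "high";
}

const SCHEMA_SOURCE = `
interface ContactRequest {
  name: string;
  email: string;
  urgency: "low" | "medium" | "high";
}`;

// Build the prompt: schema + user input + an explicit escape hatch so the
// model can decline instead of forcing a bad fit.
function buildPrompt(userInput: string): string {
  return [
    "Fit the user input into this TypeScript schema and reply with JSON only.",
    'If you are not fully confident, reply with exactly {"fit": false}.',
    SCHEMA_SOURCE,
    `User input: ${userInput}`,
  ].join("\n");
}

// Validate the model's reply. Anything malformed or declined maps to null,
// so downstream code never sees a half-fitted object.
function parseReply(reply: string): ContactRequest | null {
  let data: unknown;
  try {
    data = JSON.parse(reply);
  } catch {
    return null;
  }
  if (typeof data !== "object" || data === null) return null;
  const obj = data as Record<string, unknown>;
  if (obj.fit === false) return null; // model declined, as instructed
  if (
    typeof obj.name !== "string" ||
    typeof obj.email !== "string" ||
    !["low", "medium", "high"].includes(obj.urgency as string)
  ) {
    return null;
  }
  return obj as unknown as ContactRequest;
}
```

The declined-or-null contract is what makes this cost-effective: hallucinated fields fail validation instead of silently entering the system.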
I know some people whose whole company is based around using AI to send emails or messages, and in reality they're logged into their terminals in real time, fixing errors before the emails actually go out. Basically, they're mechanical turks, and they even say they're looking at labor in India or Africa they can pay peanuts to handle these fixes.
I think in some niches the former can, for a brief period, precede the latter. But eventually the market catches up and roots out whatever lacks actual value.
More concretely, I suspect the advertising apparatus is going to increasingly devalue unattributed content online, favouring curated platforms and eventually resembling a more hands-on media distribution with human platform relationships (where media == the actual medium of distribution, not the content).
That is already a thing: for example, an Instagrammer promoting your product is more valuable than Instagram's own automated ad network.
At which point, hopefully, automated content and spam loses legitimacy and value as ad-media.
With good RAG, hallucinations are non-existent.
Spend some more time working with them and you might realize the value they contain.
There can be uses, but you're falling on deaf ears as a B2B vendor if you don't solve this problem. Consumers accept inaccuracies; businesses don't. And sadly, the consumer side is also where it works best, and why consumers soured on it. It's being used for chatbots that give worse service and make consumers work harder for something an employee could resolve in seconds.
As it's worked for millennia, humans have accountability, and after any disaster the PR spin can start by reprimanding or firing the human who messed up. We don't have that for AI yet. And obviously, no company wants to bear that burden itself.
So you see the issue, and the intent.
If you're not confident enough in your tech to be held liable for it, we're going to have issues. We figured out (sort of) human liability eons ago. So it doesn't matter whether the tech is less safe; it matters that we can reliably prune out and punish unsafe behavior, like firing or jailing a human.
In theory, I save an immense amount of time daily talking to Claude/4o when I need to ask something quick; previously I had to search at least four different search engines and wade through too much disappointing SEO spam.
Also, the summarizer, while a meme at this point, is immensely useful. I put anything interesting-looking throughout the day into a db, then a cron job on Cloudflare runs, tries to fetch the text content from each link, generates a summary using 4o, and stores it.
Over the weekend, I scroll through the summaries of the saved links; if anything looks decently interesting, I go check it out and do further research.
In fact, I learned about SolidJS from a random article posted on the 4th page of HN with few votes, and the summary gave me enough info to go check out SolidJS instead of having to read through the article's rant about ReactJS.
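A minimal sketch of that pipeline. The storage layer, the text extractor, and the model call are hypothetical stubs passed in as parameters (the names `fetchText` and `summarize` are assumptions, not real APIs); the prompt-shaping helpers are the concrete part.

```typescript
// Sketch of the link-summarizer cron job: for each saved link, fetch the
// page text, summarize it with a model, and store the result back.

interface SavedLink {
  url: string;
  summary?: string;
}

// Keep long pages within a rough character budget before summarizing,
// so the prompt stays inside the model's context window.
function truncateForContext(text: string, maxChars = 12_000): string {
  return text.length <= maxChars ? text : text.slice(0, maxChars) + " …";
}

function buildSummaryPrompt(url: string, pageText: string): string {
  return [
    "Summarize this page in 3-4 sentences for a weekend reading queue.",
    `URL: ${url}`,
    truncateForContext(pageText),
  ].join("\n\n");
}

// Hypothetical cron body: the extractor and LLM call are injected stubs.
async function runSummarizer(
  links: SavedLink[],
  fetchText: (url: string) => Promise<string>, // stub: page -> plain text
  summarize: (prompt: string) => Promise<string>, // stub: LLM call
): Promise<SavedLink[]> {
  const out: SavedLink[] = [];
  for (const link of links) {
    try {
      const text = await fetchText(link.url);
      const summary = await summarize(buildSummaryPrompt(link.url, text));
      out.push({ ...link, summary });
    } catch {
      out.push(link); // leave the link unsummarized on failure
    }
  }
  return out;
}
```

On Cloudflare Workers this body would hang off a scheduled (cron trigger) handler, with the link db behind whatever storage binding you use; injecting the stubs keeps the shaping logic testable without a live model.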
An interesting idea would be a cron job that asks an LLM to generate a random motivational quote (here, more hallucination is actually beneficial) or random status and posts it. Automate this to generate different posts for X/Bsky/Mastodon/LinkedIn/Insta and you have an auto-generated presence. There's the old saying that if you let 1,000 monkeys type on typewriters, you'll eventually get Hamlet; with an auto-generated presence, this could be valuable for a particular crowd.
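The fan-out part of that idea can be sketched as below. The character limits are approximate defaults (Mastodon's in particular is instance-configurable), and both `generateQuote` and `postTo` are hypothetical stubs standing in for the LLM call and each platform's API client.

```typescript
// Sketch of the auto-generated-presence cron: generate one quote, then
// trim and post it to every platform.

// Approximate per-platform character limits (assumptions, not exact).
const PLATFORM_LIMITS: Record<string, number> = {
  x: 280,
  bluesky: 300,
  mastodon: 500, // default; instances can raise this
  linkedin: 3000,
  instagram: 2200,
};

// Trim a generated quote to fit a platform, adding an ellipsis if cut.
function fitToPlatform(quote: string, platform: string): string {
  const limit = PLATFORM_LIMITS[platform] ?? 280;
  return quote.length <= limit ? quote : quote.slice(0, limit - 1) + "…";
}

// Hypothetical cron body: generate once, fan out to every platform.
async function postEverywhere(
  generateQuote: () => Promise<string>, // stub: LLM call
  postTo: (platform: string, text: string) => Promise<void>, // stub: API client
): Promise<void> {
  const quote = await generateQuote();
  for (const platform of Object.keys(PLATFORM_LIMITS)) {
    await postTo(platform, fitToPlatform(quote, platform));
  }
}
```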
Once they reach critical mass, they inevitably start posting porn ads. Weird, weird dynamic we're in now.
And I don't think that's just for assisting experts: it would be extremely helpful to beginners too as long as they have the mindset that it can be wrong.