I don't, for two reasons. First, for looking up facts, they're nowhere near 100% dependable. If I need to check sources, might as well start there. Second, if someone made the effort to put useful content on the internet, I can grace them with a click.
For myself, I've noticed two bad effects in my daily usage:
- Search: impossible to reach any original content in the first positions. Almost everything sounds AI-ish: the punctuation, the commas, the semicolons, the narrow vocabulary, and the derivative nature of recent internet pages.
- Discovery: (looking directly at you, Spotify and Instagram) here, alongside the “No AI” feature, I would add another one, “Forget the past…”, where you set the time window. I personally like to listen to some orthogonal genres seasonally. But once you listen to a couple of songs in a very spontaneous manner, Spotify will keep recommending that for a long time. I listened to some math rock out of curiosity, and “Discover Weekly” took 9 weeks to stop recommending it.
https://www.androidauthority.com/how-to-turn-off-ai-overview...
> AI Overview > You can't completely disable Google's AI Overviews in your search results, but you can use workarounds to hide them.
The workarounds didn’t seem to work for me.
https://soitis.dev/ai-overview-hider-for-google
You could also just use the underlying CSS in something like Stylus, or convert it to uBlock filters:
https://github.com/insin/ai-overview-hider-for-google/blob/m...
Vs
Single moms in my area fucking
Yep, it works
Example from yesterday - I was updating my mum’s kindle fire tablet after it had been in a drawer for three years. It was stepping up fire-os versions one at a time and taking hours. So, a quick web search - what’s the latest version of fire os 7?
Google AI confidently answers 7.3.2.9, so cool, it’ll be done soon then. Nope, it kept on going into 7.3.3.x updates.
If you’re going to be confidently wrong, probably best not to try.
For example, you're required to provide accurate info about yourself when donating to a U.S. federal political campaign [0]. Is it possible that someone, somewhere in America is legally named John Fucksalot? Or works for a company named Fucks, Inc? Maybe! We're a huge country with wildly diverse cultural standards and senses of humor. But a John Fucksalot, CEO of Fucks Inc, who lives in Fuck City, Ohio 42069? Probably not, and the fact that this record exists says something about how readily the rules and laws regarding straw donors are enforced. And whether or not an enforcement action happened, what field in the FEC data indicates a revised record?
Seems like this tip can still be useful in the Age of LLMs. Not just for learning about the training data, but also how confident providers are in their models, and how many guardrails they've needed to tack on to prevent them from giving unwanted answers.
[0] https://www.fec.gov/data/receipts/individual-contributions/?...
Kagi has been a joy to use.
"More complicated" is actually just as simple as going to https://tenbluelinks.org/ and following the instructions. It's so refreshing to just see links when you search, and it's unfortunate that the OP makes it out to be something that's prohibitively difficult to do.
Yes - a big part of that is that online content is so much worse than it used to be. But it became bad over time because of what Google incentivized.
I think it's because management at these companies has set AI usage metrics as a critical KPI. Thus teams are highly incentivized to just stick that shit in front of everyone and make sure it's hard or impossible to turn off. I actually think AI can be genuinely useful in the right context but this insane over-rotation to shoving it down our throats risks turning AI into this decade's Microsoft Bob - universally despised simply on general principle.
Safari extension: https://news.ycombinator.com/item?id=41298312
e: according to Kagi's pricing page they do have a 'no AI' tier, but it limits your number of searches to 300/month. Seems like a totally arbitrary limitation, but it's still better than forced AI.
I understand the appeal of not wanting to wade through 20 links to find information (especially given SEO stuff that is often on top) but why will people continue to publish as traffic decreases due to AI summaries?
I mean I understand the urgent need to keep up but the problem with "theft" is eventually it drives out honest production and everyone is worse off.
> I work with ML and I am bullish with AI in general; said that, I would pay between 5 to 10 USD a feature or toggle called “No AI” for several services.
Hard fuck this. I am not giving a company money to un-ruin their service. Just go to a competitor.
I get that with a bunch of these hyperscaled businesses it's borderline impossible to escape them entirely, but do what you can. I was an Adobe subscriber for years, and them putting their AI garbage into every app (along with them all steadily getting shittier and shittier to use) finally made me jump ship. I couldn't be happier. Yeah, there was pain, there was an adjustment period, but we need to cut these fuckers off already. No more eternal subscription fees for mediocre software.
Office is next. This copilot shit is getting worse by the day.
Unshittifying my daily use Internet sites, browser (Firefox) and operating system (Windows) is becoming extremely annoying. It used to just require an occasional tweak here or there. Now some new enshittification or regression pops up almost every week. Most of it's just removing new things they keep adding or restoring useful features killed because they weren't driving this quarter's KPI du jour or as part of some designer's misguided quest to achieve the Zen-like simplicity of 'perfect emptiness'.
Or maybe we'll finally get serious about web of trust crypto if we want to continue talking to humans.
1. Its own products weren't (as much) part of the search result quality problem. Today Google has hungry product managers from Ads, Youtube, and various AI products convincing higher-ups that their product deserves higher placement. That placement used to be sacrosanct.
2. The daily volume of AI-generated garbage content 13 years ago was probably a rounding error compared to today's volume.
So Google was operating in a different landscape than Kagi of today. They had to do a lot less to achieve the quality they had.
I disagree that Kagi "isn't anywhere near as good as Google was ~13 years ago". It's near. For me personally, it's better because I'm never served first-party ads.
It got so bad that I had to add a "No AI" flag to my image search app which limits the date range to earlier than 2022. Not a great solution but works in a pinch.
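A minimal sketch of what such a flag can look like, with made-up names (not the actual app):

    // Hypothetical "No AI" toggle: when enabled, cap the date range at the
    // end of 2021, before generated images flooded the web. Field and
    // function names are illustrative only.
    interface ImageQuery {
      terms: string;
      before?: Date;
    }

    function applyNoAi(query: ImageQuery, noAi: boolean): ImageQuery {
      return noAi ? { ...query, before: new Date("2022-01-01") } : query;
    }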
And for AI I'd usually rather have API access and use it with tools rather than a web chat.
Also the road safety sign, while a funny combination, is pretty standard and can be found all over Austria.
It's so strange how much money and time companies are pouring into "features" that the public continues to reject at every opportunity.
At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work and allow companies to use the massive amounts of data they've collected about us against us more effectively. All the AI being shoehorned into products and services now are mostly to test, improve, and advertise for the AI being used, not to provide any value for users who'd rather have nothing to do with it.
"per capita gdp by country" - no AI
"population of Kansas" - no AI
"lisp interpreter" - no AI
"emacs vs. vim" - AI overview
"size of mit student body" - AI overview
What's going on?
It's this part.
Salaries and benefits are expensive. A computer program doesn't need a salary, retirement benefits, or insurance; it doesn't call in sick, doesn't take vacations, works 24/7, etc.
I think the AI summaries are super useful. 90% of the time it answers the question I have accurately and concisely, saving me time and effort.
The paragraph summaries give me a good overview of a topic, and the links to the original sources save me from scanning through tons of websites looking for details.
I never fully trust the AI summary, it's just a better way to look for information. I click on the reference link often to double check the source and the accuracy of the summary. I think I've only discovered a discrepancy once or twice.
Google isn't a utility, if you don't like Gemini, go use DDG or Bing.
I eventually just scripted a separate search engine query that's site specific to Amazon. It works but not as well as it could because it doesn't have access to my purchase history or Amazon's hidden granular category taxonomy.
Just an example where it isn't just making it harder to search for a profit motive but it's actually actively preventing (both Amazon and Google) from showing me the results or even ads for the product I actually want to buy.
If anyone has a good solution to this I would appreciate it, there is often a non-latex version of most all latex based products but finding them online is impossible if you don't already know the brand name!
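For what it's worth, the scripted query is roughly this shape, as a sketch only (the function name is made up; the site:/-term operators and the udm=14 web-only parameter mentioned elsewhere in this thread do the real work):

    // Rough sketch: scope a general search engine to amazon.com and exclude
    // terms you don't want (e.g. "latex"). No purchase history or category
    // taxonomy here, just query operators.
    function amazonSearchUrl(terms: string, exclude: string[] = []): string {
      const q = ["site:amazon.com", terms, ...exclude.map(t => "-" + t)].join(" ");
      return "https://www.google.com/search?q=" + encodeURIComponent(q) + "&udm=14";
    }

    // amazonSearchUrl("exam gloves", ["latex"])
    //   -> "https://www.google.com/search?q=site%3Aamazon.com%20exam%20gloves%20-latex&udm=14"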
Can’t remember the last time I used gmail in the browser. And I only ever use gmail because startups use gsuite by default.
Google is only going to do more of this shit unless they start hurting from a drop in traffic.
It's absolutely still a scheme by companies to get rid of employees and get customers to do work for them for free, and there are still issues with the systems not working very well, but we at least have the option (in almost all cases) to queue up at the one or two registers with an employee doing the work. When it comes to AI, we're often not being given any choice at all. Even if we can avoid using it, or somehow avoid seeing it, we will still be training it.
The outcome that the large companies are banking on is replacing workers; even employees with rather modest compensation end up costing a significant amount once you consider overhead and training. There is (mostly) no AI feature that Wall Street or investors care about except replacing labor - everything else just seems to exist as a form of marketing.
That's certainly part of it.
However, at this point I think a lot of it is a kind of emotional sunk-cost. To stop now would require a lot of very wealthy and powerful people to admit they had personally made a very serious mistake.
Throwing "AI" into it is a simple addition: if it works, great; if it doesn't, well, the market just wasn't ready.
But if they have to actually talk to their users and solve their real problems that's a really hard pill to swallow, extremely hard to solve correctly, and basically impossible to sell to shareholders because you likely have to explain that your last 50 ideas and the tech debt they created are the problem that needs to be excised.
It may be less that people are unaware of the speculative bubble and more that they're hoping to get in and out before it pops.
google.com###m-x-content
google.com###B2Jtyd
Add this to AdGuard Preferences -> Filters -> User Rules. It makes search results load faster too!

I think people are always resistant to change. People didn't like ATMs when they first came out either. I think it's improved things.
But this is normal. A new thing is discovered, the market runs lots of tests to discover where it works / doesn’t, there’s a crash, valid use cases are found / scaled and the market matures.
Y’all surely lived thru the mobile app hype cycle, where every startup was “uber for x”.
The amount of money being spent today pales in comparison to the long term money on even one use case that scales. It’s a good bet if you are a VC.
google.com###m-x-content
google.com###B2Jtyd
If someone hasn't already made a userscript to do this automatically, someone should, it would be very easy.
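Something like this Tampermonkey-style sketch would do it; the selectors come from the rules above and will break whenever Google renames them:

    // ==UserScript==
    // @name   Hide Google AI Overview (sketch)
    // @match  https://www.google.com/search*
    // @grant  none
    // ==/UserScript==

    // Inject CSS that hides the AI Overview containers targeted by the
    // filter rules above (#m-x-content and #B2Jtyd).
    const style = document.createElement("style");
    style.textContent = "#m-x-content, #B2Jtyd { display: none !important; }";
    document.documentElement.appendChild(style);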
The normal tech innovation model is: 1. User problem identified, 2. Technology advancement achieved, 3. Application built to solve problem
With AI, it's turned into: 1. Technology advancement achieved, 2. Applications haphazardly being built to do anything with Technology, 3. Frantic search for users who might want to use applications.
I don't know how the industry thinks they're going to make any money out of the new model.
It's too bad because even 10 years ago Google and the internet in general were magical. You could find information on any topic, make connections and change your life. Now it is mostly sanitized, dumbed-down crap, and the discovery of anything special is hidden under mountains of SEO spam, now even AI-generated SEO spam that is transparently crap to any moderately intelligent user.
For a specific example, I like to watch wildlife videos, specifically ones that give insight into how animals think and process the world. This comparative psychology can help us better understand ourselves.
If you want to watch Macaque monkeys for example google/youtube feeds you almost exclusively SEO videos from a handful of locations in Cambodia. There are plenty of other videos out there but they are hidden by the mass produced videos out of Cambodia.
If I find an interesting video and my view history is off the same video is often undiscoverable again even with the exact same search terms.
Search terms are disregarded or substituted willy-nilly by Google AI, which thinks it knows what I want better than I do.
But the most egregious thing for me as a viewer of nature videos is the AI-generated content. It is obviously CGI and often ridiculous or physically impossible. For example, let's say I want to see how a monkey interacts with a predatory python. I am allowed to watch that, right??? Or are all the Serengeti lion-hunting-gazelle videos to be banned in 2025? Lol. So I search "python attacks monkey" hoping to see a video in a natural setting. Instead I am greeted with maybe a handful of badly shot videos probably staged by humans and hundreds of CGI cartoons that are obviously not real. In one the monkey had a snake mouth! Lol. Who goes searching for real nature videos to see badly faked stuff?
Because of how I can not find anything on google or Youtube anymore without picking through a mountain of crap I use them less now. This is for almost any kind of topic not just nature videos.
Is that a win for advertisers? Less use? I don't think so.
In about 20 years of using the product, the number of times a google or Youtube search has led to me actually purchasing a product or service DUE to an ad I saw is, I believe, precisely zero.
Recently I have been seeing Temu (zero interest), disability fraud (how is this allowed?), senior, and facebook ads. I am a non-disabled, 30-something man. I saw an ad for burial insurance today.
Why is facebook paying to advertise "facebook" on youtube in 2025? Is this some ritual sacrifice to the god Mammon or something? Surely in 2025 everyone who would be interested in Facebook has heard of it. I have the Facebook app installed. Why the hell do facebook investors stand for facebook paying google to advertise facebook non-selectively on youtube? It's the stupidest thing I ever saw.
I have not watched any political content in years. And yet when I search for a wild life video I get mountains of videos about Trump and a handful of mass produced low quality wildlife content interspersed.
Today I was treated to an irrelevant ad about "jowl reduction."
I know many of you use ad blockers but this is how horrendous it is without them. You can't find what you want, even what you just saw, and you are treated to a deluge of irrelevant, obnoxious content and ads.
Clearly it is about social control, turning our minds to mush to better serve us even more terrible ad content.
Huh, I always thought it was the opposite: If you're in a hurry, you go through traditional check-out. Nothing really matches the speed of an experienced and trained grocery store checkout clerk whizzing boxes past the scanner faster than you can load them into your cart. I think traditional checkout can blaze through 30 grocery items before I can even get three or four out of my cart, fumble around with them in front of the scanners, and then get chastised and stopped by the computer because I didn't place the item properly on the shelf next to the checkout machine.
I wish I could be certain that we're not doing that already.
No, it’s always been 1) Utter the current password in your pitch deck to unlock investor dollars. Recently it’s “AI” and “LLM,” but previously it was “Blockchain,” “Big Data,” etc.
I agree with you for large purchases.
we still don’t know what problems to solve, but we’re gonna use AI to help us figure that out.
once we do, it’s gonna be huge. this AI stuff is going to change everything!!!
The technologically disinclined or illiterate will continue to be oblivious, and simply use whatever is placed before them.
If this were not true, then app stores wouldn't have so much malware easily available to the masses. It wouldn't be profitable to release such things. It is profitable.
The masses will continue to amass stupidity as a miser amasses coin.
If by "be considered insane by society", you mean "find each other, attract new members, mobilize, and vote", I'd say you're spot on.
This was all a big mistake. To future generations, all I can say is that we meant well.
Plus traditional is where you buy alcohol.
I strongly doubt this "dichotomy AI" theory.
If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that) it could potentially bring families together and improve the quality of people's lives.
Instead they are introducing it as a tool to replace jobs, think for us, and mistrust each other ("you sound like an AI bot!", "you just copied that from chatgpt!", "You didn't draw that!", "How do I know you're real?").
I don't know if they really thought through to an endgame, honestly. How much decimation can you inflict on the economy before the snake eats its own tail?
In fairness to the 99.99% they don't even know what a bootloader is and if they understood the situation and the risks many of them would also favor an open option.
I don't think the rejection of AI is primarily a HN thing though. It's my non-tech friends and family who have been most vocal in complaining about it. The folks here are more likely to have browser extensions and other workarounds or know about alternative services that don't force AI on you in the first place.
I'm surprised as well. Some people want it
There's nothing AI brings to the table that a competent human wouldn't, with the added benefit that you don't have to worry about AI making things up or not understanding you.
Or maybe they just want to try and convince the AI to give them things you wouldn't (https://arstechnica.com/tech-policy/2024/02/air-canada-must-...)
All the ad talked about was AI, nothing about specs, and barely a whisper of how it works, or even good demos of apps switching between open and closed.
Every phone has AI now, big deal. How about you tell me, Google, what is cool about the fold, instead of talking for 4 minutes about AI?!
True. And awareness and education is very important for useful discourse.
> if they understood the situation and the risks many of them would also favor an open option.
Raising my hand as one of those people who knows what a bootloader is and also doesn't currently care about an open option. Maybe at some time in the future I will again, but for now it is very far down on my list of concerns.
I suspect whether or not AI is useful/high-quality/"good"/etc is just not important to most people at the moment. If they are laid off from their jobs in the future and replaced with an AI, I suspect they'll start caring more.
But in the general case, I've found "caring ahead-of-time" (for want of a better phrase) is a very hard thing to encourage, despite the fact that it's one of the most effective things you can do if you direct it at the "right" avenues (i.e. those that will affect you directly in the future).
It's maddening because Amazon used to have a modern, reasonably capable search function. You could require terms. You could exclude terms. Terms could be phrases. I'm sure they still have all these capabilities, they've just decided to intentionally disable them because their A/B testing indicated that breaking their search would return a fractional percent more revenue by shoveling more unrelated results in front of customers. It must work on someone but it's never worked even once on me, because I KNOW what I need and I'm only going to buy exactly that - if I can fucking find it.
I'd actually be okay to let Amazon annoy the NPCs who just clickety-click and buy whatever random shiny shit they shovel in front of them, IF they'd just add something for us technically-minded, engineering type people who are looking for one precise thing only. They can even hide it behind an arcane interface like REGEX. That'll keep the rabble out! :-)
They'll make you the perfect pizza with cheese that doesn't slide off because of the glue.
But seriously, I predict inadvisably-applied LLMs are going to eventually end up somewhere between the mistakes of Juicero and the mistakes of leaded gasoline.
When it's integrated into a product people are more likely to use it. Lowering the barrier to entry so to speak.
I was late to this, but G's default search had been becoming worse and worse. The trick is equivalent to clicking the "Web" tab when you do default search. In 99.9% cases the "Web" tab is what I need, it's pure and no noise. I do not mind clicking the "All" tab e.g. for a tennis player last name during AO to get all details I need. Actually, for sport events the default G's functionality is insanely useful, such as live score updates.
I don’t think the little “ChatGPT might be wrong, you should check” disclaimer is doing very much.
Sounds like somebody somewhere thinks that you're old, or that you know an old person. Maybe you live in an area with lots of old people. Maybe you've got aging parents. Maybe an old person had your IP before you did. Maybe just the fact that you're still using facebook is good enough to identify someone as being old the majority of the time.
https://m.youtube.com/watch?v=tksN5Jaan9E
Kinda on the nose
We have curated TV now, but just like before the people doing the curation aren't doing it based on what's good for you, the viewer. It's based on what will benefit their bottom line.
The things we try to resort to in order to figure out what to spend our time watching like review sites and social media are already gamed and astroturfed to death. Each new one that comes out gets less useful as time goes on because of it.
Good luck finding the real humans online among the countless AI generated curators PR firms churn out.
Why yes, it does.
Even setting aside most of the AI hype: Yes, automation is in fact quite sinister if you do not go out of your way to deal with the downsides. Putting people out of a job is bad, actually.
Yes. The industrial revolution was a great boon to humanity that drastically improved quality of living and wealth. It also created horrific torment nexuses like mechanical looms into which we sent small children to get maimed.
And we absolutely could've had the former without the latter; Child labour laws handily proved it was possible, and should have been implemented far sooner.
Reddit sure isn't an ideal place for fact checks. It's full of PR bots and shills, but at least there are still humans commenting and I can't fault people for doing what they can in the best way they know how.
Google couldn’t just keep ignoring it. I do wish it were an option instead of on by default - except for searches they can monetize
So you are not far off from that concept of putting vast numbers of employees out of work, when influential figures like Andreessen are openly stating that this is their ambition.
https://chatgpt.com/share/679d7f5f-d508-8010-94fa-df9d554b62...
(and then I just remembered that the free version doesn’t have web search)
-"tiananmen square 1902481358"
This way it won't interfere if you ever happen to actually want results that mention the place.

Hmm, I'm not sure about my testing now; even with innocuous stuff the AI thing isn't back. Maybe something I did scared it off.
Tried searching for user interface design for an ongoing project, and found that Google now simply ignores filtering attempts... Try to find ideas about multi-color designs, and all you get are endless image spam sites and Letterman-style Top 10 lists. Try to filter those out, and Google just ignores many of the attempts.
There are so many that even the ones that actually do get successfully filtered out only reveal the next layer of slime to dig through. Maybe the people that didn't pay enough for placement?
The huge majority, far and away, were the "Alamy", "Shutterstock", "_____Stock", etc. photo websites. There are so many that it's not really practical to notch-filter anything involving images. You could spend all day just trying to notch-filter through "_____Stock" results to get to something real.
The worst though, was that even among sites that wrote something, there was almost nothing that was actually "user interfaces" or anything related to design, other than simplistic sites like "top 10 colors for your next design" that are easy to churn out.
Try to search on a different subject and filter for only recent results from 2024, get results from 2015, 2016. Difficult to tell if the subject had simply collapsed in the intervening 10 years (seemed unlikely) or if Google was completely ignoring the filters applied. The results did not substantially change. It's like existing in an echo chamber where you're shown what you're supposed to view. It all feels very 1984 lately.
Basically ended up at the same conclusion: their customers are the ad buyers. They don't get enough money from "normal" people to care.
ChatGPT has image generation, you can upload word docs, images and PDFs and it has a built in Python runtime that it uses to offload math problems to.
So, if it is true we’re on the cusp of an AI Revolution, AGI, the Singularity, or anything like that, then there’s precedent to worry. It could destroy our lives and livelihoods on a timescale of decades, even if the whole world really would be over all improved in a century or two.
I would bet money that the majority of users do not actually feel this way.
I do think it's as simple as appealing to stakeholders in whatever way they can, regardless of customer satisfaction. As we've seen as of late, the stock markets are completely antithetical to the improvement of people's lives.
The first point does indeed come into play because oftentimes most people don't throw enough of a fuss against it. But everything has some breaking point; Microsoft's horribly launched Copilot for Office 365 showed one of them
[0]: https://www.warc.com/content/feed/ai-is-a-turn-off-for-consu...
[1]: https://hbr.org/2025/01/research-consumers-dont-want-ai-to-s...
I'm sure google thinks that people have some sort of bias, and that if they force people to use it they'll come to like it (just like google plus), but this also shows how much google looks down on the free will of its users.
So we'll just keep getting fooled by AI in the future..
On top of all the other insane choices they made, like removing your search category restrictions if it thinks your query was too precise. I'm close to snapping.
Things will benefit from AI.. But more things will get screwed by it too.
I don't know how a thinking person can use this technology and not see the possibilities it opens up.
Personally I find it slower than just doing it manually but it has resulted in the form being correct more often now and has a lot of usage. There is also a big button when the chat opens that you can click to just fill it out manually.
It has its place, that place just isn't everywhere and the only option.
The people I know who “worry” are terrible about predicting negative events that impact them. I think that’s why it’s uncommon, lots of negative health outcomes and almost zero actual benefits.
Instead simply aiming for reasonable levels of resiliency in health, finances, etc tends to cover a huge range of issues. In that context having a preference for open systems makes a lot of sense, but focusing a lot of effort on it doesn’t.
Unless your new upstart social service is called SearchEngine+ so you remove it
(Except duckduckgo also seems to semi-ignore it. I'm baffled. I give up. I'm throwing my computer out the window, and moving to the woods)
They cite "Revealed Choice", which may apply when there is an actual choice.
But in the nearly winner-take-all dynamic of digital services, when the few oligopolistic market leaders present nearly identical choices, the actual usage pattern reveals only that a few bad choices are just barely worse than chucking the whole thing.
You'd be surprised how many don't even realize it's artificial, and/or welcome it. The average Google user is most certainly not similar to the average Hacker News commenter.
Coincidentally, today I received an automated text from my health care entity along the lines of, "Please recognize this number as from us. Our AI will be calling you to discuss your health."
No. I'm not going to have a personal discussion with an AI.
In this scenario, individuals without substantial capital could leverage AI to achieve outcomes that today require the resources and influence of wealthy founders. It might do the opposite of what CEOs seem to think: challenge existing power structures and create a more level playing field.
https://www.reddit.com/r/google/comments/1czcjze/how_is_ai_o...
Mainstream press have been covering how much people hate it - people's grandparents are getting annoyed by it. Worse, it comes on the heels of four years of Prabhakar Raghavan ruining Google Search for the sake of trying to pump ad revenue.
It's a steaming pile of dogshit that either provides useless information unrelated to what you searched for, making it just another thing to scroll past, or, even worse, provides completely wrong information half the time, which means even if the response seems to be what you asked for, it's still useless because you can't trust it.
If you want something smooth and easy that uses the google engine, visit udm14.com
If you want to integrate the google engine more directly into your browser(s), understand how to use &udm=14
Two different UX's, each appropriate for a different audience.
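As a concrete sketch of the &udm=14 route (just appending the parameter to an ordinary search URL):

    // Rewrite a Google search URL to the "Web"-only results view.
    function toWebOnly(searchUrl: string): string {
      const url = new URL(searchUrl);
      url.searchParams.set("udm", "14");
      return url.toString();
    }

    // toWebOnly("https://www.google.com/search?q=latest+fire+os+7")
    //   -> "https://www.google.com/search?q=latest+fire+os+7&udm=14"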
A bit ago I was searching for toothpaste that doesn't have mint in it. This is already a pain at a brick retailer, but I figured Amazon's huge product variety would help. Turns out their search is actively malicious to negative terms because otherwise I could buy just the one thing and be done with my shopping.
I should probably set up a similar homebrew search to get around this. Purchase history is far less important to me because I don't buy much from Amazon.
How much of "just append ?udm=14 to your search query" is absolute gibberish?
Is "install the udm14 plugin" going to make any more sense?
Is "go to udm14.com for all your searches" going to stick? Are there phishing sites at umd14.com, mdm41.com, uwu44.com, and all the other variants they'll probably misremember it as?
"just search for 'fucking whatever' and the AI crap goes away", on the other hand, is funny, uses a common dictionary word that everyone above the age of five knows how to spell, and is intensely memorable.
I should go back to look at that and see if we could incorporate an easy ChatBot as an improvement.
Your premise is wrong. You assume these people are spending money creating useful output or would otherwise understand and be able to implement a more efficient means on their own. In the words of David Graeber a lot of people have bullshit jobs, are you sure the "AI" isn't alleviating some other problem for them?
> and not see the possibilities it opens up.
The current technology has no natural exponential growth curve. Which means for a linear increase in spending you get a linear increase in accuracy. Any thinking person should see where this is going. Which is why you should call these LLMs so you don't accidentally fool yourself.
I mean, of course, when AGI does arrive and has a reasonable power budget, then we're talking. The current technology will never become this or anything like this. This will almost certainly lead to a new "AI winter" before AGI happens and will likewise almost certainly not occur during yours or my lifetime.
If you do believe that then I have a self driving battery powered semi to sell you that's fully autonomous and will run road cargo trains for you all day and night for huge profits.
They already fired so many developers and this feels more like a Hail Mary before maintenance costs and tech debt start catching up to you.
Someone has a 'friend' who has a totally-not-publically-visible form where a chat bot interacts with the form and helps the user fill the form in.
...and users love it.
However, when really pressed, I've yet to encounter someone who can actually tell me specifically
1) What form it is (i.e. can I see it?)
2) How much effort it was to build that feature.
...because, the problem with this story is that what you're describing is a pretty hard problem to solve:
- An agent interacts with a user.
- The agent has free rein to fill out the form fields.
- Guided by the user, the agent helps fill out form fields in a way which is both faster and more accurate than users typing into the fields themselves.
- At any time the user can opt to stop interacting with the agent and fill in the fields, and the agent must understand what's happened independently of the chat context, i.e. the form state has to be part of the chat bot's context.
- At the end, the details filled in by the agent are distinguished from user inputs for user review.
It's not a trivial problem. It sounds like a trivial problem; the agent asks 'what sort of user are you?' and parses the answer into one of three enum values; Client, Foo, Bar -> and sets the field 'user type' to the value via a custom hook.
However, when you try to actually build such a system (as I have), then there are a lot of complicated edge cases, and users HATE it when the bot does the wrong thing, especially when they're primed to click 'that looks good to me' without actually reading what the agent did.
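For reference, even the "trivial" version ends up looking roughly like the sketch below. callModel stands in for whatever LLM API you use, and the field names are made up; the point is that the current form state travels with every turn and the model's proposed updates are validated before they touch the form.

    // Sketch of "form state as part of the chat bot's context".
    type UserType = "Client" | "Foo" | "Bar";

    interface FormState {
      userType?: UserType;
      email?: string;
    }

    async function handleTurn(
      formState: FormState,
      history: string[],
      userMessage: string,
      callModel: (prompt: string) => Promise<string>  // hypothetical LLM call
    ): Promise<Partial<FormState>> {
      const prompt = [
        "You help a user fill out a form. Current form state (the user may have edited fields directly):",
        JSON.stringify(formState),
        ...history,
        "User: " + userMessage,
        'Reply ONLY with a JSON object of fields to update, e.g. {"userType":"Client"}.'
      ].join("\n");

      let proposed: Partial<FormState> = {};
      try {
        proposed = JSON.parse(await callModel(prompt));
      } catch {
        return {};  // model didn't return valid JSON; change nothing
      }
      // Reject anything outside the known enum before it reaches the form.
      if (proposed.userType && !["Client", "Foo", "Bar"].includes(proposed.userType)) {
        delete proposed.userType;
      }
      return proposed;  // caller merges this and flags agent-filled fields for user review
    }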
So.
Can you share an example?
What does 'and has a lot of usage' mean in this context? Has it increased the number of people filling in the form, or completing it correctly (or both?) ?
I'd love to see one that users like, because, oh boy, did they HATE the one we built.
At the end of the day, smart validation hints on form input fields are a lot easier to implement, and are well understood by users of all types in my experience; it's just generally a better, normal way of improving form conversion rates which is well documented, understood and measurable using analytics.
...unless you specifically need to add "uses AI" to your slide deck for your next round of funding.
Which is a stupid argument, since there is "any tech" that can do your laundry and dishes, and it's been around for decades! Is it too hard for you to put your dishes in the dishwasher, or your clothes in the washing machine?
And I say this as someone bearish on AI.
I've also talked to a number of CTOs and CEOs who tell me that they're building their own AI products nominally to replace human workers, but they're not necessarily confident it will be successful in the foreseeable future. However, they want to be in a good place to capitalize on the success of AI if it does happen.
Is it "too hard"? No. Is it a substantial time sink, and one that (in the case of laundry, particularly) breaks up flow, so that it is inconvenient for someone who has to deal with $DAYJOB and those chores and wants to do art and writing (or other personal projects that take focus)? Yes.
But, I must try to have a little bit of self awareness here: if we all think it can do the jobs we don’t understand and don’t think it can do the job we’ve got experience in, then maybe that just indicates that it isn’t really very good at anything yet.
that's a hot take! A classic "Eat shit! A million flies can't be wrong." Really made me smile :)
BILLIONS of people are spending their own money on useless tech, simply because they fear missing out.
A thinking person can see it is generating text from the input query – which is useful of course – but not dramatically useful.
no. On the contrary. We will need people to clean the mess left by AI
> and allow companies to use the massive amounts of data they've collected about us against us more effectively.
yes.
But just as the conservative old-school business people were laughing and patting themselves on the back post-bubble over how stupid all the dotcoms were for thinking they could monetize eyeballs, Google emerges, and 20 years later tech companies drive the stock market rather than following it. Don't dismiss a technology just because the birthing spasms look ugly, it takes some time for markets to develop and for products to settle into niches. At the start a lot of that is due to people not being comfortable, tech sucking, and the market shifting too quickly to precisely target, but that can all change pretty fast.
Jokes aside, investors behind google seem to not realize that google at this point is infrastructure and not an expandable product market anymore. What's left to expand to since the Google India Ad 2013? What? North Korea, China, Russia, maybe? And then everyone gets their payout?
Financial ecosystems and their bets rely on infinite expansion capabilities, which is not possible without space travel.
When someone uses an AI they do not own, they are (maybe) receiving a benefit in exchange for improving that AI and associated intellectual property / competitive advantage of the person or entity that owns the AI—-and subsequently improving the final position of the AI’s owner.
The better an AI becomes, the more valuable it becomes, and the more likely that the owner of the AI would want to either restrict access to the AI and extract additional value from users (e.g. via paid subscription model) or leverage the AI to develop new or improve existing revenue streams—-even if doing so is to the detriment of AI users. After all… a sufficiently-trained “AGI” AI could (in theory) be capable of outsmarting anyone that uses it, know more about its users than its users consciously know about themselves, and could act faster than any human.
While I share in your hope, I think it is unfortunately far more likely that AIs will widen the gap between the haves and the have-nots and will evolve into some of the most financially and intellectually oppressive technology ever used by humans (willingly or not).
One day they'll put those kinds of robots in people's homes, but I'll keep them out of mine because they'll be full of sensors, cameras, and microphones connected to the cloud and endlessly streaming everything about your family and your home to multiple third parties. It's hard enough dealing with cell phones and keeping "smart"/IoT crap from spying on us 24/7 and they don't walk around on their own to go snooping.
The sad thing about every technology now is that whatever benefits it might bring to our lives, it will also be working for someone else who wants to use it against us. Your new smart TV is gorgeous, but it watches everything you see and inserts ads while you're watching a Blu-ray. Your shiny car is self-driving, but you're tracked everywhere you go, and there are cameras pointed at you recording and microphones listening the entire time, sending real-time data to police and your insurance company. Your fancy AR implant means you'll never forget someone's name since it automatically shows up next to their face when you see them, but now someone else gets to decide what you'll see and what you aren't allowed to see. I think I'll just keep washing my own dishes.
Another example: I was part of a team that created a chatbot which helped navigate internal systems for call centre operators. If a customer called in, we would pick up on keywords, which provided quick links for the operator and pre-filled details like accounts etc. The operator could type questions too, which would bring up the relevant docs or links. I did think looking into the UX would’ve been a better use of time and would have solved more problems, as the system was chaos, but “client wants”. What we built in the end did work well and reduced onboarding and training by 2 weeks.
User satisfaction is a key driver for search engine development. If users are generally unhappy with the AI integration, that feedback would likely lead to changes aimed at improving the user experience.
Your superiority complex is nothing new... anytime new technology emerges, there's an old crotchety class that thinks it's a fad. It's always people arrogant enough to believe they know the world better than everybody else.
And no, billions of people aren't spending money on tech purely because of FOMO. That's just nonsense.
That sets off super strong scam vibes for me... Our banking and medical industries here push warnings about phishing down your throat so much that people even worry about legitimate communication that couldn't possibly be a scam being a scam.
I find that to be better for society, but it definitely clouds my judgement on those kinds of texts. Also, I have absolutely dropped my previous bank because it became impossible to speak to an actual human, and I willingly pay more for a bank where my phone call goes directly to a human.
Do what one of the other commenters mentioned, make the AI an assistant for the human beings that help your customers, let the humans communicate with humans.
https://reviews.vc/ai-filter-plugin/
https://github.com/reviewsvc/ai-filter
The plugin has been submitted to the Chrome Store but has not been approved yet. It is super lightweight and will be better to install locally on its own. I personally don’t trust Chrome Store plugins :) So, you can download the archive, review the source and load it unpacked from chrome://extensions
Yes, the "fucking" trick is awesome too.
You used to be able to put 'Eliza' to sleep by using the word 'Dreamt'
Data centers will soon outstrip all other uses of electrical power. As for an AI calling in sick: no, it needs full power 24/7. AI has no creativity, no initiative, no consciousness, and absolutely zero ethics.
"In a middle-ground scenario, by 2027 new AI servers sold that year alone could use between 85 to 134 terawatt hours (Twh) annually."
Now it's, "there's no humans on the internet."
I struggle to find a comparison that adequately describes the head snap at how fast some of these image / video generators were deployed.
The one that really got me was the immediate use of fake image gen by the British Royal Family. [1] (Check the kid's hand in the lower left, which they didn't even mark: broken fingers.) Didn't even try to respond with anything real. Immediate response, photo image gen.
[1] BRF Doctored Family Photo (Sky News), https://news.sky.com/story/kates-doctored-photo-is-a-huge-ch...
You would need to spend thousands of dollars to become a customer, if you are not already one.
>An agent interacts with a user.
Correct, they are asked to describe their problem. There are some follow up questions, then some very specific questions if the form still isn't filled out.
>The agent has free rein to fill out the form fields.
Correct, but there are actually very few free-form fields and a lot of selections.
>Guided by the user, the agent helps fill out form fields in a way which is more accurate than users typing into the field themselves.
Correct, the form is filled out correctly more often now
>Guided by the user, the agent helps fill out form fields in a way which is faster than users typing into the field themselves.
No, I specifically said it is not. I can fill out a junk but valid form in about 10 seconds, and one that's valid with relevant data for testing in about 30 seconds. It is not a long form, but your selections will change the next selections. But I also helped build the form and have seen it go through every iteration.
>At any time the user can opt to stop interacting with the agent and fill in the fields and the agent must understand what's happened independently of the chat context. i.e. The form state has to be part of the chat bot's context.
Would be a nice feature upgrade but if the user abandons the bot they just fill out the form as normal, same as if they decided to skip the bot at the beginning.
>At the end, the details filled in by the agent are distinguished from user inputs for user review.
Do you mean how do we know if the chat bot was used, or whether it fills out the form? Both are trivial.
>Has it increased the number of people filling in the form, or completing it correctly (or both?) ?
The ideal case is that they never need to request help, but nearly all users will need help maybe once or twice a year unless something is really wrong. But yes, the number of users filling out the form incorrectly has decreased. Seems like the users don't mind spending 2-5 minutes per year chatting with the bot.
Can you be more specific?
Like, where specifically would I have to spend money to see this.
> Seems like the users don't mind spending 2-5 minutes per year chatting with the bot.
This seems like an enormous amount of effort to have gone to for a single form that people use once a year.
Did you roll out the chatbot assist to other forms? If not, why not? If so, are any of those forms easier to get access to that we can see either live or in a video?
Honestly, this is why I get frustrated with these conversations.
If it works so well, why isn't this sort of thing rolled out in many, visible, obvious places. Why is it hidden away behind paywalls and internal systems where no one can see it?
Why isn't everyone doing it? I've visited 4 websites today which had a chat bot on them, and none of them had a way for the bot to interact with anything on the page other than their own chat context.
Like I said, I'm sure it works to some degree, and varying degrees depending how much effort you put into it... but I'm frustrated I can never find someone who's so proud of it working they can go HERE, look at THIS example of it working.
Does anyone have an example we can actually look at?
Are those example where a chatbot helps fill out the form, or just examples of where forms are hard?
My image search did not find any results of AI chatbots that helped fill out the form for you. Do you have a direct link to a form by any chance?
"It's dramatically useful for millions of people who are now much more productive than they were 3 years ago, including the programmers who have 10x'd their output." - anecdotally
Your productivity cult is nothing new; anytime a new quantity multiplier emerges, there's a freshman manager class like you who thinks quantity > quality. It is obvious from your comment, since "more productive" and "10x output" are the only things you praised there.
It's always people arrogant enough to believe they are riding the right hype-train, and everybody else is left behind.
Distracting adjectives like effing probably spoil the match between your search query and the topic cluster.
The only extra costs are if you use the (opt-in optional) AI Assistant which is a web UI to access various models for chatting purposes. As an aside, they recently updated this UI so it’s actually usable as a ChatGPT or Claude alternative.
Nearly half of Google engineers' outputs are coming from AI generated code, but you obviously know better than all of them.
"Nearly half of Google engineers' outputs are coming from AI generated code" – let's take a look at the results: 30 discontinued products for the past 3 years, and just 1 new product: Gemini, which got its glorious 10% market share. Now that's a productivity monster.
Google is such a successful company now: they released 1 new product which didn't even surpass the microsoft chatbot in market share, and people are adding "fucking" to searches in order to get adequate search results. Great growth, was definitely not possible without AI!
No, millions of people smarter and more successful than you TELLING you it's useful is what makes it more useful. But when you're this arrogant it'll never register.
Like I said, this is nothing new. It's like the arrogant boobs who thought smartphones were just a fad. Or the famous HN commenter who said Dropbox was pointless.