EDIT: Some typo fixes, though many remain, I'm sure :)
I like LLMs (over search engines) because they are not salespeople. They're one of the few things I actually "trust". (I know many people fall on the other side of that, but no, I really do trust them more than SEO'd websites and ad-driven search engines.)
I suppose my local-LLM hobby is for just such a scenario. While it is a struggle, there is some joy in trying to locally host the most powerful open model your hardware will allow. And if the time comes when the hosted models can no longer be trusted, I can fall back to the last reliable model on the local setup.
That's what I keep telling myself anyway.
The only thing I really care about with classic web search is whether the resulting website is relevant to my needs. On this point I am satisfied nearly all the time. It’s easy to verify.
With LLMs I get a narrative. It is much harder to evaluate a narrative, and errors are more insidious. When I have carefully checked an LLM result, I usually discover errors.
Are you really looking closely at the results you get?