858 points colesantiago | 18 comments
fidotron No.45109040
This is an astonishing victory for Google; they must be very happy about it.

They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals, where they can now refuse a demand simply because the ruling no longer allows it.

Quite why the judge is so concerned about the rise of AI factoring in here is beyond me. It's fundamentally an anticompetitive decision.

replies(14): >>45109129 #>>45109143 #>>45109176 #>>45109242 #>>45109344 #>>45109424 #>>45109874 #>>45110957 #>>45111490 #>>45112791 #>>45113305 #>>45114522 #>>45114640 #>>45114837 #
jonas21 No.45109242
Do you not see ChatGPT and Claude as viable alternatives to search? They've certainly replaced a fair chunk of my queries.
replies(6): >>45109271 #>>45109465 #>>45109900 #>>45110000 #>>45110287 #>>45113999 #
1. bediger4000 No.45109271
I do not. I prefer to read the primary sources. LLM summaries are, after all, probabilistic and based on syntax. I'm often looking for semantics, and an LLM really is not going to give me that.
replies(8): >>45109288 #>>45109394 #>>45109428 #>>45109487 #>>45109535 #>>45109711 #>>45109742 #>>45113375 #
2. crazygringo No.45109288
Funny, I use LLMs for so much search now because they understand my query semantically, not just its syntax. Keyword matching fails completely for certain types of searching.
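
A minimal sketch of that distinction, using toy NumPy vectors in place of a real embedding model (the words and numbers below are made up for illustration): keyword overlap scores zero when query and document share no terms, while embedding similarity can still pick up that they mean the same thing.

    import numpy as np

    query = "how do I stop my laptop overheating"
    doc = "thermal throttling fixes for notebook computers"

    # Keyword matching: no shared terms at all, so a pure keyword engine finds nothing.
    print(set(query.split()) & set(doc.split()))  # set()

    # Hypothetical word embeddings; a real system would get these from a model.
    emb = {
        "laptop":      np.array([0.90, 0.10, 0.00]),
        "notebook":    np.array([0.85, 0.15, 0.05]),
        "overheating": np.array([0.10, 0.90, 0.20]),
        "thermal":     np.array([0.12, 0.88, 0.25]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Semantically related words end up close together despite zero lexical overlap.
    print(cosine(emb["laptop"], emb["notebook"]))      # ~0.99
    print(cosine(emb["overheating"], emb["thermal"]))  # ~0.99
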
replies(1): >>45111930 #
3. throwaway314155 No.45109394
ChatGPT provides sources for a lot of queries, particularly if you ask. I'm not defending it, but you can get what you claim to want in an easier interface than Google.
4. sothatsit No.45109428
Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. It has become my go-to search tool because even though it is slower, the results are better. Especially for things like finding documentation.

I basically only use Google for "take me to this web page I already know exists" queries now, and maps.

replies(1): >>45109558 #
5. the_duke No.45109487
Gemini 2.5 always provides a lot of references, without being prompted to do so.

ChatGPT 5 also does, especially with deep research.

6. whycome No.45109535
Since when does Google give you primary sources for simple queries? You have to wade through all the garbage. At least an LLM will give you the general path and provide sources.
replies(1): >>45123368 #
7. Rohansi No.45109558
> pretty great at linking you to primary sources

Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source.

It's easy to assume it's always accurate when it generally is. But it's not always.

replies(2): >>45109708 #>>45112561 #
8. sothatsit No.45109708{3}
I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them.

The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Although, hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query. Plus, it usually doesn't take very long to verify the information.

I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. Saying that, I definitely don't trust LLMs in any cases where information is subjective. But when you're just talking about fact search, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). That said, I have also run into a number of problems where the documentation is out-of-date, but there's not much an LLM could do about that.
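
(A minimal sketch of the cheap end of that checking, stdlib only; the URLs in the loop are placeholders. An HTTP request catches links that don't exist at all, though a page that loads can still be misrepresented.)

    import urllib.request
    import urllib.error

    def link_exists(url: str, timeout: float = 5.0) -> bool:
        # Only catches fully hallucinated URLs; it cannot tell you
        # whether the page actually supports the claim it was cited for.
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-check/0.1"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    # Hypothetical links pasted out of a model's answer.
    for url in ["https://docs.python.org/3/library/urllib.request.html",
                "https://docs.python.org/3/library/does-not-exist.html"]:
        print(url, link_exists(url))
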

9. scarface_74 No.45109711
ChatGPT gives you web citations from real-time web searches.
10. hackinthebochs No.45109742
That Searlesque syntax/semantics dichotomy isn't as clear cut as it once was. Yes, programs operate syntactically. But when semantics is assigned to particular syntactic structures, as it is with word embeddings, the computer is then able to operate on semantics through its facility with syntax. These old standard thought patterns need to be reconsidered in the age of LLMs.
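
A toy illustration of that point, with made-up 2-D vectors standing in for real embeddings: every step below is blind numeric manipulation as far as the machine is concerned, yet the nearest-neighbour answer tracks meaning.

    import numpy as np

    # Hypothetical embeddings; real models use hundreds of dimensions.
    vocab = {
        "king":  np.array([0.9, 0.8]),
        "queen": np.array([0.9, 0.2]),
        "man":   np.array([0.1, 0.8]),
        "woman": np.array([0.1, 0.2]),
    }

    # Pure "syntax": arithmetic on arrays of numbers.
    target = vocab["king"] - vocab["man"] + vocab["woman"]

    # Yet the nearest neighbour of the result is the semantically right word.
    print(min(vocab, key=lambda w: np.linalg.norm(vocab[w] - target)))  # queen
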
11. balder1991 No.45111930
Also, weirdly, LLMs like ChatGPT can give good sources that usually wouldn't be at the top of a Google query.
replies(1): >>45112548 #
12. matwood No.45112548
There's a particular Italian government website, and the only way I can find it is through ChatGPT. It's a subsite under another site, and I assume it's the context of my question that surfaces it where Google wouldn't.
13. matwood No.45112561{3}
> It's easy to assume it's always accurate when it generally is. But it's not always.

So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day.

replies(2): >>45114882 #>>45115678 #
14. pas No.45113375
It's not syntax, it's data-driven (yes, of course syntax contributes to that).

https://freedium.cfd/https://vinithavn.medium.com/from-multi...

At its core, attention operates through three fundamental components — queries, keys, and values — that work together with attention scores to create a flexible, context-aware vector representation.

    Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.

    Key (K): Keys are vectors that represent the elements in the context against which the query is compared, to determine the relevance.

    Attention Scores: These are computed using Query and Key vectors to determine the amount of attention to be paid to each context token.

    Value (V): Values are the vectors that represent the actual contextual information. After calculating the attention scores using Query and Key vectors, these scores are applied against Value vectors to get the final context vector.
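
A minimal NumPy sketch of that computation, single-head and without the learned projection matrices (shapes and values here are arbitrary):

    import numpy as np

    def attention(Q, K, V):
        # Attention scores: compare each query against every key,
        # scaled by sqrt(d_k) to keep the softmax well-behaved.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # Softmax turns scores into weights that sum to 1 per query token.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Weighted sum of values: the final context-aware vectors.
        return weights @ V

    rng = np.random.default_rng(0)
    seq_len, d_k = 4, 8                   # 4 tokens, 8-dimensional vectors
    Q = rng.normal(size=(seq_len, d_k))   # queries
    K = rng.normal(size=(seq_len, d_k))   # keys
    V = rng.normal(size=(seq_len, d_k))   # values
    print(attention(Q, K, V).shape)       # (4, 8)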
15. Rohansi No.45114882{4}
It's a marketing issue. LLMs are being marketed like Tesla's FSD: claims of PhD-level intelligence, AGI, artificial superintelligence, etc. set the expectation that LLMs should be smarter than (most of) us. Why would we have any reason to doubt the claims of something that is smarter than us? Especially when it sounds so confident.
replies(1): >>45118987 #
16. johannes1234321 No.45115678{4}
It's a question of judgement in the individual case.

Documentation for a specific product I expect to be mostly right, though it may miss the required detail.

A blog by some author I haven't heard of, I trust less.

Some third-party sites I give some trust, others less.

AI is a mixed bag, while always implying authority on the subject (and turning submissive when corrected).

17. matwood No.45118987{5}
That's fair. The LLM hype has been next level, but it's only rivaled by the 'it never works for anything and will make you stupid' crowd.

Both are wrong in my experience.

18. blinding-streak No.45123368
Google's AI responses cite primary sources.