
858 points | colesantiago
fidotron No.45109040
This is an astonishing victory for Google, they must be very happy about it.

They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals where they can refuse something because they can't do it now.

Quite why the judge is so concerned about the rise of AI factoring in here is beyond me. It's fundamentally an anticompetitive decision.

jonas21 No.45109242
Do you not see ChatGPT and Claude as viable alternatives to search? They've certainly replaced a fair chunk of my queries.
bediger4000 No.45109271
I do not. I prefer to read the primary sources. LLM summaries are, after all, probabilistic and based on syntax; I'm often looking for semantics, and an LLM really is not going to give me that.
sothatsit No.45109428
Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. It has become my go-to search tool because even though it is slower, the results are better. Especially for things like finding documentation.

I basically only use Google for "take me to this web page I already know exists" queries now, and for Maps.

Rohansi No.45109558
> pretty great at linking you to primary sources

Do you check all of the sources, though? Those can be hallucinated, and you may not notice unless you're always checking them. Or it could have misunderstood the source.

It's easy to assume it's always accurate when it generally is. But it's not always.

matwood No.45112561
> It's easy to assume it's always accurate when it generally is. But it's not always.

So like a lot of the internet? I don't really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard, and society uses it every day.

johannes1234321 No.45115678
It's a question of judgement in the individual case.

Documentation for a specific product I expect to be mostly right, though it may miss the required detail.

Some blog by an author I haven't heard of, I trust less.

Some third-party sites I give some trust, others less.

AI is a mixed bag, while always implying authority on the subject (and becoming submissive when corrected).