fidotron
This is an astonishing victory for Google, they must be very happy about it.

They get basically everything they want (keeping it all in the tent), plus a stronger negotiating position on search deals, since they can now refuse terms on the grounds that they're no longer allowed to agree to them.

Quite why the rise of AI factored so heavily into the judge's reasoning is beyond me. It's fundamentally an anticompetitive decision.

jonas21
Do you not see ChatGPT and Claude as viable alternatives to search? They've certainly replaced a fair chunk of my queries.
bediger4000
I do not. I prefer to read the primary sources; LLM summaries are, after all, probabilistic and based on syntax. I'm often looking for semantics, and an LLM really is not going to give me that.
pas
It's not syntax; it's data-driven (though of course syntax contributes to that).

https://freedium.cfd/https://vinithavn.medium.com/from-multi...

At its core, attention operates through three fundamental components — queries, keys, and values — that work together with attention scores to create a flexible, context-aware vector representation.

    Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.

    Key (K): Keys are vectors that represent the elements in the context against which the query is compared to determine relevance.

    Attention Scores: These are computed from the Query and Key vectors to determine how much attention should be paid to each context token.

    Value (V): Values are the vectors that represent the actual contextual information. After the attention scores are computed from the Query and Key vectors, they are applied to the Value vectors to produce the final context vector.
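
To make that concrete, here is a minimal NumPy sketch of scaled dot-product attention, the standard formulation of Q/K/V attention that the quoted passage describes. The shapes, projection matrices, and variable names are illustrative assumptions, not code from the linked article.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v)
        d_k = Q.shape[-1]
        # Attention scores: similarity of each query to each key, scaled by sqrt(d_k).
        scores = Q @ K.T / np.sqrt(d_k)                      # (seq_q, seq_k)
        # Softmax over the keys turns the scores into weights that sum to 1 per query.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output row is a weighted average of the value vectors.
        return weights @ V                                   # (seq_q, d_v)

    # Toy example: 3 context tokens with 4-dimensional embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))                              # token embeddings
    W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
    out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
    print(out.shape)                                         # (3, 4)

The softmax weights here are the "attention scores" from the passage above; applying them to V is what produces the final context vector for each query token.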