
858 points colesantiago | 8 comments
fidotron ◴[] No.45109040[source]
This is an astonishing victory for Google; they must be very happy about it.

They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals where they can refuse something because they can't do it now.

Quite why the rise of AI factored so heavily into the judge's reasoning here is beyond me. It's fundamentally an anticompetitive decision.

stackskipton ◴[] No.45109143[source]
Feels like the judge was looking for any excuse not to apply a harsh penalty, and since Google brought up AI as a competitor, he accepted it as an acceptable excuse for a very minor one.
IshKebab ◴[] No.45109607[source]
AI is a competitor. You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.

You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all the queries you might use a search engine for, e.g. if you want to find a particular website. But there are many, many queries that it can handle. Here are just a few from my recent history:

* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]

* In a PAM config, what do the keywords auth, account, password, session, and required/sufficient mean?

* What do you call the thing that car roof bars attach to? The thing that goes front to back?

* How do I right-pad a string with spaces using printf?

These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.
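The printf question, at least, has a one-line answer: the `-` flag left-justifies the field, which pads the right side with spaces. A minimal sketch in Java, whose format strings mirror C's printf flags (the width 10 and the string "hi" are just illustrative):

```java
public class PadRight {
    public static void main(String[] args) {
        // %-10s left-justifies "hi" in a 10-character field,
        // i.e. pads it on the right with spaces up to width 10.
        String padded = String.format("%-10s", "hi");
        System.out.printf("[%s]%n", padded); // prints [hi        ]
    }
}
```

The same `%-10s` works verbatim in C's printf and in the shell's printf builtin.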

Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)
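On the PAM question above: each line in a /etc/pam.d/ file names a management group (auth, account, password, session), a control flag, and a module. "required" means the module must succeed for the stack to succeed, but later modules still run; "sufficient" means success ends the stack immediately, provided no earlier required module has failed. An illustrative fragment (the module choices here are just examples, not a recommended config):

```
# /etc/pam.d/example  (illustrative)
# group    control      module
auth       sufficient   pam_unix.so   # success here ends the auth stack immediately
auth       required     pam_deny.so   # reached only if the line above didn't succeed
account    required     pam_unix.so   # account validity checks (expiry, access)
password   required     pam_unix.so   # policy for changing the password
session    optional     pam_motd.so   # per-login session setup; failure is ignored
```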

bigstrat2003 ◴[] No.45110374[source]
I don't agree that ChatGPT gives an overall better experience than Google, let alone an actual good search engine like Kagi. It's very rare that I need to ask something in plain English because I just don't know what the keywords are, so the one edge the LLM might have is moot. Meanwhile, because it bullshits a lot (not just sometimes, a lot), I can't trust anything it tells me. At least with a search engine I can figure out if a given site is reliable or not, with the LLM I have no idea.

People say all the time that LLMs are so much better for finding information, but that's completely at odds with my own experience.

Andrew_nenakhov ◴[] No.45112514{3}[source]
ChatGPT, Grok, and the like give an overall better experience than Google because they give you the answer, not links to pages where you might find the answer. So unless I'm explicitly searching for something, like a particular article, asking Grok is faster and gets me an acceptable answer.
1. dns_snek ◴[] No.45112640{4}[source]
You get an acceptable answer maybe about 60% of the time, assuming most of your questions are really simple. The other 40% of the time it's complete nonsense dressed up as a reasonable answer.
2. Andrew_nenakhov ◴[] No.45112687[source]
In my experience I get acceptable answers to more than 95% of the questions I ask. In fact, I rarely use search engines now. (Btw, I jumped off Google almost a decade ago; I've been using DuckDuckGo as my main search engine since.)
3. sfdlkj3jk342a ◴[] No.45113192[source]
Have you used Grok or ChatGPT in the last year? I can't remember the last time I got a nonsense response. Do you have a recent example?
4. dns_snek ◴[] No.45113329[source]
Yes, I (try to) use them all the time. I regularly compare ChatGPT, Gemini, and Claude side by side, especially when I sniff something that smells like bullshit. I probably have ~10 chats from the past week with each one. I ask genuine questions expecting a genuine answer; I don't go out of my way to try to "trick" them, but often I'll get an answer that doesn't seem quite right and then I dig deeper.

I'm not interested in dissecting specific examples because that's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accept sloppy incorrect answers as high-quality factual ones.

Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them; after all, we didn't always have high-quality information at our fingertips.

The only thing that worries me is how really smart people can consume this slop and somehow believe it to be high-quality information, and present it as such to other impressionable people.

Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.

5. tim1994 ◴[] No.45113854[source]
I think the problem is that they cannot communicate that they don't know something, and instead make up some BS that sounds somewhat reasonable. Probably due to how they are built. I notice this regularly when asking questions about new web platform features where there is not enough information in the training data.
6. lelanthran ◴[] No.45116284{3}[source]
> Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.

Do you have a few examples? I'm curious because I have a very sensitive BS detector. In fact, just about anyone asking for examples, like the GP, has a sensitive BS detector.

I want to compare the complexity of my questions to the complexity of yours. Here's my most recent one, where I'm fully capable of judging the level of BS in the answer:

    I want to parse markdown into a structure. Leaving aside the actual structure for now, give me an exhaustive list of markdown syntax that I would need to parse.
It gave me a very large list, pointing out CommonMark-specific stuff, etc.

I responded with:

    I am seeing some problems here with the parsing: 1. Newlines are significant in some places but not others. 2. There are some ambiguities (for example, nested lists which may result in more than four spaces at the deepest level can be interpreted as either nested lists or a code block) 3. Autolinks are also ambiguous - how can we know that the tag is an autolink and not HTML which must be passed through? There are more issues. Please expand on how they must be resolved. How do current parsers resolve the issues?

Right. I've shown you mine. Now you show yours.
7. svieira ◴[] No.45118325[source]
Today, I asked Google if there was a constant time string comparison algorithm in the JRE. It told me "no, but you can roll your own". Then I perused the links and found that MessageDigest.isEqual exists.
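For reference, the method in question is a static comparison on java.security.MessageDigest. A minimal sketch (the token values here are made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ConstantTimeCompare {
    public static void main(String[] args) {
        byte[] expected = "s3cret-token".getBytes(StandardCharsets.UTF_8);
        byte[] supplied = "s3cret-tokex".getBytes(StandardCharsets.UTF_8);

        // MessageDigest.isEqual examines every byte rather than returning
        // at the first mismatch, so for equal-length inputs the comparison
        // time doesn't leak where they differ (unlike Arrays.equals,
        // which short-circuits).
        System.out.println(MessageDigest.isEqual(expected, supplied)); // prints false
    }
}
```

(The constant-time behavior dates to JDK 6u17; earlier versions short-circuited.)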
8. johnnyanmac ◴[] No.45135742[source]
You might want to fact-check those answers. Them "sounding" correct doesn't mean they are correct.