
858 points colesantiago | 2 comments
fidotron ◴[] No.45109040[source]
This is an astonishing victory for Google; they must be very happy about it.

They get basically everything they want (keeping it all in the tent), plus a stronger negotiating position on search deals, since they can now refuse terms by pointing to what the ruling forbids.

Quite why the rise of AI factored so heavily into the judge's reasoning here is beyond me. It's fundamentally an anticompetitive decision.

replies(14): >>45109129 #>>45109143 #>>45109176 #>>45109242 #>>45109344 #>>45109424 #>>45109874 #>>45110957 #>>45111490 #>>45112791 #>>45113305 #>>45114522 #>>45114640 #>>45114837 #
stackskipton ◴[] No.45109143[source]
Feels like the judge was looking for any excuse not to apply a harsh penalty, and since Google brought up AI as a competitor, the judge accepted that as an acceptable excuse for a very minor penalty.
replies(5): >>45109155 #>>45109230 #>>45109607 #>>45110548 #>>45111401 #
IshKebab ◴[] No.45109607[source]
AI is a competitor. You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.

You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all the queries you might use a search engine for, e.g. if you want to find a particular website. But there are many, many queries it can handle. Here are just a few from my recent history:

* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]

* In a PAM config, what do the keywords auth, account, password, session, and also required/sufficient mean?

* What do you call the thing that car roof bars attach to? The thing that goes front to back?

* How do I right-pad a string with spaces using printf?
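
To take the printf one: the answer boils down to a left-justified field width, something like this minimal C sketch (the width and strings here are made-up values for illustration, not ChatGPT's actual output):

    #include <stdio.h>

    int main(void) {
        /* %-10s left-justifies the string in a 10-character field,
           padding the right side with spaces */
        printf("[%-10s]\n", "hi");    /* prints [hi        ] */
        /* a negative '*' width does the same, taking the width as an argument */
        printf("[%*s]\n", -10, "hi"); /* prints [hi        ] */
        return 0;
    }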

These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.

Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)

replies(18): >>45109744 #>>45109797 #>>45109845 #>>45110045 #>>45110103 #>>45110268 #>>45110374 #>>45110635 #>>45110732 #>>45110800 #>>45110974 #>>45111115 #>>45111621 #>>45112242 #>>45112983 #>>45113040 #>>45113693 #>>45135719 #
harmmonica ◴[] No.45109797[source]
Exactly this. Another way of putting it is that LLMs are doing all the clicking, reading, researching and many times even the "creating" for me. I can watch it source things, and when I need to question whether it's hallucinating I get a shortcut, because I can see all the steps that went into finding the info it's presenting. And on top of replacing Google Search, it's now creating images, diagrams, drawings and endless other "new work" that Google Search could never do for me in the first place.

I swear, in the past week alone, things that would've taken me weeks to do are taking hours. Some examples:

* Create a map with some callouts on it based on a pre-existing design (I would've needed several hours of professional, or at least solid amateur, design work to do this in the past; it took 10 minutes with ChatGPT).

* Figure out how much a rooftop solar system's output would be compromised by the shading of the roof at a specific address at different times of day (a task I literally couldn't have completed on my own).

* Structural load calculations for a post in a house (another one I couldn't have completed on my own).

Note that some of these things can't be wrong, so of course you can't blindly rely on ChatGPT. But every step of the way I'm taking any suspicious-sounding ChatGPT output and (ironically, I guess) running keyword searches on Google to make sure I understand exactly what ChatGPT is saying. And we're still talking orders of magnitude less time, less searching and less cost to do these things.

Edit: not to say that the judge's ruling in this case is right. Just saying that I have zero doubt that LLMs are an existential threat to Google Search, regardless of what Google's numbers said during their last earnings call.

replies(1): >>45112456 #
qnleigh ◴[] No.45112456[source]
> Structural load calculations for a post in a house

You're relying on ChatGPT for this? How do you check the result? That sounds kind of dangerous...

replies(1): >>45112647 #
harmmonica ◴[] No.45112647[source]
Not dangerous in this implementation. I knew going in that there was likely a significant margin for error. I would not rely on ChatGPT if I were endangering myself, my people or anyone else for that matter (though this project is at my place).

That said, the word "relying" is taking it too far. I'm relying on myself to be able to vet what ChatGPT is telling me. And the great thing about ChatGPT and Gemini, at least the way I prompt, is that they give me the entire path they took to get to the answer. So when one presents a "fact", in this example a load calculation or the relative strength of a wood species, for instance, I take the details of that, look it up on Google and make sure the info it presented is accurate. If you ask yourself "how's that saving you time?", the answer is that in the past I would've had to hire an engineer to get me the answer, because I wouldn't even quite be sure how to get the answer myself. It's like the LLM is a thought partner that fills the gap in my ability to properly think about a problem, and then helps me understand and eventually solve the problem.
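
To give a concrete flavor of the kind of arithmetic being vetted, the core of a simple post check looks roughly like this C sketch. Every number below is a placeholder for illustration, not from my actual project, and a real check also has to account for column buckling, load combinations and code factors:

    #include <stdio.h>

    int main(void) {
        /* hypothetical inputs, for illustration only */
        double trib_area_sqft = 8.0 * 10.0;   /* floor area the post supports */
        double design_load_psf = 40.0 + 15.0; /* live load + dead load, psf */
        double axial_load_lb = trib_area_sqft * design_load_psf;
        printf("Axial load on post: %.0f lb\n", axial_load_lb); /* 4400 lb */

        /* rough capacity: allowable compressive stress (species/grade
           dependent, from published tables) times cross-section area */
        double allowable_psi = 1000.0;    /* placeholder table value */
        double area_sqin = 3.5 * 3.5;     /* actual size of a nominal 4x4 */
        printf("Rough capacity: %.0f lb\n", allowable_psi * area_sqin); /* 12250 lb */
        return 0;
    }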

replies(2): >>45113173 #>>45113664 #
ozgrakkurt ◴[] No.45113664[source]
How you "vet" something technical that you can't even do yourself is beyond me.

Vetting things is very likely harder than doing the thing correctly.

Especially when the thing you are vetting is designed to look correct more than to actually be correct.

You can picture a physics class where the teacher gives a trick problem/solution and 95% of the class doesn't realize it until the teacher walks back through it and explains.

replies(1): >>45119951 #
1. harmmonica ◴[] No.45119951[source]
Hey, just replied to a sibling comment of yours that sort of addresses your point; mentioning it here in case you didn't see it, since I didn't reply to you directly. One thing that reply didn't cover, so I'll add it here: I disagree that the LLM is actually designed to look correct more than to be correct. I might have a blind spot, but I don't think that's a logical conclusion about LLMs; if you have special insight into why it's the case, please do share. That does happen, of course, but I don't think it's intentional, part of the explicit design, or even inherent to the design. As I said, open to being educated otherwise.
replies(1): >>45139617 #
2. qnleigh ◴[] No.45139617[source]
> designed to look correct more than it's trying to actually be correct

This might not quite be true, strictly speaking, but a very similar statement definitely is. LLMs are highly prone to hallucinations, a term you've probably heard a lot in this context. One reason for this is that they are trained to predict the next word in a sequence. In this game, it's almost always better to guess than to output 'I'm not sure' when you might be wrong. LLMs therefore don't really build up a model of the limits of their own 'knowledge'; they just guess until their guesses get better.
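
A toy way to see this: at each step the model just emits a high-probability token from a distribution, and nothing forces an "unsure" token to win. A made-up C sketch (invented numbers, nothing like a real model's internals):

    #include <stdio.h>

    int main(void) {
        /* made-up next-token candidates for "The bridge was built in ..." */
        const char *tokens[] = { "1947", "1952", "unsure" };
        double prob[] = { 0.45, 0.40, 0.15 }; /* invented probabilities */
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (prob[i] > prob[best]) best = i;
        /* prints "1947" -- a confident-sounding guess, right or wrong */
        printf("model says: %s\n", tokens[best]);
        return 0;
    }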

These hallucinations are often hard to catch, in part because the LLM sounds confident whether or not it is hallucinating. It's this tendency that makes me nervous about your use case. I asked an LLM about world energy consumption recently, and when it couldn't find an answer online in the units I asked for, it just gave a number from a website and changed (not converted) the units. I almost missed it, because the number itself matched the source website!
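
(To make that concrete with hypothetical numbers: if a site says roughly 170,000 TWh and you asked for exajoules, the right answer is 170,000 × 0.0036 ≈ 610 EJ, but the model can just as happily say "170,000 EJ", and it looks plausible next to the source.)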

Stepping back, I actually agree that you can learn new things like this from LLMs, but you either need to be able to verify the output or the stakes need to be low enough that it doesn't matter if you can't. In this case, even if you can verify the math, can you be sure that it's doing the right calculation in the right way? Did it point out the common mistakes that beginners make? Did it notice that you're attaching the support beam incorrectly?

Chances are, you've built everything correctly and it will be fine. But the chances of a mistake are clearly much higher than if you talked to an experienced human (professional or otherwise).