
Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points vinhnx | 10 comments
levocardia No.43662083
Google is winning on every front except... marketing (Google has a chatbot?), trust (who knew the founding fathers were so diverse?), safety (where's the 2.5 Pro model card?), market share (fully one in ten internet users on the planet are weekly ChatGPT users), and, well, vibes (who's rooting for big G, exactly?).

But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.

8f2ab37a-ed6c No.43662192
Google is also terribly paranoid about the LLM saying anything controversial. If you want a summary of some hot-topic article you might not have time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.
1. silisili No.43662337
I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.

This makes it rather unusable as a catch-all, go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that; it leads them to potentially less trustworthy sources.

2. rat87 No.43663493
Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or several if needed, is much better.
3. aeonik No.43664085
I talk to ChatGPT about some controversial things, and it's pretty good at nuance and playing devil's advocate if you ask for it. If you don't, it's more of an echo chamber, or rather it applies an extreme principle of charity, which might be a good thing.
4. ranyume No.43664575
> Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.

But that's good

5. thfuran No.43666300
For whom?
6. ranyume No.43666673
For the reader.

The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.

However, the AI should be able to search the web and present its findings without refusals, obviously always citing its sources. It should never use an authoritative tone, should be transparent about the steps it took to gather the information, and should present the sites and leads it didn't follow.

7. yieldcrv No.43667189
Deepseek to circumvent Western censorship

Claude to circumvent Eastern censorship

Grok Unhinged for a wild time

8. LightBug1 No.43667569
Yes, Musk's contention that his AI tries to tell the truth, no matter what, is straight-up horse manure. He should be done for false advertising (per usual).
9. thfuran No.43668218
Elon Musk has been an endless stream of false advertising for years.
10. wegfawefgawefg No.43670600
"If I never choose, I can never be wrong. Isn't that great?"