
Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points by vinhnx | 26 comments
levocardia ◴[] No.43662083[source]
Google is winning on every front except... marketing (Google has a chatbot?), trust (who knew the founding fathers were so diverse?), safety (where's the 2.5 Pro model card?), market share (fully one in ten internet users on the planet are weekly ChatGPT users), and, well, vibes (who's rooting for big G, exactly?).

But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.

replies(17): >>43662192 #>>43662408 #>>43662475 #>>43662882 #>>43662886 #>>43663093 #>>43663446 #>>43663516 #>>43663774 #>>43664230 #>>43665053 #>>43665425 #>>43665442 #>>43666747 #>>43667190 #>>43667707 #>>43676555 #
1. 8f2ab37a-ed6c ◴[] No.43662192[source]
Google is also terribly paranoid of the LLM saying anything controversial. If you want a summary of some hot topic article you might not have the time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.
replies(8): >>43662265 #>>43662337 #>>43662712 #>>43662995 #>>43663167 #>>43663466 #>>43667526 #>>43674275 #
2. AznHisoka ◴[] No.43662265[source]
The single reason I will never, ever be a user of theirs. It's a hill I will die on.
3. silisili ◴[] No.43662337[source]
I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.

This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that; it leads them to potentially less trustworthy sources.

replies(3): >>43663493 #>>43664575 #>>43667189 #
4. logicchains ◴[] No.43662712[source]
Not a fan of Google, but if you use Gemini through AI Studio with a custom prompt and the safety filters disabled, it's by far the least censored commercial model in my experience.
replies(2): >>43663248 #>>43668713 #
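For readers unfamiliar with the setup described above, here is a minimal sketch of what "custom prompt and filters disabled" looks like against the Gemini API (the programmatic counterpart to AI Studio), assuming the google-generativeai Python SDK; the model name and system prompt are illustrative placeholders, not a recommendation:

    import google.generativeai as genai
    from google.generativeai.types import HarmCategory, HarmBlockThreshold

    genai.configure(api_key="YOUR_API_KEY")

    model = genai.GenerativeModel(
        "gemini-2.5-pro",  # illustrative model name
        system_instruction="Answer directly and completely.",  # the "custom prompt"
        safety_settings={  # set every blockable category to BLOCK_NONE
            HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
        },
    )

    print(model.generate_content("Summarize the article text pasted here.").text)

AI Studio exposes the same safety thresholds under its run settings, so the web UI and the SDK can reach essentially the same configuration.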
5. jsemrau ◴[] No.43662995[source]
>Google is also terribly paranoid of the LLM saying anything controversial.

When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products has been the worst in that dimension. Black Nazis, eating stones, glue on pizza, etc. I suppose we've all been there.

replies(2): >>43663183 #>>43676116 #
6. miohtama ◴[] No.43663167[source]
I think that's the "trust" bit. In AI, trust generally means "let's not offend anyone and water it down until it's useless." Google is paranoid about being sued or getting bad attention if Gemini says something about Palestine or draws images in the style of Studio Ghibli. Meanwhile, users love these topics, and memes are free marketing.
7. rahidz ◴[] No.43663183[source]
The ghost of Tay still haunts every AI company.
replies(1): >>43663474 #
8. einsteinx2 ◴[] No.43663248[source]
Less censored than Grok?
replies(1): >>43664954 #
9. rat87 ◴[] No.43663466[source]
Seems like a feature. The last thing we need is a bunch of people willing to take AI at its word while it makes up shit about controversial topics. I'd say redirecting to a good or prestigious source is probably the best you can do.
replies(1): >>43663603 #
10. rat87 ◴[] No.43663474{3}[source]
As it should. The potential for harm from LLMs is significant, and AI companies should be aware of that.
11. rat87 ◴[] No.43663493[source]
Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or several if needed, is much better.
replies(1): >>43664085 #
12. StefanBatory ◴[] No.43663603[source]
I remember when LLMs first appeared: on a local social website in my country (think Digg), a lot of people were ecstatic because they got ChatGPT to say that black people are dumb, claiming it as a victory over woke :P
13. aeonik ◴[] No.43664085{3}[source]
I talk to ChatGPT about some controversial things, and it's pretty good at nuance and playing devil's advocate if you ask for it. If you don't ask, it's more of an echo chamber, or rather it applies an extreme principle of charity, which might be a good thing.
14. ranyume ◴[] No.43664575[source]
> Refusing to answer their questions doesn't squash that; it leads them to potentially less trustworthy sources.

But that's good

replies(1): >>43666300 #
15. nova22033 ◴[] No.43664954{3}[source]
How many people use Grok for real work?
replies(1): >>43677010 #
16. thfuran ◴[] No.43666300{3}[source]
For who?
replies(1): >>43666673 #
17. ranyume ◴[] No.43666673{4}[source]
For the reader.

The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.

However, the AI should be able to search the web and present its findings without refusals, always citing its sources. And the AI should never use an authoritative tone; it should be transparent about the steps it took to gather the information and list the sites and leads it didn't follow.

replies(2): >>43667569 #>>43670600 #
18. yieldcrv ◴[] No.43667189[source]
DeepSeek to circumvent Western censorship

Claude to circumvent Eastern censorship

Grok Unhinged for a wild time

19. ◴[] No.43667526[source]
20. LightBug1 ◴[] No.43667569{5}[source]
Yes, Musk's claim of an AI that tries to tell the truth, no matter what, is straight-up horse manure. He should be done for false advertising (as usual).
replies(1): >>43668218 #
21. thfuran ◴[] No.43668218{6}[source]
Elon Musk has been an endless stream of false advertising for years.
22. int_19h ◴[] No.43668713[source]
Most of https://chirper.ai runs on Gemini 2.0 Flash Lite, and it has plenty of extremely NSFW content generated.
23. wegfawefgawefg ◴[] No.43670600{5}[source]
"If i never choose, I can never be wrong. Isnt that great?"
24. dorgo ◴[] No.43674275[source]
Try asking ChatGPT to solve a captcha for you (character recognition in a foreign language); it will refuse. AI Studio doesn't.
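For comparison, a hypothetical sketch of the same kind of request made programmatically against the Gemini API (the API behind AI Studio), using the google-generativeai SDK and Pillow; the model name and file path are placeholders:

    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash")  # placeholder model name

    image = Image.open("sign.png")  # any image containing foreign-language text
    response = model.generate_content(
        ["Transcribe the characters in this image and transliterate them.", image]
    )
    print(response.text)

Whether a given request is refused still depends on the model and the safety settings in effect; the parent comment's point is only that the AI Studio/API path tends to be more permissive than the ChatGPT UI.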
25. bmcahren ◴[] No.43676116[source]
From day one. We would have had LLMs years earlier if Google hadn't been holding back. They knew the risk: Google Search would be dead as soon as the internet was flooded with AI content that Google could not distinguish from real content.

Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions that they were useless (to me), with things like overactive refusals in response to "killing child processes".

26. polski-g ◴[] No.43677010{4}[source]
I do. It is absolutely astounding for coding.