But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.
This makes it rather unusable as a catch-all, go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that; it just sends them to potentially less trustworthy sources.
When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products was the worst in that dimension. Black Nazis, eating stones, pizza with glue, etc. I suppose we've all been there.
The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.
However, the AI should be able to search the web and present its findings without refusals, always citing its sources. It should never use an authoritative tone, it should be transparent about the steps it took to gather the information, and it should present the sites and leads it didn't follow.
Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions that they were useless (to me). Things like over-active refusals in response to questions about "killing child processes".
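For anyone outside systems programming: "killing a child process" is completely mundane. A minimal Python sketch of the kind of code such a question is usually about (the sleep command is just a placeholder for any long-running child):

    import subprocess

    # Spawn a long-running child process.
    child = subprocess.Popen(["sleep", "60"])

    # "Kill" the child: terminate() sends SIGTERM; child.kill() would send SIGKILL.
    child.terminate()
    child.wait()  # Reap it so it doesn't linger as a zombie.
    print(f"Child exited with return code {child.returncode}")

Refusing to discuss that is refusing to discuss everyday process management.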