But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.
This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that curiosity; it just drives them to potentially less trustworthy sources.
Who is? (Genuine question, it's hard to keep up given how quickly the field moves.)
When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products was the worst in that dimension. Black Nazis, eating stones, glue on pizza, etc. I suppose we've all been there.
So far this has been nothing but a PM wankfest but if Gemini-in-{Gmail,Meet,Docs,etc} actually gets useful, it could be a big deal.
I also don't think any of those concerns are as important for API users as for direct consumers. And I think API users are going to be a bigger part of the market as time goes on.
I definitely don't trust Google -- fool me once, and all -- but to the extent I'm going to "trust" any business with my data, I'd like to see a proven business model that isn't based on monetizing my information and is likely to continue to work (e.g., Apple). OpenAI doesn't have that.
ChatGPT has a nice consumer product, and I also like it.
Google gets a bad rap on privacy, etc., but if you read the documentation and set the privacy settings, I find them reasonable. (I read OpenAI's privacy docs for a long while before experimenting with their Mac terminal, VSCode, and IntelliJ integrations.)
We live in a cornucopia of AI tools. Occasionally, just for the hell of it, I'll do all my research work for several days using only open models running on my Mac via Ollama. I notice a slight hit in productivity, but it's still a good setup.
Something for everyone!
No one is going to build on top of anything "Google" without having a way out thought out in advance.
Not that important for LLMs, where drop-in replacements are usually available. But a lot of people just hear "by Google" now and think "thanks I'll pass" - and who can blame them?
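To illustrate why swapping is cheap: most major providers (and local runners like Ollama) now expose OpenAI-compatible chat endpoints, so the request payload stays identical and only the base URL and model name change. A minimal sketch, assuming the OpenAI-compatible endpoints below (the Gemini and Ollama URLs are their documented compatibility endpoints; model names are illustrative):

```python
import json

# De-facto standard chat-completion payload; only URL + model differ
# per provider. Adjust base_url/model for your own account or install.
PROVIDERS = {
    "openai": {"base_url": "https://api.openai.com/v1",
               "model": "gpt-4o"},
    "gemini": {"base_url": "https://generativelanguage.googleapis.com/v1beta/openai",
               "model": "gemini-2.5-pro"},
    "ollama": {"base_url": "http://localhost:11434/v1",
               "model": "llama3"},
}

def chat_request(provider: str, prompt: str) -> tuple[str, str]:
    """Build the URL and JSON body for a chat completion.

    The body is byte-for-byte identical across providers -- switching
    is literally a one-line config change.
    """
    cfg = PROVIDERS[provider]
    url = cfg["base_url"] + "/chat/completions"
    body = json.dumps({
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body
```

In practice you'd also swap the API key, but the point stands: there's no lock-in at the protocol level the way there is with, say, a proprietary cloud service.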
They've got the cash, the people, and the infrastructure to do things faster than the others going forward, which is a much bigger deal IMO than having millions more users right now. Most people still aren't using LLMs that often, switching is easy, and Google has the most obvious entry points with billion+ users with google.com, YouTube, gmail, chrome, android, etc.
Google can be good on the technological side of things, but we've seen time and time again that, outside of ads, Google is just not good at business.
What can OpenAI do? They can sell my data, whatever, it’s a whole bunch of prompts of me asking for function and API syntax.
In either case, I'm sure that's how it starts. "This company has very little power and influence; what damage can they do?"
Until, oh so suddenly, they're tracking and profiling you and selling that data.
Large corporations wind up creating internal policies, controls, etc. If you know anyone who works in engineering at Google, you'll find out about the privacy and security reviews required in launching code.
Startups, on the other hand, are the wild west. One policy one day, another the next, engineers are doing things that don't follow either policy, the CEO is selling data, and then they run out of money and sell all the data to god knows who.
Google is pretty stable. OpenAI, on the other hand, has been mega-drama you could make a movie out of. Who knows what it's going to be doing with data two or four years from now?
The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.
However, the AI should be able to search the web and present its findings without refusals, always citing its sources. It should never use an authoritative tone, should be transparent about the steps it took to gather the information, and should disclose the sites and paths it didn't follow.
Maybe Gemini is finally better, but I'm not exactly excited to give it a try.
Google’s main source of income, by far, is selling ads. Not just any ads but highly targeted ones, which means global digital surveillance is an essential part of their business model.
Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions they were useless (to me). Things like over-active refusals in response to "killing child processes".
It seems their revenue in 2024 exceeded $3B.
> they will sell your data to make ends meet
I’m not sure they can do that without breaching the contract. My employer pays for ChatGPT enterprise I use.
Another thing: OpenAI has only a very small amount of my data, because they only have what I entered into their web service. Google, on the other hand, tracks people across half of the internet, because half of all web pages contain ads served by Google. Too bad the antitrust regulators were asleep on the job when Google acquired DoubleClick, AdMob, and the rest of them.
With a loss of $5B. A viable business model needs more than revenue. It also needs profit.
It is not unusual for a business with visions for profitability to accept losses for a while to get there, but OpenAI does not seem to have such vision. They seem to be working off the old tech model of "If we get enough users we'll eventually figure something out" – which every other time we've heard that has ended up meaning selling user data.
Maybe this time will be different, but every time we hear that...
Cult of personality is blinding. But I could be wrong in my interpretation. Could you describe what that moral standing amounts to, in prose, without leaning on public-figure examples?