$250 is the highest-cost AI sub now. Not loving this direction.
I, personally, try to stay as far as possible from Google: Kagi for search, Brave for browsing (Firefox previously), Pro on OpenAI, etc.
We'll see how fair OpenAI will be with tracking and what have you (given that "improve the model for everyone" can be turned off), but Google? Nah.
It seems weird to me that they included an entertainment service in a "work"-related plan.
edit: also the Google AI Ultra link leads to AI Pro and there's no Ultra to choose from. GG Google, as always with their "launches".
However, if all this power is wasted on video generation, then even they will probably choke.
Then again, I guess your average Joe/Jane will looove generating 10 seconds of their daily WhatsApp stuff to share.
So here we are, with Google now wading into the waters of subscriptions. It's a good sign for those who are worried about AI manipulating them to buy things, and a bad sign for those who like the ad model.
Is the future going to be that everyone has an AI plan, just like a phone plan or internet plan, that they shell out $30-$300/mo for?
I honestly would greatly prefer it if it meant privacy, but many people seem to greatly prefer the ad-model or ad-subsidized model.
ETA: Subscription with ads is ad-subsidized. You pay less but watch more ads.
The Gemini 2.5 Pro 05/06 release, by Google's own reported benchmarks, was worse in 10/12 cases than the 3/25 version. Google rerouted all traffic for the 3/25 checkpoint to the 05/06 version in the API.
I’m also unsure who needs all of these expanded quotas because the old Gemini subscription had higher quotas than I could ever anticipate using.
LLM companies have just been eating the cost, hoping that people find the products useful enough while drastically subsidized that they'll stay on the hook once prices actually cover the expense.
Not a good development.
I don't know what the hate about $250 is; Flow alone is worth more.
If I had to guess from the features, I would have said 80 bucks. Absurdly high, but lots of little doodads and prototypes would make that price understandable.
250?!
I actually find that price worrying because it points to a degree of unsustainability in the economics of the products we've gotten used to.
It won't. For now the AI "market" is artificially distorted by billionaires and trillion-dollar companies dumping insane amounts of cash into NVDA, but when the money spigot dries up (which it inevitably will), prices are going to skyrocket and stay there for a loooong time.
I get what they're trying to do but if they were serious about this they would include some other small subscriptions as well... I should get some number of free movies on YouTube per month, I should be able to cancel a bunch of my other subscriptions... I should get free data with this or a free phone or something... I could see some value if I could actually just have one subscription but I'm not going to spend $250 a month on just another subscription to add to the pile...
The open models which have already been released can't be taken back now of course, but it would be foolish to assume that SOTA freebies will keep coming forever.
The Claude Code UX is nice imo, but I didn't get the impression Jules is like that.
"Worried about refugees? Here's some videos about refugees being terrible". Replace "refugee" with "people celebrating Genocide", etc, etc...
Not the people who haven't been trained to require the crutch.
Didn't they just release Material Design Expressive a few days ago [1]? Instead of bold shapes, bold fonts, and solid colors, here it's gradients, simple lines, frosted glass, and a single, clean sans-serif font. The bento-box slides look quite Apple-y too [2]. Swap Google Sans for SF Pro, pull back on the border radius a bit, and you've essentially got the Apple look. It does look great though.
[1]: https://news.ycombinator.com/item?id=43975352
[2]: https://blog.google/products/gemini/gemini-app-updates-io-20...
Yeah, that's my big problem with expensive subscriptions. If I buy 2.5 Pro today, who knows what it'll be in a month.
I think that's a great example of how a competitive market drives these costs to zero. When solid modeling software was new Pro/ENGINEER cost ~$100k/year. Today the much more capable PTC Creo costs $3-$30k depending on the features you want and SOLIDWORKS has full features down to $220/month or $10/month for non-professionals.
I don't think there was a claim that nobody would ever offer a partially subscription-, partially ad-funded service.
How does Google have the best models according to benchmarks but it can't do anything useful with them? Sheets with AI assist on things like pivot tables would be absolutely incredible.
"Google AI Ultra" is a consumer offering though, there's no API to have quotas for?
Pay-per-use for the moment, until market consolidation and/or commoditization.
And from a business perspective, this is enabling people from solo freelancers to mid managers and others for a fraction of the time and cost required to outsource to humans.
Not that I am personally in favor of this, but I can very much see the economics in these offerings.
Whether you find that you get $250 worth out of that subscription is going to be the big question.
Now they want $200, $250/mo which is borderline offensive, and you have to pay for any API use on top of that?
Obviously, the benefit is contingent on whether or not the models actually make your developers more productive.
I've seen so many people over the years just absolutely shit on ad based models.
But ad based models are probably the least regressive approach to commercial offerings that we've seen work in the wild.
I love ads. If you are smart you don't have to see them. If you are poor and smart you get free services without ads so you don't fall behind.
I notice that there are no free open source providers of LLM services at this point, it's almost as if services that have high compute costs have to be paid for SOMEHOW.
Hopefully we get a Juno for LLM soon so that whole cycle can start again.
The $20/month plan provides similar access. They hint that in the future the most intense reasoning models will be in the Ultra plan (at least at first). Paying more for the most intense models shouldn't be surprising.
There's plenty of affordable LLM access out there.
I get that Big Tech loves to try to pull you into their orbit whenever you use one of their services, but this risks alienating customers who won’t use those unrelated services and may begrudge Google making them pay for them.
Actually, that's not true. I do trust them - I trust them to collect as much data as possible and to exploit those data to the greatest extent they can.
I'm deep enough into AI that what I really want is a personal RAG service that exposes itself to an arbitrary model at runtime. I'd prefer to run inference locally, but that's not yet practical for what I want it to do, so I use privacy-oriented services like Venice.ai where I can. When there's no other reasonable alternative I'll use Anthropic or OpenAI.
I don't trust any of the big providers, but I'm realizing that I have baseline hostility toward Google in 2025.
KPI driven development with no interest in killing their cash cow.
These are the people who sat on transformers for 5 years because they were too afraid it would eat their core business, e.g. BERT.
One need only look at what Bell Labs did to magnetic storage to realize that a monopoly isn't good for research. In short: we could have had mass magnetic storage in the 1920s/30s instead of the 50s/60s.
A pop sci article about it: https://gizmodo.com/how-ma-bell-shelved-the-future-for-60-ye...
Very few services still commercially viable today actually force ads - meaning there is no paid tier available that removes them entirely.
I don't particularly like ads but this idea that any advertisement at any point for any good or service is by definition a cancer is a fringe idea, and a pretty silly one at that.
What I can afford right now is literally the ~20 EUR / month claude.ai pro subscription, and it works quite well for me.
For example - what if someone were to start a company around a fork of LiteLLM? https://litellm.ai/
LiteLLM, out of the box, lets you create a number of virtual API keys. Each key can be assigned to a user or a team, and can be granted access to one or more models (and their associated keys). Models are configured globally, but can have an arbitrary number of "real" and "virtual" keys.
Then you could sell access to a host of primary providers - OpenAI, Google, Anthropic, Groq, Grok, etc. - through a single API endpoint and key. Users could switch between them by changing a line in a config file or choosing a model from a dropdown, depending on their interface.
Assuming you're able to build a reasonable userbase, presumably you could then contract directly with providers for wholesale API usage. Pricing would be tricky, as part of your value prop would be abstracting away marginal costs, but I strongly suspect that very few people are actually consuming the full API quotas on these $200+ plans. Those that are, are likely working directly with the providers to reduce both cost and latency anyway.
The other value you could offer is consistency. Your engineering team's core mission would be providing a consistent wrapper for all of these models - translating between OpenAI-compatible, Llama-style, and Claude-style APIs on the fly.
Is there already a company doing this? If not, do you think this is a good or bad idea?
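To make the idea concrete, here's a minimal sketch of the routing core such a reseller would need: one request shape in, provider-specific shapes out. The provider prefixes, base URLs, and field names below are illustrative assumptions, not LiteLLM's actual internals or any provider's guaranteed API.

```python
# Hypothetical sketch: map a "provider/model" name to an upstream endpoint,
# and translate an OpenAI-style chat payload into an Anthropic-style one
# (Anthropic's Messages API keeps the system prompt as a top-level field).
PROVIDERS = {
    "openai/": {"base_url": "https://api.openai.com/v1", "style": "openai"},
    "anthropic/": {"base_url": "https://api.anthropic.com/v1", "style": "anthropic"},
    "groq/": {"base_url": "https://api.groq.com/openai/v1", "style": "openai"},
}

def route(model: str) -> dict:
    """Pick the upstream provider from the model name's prefix."""
    for prefix, cfg in PROVIDERS.items():
        if model.startswith(prefix):
            return {"model": model[len(prefix):], **cfg}
    raise ValueError(f"unknown provider for model {model!r}")

def translate(payload: dict, style: str) -> dict:
    """Convert an OpenAI-style chat payload to the target provider's shape."""
    if style == "openai":
        return payload  # already in the right shape
    msgs = payload["messages"]
    system = " ".join(m["content"] for m in msgs if m["role"] == "system")
    out = {
        "model": payload["model"],
        "max_tokens": payload.get("max_tokens", 1024),
        "messages": [m for m in msgs if m["role"] != "system"],
    }
    if system:
        out["system"] = system
    return out
```

Users would switch providers by changing the prefix on the model name; everything else (auth, billing, quota accounting per virtual key) layers on top of this.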
It costs the provider the same whether the user is asking for advice on changing a recipe or building a comprehensive project plan for a major software product - but the latter provides much more value than the former.
How can you extract an optimal price from the high-value use cases without making it prohibitively expensive for the low-value ones?
Worse, the "low-value" use cases likely influence public perception a great deal. If you drive the general public off your platform in an attempt to extract value from the professionals, your platform may never grow to the point that the professionals hear about it in the first place.
30% of people who use Google don't view their ads. It's hard to call a business where 30% of people don't pay successful. The news agencies picked up on this years ago, and now it's all paywalls.
This doesn't even get into the downstream effects of needing to coax people into spending more time on the platform in order to view more ads.
If they really want to win they should undercut OpenAI and convince people to switch. For $100 / month I'd downgrade my OpenAI Pro subscription and switch to Gemini Ultra.
*spoilers ahead*
where the lady had a fatal tumor cut out for emergency procedure, only for it to be replaced by a synthetic neural network used by a cloud service with a multi-tier subscription model where even the basic features are "conveniently" shoved into a paying tier, up until the point she's on life support after being unable to afford even the basic subscription.
Life imitates art.
There’s lots of people and companies out there with $250 to spend on these subscriptions per seat, but on a global scale (where Google operates), these are pretty niche markets being targeted. That doesn’t align well with the multiple trillions of dollars in increased market cap we’ve seen over the last few years at Google, Nvda, MS etc.
they've learned that they can shovel out pretty much anything and as long as they don't directly charge the end-user and they're able to put ads on it (or otherwise monetize it against the interest of the end user), they just don't care.
they've been criticized for years and years over their lack of standardization and relatively poorly-informed design choices especially when compared with Apple's HIG.
On-topic, yeah. PTC sells "Please Call Us" software that, in Windchill's example, is big and chunky enough to where people keep service contracts in place for the stuff. But, the cost is justifiable to companies when the Windchill software can "Just Do PLM", and make their job of designing real, physical products so much more effective, relative to not having PLM.
Easy: once the money spigot runs out and/or a proprietary model has a quality/feature set that other open-weight models can't match, it's game over. The open-weight models cost probably tens of millions of dollars to train; this is not sustainable.
And that's just training cost - inference costs are also massively subsidized by the money spigot, so the price for end users will go up from that alone as well.
Care to share that scrutiny?
Computers, internet, cell phones, smartphones, cameras, long distance communication, GPS, televisions, radios, refrigerators, cars, air travel, light bulbs, guns, books. Go back as far as you want and this still holds true. You think the majority of the planet could afford any of these on day 1?
I'll investigate. Thanks!
To be clear, I don't trust Venice either. It just seems less likely to me that they would both lie about their collection practices and be able to deeply exploit the data.
I definitely want locally-managed data at the very least.
They successfully solved it with advertising... and they also had the ability to cache results.
So no, I can't see companies getting all excited about buying $250/mo/user licenses for their employees for Google or ChatGPT to suck in their proprietary data.
Moore's law should help as well, shouldn't it? GPUs will keep getting cheaper.
Unless the models also get more GPU hungry, but 2025-level performance, at least, shouldn't get more expensive.
Current AI is Fast Fashion for computer people.
Of course, this is observably false as we have a long list of smaller models that require fewer resources to train and/or deploy with equal or better performance than larger ones. That's without using distillation, reduced precision/quantization, pruning, or similar techniques[0].
The real thing we need is more investment into reducing the computational resources needed to train and deploy models, and into model optimization (the best example being llama.cpp). I can tell you from personal experience that there is much lower interest in this type of research, and I've seen plenty of works rejected because "why train a small model when you can just tune a large one?" or "does this scale?"[1] I'd also argue that this is important because there's not infinite data nor compute.
[0] https://arxiv.org/abs/2407.05694
[1] Those works will outperform the larger models. The question is good, but it creates a barrier to funding. It costs a lot to test at scale; you can't get funding if you don't have good evidence, and it often won't be considered evidence if it isn't published. There are always more questions and every work is limited, but smaller-compute works face higher bars than big-compute works.
e.g. Nations who developed internet infrastructure later got to skip copper cables and go straight to optical tech while US is still left with old first-mover infrastructure.
AI doesn't seem unique.
“Free tier users relinquish all rights to their (anonymized) queries, which may be used for training purposes. Enterprise tier, for $200/mo, guarantees queries can only be seen by the user”
I'm not sure it's correct to measure the benefits of AI by the lines of code we write rather than by how much faster we ship quality features.
Yes, there are also high variable costs involved, so there’s also a floor to how cheap they can get today. However, hardware will continue to get cheaper and more powerful while users can still massively benefit from the current generation of LLMs. So it is possible for these products to become overall cheaper and more accessible using low-end future hardware with current generation LLMs. I think Llama 4 running on a future RTX 7060 in 2029 could be served at a pretty low cost while still providing a ton of value for most users.
The more basic assertion would be: something being expensive doesn't mean it can't be cheap later, as many popular and affordable consumer products today started out very expensive.
8 out of 10 attempts failed to produce audio, and of the two that did, only 1 didn't suck.
I suppose that's normal(?) but I won't be paying this much monthly if the results aren't better, or at least I'd expect some sort of refund mechanism.
AI Studio (web UI, free, will train on your data) vs API (won’t train on your data).
So far I have not been convinced that any particular platform is more than 3 months ahead of the competition.
In unrelated matters, I have a bridge to sell you, if you are interested.
The paper I linked explicitly mentions how Falcon 180B is outperformed by Llama-3 8B. You can find plenty of similar cases all over the lmarena leaderboard. This year's small model is better than last year's big model. But the Overton window shifts. GPT-3 was going to replace everyone. Then 3.5 came out and GPT-3 was shit. Then o1 came out and 3.5 was garbage.
What is "good accuracy" is not a fixed metric. If you want to move this to the domain of classification, detection, and segmentation, the same applies. I've had multiple papers rejected where our model with <10% of the parameters of a large model matches performance (obviously this is much faster too).
But yeah, there are diminishing returns with scale. And I suspect you're right that these small models will become more popular when those limits hit harder. But I think one of the critical things that prevents us from progressing faster is that we evaluate research as if they are products. Methods that work for classification very likely work for detection, segmentation, and even generation. But this won't always be tested because frankly, the people usually working on model efficiency have far fewer computational resources themselves. Necessitating that they run fewer experiments. This is fine if you're not evaluating a product, but you end up reinventing techniques when you are.
This generation of GPUs has worse performance for more $$$ than the previous generation. At best, $/perf has been a flat line for the past few generations. Given what fab realities are nowadays, along with what works best for GPUs (the bigger the die the better), it doesn't seem likely that there will be any price scaling in the near future, not unless there's some drastic change in fabrication prices from somewhere.
This also includes things like video and image generation, where certain departments might previously have been paying thousands of dollars for images or custom video. I can think of dozens of instances where a single Veo2/3 video clip would have been more than good enough to replace something we had to pay a lot of money and waste of a lot of time acquiring previously.
You might be comparing this to one-off developer tool purchases, which come out of different budgets. This is something that might come out of the Marketing Team's budget, where $250/month is peanuts relative to all of the services they were previously outsourcing.
I think people are also missing the $20/month plan right next to it. That's where most people will end up. The $250/month plan is only for people who are bumping into usage limits constantly or who need access to something very specific to do their job.
It's really not hard to save several hours of time over a month using AI tools. Even the Copilot autocomplete saves me several seconds here and there multiple times per hour.
Well, that does make sense then.
we could have our chat histories with Gemini apps activity turned off (decouple the training from the storage) — including no googler eyeballs on our work chats (call this “confidential mode”)
Jules had a confidential mode for these folks (no googler eyeballs on our work code tasks)
Deep research connected to GitHub and was also confidential (don’t train on the codebase, don’t look at the codebase)
The other stuff (videos etc) are obviously valuable but not a big draw for me personally right now…
The biggest draw for me is trusting I can work on private code projects without compromise of security
So for now, the pro plans are a good deal if you're using one provider heavily, in that you can theoretically get like a 90% discount on inference if you use it enough. They are essentially offering an uncapped amount of inference.
That said, these companies have every incentive to gradually reduce the relative value offered by these plans over time to make them profitable, and they have many levers they can use to accomplish that. So in the long run, API costs and 'pro plan' costs will likely start to converge.
See: ChatGPT's memory features. Also, the new "Projects" in ChatGPT, which allow you to create system prompts for a group of chats, etc. I imagine caching, at least in the traditional sense, is virtually impossible as soon as a user is logged in and uses any of these personalization features.
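A toy illustration of why that is (not any provider's real implementation, just a sketch): shared-prefix caches key on the leading tokens of the prompt, so per-user memory prepended to every request makes every prefix unique and the cache useless.

```python
# Toy prefix cache: pretend we cache KV state keyed on the prompt's first
# 64 characters. Anonymous users sharing one system prompt hit the cache;
# logged-in users with personalized memory prefixes never do.
class PrefixCache:
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def lookup(self, prompt: str, prefix_len: int = 64) -> None:
        key = prompt[:prefix_len]
        if key in self.store:
            self.hits += 1
        else:
            self.misses += 1
            self.store[key] = True  # stand-in for cached KV state

SYSTEM = "You are a helpful assistant. " * 4  # shared prefix, > 64 chars

anon = PrefixCache()
for q in ["capital of France?", "2+2?", "tallest mountain?"]:
    anon.lookup(SYSTEM + q)  # identical prefix: 1 miss, then all hits

logged_in = PrefixCache()
for user in ["alice", "bob", "carol"]:
    memory = f"[memory for {user}: likes hiking] "
    logged_in.lookup(memory + SYSTEM + "capital of France?")  # all misses
```

Same question, same system prompt, but the personalized prefix means the logged-in cache never gets a hit.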
Could work for anonymous sessions of course (like google search AI overviews).
Nobody outside of the major players (Microsoft, Google, Apple, Salesforce) has enough product suite eyeball time to justify a first-party subscription.
Most companies didn't target it in their first AI release because there was revenue laying on the ground. But the market will rapidly pressure them to support BYOLLM in their next major feature build.
They're still going to try to charge an add-on price on top of BYOLLM... but that margin is going to compress substantially.
Which means we're probably t minus 1 year from everyone outside the above mentioned players being courted and cut revenue-sharing deals in exchange for making one LLM provider their "preferred" solution with easier BYOLLM. (E.g. Microsoft pays SaaS Vendor X behind the scenes to drive BYOLLM traffic their way)
Ancient Rome began as a humble city-state around 753 BCE, nestled between seven hills like toppings layered on a well-constructed bun. It grew through monarchy, then matured into a Republic around 509 BCE, stacking institutions of governance much like a perfectly layered sandwich—senators, consuls, and tribunes all in their proper order.
Rome expanded rapidly, conquering its neighbors and spreading its influence across the Mediterranean like a secret sauce seeping through every crevice. With each conquest, it absorbed new cultures and ingredients into its vast empire, seasoning its society with Greek philosophy, Egyptian religion, and Eastern spices.
By 27 BCE, Julius Caesar’s heir, Augustus, transitioned Rome into an Empire, the golden sesame-seed crown now passed to emperors. Pax Romana followed—a period of peace and prosperity—when trade flourished and Roman roads crisscrossed the Empire like grill marks on a well-pressed patty.
However, no Empire lasts forever. Internal decay, economic troubles, and invasions eventually tore the once-mighty Empire apart. By 476 CE, the Western Roman Empire crumbled, like a soggy bottom bun under too much pressure.
Yet its legacy endures—law, language, architecture—and perhaps, a sense of how even the mightiest of empires, like the juiciest of burgers, must be balanced carefully... or risk falling apart in your hands.
I'm at a major financial company and we've had access to ChatGPT for over a year, along with explicit approval to upload anything while it's in enterprise mode.
It’s a solved problem - technical, regulatory, legal.
Do you mean the current half baked implementations or just the idea of AI in general?
> I don't understand the point of this comparison.
I don't understand the point of "AI."
Actually, they skipped cables entirely. Africa is mostly served by wireless phone providers.
The only reason I maintain Claude and OpenAI subscriptions is because I expect Google to pull the rug on what has been their competitive advantage since Gemini 2.5.
Have you also noticed a degradation in quality over long chat sessions? I've noticed it in NotebookLM specifically, but not Gemini 2.5. I anticipate this to become the standard, your chat degrades subtly over time.
Platforms want Planet Fitness type subscriptions, recurring revenue streams where most users rarely use the product.
That works fine at the $20/month price point but it won't work at $200+ per month because the instant I stop using an expensive plan, I cancel.
And if I want to use $1000 worth of the expensive plan I get stopped by rate limits.
Maybe the ultra-level would generate more revenue with bigger market share (but lower margin) with a pay-per-token plan.
Throwing the baby out with the bathwater, Google crumbles but a few more vacation homes get purchased and a larger inheritance is built up for the iPad-kid progeny of the Google management class.
It's in their interest to do right by their customers' data; otherwise there will be a heap of legal trouble and a major reputation hit, both of which impact the bottom line more than training on your data would. They can and do in fact make more money by offering to train dedicated models for your company on your data, without bringing that back into their own models.
You can be upset that the models were trained without compensating the people who made the training data. You can also believe that AI is overhyped, and/or that we're already near the top of the LLM innovation curve and things aren't going to get much better from here.
But I've had LLMs write entire custom applications for me, with the exact feature set I need for my own personal use case. I am sure this software did not somehow exist fully formed in the training data! The system created something new, and it's something that has real value, at least to me personally!
Tucking it towards the end of the list doesn't change that.
They need users to make them a mature product, and this rate-limits the number of users while putting a stake in the ground to help understand the "value" they can attribute to the product.
1080 Ti -> 2080: 10% faster for same MSRP
2080 -> 3080: ~70% faster for the same MSRP
3080 -> 4080: 50% faster, but $700 vs. $1200 is *more than 50% more expensive*
4080 -> 5080: 10% faster, but $1200 (or $1000 for 4080 Super) vs. $1400-1700 is again more than 10% more money.
So yes, your 1080 Ti -> 4080 is a huge leap, but there are basically just 2 reasons why: 1) the price also took a huge leap, and 2) the 20xx -> 30xx series was actually a generational leap, which unfortunately is an outlier, as the 20xx series, 40xx series, and 50xx series all were steaming piles of generational shit. Well, I guess to be fair to the 20xx, it did at least manage to not regress $/performance like the 40xx and 50xx series did. Barely.

If they don't use some legal workaround to consume it from the outset, they'll just roll out an automatic opt-in service or license change after you've already loaded all your data in.
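Taking the speedups and MSRPs quoted in that list at face value (not official benchmarks), the performance-per-dollar math checks out:

```python
# Back-of-envelope $/perf change per generation, using the numbers quoted
# above: (speedup multiplier, old price, new price). 5080 price taken
# roughly mid-range of the quoted $1400-1700.
gens = {
    "2080 -> 3080": (1.70, 700, 700),    # ~70% faster, same MSRP
    "3080 -> 4080": (1.50, 700, 1200),   # 50% faster, $700 -> $1200
    "4080 -> 5080": (1.10, 1200, 1500),  # 10% faster, price up again
}

def perf_per_dollar_change(speedup, old_price, new_price):
    """Ratio of new perf/$ to old perf/$; < 1.0 means a regression."""
    return (speedup / new_price) / (1 / old_price)

for name, (s, p0, p1) in gens.items():
    print(f"{name}: perf/$ x{perf_per_dollar_change(s, p0, p1):.2f}")
```

By these numbers the 30xx generation improved perf/$ by ~70%, while the 40xx and 50xx generations both regressed it by roughly 12%.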
It's negligence to trust any confidentiality provided by big tech, and they well and truly deserve that opinion.
Yeah, that's why OpenAI built a data center imo; the moat is in hardware.
Software? Even a small Chinese firm would be able to copy that. But 2 million GPUs? That's hard to copy.
Company 1 gets a bucket of investment, makes a model, goes belly up. Company 2 buys Company 1's model in a fire sale.
Company 3 uses some open source model that's basically as good as any other and just makes the prettiest wrapper.
Company 4 resells access to other company's models at a discount, similar to companies reselling cellular service.
The risk of being caught is minimal.
And history showed multiple times that companies lie.
Remember when nobody ever had access to the things people say to Alexa?
Came out, that wasn’t true.
For non-technical office jobs, LLMs will act like a good summer intern, and help to suppress new graduate hiring. Stuff like HR, legal, compliance, executive assistants, sales, marketing/PR, and accounting will all greatly benefit from LLMs. Programming will take much longer because it requires incredibly precise outputs.
One low hanging fruit for programming and LLMs: what if Microsoft created a plug-in for the VBA editor in Microsoft Office (Word, Excel, etc.) that could help write VBA code? For more than 25 years, I have watched non-technical people use VBA, and I have generally been impressed with the results. Sure, their code looks like shit and everything has hard-coded limits, but it helps them do their work faster. It is a small miracle what people can teach themselves with (1) a few chapters of an introductory VBA book, (2) some blog posts / Google searches, and (3) macro recording. If you added (4) an LLM, it would greatly boost the productivity of Microsoft Office power users.
> Can you... talk to a human for support?
You raise an interesting point. I wonder if there is a (low margin) business waiting to be started that provides technical support for LLMs. As I understand it, none of the major commercial LLMs provide technical support (I don't count stuff like billing or password resets). You could hire some motivated fresh grads in the Philippines and India who speak English, then offer technical support for LLMs. It could be a subscription model (ideal) or per-incident (corps pay 1000 USD upfront, then each incident is 25 USD, etc.). Literally: they will help you write a better prompt for the LLM. I don't think it is a billion dollar business, but it might work. I also think you could easily attract fresh grads, because they would be excited to use a wide variety of LLMs and could pad their CVs/resumes with all of this LLM prompting experience. (This will be a valuable skill going forward!)

Sending all your core IP through another company for them to judge your worthiness of existence is a nightmare on so many levels, the biggest example being payment processors trying to impose their religious doctrine on entire populations.
That story doesn’t line up with a product whose price point limits it to fewer than 25-50mn subscriptions shared between 5 inference vendors.
Have you tried say O1 Pro Mode? And if you have, do you find it as good as whatever free models you use?
If you haven't, it's kind of weird to do the comparison without actually having tried it.
The only way to "guarantee" that is to run your models locally on your own hardware.
I'm guessing we'll see a renaissance of the "desktop" and "workstation" cycle once this AI bubble pops. ("Cloud" will be the big loser.)
You can easily get 10x optimizations with some obvious changes.
You can run a small 100-person enterprise on a single 24 GB GPU right now. (And this is before economies of scale have started optimizing hardware.)
OpenAI needs to keep the illusion of an anthropomorphic AGI chatbot going to keep the investments flowing. This is expensive and stupid.
If you just want to solve the actual typical business problems ("check this picture for offensive content" and similar stuff) you don't need all that smoke and mirrors.
If you don't really have a problem to solve and you're just chatting, then "good" is just, like, your vibe, man.
I'm sure it did exist in the training data. It's trained on GitHub and Stack Overflow. Your "custom" application has already been written many times before.
I'm sorry, I just find that exceedingly hard to believe. There is a lot of legacy code out there in the world, but not that much!
Welcome to cloud world, where devs believe that compute is in fact infinite, so why bother profiling and improving your code? You can just request more cores and memory, and the magic K8s box will dutifully spawn more instances for you.
Much like social media, this will end in “if you aren’t paying for the product, then you are the product.”
Even when the code is not 100% correct, it's often faster to select it and make the small fix myself than to type all of it out. It's surprisingly good at keeping your naming patterns and using recent edits as context for what you're likely to do next around your cursor position, even across files.
Well then you might want to pull your pension and investments and keep them under your pillow in gold-bar format. In fact, maybe check out of the world's financial system entirely.
I don't know the technical details on how they arrived at that, but I assure you the big dogs have concluded this works.
Besides, half the world runs on Excel files saved in the cloud.
They noped right out when it turned out to be more like $20/month/user, not payable by purchase order, and instead spent a developer month cobbling together our own substitute involving Windows Subsystem for Linux, because it would pay off within two months.
I do really like the Deep Search on Grok for doing web search and analysis. It is saving me a ton of time.