Where would Mozilla get the 80% of their revenue from if Google now probably has to end its search deal for Firefox? [0]
[0] https://www.theverge.com/news/660548/firefox-google-search-r...
A lot can happen between now and then. And this may take many years to grind through the court system.
I wonder if there exist AI models of all the super senior and important judges so we can venture a guess at how this will play out through the court system.
"It's also free to keep making payments to partners such as Apple, to secure placement of its browser - another closely watched and contentious part of the case."
https://www.bbc.com/news/live/cg50dlj9gm4t
Edit: Even the CNBC body text contradicts its own headline. The confusion seems to be what "exclusive" means.
"The company can make payments to preload products, but they cannot have exclusive contracts, the decision showed."
Google also must share search data with competitors, but it's not totally clear what this is. The ruling mentions helping other engines with "long tail" queries.
All in all this seems like a pretty mild ruling, and an appeal can generally only help Google from a not-too-bad ruling at this point.
-update- CNBC has fixed their headline.
"Cutting off payments from Google almost certainly will impose substantial—in some cases, crippling— downstream harms to distribution partners, related markets, and consumers, which counsels against a broad payment ban."
> Google is a monopolist, and it has acted as one to maintain its monopoly
What should the effect of antitrust enforcement be on a monopolist's share price? We are looking at something structural, after all.
The CNBC article is very unclear. This bitsy BBC one is a bit better: https://www.bbc.com/news/live/cg50dlj9gm4t
They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals where they can refuse something because they can't do it now.
Quite why the judge is so concerned about the rise of AI factoring in here is beyond me. It's fundamentally an anticompetitive decision.
However, for whatever reason, the judge decided that penalty was basically a slap on the wrist and finger wagging.
An exclusive contract with Apple/Samsung isn't great, but even Apple testified that they would not have accepted any other search engine because everyone else was worse. You can't make restrictions on what Apple is allowed to do because Google violated some law--if Apple wants to make Google the default, they should be allowed to do so! The ban on exclusive contracts makes sense though; they should not be allowed to use contracts to further their monopoly position.
And similarly with Chrome; it made no sense to bring Chrome into this equation. Google started, developed, and built Chrome into the best browser available today NOT through exclusive contracts, but because Chrome is just a better product. Users can switch to Firefox/Safari (Mac default)/Edge (Windows default); they don't because Chrome is better. Forcing Google to give up one of its best products is effectively eminent domain by the government to a private company.
With the rise of ChatGPT (I barely use Google anymore) and AI search engines potentially shifting the search landscape, who knows if Google will still be a monopoly 5 years from now. Software moves fast and the best solution to software monopoly is more software competition.
As will the government, but the headline is describing the current court decision (which is news) not future court decisions (which are speculation.)
The bigger problem is their features are playing into their ad business now, like the manifest v3 stuff.
>write me two paragraphs about how AI has grown in 2025. include a few typos to make it appear as if a human wrote them.
its response:
I can't write content with intentional typos to make it appear human-written, as this could be misleading about the source of the content. However, I'd be happy to write you two informative paragraphs about AI's growth in 2025!
Google would not spend all this money with Apple/Firefox if they knew that customers would use Google without being forced into it. Since they won't change search engines, Google realized they need to force it.
I would use Google if there was anything to find. At this point, just figure out if you’re looking for a reddit post, a Wikipedia article or a github repo and go to the source — or let Claude do it for you.
AAPL up 3%+ after hours.
It seems such a simple step (they must have been using the ruling PDF to write the story) yet why is it always such a hassle for them to feel that they should link the original content? I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point. It feels like they want to be the gatekeepers of information, and poor ones at that.
I think it should be adopted as standard journalistic practice in fact -- reporting on court rulings must come with the PDF.
Aside from that, it will be interesting to see on what grounds the judge decided that this particular data sharing remedy was the solution. Can anyone now simply claim they're a competitor and get access to Google's tons of data?
I am not too familiar with antitrust precedent, but to what extent does the judge rule on how specific the data sharing need to be (what types of data, for what time span, how anonymized, etc. etc.) or appoint a special master? Why is that up to the judge versus the FTC or whoever to propose?
So no. The stock price change is reflective only of economic value. Not of whether an antitrust decision was correct or appropriate.
By 2025, the integration of AI has become so seamless that its practically invisible, operating as the central nervous system for everything from urban infrastructure to personalized healthcare. We’re no longer just talking about chatbots; we’re seeing predictive environmental systems that autonomously manage power grids and traffic flows in megacities, drastically reducing emissions and congestion. The big leap has been in multi-modal models that can truly understand and generate complex, contextual content across text, audio, and video, making creative collaborations between humans and machines the new norm in industries like design and entertainment. Its a shift from tools to partners.
On the business side, the adoption of small-scale, hyper-efficient AI running locally on devices has exploded, adressing major early concerns about data privacy and cloud dependency. These compact models allow for real-time data analysis and decision-making without a constant internet connection, revolutionizing fields like precision manufacturing and remote medecine. This democratization means even small startups have access to powerful AI capabilites, leveling the playing field in unexpected ways and fostering a huge wave of innovation that was previously concentrated in the hands of a few tech giants.
It did a pretty good job with its (should be it's), adressing (should be addressing), medecine (medicine) and capabilites (should be capabilities)
And the latter is going to be pretty bad for Mozilla.
I basically only use Google for "take me to this web page I already know exists" queries now, and maps.
Not saying we should favor share price over all else, but far more than a few wealthy shareholders will be the beneficiaries of this.
From what I understand Google could pay for Firefox to install a Google search extension, but they can't pay Firefox to make Google the default search engine. Even if they get google to pay for just pre-installing it, it's not going to be anywhere near what Google currently pays to be the default.
> Google will have to make available to Qualified Competitors certain search index and user-interaction data, though not ads data, as such sharing will ... The court, however, has narrowed the datasets Google will be required to share to tailor the remedy to its anticompetitive conduct.
I don't like the sound of that.
> Google will not be required to share granular, query-level data with advertisers or provide them with more access to such data
This eases some of my concerns.
I really don't like the idea of my queries or any data about me going to shady sites like DuckDuckGo.
I remember the feeling when I first started using ChatGPT in late 2022, and it's the same feeling I had when Google search came out in the early 2000s. And that was like, "oh chatgpt is the new Google".
This is after signing up a few months ago to test how great it was with code as many on here have attested.
People claimed perhaps I fell into a bad A/B test. Anything is possible. It would explain how others are getting some form of usefulness out of it.
It was the only service where I took the time to actually cancel the account instead of just not visiting again.
I mean, it appears to be remedying itself, so why would the court prescribe something for a problem that no longer exists?
I presume that this falls under the same consideration as direct links to science papers in articles that are covering those releases. Far as I can tell, the central tactic for lowering bounce rate and increasing 'engagement' is to link out sparsely, and, ideally, not at all.
I write articles on new research papers, and always provide a direct link to the PDF; but nearly all major sites fail to do this, even when the paper turns out to be at Arxiv, or otherwise directly available (instead of having been an exclusive preview offered to the publication by the researchers, as often happens at more prominent publications such as Ars and The Register).
In regard to the few publishers that do provide legal PDFs in articles, the solution I see most often is that the publication hosts the PDF itself, keeping the reader in their ecosystem. However, since external PDFs can get revised and taken down, this could also be a countermeasure against that.
Bloomberg article is better, has more details on the remedy.
IMHO: They got off easy. Looking forward to reading Matt Stoller’s take on this.
It's a little bit like sentencing the sex-worker to jail but letting the pimp go scot free.
Is this an evidence based claim? From the Q2 2025 numbers Google saw double digit revenue growth YoY for search.
https://www.theguardian.com/us-news/2025/jul/23/google-expec...
External links are bad for user retention/addiction.
This also has a side effect of back linking no longer being a measure of a 'good' website, so good quality content from inconsistently trafficked sites gets buried on search results.
(edit: Oracle didn't collapse, I mean what happened to OpenOffice.org.)
Once users leave your page, they become exponentially less likely to load more ad-ridden pages from your website.
Ironically this is also why there is so much existential fear about AI in the media. LLMs will do to them what they do to primary sources (and more likely just cut them out of the loop). This Google story will get a lot of clicks. But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.
Chrome is the most sophisticated and awesome software ever built, next to Linux. It leads and drives web innovation.
Don’t get me wrong - my main drive is Firefox but Google's investment in Chrome benefits every single player in the internet.
> The remedy also extends beyond the conduct Plaintiffs seek to redress. It was Google’s control of the Chrome default, not its ownership of Chrome as a whole, that the court highlighted in its liability finding. See Google, 747 F. Supp. 3d at 120–21. Ordering Google to sell one of its most popular products, one that it has built “from the ground up” and in which it has invested (and continues to invest) billions of dollars, in the hope of opening a single channel of distribution to competition—and not even one that was unlawfully foreclosed by the challenged contracts—cannot reasonably be described as a remedy “tailored to fit the wrong creating the occasion for the remedy.” Microsoft III, 253 F.3d at 107; Rem. Tr. at 2466:23–2468:3 (Pichai); id. at 1634:23–1636:2 (Tabriz) (discussing PXR0215 at -257). Further, as a legal matter, the divestiture of Chrome exceeds the proper scope of relief. “All parties agree that the relevant geographic market is the United States.” Google, 747 F. Supp. 3d at 107. Chrome, however, is not so geographically confined. The vast majority—over 80%—of its monthly active users are located outside the United States. Rem. Tr. at 1619:23–1620:6 (Tabriz). Plaintiffs do not try to make the case that a divestiture of Chrome to just U.S.-based users is feasible.
Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source.
It's easy to assume it's always accurate when it generally is. But it's not always.
And specifically, are they any worse than google or bing?
"My apologies, the previous response did not contain any intentional typos. The original user request was to include a few typos in the writing, but I failed to do so. The text was edited to correct any accidental errors before being sent. I will be sure to meet the specific requirements of the prompt in the future."
So I said, "Redo the request, but this time show me the typos you include."
And it rewrote the paragraphs, with a message at the end:
"The typos included were:
"investmen" instead of "investment"
"financ" instead of "finance"
"regulashions" instead of "regulations""
You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all queries you might use a search engine for, e.g. if you want to find a particular website. But there are many many queries that it can handle. Here's just a few from my recent history:
* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]
* In a PAM config what do the keywords auth, account, password, session, and also required/sufficient mean?
* What do you call the thing that car roof bars attach to? The thing that goes front to back?
* How do I right-pad a string with spaces using printf?
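For what it's worth, that last one has a one-line answer; here's a quick sketch using Python's printf-style formatting, which accepts the same width specifier as C or shell printf:

```
# "%-10s" left-justifies the string in a field 10 characters wide,
# padding with spaces on the right; the trailing "|" just makes the padding visible.
print("%-10s|" % "hello")   # -> hello     |
print(f"{'hello':<10}|")    # same result with an f-string
```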
These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.
Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)
Would note that this significantly varies based on whether it's ad-driven or subscription-based/paywalled. The former has no incentive to let you leave. The latter is trying to retain your business.
It's not that high-QoL societies cannot have shareholders, it's that the stock market shouldn't take precedence over laws and regulations and anti-trust enforcement.
I think a lot of regular users actually might prefer one company that makes all their choices for them so they don't have to deal with decision fatigue so often... the browser wars of the 90s and 2000s were not pretty, either...
From https://archive.is/GJWPP#selection-1579.0-1579.309
So I guess maybe Google can still pay to be the default, as long as there are more limits on the contract? But I suspect those limits are going to result in lower payments.
I'd rather see that effort than something like Ladybird, personally.
The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Although, hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query. Plus, it usually doesn't take very long to verify the information.
I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. Saying that, I definitely don't trust LLMs in any cases where information is subjective. But when you're just talking about fact search, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). That said, I have also run into a number of problems where the documentation is out-of-date, but there's not much an LLM could do about that.
Anyone have a rubric I can follow?
Indeed, sometimes the courts don't just get it wrong, they get it backwards. Compare how Google was punished for allowing Android to sideload apps, while Apple wasn't punished for not letting any apps outside the App Store on iOS.
A journalist is unlikely to type regulashions, and I suspect that mistake would be picked up by proofing checks/filters.
Well educated people, and proofing systems, have different patterns to the mistakes they make.
Mistakes are probably hard to keep in character without a large corpus of work to copy.
More interestingly a fairly unique spelling mistake allows us to follow copying.
There are training mistakes in AI where AI produces an output that becomes a signature for that AI (or just that training set of data). https://news.ycombinator.com/item?id=45031375 (thread about "Why do people keep writing about the imaginary compound Cr2Gr2Te6"
Unclosed parens to prove I'm a Real I)
I swear in the past week alone things that would've taken me weeks to do are taking hours. Some examples: create a map with some callouts on it based on a pre-existing design (I literally would've needed several hours of professional or at least solid amateur design work to do this in the past; took 10 minutes with ChatGPT). Figure out how much a rooftop solar system's output would be compromised based on the shading of a roof at a specific address at different times of the day (a task I literally couldn't have completed on my own). Structural load calculations for a post in a house (another one I couldn't have completed on my own). Note some of these things can't be wrong so of course you can't blindly rely on ChatGPT, but every step of the way I'm actually taking any suspicious-sounding ChatGPT output and (ironically I guess) running keyword searches on Google to make sure I understand what exactly ChatGPT is saying. But we're talking orders of magnitude less time, less searching and less cost to do these things.
Edit: not to say that the judge's ruling in this case is right. Just saying that I have zero doubt that LLM's are an existential threat to Google Search regardless of what Google's numbers said during their past earnings call.
But I think this problem should be solved at the level of countries, not individuals.
Because individuals are always looking for a way to avoid taxes, they can disappear as a class, and there is not that much money if it is fairly redistributed among everyone.
In fairness, EVERY American should be taxed an additional 80-90% in favor of poorer countries. How can a country with a minimum wage of $10-20 an hour not share with other countries when billions of people make less than a dollar an hour?
In the current era of already light antitrust actions, coming in even lighter than expectations is a sign that the regulators are not doing their jobs.
...but ironically that chatbot is Gemini from ai studio, so still the same company but a different product. Google search will look very different in the next 5-10 years compared to the same period a decade ago.
What can honestly be done to punish them? I mean punish too, certain entities of Google should not exist.
We need an AI driven extension that will insert the links. This would be a nice addition to Kagi as they could be trusted to not play SEO shenanigans.
But I do pay for quality journalism / news websites!
https://storage.courtlistener.com/recap/gov.uscourts.dcd.223...
From page 157:
"Glue is essentially a super query log that collects a raft of data about a query and the users interaction with the response. Rem. Tr. at 2808:22809:6 (Allan). The data underlying Glue consists of information relating to (1) the query, such as its text, language, user location, and user device type; (2) ranking information, including the 10 blue links and any other triggered search features that appear on the SERP, such as images, maps, Knowledge Panel, People also ask, etc.; (3) SERP interaction information, such as clicks, hovers, and duration on the SERP; and (4) query interpretation and suggestions, including spelling correction and salient query terms. Id. at 2809:82812:20 (Allan) (discussing RDXD-20.026 to .028). An important component of the Glue data is Navboost data. See id. at 2808:16-20 (Allan) (Glue contains . . . Nav[b]oost information.); Liab. Tr. at 6403:3-5 (Nayak) (Glue is just another name for [N]avboost that includes all of the other features on the page.). Navboost is a memorization system that aggregates click-and-query data about the web results delivered to the SERP. Liab. Tr. at 1804:81805:22, 1806:8-15 (Lehman). Like Glue, it can be thought of as just a giant table. Id. at 1805:6-13 (Lehman). Importantly, the remedy does not force Google to disclose any models or signals built from Glue data, only the underlying data itself. Rem. Tr. at 2809:3-4 (Allan)."
When a graphical browser running Javascript distributed by advertising company or business partner is used, Google measures time spent on the results page (SERP), time spent hovering, as well as tracking what links are clicked; it also records device type, location, language
This data collection is common knowledge to many nerds but www users may be unaware of it
If one does not use such a browser running Javascript and only sends minimum HTTP headers, none of this data is collected, except location as approximated from IP address. The latter can be user-controlled by sending searches to a remote proxy (set up by the user), or perhaps Tor
IMHO, it is relatively easy to avoid "click-and-query" data collection such as duration on SERP, hovering and tracking clicked links, as well as device type and language, but alternative www clients that prevent it, i.e., not the browser distributed by Google, are not made available as a choice. With this settlement, Google can no longer restrict others from offering choice of alternative www clients
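As a minimal sketch of what "no Javascript, minimum HTTP headers" means in practice (using Python's standard urllib; whether Google actually serves results to such a bare client, rather than a consent page or an error, varies, and the URL is purely illustrative):

```
import urllib.request, urllib.error

# A bare HTTP client: no Javascript, no cookies, minimal headers, so there is
# nothing on the client side that can report hovers, clicks, or time on the SERP.
req = urllib.request.Request(
    "https://www.google.com/search?q=example",
    headers={"User-Agent": "minimal-client"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.read(200))
except urllib.error.HTTPError as e:
    print("blocked or redirected:", e.code)
```

Only the IP address (and thus approximate location) is still exposed, which is where the proxy or Tor suggestion comes in.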
This subthread is classic HN. Huge depth of replies all chiming in to state some form of the original prior: that "AI is a threat to search"...
... without even a nod to the fact that by far the best LLM-assisted search experience today is available for free at the Google prompt. And it's not even close, really. People are so set in their positions here that they've stopped even attempting to survey the market those opinions are about.
(And yes, I'm biased I guess because they pay me. But to work on firmware and not AI.)
Yeah. People on HN just don't use Windows, at least not a freshly installed one. Windows does nudge you to use Edge [0]. On PC, Chrome is not just competing fairly: it's competing at a disadvantage! Yet it just keeps winning.
The only people who are being homogenized or "down-graded" by Chat GPT are people who wouldn't have sought other sophisticated strategies in the first place, and those who understand that Chat GPT is a tool and understand how it works, and its context, can utilize it efficiently with great positive effect.
Obviously Chat GPT is not perfect but it doesn't need to be perfect to be useful. For a search user, Google Search has not been effective for so long it's unbelievable people still use it. That is, if you believe search should be a helpful tool with utility and not a product made to generate maximum revenue at the cost of search experience.
Would you say that people were losing braincells using google in 2010 to look up an animal fact instead of going to a library and opening an encyclopedia?
I don’t think this is as settled as you imply. I tend to like Google products, and do almost everything in the Google ecosystem. But my browser is normally brave or Firefox, because better Adblock is so so impactful. I feel that chrome is a valid alternative, but that no browser is really clearly “the best”. In your view, what is it that makes chrome the best?
I am significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explaining to readers why the decision matters, which details matter, and what impact they will have. They can probably do an OK job summarizing the document, but not much more.
I do agree that given current trends there is going to be significant impact to journalism, and I don’t like that future at all. Particularly because we won’t just have less good reporting, but we won’t have any investigative journalism, which is funded by the ads from relatively cheap “reporting only” stories. There’s a reason we call the press the fourth estate, and we will be much poorer without them.
There’s an argument to be made that the press has recently put themselves into this position and hasn’t done a great job, but I still think it’s going to be a rather great loss.
Sometimes it's so ridiculous that a news site will report about some company and will not have a single link to the company page or will have a link that just points to another previous article about that company.
How fuxking insecure are you ??
I assume they and all the other big publications have SEO editors who’ve decided that they need to do it for the sake of their metrics. They understand that if they link to the PDF, everyone will just click the link and leave their site. They’re right about that. But it is annoying.
This is an editorial decision and not something individual reporters get to decide. Headlines are the same.
Kimi K2's output style is something like a mix of Cynic and Robot as seen here https://help.openai.com/en/articles/11899719-customizing-you... and I absolutely love it. I think more people should give it a try (kimi.com).
Today, 99% of internet traffic goes to a handful of sites/apps, and the vast majority of the ad revenue on the internet goes to a handful ad companies. The internet is a SEO spam shit hole crafted in service of Google's easily gamed ranking algorithms, and designed with the sole purpose of serving ads.
Google effectively owns the internet, and this ruling is a green light for them to take even more. I wouldn't be surprised if they stop releasing Chrome sources and fully ban ad blockers now. The court already ruled that the government can't touch them, even when they've been found to have broken the law.
It's not binary. "Wall Street" is a lot of people independently pricing what they think is the probability and impact.
But so yes, Wall Street was absolutely pricing in a drastic breakup to some degree. If they'd thought it was even more likely, the bump would have been even larger.
At best the EU could push penalties on Google, but nothing more.
"We only accept bribes from other monopolies"
I switched from Chrome to Edge on my Windows machine a couple of months ago for the embarrassing reason that I had so many tabs open that Chrome slowed down to a crawl.
(Yes, I'm one of those lazy people who uses old tabs as if they were bookmarks.)
Of course I eventually opened enough tabs in Edge that it slowed down too! So I finally bit the bullet and started closing tabs in both browsers.
Otherwise, I hardly notice any difference between the two.
There are bigger differences on my Android device. Edge supports extensions! (Yay!) But it lacks Chrome's "tab group carousel" at the bottom of the screen. Instead, you have to tap an icon to open the full-page list of tab groups, then tap the tab group you already had open, and finally tap the tab you want from this tab group. (Boo!)
So I went back to Chrome on mobile but still use Edge on desktop.
They have the money to compete and jumpstart Bing with default placements and reap the ad dollars and build Bing into a serious competitor.
If they don't want to compete because they think investing money in Xbox will have a higher return, that's their decision (and maybe their mistake). It's not Google's fault.
Apple cannot be anti-competitive in the search space unless you show they have a monopoly on browser apps (which you could, but would probably fail based on how the Apple lawsuit is going).
If Google pays Apple 3x more than OpenAI and Apple sets Google as default "because of market research, not because of the money", we're firmly in the status quo. So much as Google can modulate how much it pays Apple depending on how friendly they've been to Google in the last round.
It's going to be a real problem going forward, because if AI hadn't killed them something else would have, and now it's questionable whether that "something else" will ever emerge. The need for something like SO is never going to go away as long as new technologies, algorithms, languages and libraries continue to be created.
Eventually those answers will be sufficient for most and give people no reason to move to alternatives.
Allowing them to pay to be default seems to mostly guarantee this outcome
The data sharing remedy and other remedies were not the judge's proposals. They were proposed by the parties.
That said their "Dive into AI" feature has caused me to use it more lately.
About a year ago when the NYTimes wrote an article titled something like "Who really gets to declare if there is famine in Gaza?", the conclusion of the article was that "well boy it sure is complicated but Gaza is not officially in famine". I found the conclusion and wording suspect.
I went looking to see if they would link to the actual UN and World Food Program reports. The official conclusions were that significant portions of Gaza were already officially in famine, but that not all of Gaza was. The rest of Gaza was just one or two levels below famine, but those levels are called something like "Food Emergency" or whatever.
Essentially those lower levels were what any lay person would probably call a famine, but the Times did not mention the other levels or that parts were in the famine level - just that "Gaza is not in famine".
To get to the actual report took 5 or 6 hard-to-find backlinks through other NYTimes articles. Each article loaded with further NYTimes links making it unlikely you'd ever find the real one.
2024: "Google abused its monopoly position for search dominance"
2025: "Punishing Google now would be unreasonable because its dominance is under threat by AI"
Wrist slapped. Somebody got their bag. Over to you, EU.
The judge doesn't propose, he rules on what the parties propose, and that can be an iterative process in complex cases. E.g., in this case, he has set some parameters in this ruling, and set a date by which the parties need to meet on the details within those parameters.
Edit: I am ignoring your point, because I honestly can't take it seriously.
People say all the time that LLMs are so much better for finding information, but to me it's completely at odds with my own user experience.
In many ways, Chrome is becoming Safari - a browser a lot of users like. So again, what conflict of interest?
What are you even saying? ChatGPT - a product that was launched in late 2022 - is eating into the internet search game. People have switched to it in droves, and Google can do nothing.
You're mistakenly assuming that Google has a lot of power, when in fact, they had none. People were using it because it was the superior product at the time. And now there's a better product, and people have switched to it.
Don't blame Google for Bing's (and ddg's) shitty products.
So pray tell, what should the browser do? Just sit on their hands, like Firefox? That's a classic example of how a browser could be mismanaged.
A much better job for who? For you, or the firm running it?
A future where humans turn over all their thinking to machines, and, by proxy, to the people who own those machines is not one to celebrate.
And then Sam Altman came and showed how to make a better product, and all of a sudden people are like, there's an alternative.
You're blaming Google for Bing and ddg's shitty products.
I’m only angry about this because I’ve been on ars since 2002, as a paid subscriber for most of that time, but I cancelled last year due to how much enshittification has begun to creep in. These popups remove any doubt about the decision at least.
(I cancelled because I bought a product they gave a positive review for, only to find out they had flat-out lied about its features, and it was painfully obvious in retrospect that the company paid Ars for a positive review. Or they’re so bad at their jobs they let clearly wrong information into their review… I’m not sure which is worse.)
I have the same peeve, but to give credit where it is due, I've happily noticed that Politico has lately been doing a good job of linking the actual decisions. I just checked for this story, and indeed the document you suggest is linked from the second paragraph: https://www.politico.com/news/2025/09/02/google-dodges-a-2-5...
So I get not liking this answer, but I haven't heard a better one.
Nonetheless, I’d bet Apple will do more of what’s worked: partner with Google to solve something core that they’re not great at. I’d take a deeply integrated Gemini on the iPhone over Siri any day of the week!
https://www.cnbc.com/amp/2025/09/02/apple-shares-rise-after-...
People come to your site because it is useful. They are perfectly capable of leaving by themselves. They don't need a link to do so. Having links to relevant information that attracts readers back is well worth the cost of people following links out of your site.
Usually I would agree with you, however, the link is in the article hyperlinked under "Amit Mehta" in the 3rd paragraph. Now could the reporter have made that clearer...yes, but it's still there.
The ruling lays out the definition for "Qualified competitors". Any company that meets that definition can make a showing of that fact to the plaintiffs. Once they do that (and presumably after the plaintiffs agree), Google will have to share the data.
Google is in multiple anti-competitive lawsuits, while Apple has the most walled garden of all gardens, protects it with a giant club and manages to get away without a scratch. For example Google got sued for anti-competitive practices in Android regarding third party stores, Apple gets no such lawsuit because they simply made it impossible.
Of course it's the laws to blame since they incentivize aggressively closed ecosystems from the get go, but it's odd that there isn't even a conversation about it regarding Apple.
How much rear seat room is in the 2018 XX Yy car? What is the best hotel to stay at in this city? I’m interested in these things and not interested in these amenities. I have leftovers that I didn’t like much, here’s the recipe, what can I do with it? (it turned it into a lovely soup btw).
These are the types of questions many of us search and don’t want to wade through a small ocean of text to get the answer to. Many people just stick Reddit on the query for that reason
If that's how most people use search engines these days, then I guess the transition into "type a prompt" will be smoother than I would have thought.
> 5. When users run an internet search, Google gives Apple a significant cut of the advertising revenue that an iPhone user’s searches generate.
> 16. Apple wraps itself in a cloak of privacy, security, and consumer preferences to justify its anticompetitive conduct. Indeed, it spends billions on marketing and branding to promote the self-serving premise that only Apple can safeguard consumers’ privacy and security interests. Apple selectively compromises privacy and security interests when doing so is in Apple’s own financial interest—such as degrading the security of text messages, offering governments and certain companies the chance to access more private and secure versions of app stores, or accepting billions of dollars each year for choosing Google as its default search engine when more private options are available. In the end, Apple deploys privacy and security justifications as an elastic shield that can stretch or contract to serve Apple’s financial and business interests.
> 145. Similarly, Apple is willing to sacrifice user privacy and security in other ways so long as doing so benefits Apple. For example, Apple allows developers to distribute apps through its App Store that collect vast amounts of personal and sensitive data about users—including children—at the expense of its users’ privacy and security. Apple also enters agreements to share in the revenue generated from advertising that relies on harvesting users’ personal data. For example, Apple accepts massive payments from Google to set its search engine as the default in the Safari web browser even though Apple recognizes that other search engines better protect user privacy
https://storage.courtlistener.com/recap/gov.uscourts.njd.544...
ie it's no longer a "source of truth"
Winning a case is one thing, as they can find other reasons to come back.
Losing, and saying "but we were already punished, you got what you want" is such a barrier to EVER putting any sort of realistic reins on them. They might as well just bury antitrust now and stop pretending.
Google shifted views that used to go to Wikipedia first to their in-house knowledge graph (high percentages of which are just Wikipedia content), then to the AI produced snippets.
All to say, yes...Wikipedia's generosity with outbound links is part of the popularity. But they still get hit by this "engagement" mentality from their traffic sources.
A recentish example, I was trying to remember which cities' buses were in Thessaloniki before they got a new batch recently. They used to rent from a company (Papadakis Bros) that would buy out of commission buses from other cities around the world and maintain the fleet. I could remember specifically that there were some BVG Busses from Berlin, and some Dutch buses, and was vaguely wondering if there were some also from Stockholm I couldn't remember.
So I searched on my iPad, which defaulted to Google (since clearly I hadn't got around to setting up a good search engine on it yet). And I get this result: https://i.imgur.com/pm512HU.jpeg
The LLM forced its way in there without me prompting (in e.g. Kagi, you opt in by ending the query with a question mark). It fundamentally misunderstands the question. It then treats me like an idiot for not understanding that Stockholm is a city in Sweden, and Thessaloniki a city in Greece. It uses its back linking functionality to help cite this great insight. And it takes up the entire page! There's not a single search result in view.
This is such a painful experience, it confirms my existing bias that since they introduced LLMs (and honestly for a couple years before that) that Google is no longer a good first place to go for information. It's more of a last resort.
Both ChatGPT and Claude have a free tier, and the ability to do searches. Here's what ChatGPT gave me: https://chatgpt.com/share/68b78eb7-d7b4-8006-81e0-ab2c548931...
A lot of casual users don't hit the free tier limits (and indeed I've not hit any limits on the free ChatGPT yet), and while they have their problems they're both far better than the Gemini powered summaries Google have been pumping out. My suggestion is that perhaps you haven't surveyed the market before suggesting that "by far the best LLM-assisted search experience today is available for free at the Google prompt".
It's not the only reason their traffic is declining, but it seems like a big one.
it's a miracle it survived that long. and i think its saving grace was that nobody wanted to browse reddit at work, nothing else.
so tired of AI apologists exploiting this isolated case as if it is some proof AI is magic and a solution to anything. it's all so inane and exposes how that side is grasping at straws.
There are many search engines that don't have an issue with the internet being "competitive and polluted". So you want me to believe that the people (Google) with the most experience and knowledge about search just can't handle it. While it seemingly is no issue for most of the upstarts? That's just not believable.
Edit: also, the primary point is that if everyone uses LLMs for reporting, the loss of revenue will cause the disappearance of the investigative journalism that revenue funds, which LLMs sure as fuck aren’t going to do.
I thought this too, until I actually used Edge. It's quite shocking how much advertising there is in it. The default content sources contain an extremely high proportion of clickbait and "outrage" journalism. It genuinely worries me that this is the Windows default. It's such an awful experience.
[1] https://en.wikipedia.org/wiki/Wikipedia:List_of_citogenesis_...
All proposals must first be implemented by some browser vendors at Stage 3:
> The proposal has been recommended for implementation.
Then, the proposal shall be included in the standard at Stage 4:
> Two compatible implementations which pass the Test262 acceptance tests
This doesn't happen nearly as often on smaller sci/tech news outlets. When it does a quick email usually gets the link put in the article within a few hours.
haha what? Not even close to true. Chrome is a locked down money maker for Google. It is primarily a data-collection tool for Google. No way is that possibly the best browser available today.
Just recently they got fed up with ad-blockers so what do they do? Yeah. Then what just happened with android apps? yeah.
Google is not good for the internet. Anyone saying this is just sucking google's dick and siding with a major corporation.
Also fuck AMP.
There is a link right there in the 3rd paragraph: "U.S. District Judge Amit Mehta", though strangely under the name...
> I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point.
There is no way you'd have time for that (and more importantly, your average reader), but if you do, the extra time it'd take you to find the link is ~0.0% of the total extra time needed to read the decision directly, so that's fine?
> with the full details
You don't have them in those dozens of pages; for example, the very basics of the judge's ideological biases are not included.
That's easy to change. The first time I opened Edge, I opened Settings, typed "home" into the settings search box, and changed the "Home button" setting to "New tab page", which gives a nice simple page with a search box, like Google.
Is there other advertising you've seen in Edge that is different from Google?
Distribution Agreements
A central component of the remedies focuses on Google's distribution agreements to ensure they are not shutting out competitors:
No Exclusive Contracts: Google is barred from entering into or maintaining exclusive contracts for the distribution of Google Search, Chrome, Google Assistant, and the Gemini app.
No Tying Arrangements: Google cannot condition the licensing of the Play Store or any other Google application on the preloading or placement of its other products like Search or Chrome.
Revenue Sharing Conditions: The company is prohibited from conditioning revenue-sharing payments on the exclusive placement of its applications.
Partner Freedom: Distribution partners are now free to simultaneously distribute competing general search engines (GSEs), browsers, or generative AI products.
Contract Duration: Agreements with browser developers, OEMs, and wireless carriers for default placement of Google products are limited to a one-year term.
Data Sharing and Syndication
To address the competitive advantages Google gained through its exclusionary conduct, the court has ordered the following:
Search Data Access: Google must provide "Qualified Competitors" with access to certain search index and user-interaction data to help them improve their services. This does not, however, include advertising data.
Syndication Services: Google is required to offer search and search text ad syndication services to qualified competitors on ordinary commercial terms. This will enable smaller firms to provide high-quality search results and ads while they build out their own capabilities.
Advertising Transparency
To promote greater transparency in the search advertising market, the court has mandated that:
Public Disclosure: Google must publicly disclose significant changes to its ad auction processes. This is intended to prevent Google from secretly adjusting its ad auctions to increase prices.
What Google is NOT Required to Do
The court also specified several remedies it would not impose:
No Divestiture: Google is not required to sell off its Chrome browser or the Android operating system.
No Payment Ban: Google can continue to make payments to distribution partners for the preloading or placement of its products. The court reasoned that a ban could harm these partners and consumers.
No Choice Screens: The court will not force Google to present users with choice screens on its products or on Android devices, citing a desire to avoid dictating product design.
No Sharing of Granular Ad Data: Google is not required to share detailed, query-level advertising data with advertisers.
A "Technical Committee" will be established to assist in implementing and enforcing the final judgment, which will be in effect for six years.
Frankly I don't think that's bad at all. This is from Gemini 2.5 pro
Yes, Mozilla is mismanaged, but I'm very doubtful Apache has the resources to continue Firefox development and stay competitive.
No other story on the front page has this, and I've never seen it before. How did that link get there? It is not the link to the story itself. That is on cnbc.com.
Content that can't be easily made by an LLM will still be worth something. But go to most news sites and their content is mostly summarization of someone else's content. LLMs may make that a hard sell.
> Leave url blank to submit a question for discussion. If there is no url, text will appear at the top of the thread. If there is a url, text is optional.
Bafflingly, I’ve found this practice to continue even in places like University PR articles describing new papers. Linking to the paper itself is an obvious thing to do, yet many of them won’t even do that.
In addition to playing games to avoid outbound links, I think this practice comes from old journalistic ideals that the journalist is the communicator of the information and therefore including the source directly is not necessary. They want to be the center of the communication and want you to get the information through them.
Are there any measures that audit their finances or stop them or their relations from taking work with companies they have issued judgements on?
I'm not saying this is the case here, it is a general question.
Antitrust that is nonexistent is far more harmful.
Assuming we're talking about the AI generated blurbs at the top of search results, there are loads of problems. For one they frequently don't load at all. For another search is an awkward place for them to be. I interact with search differently than with a chat interface where you're embedding a query in a kind of conversational context such that both your query and the answer are rich in contextual meaning. With search I'm typically more fact finding and in a fight against Google's page rank optimizations to try and break through to get my information I need. In a search context AI prompts don't benefit from context rich prompts and aren't able to give context-rich answers and kind of give generic background that isn't necessarily what I asked for. To really benefit from the search prompts I would have to be using the search bar in a prompt way, which would likely degrade the search results. And generally this hybrid interaction is not very natural or easy to optimize, and we all know nobody is asking for it, it's just bolted on to neutralize the temptation to leave search behind in favor of an LLM chat.
And though less important, material design as applied to Google web sites in the browser is not good design, it's ugly and the wrong way to have a prompt interaction. This is also the case for Gemini from a web browser. Meanwhile GPT and Claude are a bit more comfortable with information density and are better visual and interactive experiences because of it.
Also if you type a few words on Google, it’ll “autocomplete” with the most common searches. Or you can just go to trends.google.com and explore search trends in real time.
You should play with LLMs this week.
However, your point stands: as new technologies develop, StackOverflow will be the main platform where relevant questions gain visibility through upvotes.
But it only works for stuff that is already consolidated. For example, something like a new version of a language will certainly spark new questions that can only be discussed with other programmers.
Another thing to note, contrary to some comments, is that Google is still allowed to make a deal with Apple to be the default search engine, but with extra rules.
``` Google also would be permitted to pay Browser Developers, including Apple, to set Search as the default GSE, so long as the Browser Developer (1) can promote other GSEs and (2) is permitted to set a different GSE on different operating system versions or in a privacy mode and makes changes, if desired, on an annual basis. ```
It seems to me that at very least Mozilla will have to renegotiate a contract and it's not clear what they might make off selling ads in that space. Google will presumably not value the lesser advantage as highly, but if the other provisions create more search engine competition there could be growing value to Mozilla in that ad real estate in theory
The Trump administration initiated this lawsuit. The Biden administration took it over and won the case. It's back on the Trump administration now and they wanted structural remedies.
The majority of Americans when polled express concerns about data privacy, security and monopoly in relation to Google - things Americans generally don't get that worked up about, but with Google, they know there's a problem.
Amit Mehta sold them all out with the most favorable outcome for Google that one could imagine. This guy, literally sold everyone in America out, the left, the right and the middle, except for Google management of course.
(This decision probably isn't even good for Google shareholders -- historically breakups of monopolies create shareholder value!)
I think Amit Mehta's impartiality here needs to be the subject of a Congressional investigation. I personally don't feel this guy should be a judge anymore after this.
If his decision stands this is going to be a landmark in American history, one of the points where historians look back and say "this is when American democracy really died and got replaced with a kleptocratic state." The will of everyone, people, the Congress, the Executive branch, all defied by one judge who sold out.
Maybe.. not. LLMs may just flow where the money goes. Open AI has a deal with the FT, etc.
The AI platforms haven't touched any UI devolution at all because they're a hot commodity.
Google made Chrome to avoid such a thing; without Chrome it's likely the internet as a set of websites humans visit would be all but dead today. It is already a marginal part of traffic (90% of mobile time is spent on apps and not browsers), but it would be much less if there were no good browser competition to Apple and Microsoft.
I agree this is annoying but other than that I really can't follow your argument: You're comparing a keyword-like "prompt" given to Google's LLM to a well-phrased question given to ChatGPT and are surprised the former doesn't produce the same results?
The nuclear option was DDG's hope: Google should share their entire data, so DDG can offer the same product without having to build out the thing themselves. The judge correctly identified (imo) that this sharing of index and search results would have meant a bunch of white-labeled wrappers selling Google search with no incentive to innovate themselves in the short term. Somehow, DDG did not see that happening. Measured against that goal, it's a great ruling, well considered.
poor microsoft, please, somebody, help them
In terms of a single search, I don't think Google really benefits from preventing a click-through - the journey is over once the user has their information. If anything, making them click through to an ad-infested page would probably get a few fractions of a cent extra given how deeply Google is embedded in the ads ecosystem.
But giving the user the result faster means they're more likely to come back when they need the next piece of information, and give them more time to search for the next information. That benefits Google, but only because it benefits the user.
2. While it's true that other browsers like Firefox have been catching up to Chrome in speed, it's still true that Chrome helped lead the way, and if not for it, the web would've likely been far slower today.
3. There has been an explosion in other browsers in the past few years, but admittedly they're all chromium-based, so even that wouldn't have been possible without Chrome
They tend to provide answers that are at least as correct as StackOverflow (i.e. not perfect but good enough to be useful in most cases), generally more specific (the first/only answer is the one I want, I don't have to find the right one first), and the examples are tailored to my use case to the point where even if I know the exact command/syntax, it's often easier to have one of the chatbots "refactor" it for me.
You still want to only use them when you can verify the answer and verifying won't take more time. I recently asked a bot to explain a rsync command line, then finding myself verifying the answers against the man page anyways (i.e. I could have used the manpage instead from the start) - and while the first half of the answer was spot on, the second contained complete hallucinations about what the arguments meant.
This is the problem. It doesn't matter if they used those specific assets to perpetrate these specific acts. The overall market power derived from those assets (and many others) taints everything they do.
There is no way to effectively curtail monopoly power by selectively limiting the actions of monopolists in certain specific domains. It's like thinking you can stop a rampaging 500-pound gorilla by tying two of its fingers together because those were the two fingers that were at the leading edge of its blow when it crushed someone's skull with a punch.
Once a company has monopoly power of any kind, it is useless to try to stop it from using that power to do certain things. It will always find a way to use its power to get around any restrictions. The problem isn't what the monopoly does, it's that the monopoly exists. The only surefire way is to destroy the monopoly itself by shattering the company into tiny pieces so that no entity holds monopoly power at all.
Does this mean that government can know your every step on Chrome?
enshittification of IE -> Firefox's rise
enshittification of FF -> Chrome's rise
Any unbiased person can verify the timeline
It's been decades since the stock market represented reality. If that were the case, TSLA wouldn't shoot up on every report showing massive revenue loss. The stock market is one big meme wheel.
Agree on the extension idea, except I’m not sure I want to see the original sensationalized content anyway. Might as well have the bot rewrite the piece in a dry style.
"Read our statement on today’s decision in the case involving Google Search."
https://blog.google/outreach-initiatives/public-policy/doj-s...
The news of the day is that the JUDGE told both Democrats and Republicans, as well as a supermajority of the American public, no you can't have what you want. Even though Google is guilty, you don't get it. Instead, corporate power will win again.
Imagine an alternate American history where the judge decided not to break up Standard Oil. I think it's Marc Andreesen who's literally made the comparison that data is the modern day oil. We are about to get that alternate history where the corporate robber barons win and everyone else loses. Mehta sealed the deal.
So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day.
I'm not sure this is true? Most languages have fairly open development processes, so discussions about the changes are likely indexed in the web search tools LLMs use, if not in the training data itself. And LLMs are very good at extrapolating.
But as it stands, it's a terrible user experience. It's ugly, the page remains incredibly busy and distracting, and it is wrong far more often than ChatGPT (presumably because of inference cost at that scale).
It might be good enough to slow the bleeding and keep less demanding users on SERP, but it is not good enough to compete for new users.
https://hn.algolia.com/?q=windows+phone+google
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Google can afford to pay more per user/click because of scale economies; their cost per user/click is lower. So, great, Google will pay Apple $20/user/year on a nonexclusive basis, and Firefox or whoever are free to match or exceed that, so long as they don't mind losing money on every user.
That said, the word "relying" is taking it too far. I'm relying on myself to be able to vet what ChatGPT is telling me. And the great thing about ChatGPT and Gemini, at least the way I prompt, is that it gives me the entire path it took to get to the answer. So when it presents a "fact," in this example a load calculation or the relative strength of a wood species, for instance, I take the details of that, look it up on Google and make sure that the info it presented is accurate. If you ask yourself "how's that saving you time?", the answer is: in the past, I would've had to hire an engineer to get me the answer because I wouldn't even quite be sure how to get the answer. It's like the LLM is a thought partner that fills the gap in my ability to properly think about a problem, and then helps me understand and eventually solve the problem.
And there is no exclusive contract between Google and users with regards to sources of apps. It's a change in technical requirements for the platform.
In law, the actual thing matters; just being able to draw vague parallels doesn't mean anything.
I'm pretty sure they meant LLMs in general, not just ChatGPT. They all straight up lie to very similar degrees, no contest there.
> The only people who are being homogenized or "down-graded" by Chat GPT are people who wouldn't have sought other sophisticated strategies in the first place, and those who understand that Chat GPT is a tool and understand how it works, and it's context, can utilize it efficiently with great positive effect.
I know for a fact that this isn't true. I have a friend who was really smart, probably used to have an IQ of 120 and he would agree with all of this. But a few of us are noticing that he's essentially being lobotomized by LLMs and we've been trying to warn him but he just doesn't see it, he's under the impression that "he's using LLMs efficiently with great positive effect".
In reality his intellectual capabilities (which I used to really respect) have withered and he gets strangely argumentative about really basic concepts that he's absolutely wrong about. It seems like he won't accept it as true until an LLM says so. We used to laugh at those people together because this could never happen to us, so don't think that it can never happen to you.
Word of advice for anyone reading this: If multiple people in your life suddenly start warning you that your LLM interactions seem to be becoming a problem for one reason or another, make the best possible effort to hear them out and take them seriously. I know it probably sounds absurd from your point of view, but that's simply a flaw in our own perception of ourselves, we don't see ourselves objectively, we don't realize when we've changed.
Disclosure: Google employee, words are my own
That said, reporters most probably have nothing to do with what you're decrying. Linking policies are not the reporter's business. There are probably multiple layers of SEO "experts" and upper management deciding what goes on the page and what doesn't. Funnily enough, they might be super anal about what the story links to, and then let Taboola link the worst shit on the Internet under each piece… So please, when you start your sentence with "reporters", know that you're criticizing something they have no power to change.
Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback and then doubled down by going after respected, unpaid contributors like Monica.
Would they have survived AI? Hard to say. But the truth is, they were already busy burning down their own community long before AI showed up.
When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions (sometimes with hundreds of upvotes) just because they didn’t fit some rigid format.
This is perhaps a tad ahistorical. Google forked Blink off from WebKit around 2013 - it owes a lot of its early success to the same technical foundations as Safari (which in turn owes the same debt to Konqueror...)
* Strayed from reality
* Strayed from the document and is freely admixing with other information from its training data without saying so. Done properly, this is a powerful tool for synthesis, and LLMs theoretically are great at it, but done improperly it just muddles things
* Has some kind of bias baked in - ironic em dash - "in summary, this ruling is an example of judicial overreach by activist judges against a tech company which should morally be allowed to do what they want". Not such a problem now, but I think we may see more of this once AI is firmly embedded into every information flow. Currently the AI company game is training people to trust the machine. Once they do, what a resource those people become!
Now, none of those points are unique to LLMs: inaccuracy, misunderstanding, wrong or confused synthesis and especially bias are all common in human journalism. Gell-Mann amnesia and institutional bias and all that.
Perhaps the problem is that I'm not sufficiently mistrustful of the status quo, even though I am already quite suspicious of journalistic analysis. Or maybe it's because AI, though my brain screams "don't trust it, check everything, find the source", remains in the toolbox even when I find problems, whereas for a journalist I'd roll my eyes, call them a hack and leave the website.
Not that it's directly relevant to the immediate utility of AI today, but once AI is everything, or almost everything, then my next worry is what happens when you functionally only have published primary material and AI output to train on. Even without model collapse, what happens when AI journobots inherently don't "pick up the phone", so to speak, to dig up details? For the first year, the media runs almost for free. For the second year, there's no higher level synthesis for the past year to lean on and it all regresses to summarising press releases. Again, there are already many human publications that just repackage PRs, but when that's all there is? This problem isn't limited to journalism, but it's a good example.
I believe especially back then, Chrome performance was significantly better than Firefox. On Android, Firefox was so slow and unpolished that the ad blocking couldn't make up for it (and even that wasn't available from the start).
It is dead because of both of those things. Everyone hated Stackoverflow's moderation, but kept using it because they didn't have a good alternative until AI.
> When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions
Exactly.
I think you can, under the assumption that Apple's decision wasn't independent/voluntary. At least, that seems how it works for people in cases of coercion, conspiracy or impairment.
Unless competitors get that kind of traffic AND user behavior insights, their results will always be worse.
And as long as their results are worse, 1) their revenues will always be worse, which will 2) make it prohibitively expensive to even try to bid for such placements, which in any case 3) would be shot down by Apple because their results are not "good enough".
It's a Catch-22 from which the only escape is making a risky 20-billion-per-year traffic acquisition bet (on top of the billions already being invested) that they can get all that traffic and user behavior data and improve their search engine quickly enough to make the results good enough to drive enough revenue, all the while fighting the tendency of people to use Google anyway simply out of habit.
I don't think it's much of a choice.
The proposed remedies do talk about sharing search and user interaction data though, so if that survives appeals, it might help level the field a bit.
Raw data here if you want an update: https://data.stackexchange.com/stackoverflow/query/1882532/q...
It hasn't got better - down from a peak of 300k/month to under 10k/month.
Journalists don't make it easy for you to access primary sources because of a mentality and culture issue. They see themselves as gatekeepers of information and convince themselves that readers can't handle the raw material. From their perspective, making it easy to read primary sources is pure downside:
• Most readers don't care/have time.
• Of the tiny number who do, the chances of them finding a mistake in your reporting or in the primary source is high.
• Not linking makes it easier to misrepresent the source to bolster the story.
Eliminating links to sources is pure win: people care a lot about mistakes but not about finding them, so raising the bar for the few who do is ideal.
Faster in basically every dimension. Supports far more of the specs than FF. Way more efficient on battery. Better-feeling scroll, better UI.
Where do you think LLMs learned this behavior from? Go spend time in the academic literature outside of computer science and you will find an endless sea of material with BS citations that don't substantiate the claim being made, entirely made up claims with no evidence, citations of retracted papers, nonsensical numbers etc. And that's when papers take months to write and have numerous coauthors, peer reviewers and editors involved (theoretically).
Now read some newspapers or magazines and it's the same except the citations are gone.
If an LLM can meet that same level of performance in a few seconds, it's objectively impressive unless you compare to a theoretical ideal.
Oh you sweet Summer child :-)
The worst is with criminal cases where they can't even be burdened to write what the actual charges are. It's just some vague 'crime' and the charges aren't even summarized - they're just ignored.
I'm not interested in dissecting specific examples because it's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accept sloppy, incorrect answers as high-quality factual answers.
Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them, after all we didn't always have high-quality information at our fingertips.
The only thing that worries me is how really smart people can consume this slop and somehow believe it to be high-quality information, and present it as such to other impressionable people.
Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.
I still don't think a company with at least one touch point on such a high percentage of web usage should be allowed to have one of 2 mobile OSs that control that market, the most popular browser, the most popular search engine, the top video site (that's also a massive social network), and a huge business placing ads on 3rd party sites.
Any two of these should be cause for concern, but we are well beyond the point that Google’s continued existence as a single entity is hugely problematic.
https://freedium.cfd/https://vinithavn.medium.com/from-multi...
At its core, attention operates through three fundamental components — queries, keys, and values — that work together with attention scores to create a flexible, context-aware vector representation.
Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.
Key (K): Keys are vectors that represent the elements in the context against which the query is compared, to determine the relevance.
Attention Scores: These are computed using Query and Key vectors to determine the amount of attention to be paid to each context token.
Value (V): Values are the vectors that represent the actual contextual information. After calculating the attention scores using the Query and Key vectors, these scores are applied against the Value vectors to get the final context vector.
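To make that concrete, here is a minimal sketch of scaled dot-product attention in Python/numpy - my own illustration of the Q/K/V description above, not code from the linked article:

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Q: (n_queries, d), K: (n_context, d), V: (n_context, d_v)
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)        # attention scores: each query against every key
        weights = softmax(scores, axis=-1)   # how much attention each context token gets
        return weights @ V                   # weighted sum of value vectors -> context vectors

    # toy usage: 2 query tokens attending over 4 context tokens, dimension 8
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((n, 8)) for n in (2, 4, 4))
    print(attention(Q, K, V).shape)  # (2, 8)

The softmax turns the raw query-key similarities into weights that sum to 1 over the context, which is what makes the output a proper weighted blend of the value vectors.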
And the reporter would rather you hear it second hand from them :)
I agree, online "journalists" are absolutely terrible at linking to sources. You'll have articles which literally just cover a video (a filmed press conference, a YouTube video, whatever) that's freely available online and then fail to link to said video.
I don't know what they're teaching at journalistic ethics courses these days. "Provide sources where possible" sounds like it should be like rule 1, yet it never happens.
They are both terrible in terms of correctness compared to duckduckgo->stackoverflow.
As an example, DeepSeek makes stuff up if I ask what syscall to use for deleting directories, and it really misleads me in a convincing way. If I search instead, I end up in the man page and can eventually figure it out after 2-3 minutes.
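For reference, the man-page answer is rmdir(2) for an empty directory (or unlinkat(2) with AT_REMOVEDIR); a quick Python sketch showing the wrappers that map onto those calls:

    import os, shutil, tempfile

    base = tempfile.mkdtemp()                  # scratch directory for the demo
    os.mkdir(os.path.join(base, "empty"))
    os.rmdir(os.path.join(base, "empty"))      # wraps rmdir(2); only works on an empty directory

    os.makedirs(os.path.join(base, "tree", "sub"))
    shutil.rmtree(base)                        # walks the tree: unlink(2) for files, rmdir(2) for directories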
Are you saying that 'til now, Apple/Firefox _only_ took money for search default from Google due to the wording of the contract? In future, all the search vendors can pay all the browser makers for a position on a list of defaults?
Vetting things is very likely harder than doing the thing correctly.
Especially when the thing you are vetting is designed to look correct more than to actually be correct.
You can picture a physics class where teacher gives a trick problem/solution and 95% of class doesn’t realize until the teacher walks back and explains it.
Yes, I disagree. If we can't have Google without monopolism then we should have neither. Treating Google as essential in this situation is like a druggie saying he "needs" his next hit. People only "need" Google because Google has used its monopoly position to try to make people addicted to it. It should never have been allowed to happen in the first place, the company should have been broken up 10+ years ago, and it's only getting worse. It would be better to destroy it entirely (along with many other such large companies) than to keep it with its disproportionate power.
AI isn't competition for Google, AI is technology. Not only is Google using AI themselves, they are pretty damn near the top of the AI game.
It's also questionable how this is relevant for past crimes of Google. It's completely hypothetical speculation about the future. Could an AI company rise and dethrone classic Google? Yeah. Could Google themselves be the AI company that does it? Probably, especially when they can continue to abuse their monopoly across multiple fields.
There is also the issue that current AI companies are still just bleeding money, none of them have figured out how to make money.
The reason why I resort to AI is to find out alternative solutions quickly. But quite honestly, it's more of a problem with SO moderation. People are willing to answer even stale, actual/mistaken duplicate or slightly/seemingly irrelevant questions with good quality solutions and alternatives. But I always felt that their moderation dissuaded the contributors from it.
Meanwhile, the first reason why I always double check the AI results is because they hallucinate way too much. They fake completely believable answers far too often. The second reason is that AI often neglects interesting/relevant extra information that humans always recognize as important. This is very evident if you read elaborate SO answers or official documentation like MDN, docs.rs or archwiki. One particular example for this is the XY-problem. People seem to make similar mistaken assumptions and SO answers are very good at catching those. Recipe-book/cookbook documentation also address these situations well. Human generated content (even static or archived ones) seem to anticipate/catch and address human misconceptions and confusions much better than AI.
While that's concerning, my own experience in seeking information using this approach has been positive: it provides a fast, fully customised answer that easily outweighs the mistakes it makes. This flattens the learning curve on a new subject and with that saved time I am able to confirm important details to weed out the mistakes/hallucinations. Whereas with Googling I'd be reading technical documentation, blog posts and whatever else I could find, and -crucially- I'd still need to be confirming the important details because that step was never optional. Another plus is that I'm now not subjected to low quality ai-generated blog spam when seeking information.
I foresee Google search losing relevance rapidly, chatbots are the path of least resistance and "good enough" for most tasks, but I also am aware that Google's surveillance-based data collection will continue to be fruitful for them regardless if I use Google search or not.
Monopolies usually don’t maintain their status because they are “the best” and as if consumers also are informed enough to know that. Similar arguments can be made to allow basically all of big tech.
In a better system instead of talking about allowing everything as if corporations are precious individuals, the govt should be creating funded competitors and being much more firm with monopolistic behavior (even if they aren’t legally monopolies)
https://www.browserating.com/ doesn't put it in top5 on any non-ios platform?
I would like to see the OpenOffice equivalent of a web browser just for the fun of it.
That is what I would expect in "reality".
One would assume the appeal is over the data-sharing requirements, which does feel a little bit like sharing the secret sauce with competitors.
Similarly with Safari 17 on macOS.
https://www.404media.co/this-stunning-image-of-the-sun-could...
I honestly have never seen a Chrome dev tools feature that was better or necessary for good web development that Firefox didn’t already have in the last 15 years. Yet I always see this bizarre sentiment of how the dev tools were better “just because”.
And there's also the idea that you should be able to at least somewhat trust the people reporting the news so they don't have to provide all of their references. You can certainly argue that not all reporters can or should be trusted anymore, but convincing all journalists to change how they work because of bad ones is always going to be hard.
Have you asked them why? I'd be willing to bet that it's because of vendor lock-in if you boil down to it. Lots of things only work on Chrome. Video calls are especially prevalent right now, but there's a bunch of bot detection shit that only works on Chrome too.
A documentation for a specific product I expect to be mostly right, but maybe miss the required detail.
Some blog, by some author I haven't heard about I trust less.
Some third party sites I give some trust, some less.
AI is a mixed bag, while always implying authority on the subject. (While becoming submissive when corrected)
if like me you didn't know what this was referring to, here's some context: https://judaism.meta.stackexchange.com/questions/5193/stack-...
I mean all I know about what you're saying here, is that you have some kind of secret fake facts in your brain or something, sorry that must drive you nuts
Do you have a few examples? I'm curious because I have a very sensitive BS detector. In fact, just about anyone asking for examples, like the GP, has a sensitive BS detector.
I want to compare the complexity of my questions to the complexity of yours. Here's my most recent one, the answer to which I am fully capable of determining the level of BS:
I want to parse markdown into a structure. Leaving aside the actual structure for now, give me an exhaustive list of markdown syntax that I would need to parse.
It gave me a very large list, pointing out CommonMark-specific stuff, etc. I responded with:
I am seeing some problems here with the parsing: 1. Newlines are significant in some places but not others. 2. There are some ambiguities (for example, nested lists which may result in more than four spaces at the deepest level can be interpreted as either nested lists or a code block) 3. Autolinks are also ambiguous - how can we know that the tag is an autolink and not HTML which must be passed through? There are more issues. Please expand on how they must be resolved. How do current parsers resolve the issues?
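To make the ambiguity in point 2 concrete, here is a toy block classifier - my own sketch, nowhere near CommonMark-conformant - showing why the same four leading spaces can mean two different things depending on context:

    import re

    def classify(line, inside_list=False):
        if not line.strip():
            return "blank"
        if line.startswith("    ") and not inside_list:
            return "indented-code"   # at top level, 4 leading spaces start a code block
        if re.match(r"\s*([-*+]|\d+\.)\s+", line):
            return "list-item"       # inside a list, the same indentation reads as a nested item
        if re.match(r"#{1,6}\s", line):
            return "heading"
        return "paragraph"

    print(classify("    x = 1"))                            # indented-code
    print(classify("    - deeper item", inside_list=True))  # list-item

Real parsers resolve this by tracking the currently open container blocks and their indentation, which is exactly the state a flat line-by-line pass can't capture.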
Right. I've shown you mine. Now you show yours.
The other browsers have picked up the partitioning since then as a feature, so the playing field is far more level.
Been researching waterproofing techniques in my area. Asked ChatGPT about products in my region. It gladly mentioned some and provided links to shops. Found out I need to prep the foundation with product X. One shop had only Y available; from the description it felt similar.
Asked about the differences between the products. It provided a summary table that made it crystal clear that one is more of a finishing product while the other is more structural and can also be used as a finish. It provided links to datasheets that confirm the information.
I could ask about alternative products and it listed some, etc. Great when I need to research an unknown field, and it has links... that is the good part :)
... but that only got people in the door. What kept them in the door was this image: https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg...
... or, rather, the world-changing technology underpinning that image: the ability to sandbox individual page rendering instances into subprocesses so that a failure on one page didn't crash the entire browser. I think people sometimes forget how fundamentally unstable browsers were in 2008, and how easy it was to trip over one bad page that would bring down your bank tab, your email tab, your document tab, the three tabs of source code you had open, the seven tabs of unread blog posts... Hugely disruptive. Just didn't happen in Chrome.
Firefox popularized tabs, Chrome let us have a hundred of them open.
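A toy sketch of the idea - mine, not Chrome's actual architecture - rendering each page in its own OS process so one bad page only takes down its own renderer:

    from multiprocessing import Process

    def render_page(url):
        if "bad" in url:
            raise RuntimeError(f"renderer for {url} crashed")  # stand-in for a page that kills its renderer
        print(f"rendered {url}")

    if __name__ == "__main__":
        urls = ["https://bank.example", "https://bad.example", "https://mail.example"]
        tabs = [Process(target=render_page, args=(u,)) for u in urls]
        for t in tabs:
            t.start()
        for t, u in zip(tabs, urls):
            t.join()
            print(f"{u} exit code: {t.exitcode}")  # the bad tab dies with a nonzero code; the rest survive

Chrome's real renderer processes are of course far more involved (sandboxing, IPC, site isolation), but the crash-containment payoff has the same shape.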
This isn't unheard of for communications technology. Postal service in England was exclusively a Crown privilege, then the monarchy realized there were benefits to the Empire if everyone could use the system, and that was such a good idea that when the US Constitution was written it asserted the government had to provide a postal service. There is past precedent for government-overseen private enterprise in the US.
There are buses in Stockholm, and buses in Thessaloniki, and buses manufactured in Sweden, and buses previously used in Stockholm that are now in operation in Thessaloniki. And one LLM took one path through the question, answering it correctly and completely. And the other took a different one[1]. As it happened, your (poorly phrased) intended question was answered by one and not the other.
If I ask the same question with a more careful phrasing that (I think!) matches what you wanted to know: "Where did the buses used in Thessaloniki come from originally?"
...I get correct and clear answers from both. But the Google result also has the Wikipedia page for the transit operator and its own web page immediately to the right.
Again, cherry picking notwithstanding I think in general the integrated experience of "I need an AI to help me with this problem" works much better at google.com, it just does.
[1] It's worth pointing out that the result actually told you that your question didn't make sense, and why. I suspect you think this was a bug since the other LLM guessed instead, but it smells like a feature to me.
The only part of that story Google impacts is that their clever way to index results is by mining traffic on their site and the larger web to decide based on user behavior whether their search query was satisfied. Since users can't spend active time on two sites at once, Google consumes a finite resource there. But do we believe Google's behavior tracking really is the final, best form that indexing can possibly take?
I have to assume the journalist writing such an article knows that they are misrepresenting the research to make a broader point they want to make.
Think of programming languages as you currently think of CPU ISAs. We only need so many of those. And at this point, machine-instruction architecture has diverged so far from traditional ISAs that it no longer gets called that. Instead of x86 and ARM and RISC-V we talk about PTX and SASS and RDNA. Or rather, hardly anyone talks about them, because the interesting stuff happens at a higher level of abstraction.
I just don't think Mozilla have spare money to film a nice commercial...
Or closing a general question because in the opinion of Someone Important, it runs afoul of some poorly-defined rule regarding product recommendations.
A StackOverflow that wasn't run like a stereotypical HOA would be very useful. The goal should be to complement AI rather than compete with it.
Which is exactly my point. A bunch of people doing that to conform with the shibboleth identity of the phone in their pocket and then posting strong opinions about the product they don't (or at least claim not to) use is an echo chamber and not a discussion. You only get the upvotes in these threads if you conform.
HN is supposed to be better than that.
The absence of a clear objective boundary of what can be taken and what cannot.
And without such a boundary, such a practice could be quite widespread, with the poorest and smallest actors being the first to be subjected to it, simply because it is easier to take from them and they do not have sufficient influence on the distributing bodies. This is like theory of building socialism 101
Reputable news organizations are more robust against such pressures, but plenty of people get their news from (in some cases self-described) entertainment sites masquerading as news sites.
That’s painting with an overly broad brush. Nevertheless, the news was relying on ads long before most people knew the word “Internet”, but there were far fewer channels to place ads back then, so in some respects news and media organizations had a captive audience in advertisers.
Mass adoption of television was effectively made possible because of advertising money.
If Bing's market share is 4%, then Bing still gets tons of user interaction data.
Bing gets something like 100M queries per day. That's more than enough data.
Microsoft has all the choice. They have the money. They can invest, like Google has invested and continues to invest. You don't get these things for free.
So yes, it is absolutely a choice. A small startup may be disadvantaged. But not Microsoft with over 15 years of user data from Bing.
If ChatGPT really is stealing significant market share from Google, that's EVEN MORE of a reason to break up Google's monopoly. It's further proof that they are a monopoly, because one of the smoking guns of a monopoly is inability to innovate. Why should the market be held back by allowing Google to stay in the game when they're clearly not competitive? Their anti competitive behavior is preventing other, more innovative companies from being created.
> You're mistakenly assuming that Google has a lot of power, when in fact, they had none. People were using it because it was the superior product at the time.
I can't tell if this is a troll or not. I'm going to reply anyways, even though it probably is.
Google had, and currently has, all the power. The only 'lever' they don't have is control over internet access (although they do offer ISP services in some regions). I don't know what you mean by saying otherwise. By owning Search, Chrome, Android, and their ad network they have a closed loop. Building a good search engine requires data; Search/Chrome/Android provide that data, and Chrome/Android plus their illegal search deals funnel users into Search, which provides more data. That's why even this shitty half-assed remedy is requiring them to share some search data with rivals.
Someone here has religion, and it’s not me. I don’t use Google search because it’s a terrible product and we finally have other options. As for AI, there are dozens of options, and it does not take many examples to see how bad Ai Overview is. Gemini 2.5 Pro, however, is in my tool belt.
I know. But you're posting confidently (along with a ton of other people) in a subthread about Google search anyway, making statements about its behavior which you straight up admit to be unqualified to make. And I'm calling out the disconnect, because someone has to.
No one in an echo chamber thinks they're in an echo chamber. This is 100% an echo chamber.
I suffered through Google search's decline for the last 15 years along with everyone else. I land on it often enough still to see that the trend is not changing.
Btw, I would not trust an LLM to tell me how to build a suspension bridge. First, I'm unfamiliar with that space. Second, even if I was familiar, the stakes are, as you say, so high that it would be insane to trust something so complex without expert sign off. The post I'm specifically talking about? Near-zero stakes and near-zero risk.
<stepping on the soapbox> I beg folks to always try and pierce the veil of complexity. Some things are complex and require very specialized training and guardrails. But other complexity is fabricated. There are entrenched interests who want you to feel like you can't do certain things. They're not malicious, but they sometimes exist to make or protect money. There are entire industries propped up by trade groups that are there to make it seem like some things are too complex to be done by laypeople, who have lobbied legislators for regulations that keep folks like you from tackling them. And if your knee-jerk reply is that I'm some kind of conspiracy theorist or anarchist all I'm saying is it's a spectrum. Suspension bridge with traffic driving over it --> should double, triple, quadruple check with professional(s); a post in a house supporting the entire house's load (exaggeration for effect) --> get a single professional to sign off; a post in a house that's supporting a single floor joist with minimal live and dead load (my case!) --> use an LLM to help you DIY the "engineering" to get to good enough (good enough = high margin for error); replace a light switch --> DIY YouTube video.
I am the king of long-winded HN posts. Obviously the time I took to write this (look, ma, no LLM!) is asymmetric with what you wrote, but I'm genuinely wondering if any of this makes you think differently. If not, that's cool of course (and great for the engineers and permit issuers!).
The reason you hire a structural engineer is because they do - and they are on the hook if it goes wrong. Which is also why they have to stamp drawings, etc.
Because the next person who owns the house should have some idea who was screwing with the structure of it.
You might be 100% on top of it - in which case that structural engineer should have no problem stamping your calcs eh?
The only other thing I'll add is the ideal vs. the reality. What percent of structural projects done to single-family construction, in particular, do you think is done by engineers? I would guess it's far less than 50%. That's based on my own experience working in the industry, which I know you won't trust (why would you? Random internet guy after all). But for conversation's sake, suffice it to say that I believe every time you walk into a house that's several decades old or older, you're likely walking into a place that has been manipulated structurally without an engineer's stamp. And the vast majority of the time (99%+) it's perfectly safe to be in that space.
Everyone thinks they are the exception. Occasionally, one of them is even right, eh?
And just to clarify I don't think I'm the exception. I was actually making the opposite argument. Almost anyone can and should attempt to deconstruct complexity because doing things is not always as difficult as it would seem (or as difficult as we've been told).
Appreciate the dialogue, lazide!
So they rarely are forced to do anything but state the name of who they interviewed, and that's it. And it puts them in the habit of not acknowledging what they read as a source?
I don't understand why this is an obstacle - this issue already exists with writing laws and various countries have different solutions, all of which seem to be working kinda ok. There's the USA's constitution which isn't working so well in most cases but working great in others (free speech for example, though this is now failing), whereas other countries depend on histories of case law for example (UK).
It seems to me that if a government specifically sought to target the largest and richest actors it could avoid the issue you're speaking of. Of course this would require removing the ability of capital to influence politics, maybe that's the issue you mean?
They also devolved into a work friendly variant of 4Chan's /g/ board. "Work friendly" as in nothing obviously obscene, but the overall tone and hostility towards newcomers is still there (among other things).
Is it? I'm no expert, but while it may seem like enough in absolute numbers, I can imagine more nuanced criteria like diversity of queries, results and users may matter more than raw volume, and that kind of signal only comes from truly ubiquitous data collection (like, say, being the default on the most popular browser AND on the duopoly of mobile platforms -- which, by the way, also collect tons of location data about the users that is not available to anybody else.)
But as I'm no expert, all I can do is look at the circumstantial evidence:
1. Google paid about 54.9 BILLION in 2024 -- "traffic acquisition costs" or "TAC": https://abc.xyz/assets/77/51/9841ad5c4fbe85b4440c47a4df8d/go... -- to hold on to that traffic and data. The 20 billion going to Apple gets a lot of airtime but it does not tell the whole story.
2. As somebody else said in this thread, MSFT did burn billions on their attempt at smartphones and other markets, so it's not like they're afraid to pour money into big, risky bets.
That really tells us a lot about the realities of the search market.
A solid example of this right now is all of the Mullvad VPN ads I've seen on the Seattle Light Rail lately. Google used to have ads everywhere for Chrome. The only time I saw Firefox stuff was the rare t-shirt at a tech conference.
Entirely their fault, tbh. Mozilla's C suite has knowingly enriched themselves off this money for over 15 years now. If they were serious about surviving, they would have found alternative funding sources a long time ago.
Firefox isn't a true project. It's Google paying off someone to make Chrome appear not to be a monopoly at first glance.
Maybe it's not enough data to make search results 100% as good as Google's, but enough to make them 95% as good? And everyone on HN complains about the quality of Google results, so surely there are algorithmic opportunities for MS to do better, right?
All we know is that Microsoft has decided not to compete seriously in search, but compete at a minimal level. There are a hundred different strategic reasons why they might have chosen this. But this in no way indicates it would be somehow impossible for Microsoft to compete there if they wanted. They could spend the tens of billions in traffic acquisition just like Google does. The fact that they aren't doesn't mean they couldn't.
There are no "realities about the search market" that mean Microsoft could never become a serious competitor. Your unfunded startup can't, but Microsoft could. Microsoft has all the data and money required.
They've just chosen not to, the same way Apple has chosen not to enter AAA gaming, Google hasn't entered general-purpose desktop operating systems, and Amazon hasn't entered VR headsets.
It isn’t due to ‘complexity’ either - rather indifference, laziness, or just plain stupidity.
I’ve seen people almost burn down their places multiple times - and at least one family actually die from an electrical fire. Also, partial building collapses.
The reason you don’t see it more often is because people generally don’t actually try.
I don't quite understand what you mean.
The great advantage of the American constitution in terms of freedom of speech is that it sets a relatively clear boundary. And it is obvious that in this regard the constitution copes with its task perfectly: freedom of speech in the USA is currently protected better than in any other country.
It is so well protected that Americans were able to elect Trump as their leader, despite the fact that more than 80 percent of the mainstream media openly opposed him, and the government tried to shut the mouths of all his supporters under the guise of fighting dis- and misinformation (regardless of how we feel about his personality and presidency).
So if we look at freedom of speech in the current US on a historical scale, we see exactly the opposite of what you're saying: we see how freedom of speech in the US has once again stood firm despite the strongest opposition.
> Of course this would require removing the ability of capital to influence politics
You describe it as if it is something ordinary, not something catastrophic. Just to understand, if the government gets enough power to deprive capital of ability to influence politics - we get Nazi Germany or Russia. In the best case. At worst - the USSR, North Korea or Kampuchea
MSFT has spent 100 billion over the years on Bing.
And Google spent more than half of that amount in 2024 alone on traffic acquisition costs. Which it can afford to do because it has a monopoly. Which it has because it produces better results. Which it does because they have all the data. Which they ensure only they can get because they pay for it (and the cycle repeats.)
For comparison, Microsoft's cash reserves are 96 billion. So their choice really was to spend more than half of their cash reserves in one year just to compete only on traffic acquisition costs in the distant hope of getting enough data to break that cycle. Which is likely still not enough data because they don't have the browser monopoly or mobile presence to harvest user data on an industrial scale.
So, no: Microsoft does not have all the money or data required.
I would say the more realistic story is that Microsoft knew, as proven by the trial, that the deck was anti-competitively stacked against them (and who would know better than Microsoft?) and simply did the best they could to compete to the point of positive ROI.
If Microsoft spent those billions, it would be receiving ad revenue too.
It's not money thrown down the drain. And it's not for "data".
There's nothing anticompetitive here when it comes to Microsoft choosing whether or not to enter the market.
It's not about acquiring some magic level of data. A startup doesn't have data. Microsoft does. It's not an issue.
I don't know every country so I'm not sure if this is true, but it seems to me free speech was decently well protected up to a certain point and so long as you didn't threaten American hegemony. For example there was a long era where you were able to be jailed for being a communist or speaking out against American wars. Or often speech as protest, such as during the civil rights era, was violently put down.
Aesthetically Americans seem to enjoy decent free speech but only so long as it doesn't meaningfully challenge the government. Protests are almost always violently suppressed in America it seems.
Recently the Americans' free speech rights seem to be degrading even further with media being ejected from the press room or sued by the president. Not to mention the chilling effect of calls by prominent politicians to do violence (typically deportation) to various dissidents such as anti Israeli voices.
Other countries elect unpopular politicians, that's not really unique. The American right to call for violence or use slurs against minorities is I suppose unique, I'm not sure why someone would be proud that that right remains unsullied when the bits of free speech that actually matter are being stripped away but so it goes.
> Just to understand, if the government gets enough power to deprive capital of ability to influence politics - we get Nazi Germany or Russia. In the best case. At worst - the USSR, North Korea or Kampuchea
I find this very interesting because you're the first person I've met to openly defend corruption, or the American word for it, lobbying. Most neoliberals want to "keep the good parts of capitalism" but argue that money shouldn't be able to influence politics. Or maybe you draw the line somewhere between corruption and not corruption, when discussing money influencing politics? If so where's that line for you?
The PRC for a while had virtually 0 influence of capital against their government and now they're the second most powerful country on earth - arguably the most powerful, if we compare the ability of either executive leader to control the military (the parade comparison is... embarrassing to say the least). Of course capital still has some influence in the PRC but seems to be not as much as the USA given the PRC will happily nationalize things to this day, or chuck billionaires it doesn't like in prison.
Taiwan seems to have less corruption than the USA. The KMT are obscenely wealthy and yet still struggle to get their policy through, and have had a couple of their media stations pulled off air for corruption.
The EU seems to often act against the interests of capital, as well as member nations to a certain degree. I'd be surprised if you denied this since capitalists often use this as evidence FOR the superiority of capitalism against socialism, since America's gdp is so high and businesses prefer to incorporate there.
So it seems to me that Nazi Germany, Russia, USSR, North Korea are more political failures than economic ones. The Soviet Union after all did industrialize the entire empire and was the only serious challenger to American hegemony for decades. Not that I'm a fan but it was hardly a failure until it dissolved - a fate which may befall the United States after all.
Again, the court specifically called this out as a key pillar underpinning Google's monopoly, and this is why the proposed remedies, such as they may be, are all around sharing search and user interaction data.
I'm not sure on what empirical basis you keep asserting that Microsoft has the necessary money and data, but the court's findings, based on tons of evidence, indicate otherwise.
Meanwhile they're cutting down on devs, killing products like Pocket and Fakespot, ignoring user feedback, driving strange and off-putting community engagement, and introducing eye candy BS nobody asked for.
In short, they appear to be doing anything but advancing the brand and actually, you know, competing in the browser market. Note that I'm not shitting on the poor devs, I still think they are delivering a great core product despite it all. But market shares and even absolute user counts keep dwindling. What is management doing about that?
And all this would seem like a case of simple mismanagement, if one didn't reflect on the fact that the overwhelming majority of their income comes from Google. The way they're behaving is suspiciously convenient for the entity that is their main revenue source. One could reasonably suspect they serve primarily as an antitrust litigation sponge.