https://arxiv.org/abs/2501.00663
https://arxiv.org/pdf/2504.13173
Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.
The current Meta outlook is embarrassing tbh; the fact that they have the largest social media dataset on the planet and still can't produce a decent model is quite a "scary" position.
Here is a bit more information about this program: https://www.google.com/about/careers/applications/jobs/resul...
It's not impossible that they assess it as a local maximum / dead end and are evaluating/training something completely different - and if it works, it'll work big time.
Sinking a bazillion dollars into models alone doesn’t get you shit except a gold star for being the valley’s biggest smartypants, because in the product world, model improvements only significantly improve all-purpose chatbots. The whole veg-o-matic “step right up folks, it slices, it dices, it makes julienne fries!” approach to product design almost never yields something focused enough to be an automatic go-to for specific tasks, or simple/reliable enough to be a general-purpose tool for a whole category of tasks. Once the novelty wears off, people largely abandon it for more focused tools that more effectively solve specific problems (e.g. a blender, a vegetable peeler) or simpler everyday tools that you don’t have to think about as much, even if they might not be the most efficient tool for half your tasks (e.g. a paring knife). Professionals might have enough need and reason to go for a really great in-between tool (e.g. a mandoline), but that’s a different market, and you only tend to get a limited set of prosumers outside of it. Companies more focused on specific products, like coding, will have way more longevity than companies that try to be everything to everyone.
Meta, Google, Microsoft, and even Apple have more pressure to make products that sanely fit into their existing product lines. While that seems like a handicap if you’re looking at it from the “AI company” perspective, I predict the restriction will enforce the discipline to create tools that solve specific problems for people rather than spending exorbitant sums making benchmark go up in pursuit of some nebulous information revolution.
Meta seems to have a much tougher job trying to make tools that people trust them to be good at. Most of the highest-visibility things like the AI Instagram accounts were disasters. Nobody thinks of Meta as a serious, general-purpose business ecosystem, and privacy-wise, I trust them even less than Google and Microsoft: there’s no way I’m trusting them with my work code bases. I think the smart move by Meta would be to ditch the sunk costs worries, stop burning money on this, focus on their core products (and new ones that fit their expertise) and design these LLM features in when they’ll actually be useful to users. Microsoft and Google both have existing tools that they’ve already bolstered with these features, and have a lot of room within their areas of expertise to develop more.
Who knows (I’m no expert), but I think Meta would be smart to try to opt out as much as possible without making too many waves.
80% of the ecosystem is built on top of companies, groups, and individuals publishing their research openly; not sure why Google would get more credit for this than others...
AI is a bit different.
Recently, my favorite from them was Lumine: https://arxiv.org/abs/2511.08892
Here's their official page: https://seed.bytedance.com/en/research
In other words, it's dangerous to judge the value of this idea by the absence of public implementations.
I know, I know, Elon is crazy etc., but the Grok example, and the way it's integrated with the core product, is actually the only approach I can even come up with tbh (other than the character.ai flavor).
It's very likely no one is using this architecture at Google for any production workloads. There are a lot of student researchers doing fun proof-of-concept papers; they're allowed to publish because it's good PR and it's good for their careers.
Given the competitive nature of the AI race, it's hard to believe any of these companies are really trying to help the competition.
The 2nd-tier winner is Amazon, for the same reasons: they can leverage AI with both Amazon Retail and AWS, where they can sell shovels. I’ve also found their internal Nova models to be pretty good for my projects.
Microsoft will be okay because of Azure and maybe Office if they get their AI story right.
I just don’t see any world where OpenAI comes out ahead from a business standpoint as long as they are sharecroppers on other people’s hardware. ChatGPT alone will never make it worth the trillion-dollar capitalization long term unless it becomes a meme stock like Tesla.
You don't necessarily have to prove it out on large foundation models first. Can it beat a 32B-parameter model, for example?
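For what it's worth, that kind of small-scale comparison is cheap to run. Here's a minimal sketch using EleutherAI's lm-evaluation-harness; the prototype checkpoint name is a placeholder for whatever you trained, and the baseline and task list are just examples:

    # Hypothetical sketch: score a small prototype of a new architecture
    # against an established ~32B baseline on a couple of public benchmarks.
    # "my-lab/prototype-7b" is a placeholder, not a real checkpoint.
    import lm_eval

    for name in ["my-lab/prototype-7b", "Qwen/Qwen2.5-32B"]:
        results = lm_eval.simple_evaluate(
            model="hf",
            model_args=f"pretrained={name}",
            tasks=["gsm8k", "mmlu"],
            batch_size=8,
        )
        for task, metrics in results["results"].items():
            print(name, task, metrics)

If the new architecture can't at least close the gap at that scale, it's hard to justify the big training run.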
b is mostly not true but c is especially not true. I doubt they do it because it wouldn't work; it's not high quality data.
But it would also obviously leak a lot of personal info, and that's what really gets you in trouble. Meta and Google are able to serve you ads with your personal info /because they don't leak it/.
(Also data privacy laws forbid it anyway, because you can't use personal info for new uses not previously agreed to.)
Most research coming out of big US labs is counter-indicative of practical performance. If it worked (too) well in practice, it wouldn't have been published.
Some examples from DeepSeek:
While they do have lots of money and many people, they don't have infinite money and specifically only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large scale experiment is likely enough to yield a big enough advantage over what's already claiming those resources.
We post a lot of research on mlscaling sub if you want to look back through them.
So I think they'd default to doing it with small demonstrators.
(1) Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek
(2) DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method
(3) Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...
Here's an umbrella doc from the USTR, and the good stuff:

1. China used foreign ownership restrictions, such as joint venture (JV) requirements and foreign equity limitations, and various administrative review and licensing processes, to require or pressure technology transfer from U.S. companies.

2. China’s regime of technology regulations forced U.S. companies seeking to license technologies to Chinese entities to do so on non-market-based terms that favor Chinese recipients.

3. China directed and unfairly facilitated the systematic investment in, and acquisition of, U.S. companies and assets by Chinese companies to obtain cutting-edge technologies and IP and generate the transfer of technology to Chinese companies.

4. China conducted and supported unauthorized intrusions into, and theft from, the computer networks of U.S. companies to access their IP, including trade secrets, and confidential business information.
As mentioned - no one has claimed that DeepSeek in its entirety was stolen from the U.S.
It is almost a certainty, based on decades of historical precedent of systematic theft, that techniques, research, and other IP were also systematically stolen for this critical technology.
Don't close your eyes when the evidence, both rigorously proven and common sense, is staring you in the face.
...and of course the completely insane fact that China has been running on-the-ground operations in the US (and other countries) to discredit, harass, blackmail, and kidnap Chinese nationals who are critical of the government (https://www.npr.org/2020/10/28/928684913/china-runs-illegal-... and https://www.justice.gov/archives/opa/pr/eight-individuals-ch...) - INCLUDING CITIZENS OF OTHER COUNTRIES (https://www.smh.com.au/world/asia/detained-blogger-revealed-...).
Is that supposed to be a long time? Seems fair that companies don't rush to open up their models.
No, your comment seems to be a deflection. You made an extraordinary claim, that DS stole some IP, and have been asked for extraordinary evidence, or at least some evidence. You need to provide it if you want to be taken seriously.
>Large-scale exfiltration of data from ChatGPT when DeepSeek was being developed, and which Microsoft linked to DeepSeek
Where's the evidence for that? I also have a claim that I can't back up with anything more than XLab's report: before the release of R1, there were multiple attempts to hack DS's systems, which nobody noticed. [1]
You really seem to have no idea what you're talking about. R1 was an experiment in teaching the model to reason on its own, precisely to avoid needing large amounts of post-training data. It also partially failed; they called the failed snapshot R1-Zero. And it's pretty different from any OpenAI or Anthropic model.
>DeepSeek's claim of training a cutting-edge LLM using a fraction of the compute that is typically needed, without providing a plausible, reproducible method
DeepSeek published a lot more about their models than any top-tier US lab before them, including their production code. And they're continuing to do so. All their findings in R1 are highly plausible, and most have been replicated to some degree and adopted across research and industry. Moonshot AI trained their K2 on DeepSeek's architecture with minor tweaks (not to diminish their novel findings). That's a really solid model.
Moreover, they released their DeepSeek-Math-7B-RL back in April 2024. [2] It was a tiny model that outperformed huge then-SOTA LLMs like Claude 3 Opus in math, and validated their training technique (GRPO). Basically, they made the first reasoning model worth talking about. Their other optimizations (MLA) can be traced back to DeepSeek-V2.
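For the curious, the core of GRPO is simple enough to sketch: instead of a learned critic, the advantage of each sampled completion is its reward normalized against the other completions drawn for the same prompt. A toy illustration of that one idea (not DeepSeek's actual code):

    import numpy as np

    def group_relative_advantages(rewards, eps=1e-8):
        # GRPO-style advantage: sample a group of completions per prompt,
        # then normalize each completion's reward by the group mean/std.
        # This replaces the learned value model used in PPO-style RLHF.
        r = np.asarray(rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + eps)

    # e.g. 8 completions for one math prompt, reward 1 if the answer checks out
    print(group_relative_advantages([1, 0, 0, 1, 1, 0, 0, 0]))

Dropping the critic is a big part of what made the recipe so cheap to scale.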
>Early DeepSeek coming up with near-identical answers to ChatGPT--e.g. https://www.reddit.com/r/ChatGPT/comments/1idqi7p/deepseek_a...
That's n=1 nonsense, not evidence. GPT contamination was everywhere; even Claude used to occasionally claim to be GPT-3, or the Reddit Anti-Evil Team (yes, really). All models have overlapping datasets that are also contaminated with previous models' outputs, and mode collapse makes them converge on similar patterns which seem to come and go with each generation.
The name is just topical, although it says something about 2025 that we can't tell!
This is not the same thing at all. Current legal doctrine is that ChatGPT output is not copyrightable, so at most DeepSeek violated the terms of use of ChatGPT.
That isn't IP theft.
To add to that example, there are numerous open-source datasets derived from ChatGPT data. Famously, the Alpaca dataset kick-started the open-source LLM movement by fine-tuning Llama on a GPT-derived dataset: https://huggingface.co/datasets/tatsu-lab/alpaca
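And the recipe really is that small. Here's a minimal sketch of Alpaca-style supervised fine-tuning with Hugging Face's trl; the base model name and hyperparameters are illustrative, not Stanford's original setup:

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    ds = load_dataset("tatsu-lab/alpaca", split="train")

    def to_text(ex):
        # Flatten each (instruction, input, output) record into one
        # Alpaca-style prompt/response string for supervised fine-tuning.
        prompt = ex["instruction"] + ("\n" + ex["input"] if ex["input"] else "")
        return {"text": f"### Instruction:\n{prompt}\n\n### Response:\n{ex['output']}"}

    trainer = SFTTrainer(
        model="meta-llama/Llama-2-7b-hf",  # illustrative; original Alpaca used LLaMA-7B
        train_dataset=ds.map(to_text),
        args=SFTConfig(output_dir="alpaca-sft", max_seq_length=512,
                       dataset_text_field="text"),
    )
    trainer.train()

Roughly 52k GPT-generated examples and a modest fine-tuning run got the whole open-source instruct-tuning wave started.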
"some elements of the indictment concern cyber-snooping in connection with trade disputes, which at least sounds a lot like the kind of cyber-snooping on firms that the United States does."
https://www.lawfaremedia.org/article/why-did-doj-indict-chin...
https://www.theguardian.com/world/2013/sep/09/nsa-spying-bra...
https://edition.cnn.com/2015/04/30/news/airbus-germany-nsa-s...
Have we all forgotten how bad GPT-4.5 was?
OpenAI got out of that mess with some miraculous post-training efforts on their older GPT-4o model.
But in a different timeline, we are all talking about how great Llama 4.5 is and how OpenAI needs to recover from the GPT-4.5 debacle.
Student: Look, a well-known financial expert placed what could potentially be a hundred-dollar bill on the ground, and other well-known financial experts just leave it there!
> In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month.
https://www.yitay.net/blog/training-great-llms-entirely-from...
It didn't bench well against the other benchmaxxed models, and it was too expensive to run, but it was a glimpse of the future where more capable hardware will lead to appreciably smarter models.