It's absolutely fantastic that they're releasing an actually OSS model, but isn't "the best fully open" a bit of a low bar? I'm not aware of any other fully open models.
Otherwise you start competing head-to-head with notsoOpenAI, or, say, Llama.
See https://www.swiss-ai.org/apertus for details.
https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-l... was the press release.
Olmo and HF not only processed the data to address language bias, they also publish a lot of data augmentation results, including European language performance. European LLMs merely claim that language bias is the motivator.
> We go beyond just releasing model weights - we provide our training code, training data, our model weights, and our recipes.
We are competitive with open-weights models in general, just a couple of points behind the best Qwen.
Fully open models are important for the research community; a lot of fundamental discoveries are made when you have access to the training data. We call out that we are the best fully open model because researchers will want to know that.
There are a bunch of other fully open models, including the [Marin](https://marin.community/) series of models out of Stanford, and Nvidia regularly releases fully open models.