
361 points mseri | 2 comments
stavros No.46002252
> the best fully open 32B-scale thinking model

It's absolutely fantastic that they're releasing an actually OSS model, but isn't "the best fully open" a bit of a low bar? I'm not aware of any other fully open models.

replies(9): >>46002293 #>>46002338 #>>46002597 #>>46002842 #>>46002944 #>>46003313 #>>46004177 #>>46006028 #>>46006176 #
1. shoffmeister No.46002842
Switzerland, through EPFL, ETH Zurich, and the Swiss National Supercomputing Centre, has released a complete pipeline with all training data - that is "fully open", to my understanding.

See https://www.swiss-ai.org/apertus for details.

https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-l... was the press release.

replies(1): >>46002918 #
2. YetAnotherNick No.46002918
All the data used by Apertus is just data processed or generated by American companies (NVIDIA, Apple, and Hugging Face, mostly). They didn't release any new data.

Olmo and HF not only processed the data to address language bias, they also published a lot of data-augmentation results, including European-language performance. European LLMs merely claim that language bias is the motivation.