
68 points peakji | 2 comments

Steiner is a series of reasoning models trained on synthetic data using reinforcement learning. These models can explore multiple reasoning paths in an autoregressive manner during inference and autonomously verify or backtrack when necessary, enabling a linear traversal of the implicit search tree.

Blog: https://medium.com/@peakji/a-small-step-towards-reproducing-...

Hugging Face: https://huggingface.co/collections/peakji/steiner-preview-67...
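
Roughly speaking, "linear traversal of the implicit search tree" means the model writes out a depth-first exploration, dead ends and backtracks included, as one continuous token stream instead of maintaining an explicit tree. A minimal sketch of that idea follows; the <step>/<backtrack>/<answer> markers, the scoring hook, and the toy problem are illustrative assumptions, not the actual Steiner data format or inference code:

    # Minimal sketch only: the <step>/<backtrack>/<answer> markers, the scoring
    # hook, and the toy problem are illustrative, not the actual Steiner format.

    def linearize_search(root, expand, is_solution, score, threshold=0.5):
        """Depth-first search whose whole exploration, including dead ends and
        backtracking, is emitted as one linear trace: the shape an
        autoregressive model would generate token by token."""
        trace = []                        # the linear "reasoning path"
        stack = [root]
        while stack:
            node = stack.pop()
            trace.append(f"<step> {node}")
            if is_solution(node):
                trace.append("<answer>")
                break
            kids = [c for c in expand(node) if score(c) >= threshold]
            if not kids:                  # verification failed: back up
                trace.append("<backtrack>")
                continue
            stack.extend(reversed(kids))  # alternatives kept for later backtracks
        return trace

    if __name__ == "__main__":
        # Toy problem: build the string "abc" one character at a time.
        expand = lambda s: [s + ch for ch in "abc"] if len(s) < 3 else []
        is_solution = lambda s: s == "abc"
        score = lambda s: 1.0             # no pruning, so backtracks show up
        print("\n".join(linearize_search("", expand, is_solution, score)))

In the model itself there is no explicit stack; the same trace is simply the sequence of tokens it generates, with verification and backtracking learned rather than hard-coded.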

Mr_Bees69 ◴[] No.41915821[source]
Really hope this goes somewhere; o1 without OpenAI's costs and restrictions would be sweet.
replies(2): >>41916023 #>>41916629 #
peakji ◴[] No.41916023[source]
The model can already answer some tricky questions that other models (including GPT-4o) have failed to address, achieving a +5.56 improvement on the GPQA-Diamond dataset. Unfortunately, it has not yet managed to reproduce inference-time scaling. I will continue to explore different approaches!
replies(1): >>41917314 #
1. swyx ◴[] No.41917314[source]
not sure i understand the results. it's based on qwen 32b which is 49.49, and your best model is 53.54. results haven't shown that your approach adds significant value yet.

can you compare with just qwen 32b with CoT?

replies(1): >>41917409 #
2. peakji ◴[] No.41917409[source]
The result for Qwen2.5-32B (49.49) was obtained with CoT prompting; only the Steiner models do not use CoT prompting.
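
Roughly, the difference in setup looks like this (simplified; not the exact prompts used in the evaluation):

    # Illustrative only: the actual GPQA-Diamond evaluation prompts are not
    # shown in this thread.
    question = "Which of the following ...?\n(A) ...\n(B) ...\n(C) ...\n(D) ..."

    # Baseline (Qwen2.5-32B): an explicit chain-of-thought instruction is added.
    cot_prompt = question + "\n\nLet's think step by step, then give the final answer."

    # Steiner: the question is given as-is; the model is expected to produce its
    # own exploration, verification, and backtracking before answering.
    plain_prompt = question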

More importantly, I highly recommend trying these out firsthand (not only Steiner, but all reasoning models). You'll find that reasoning models can solve many problems that other models of the same parameter size cannot handle. The existing benchmarks may not reflect this well, as I mentioned in the article:

"... automated evaluation benchmarks, which are primarily composed of multiple-choice questions and may not fully reflect the capabilities of reasoning models. During the training phase, reasoning models are encouraged to engage in open-ended exploration of problems, whereas multiple-choice questions operate under the premise that "the correct answer must be among the options." This makes it evident that verifying options one by one is a more efficient approach. In fact, existing large language models have, consciously or unconsciously, mastered this technique, regardless of whether special prompts are used. Ultimately, it is this misalignment between automated evaluation and genuine reasoning requirements that makes me believe it is essential to open-source the model for real human evaluation and feedback."