Looks to be the ~same intelligence as gpt-oss-120B, but about 10x slower and 3x more expensive?
https://artificialanalysis.ai/models/deepseek-v3-1-reasoning
replies(5):
Let's hope not, because gpt-oss-120B can be dramatically moronic. I'm guessing the MoE contains some very dumb subnets.
Benchmarks can be a starting point, but you really have to see how the results work for you.