
314 points | pretext | 1 comment | source
banjoe No.46220493 [source]
Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.
replies(3): >>46220593 #>>46223561 #>>46229791 #
1. red2awn No.46223561 [source]
Why would you use an Omni model for a text-only workload? There's Qwen3-30B-A3B.