slacker news
Qwen3-Omni-Flash-2025-12-01: a next-generation native multimodal large model (qwen.ai)
314 points | pretext | 1 comment | 10 Dec 25 16:13 UTC
banjoe [10 Dec 25 17:20 UTC] No. 46220493
>>46219538 (OP)
Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.
replies(3): >>46220593, >>46223561, >>46229791
red2awn [10 Dec 25 20:45 UTC] No. 46223561
>>46220493
Why would you use an Omni model for a text-only workload... There is Qwen3-30B-A3B.