
504 points by Terretta | 1 comment
NitpickLawyer No.45066063
Tested this yesterday with Cline. It's fast, works well with agentic flows, and produces decent code. No idea why this thread is so negative (also got flagged while I was typing this?), but it's a decent model. I'd say it's at or above gpt5-mini level, which is awesome in my book (I've been maining gpt5-mini for a few weeks now; it does the job on a budget).

Things I noted:

- It's fast. I tested it in EU tz, so ymmv

- It does agentic editing in an interesting way. Instead of rewriting a file wholesale or touching many places at once, it makes many small passes.

- Had a feature take ~110k tokens (parsing HTML w/ bs4). Still finished the task. Didn't notice any problems at high context.

- When things didn't work on the first try, it created a new file to test with, did all the mocking / testing there, and only once that worked did it edit the main module file. Nice. GPT5-mini would oftentimes edit working files directly, then get confused and fail the task.
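
To make that concrete, a minimal sketch of the scratch-file pattern, in the spirit of the bs4 task above (the file name, HTML, and helper are illustrative, not the model's actual output):

    # scratch_test_parser.py -- hypothetical throwaway file: prove the
    # parsing logic against canned HTML before touching the real module.
    from bs4 import BeautifulSoup

    SAMPLE_HTML = """
    <html><body>
      <div class="item"><a href="/a">First</a></div>
      <div class="item"><a href="/b">Second</a></div>
    </body></html>
    """

    def extract_links(html):
        # Candidate implementation, developed in isolation here.
        soup = BeautifulSoup(html, "html.parser")
        return [(a.get_text(strip=True), a["href"])
                for a in soup.select("div.item a[href]")]

    assert extract_links(SAMPLE_HTML) == [("First", "/a"), ("Second", "/b")]
    # Only once this passes would the change be copied into the main module.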

All in all, not bad. At this price point, I could see it as a daily driver. Even for agentic stuff, with opus + gpt5-high as planners and this thing as the implementer. It's fast enough that it might be worth running it in parallel and basically replicating pass@x from the research literature.
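
A minimal sketch of that pass@x idea, assuming an OpenAI-compatible endpoint; the base URL, model name, and pytest check are placeholders, not anything confirmed in this thread:

    # pass@k harness: fire k parallel samples at the fast/cheap model and
    # keep the first candidate whose tests pass.
    import os, subprocess, tempfile
    from concurrent.futures import ThreadPoolExecutor, as_completed
    from openai import OpenAI  # pip install openai

    client = OpenAI(base_url="https://openrouter.ai/api/v1",  # placeholder endpoint
                    api_key=os.environ["OPENROUTER_API_KEY"])

    def sample(prompt):
        resp = client.chat.completions.create(
            model="vendor/fast-coder",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.8,  # some diversity across the k samples
        )
        return resp.choices[0].message.content

    def passes_tests(code):
        # Dump the candidate to a file and let pytest judge it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        try:
            return subprocess.run(["pytest", f.name], capture_output=True).returncode == 0
        finally:
            os.unlink(f.name)

    def pass_at_k(prompt, k=4):
        with ThreadPoolExecutor(max_workers=k) as pool:
            futures = [pool.submit(sample, prompt) for _ in range(k)]
            for fut in as_completed(futures):
                candidate = fut.result()
                if passes_tests(candidate):
                    return candidate
        return None  # all k attempts failed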

IMO it's good to have options at every level. Having many providers fight for the market keeps them on their toes and brings prices down. GPT5-mini is at $2/MTok, this is at $1.5/MTok. That's basically "free" in the grand scheme of things. I don't get the negativity.
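
For scale (my arithmetic, rough, ignoring any input/output price split): the ~110k-token feature above comes out to about 110,000 / 1,000,000 × $1.5 ≈ $0.17 at this price, vs ≈ $0.22 at gpt5-mini's rate.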

replies(10): >>45066728 #>>45067116 #>>45067311 #>>45067436 #>>45067602 #>>45067936 #>>45068543 #>>45068653 #>>45068788 #>>45074597 #
coder543 No.45067311
Qwen3-Coder-480B hosted by Cerebras is $2/MTok (both input and output) through OpenRouter.

OpenRouter claims Cerebras is serving at least 2,000 tokens per second, which would be around 10x as fast, and the feedback I'm seeing from independent benchmarks indicates that Qwen3-Coder-480B is the better model.
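
Back of the envelope, ignoring tool-call and round-trip latency: at 2,000 tok/s, a ~110k-token job like the one upthread is about 110,000 / 2,000 ≈ 55 seconds of pure generation time.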

replies(2): >>45067631 #>>45067760 #
stocksinsmocks No.45067760
There is a national-scale superset of "NIH" (not-invented-here) bias that I think will impede adoption of Chinese-origin models for the foreseeable future. That's a shame, because by many objective metrics they're a better value.
replies(1): >>45068189 #
dlachausse No.45068189
In my case it's not NIH, but rather that I don't trust or wish to support my nation's largest geopolitical adversary.
replies(4): >>45070723 #>>45070873 #>>45071387 #>>45075162 #
bigyabai No.45070723
Your loss. Qwen3 A3B replaced ChatGPT for me entirely; it's hard for me to imagine going back to remote models when I can load finetuned and uncensored models at will.

Maybe you'd find consolation in using Apple- or Nvidia-designed hardware for inference on these Chinese models? Sure, the hardware you own was also built by your "nation's largest geopolitical adversary," but that hasn't seemed to bother you much.

replies(2): >>45071415 #>>45073708 #
wickedsight No.45073708
How did it replace ChatGPT for you? I'm running Qwen3 Coder locally and in no way does it compare to ChatGPT. In agentic workflows it fails almost every time. Maybe I'm doing something wrong, but I'm falling back to OpenAI all the time.
replies(1): >>45075932 #
evilduck No.45075932
It feels to me like it could replace ChatGPT 3.5, if you're comparing against the web chat interface of two years ago where you just asked about programming things. But the world has moved on, and you can do a lot more now than talk with a model and copy-paste code.

Having Qwen3 Coder's A3B available for chat-oriented coding conversations is indeed amazing for what it is, and for being local and free, but I also struggled to get agentic tools to work reliably with it: a fair number of tool calls fail or start looping, even with the correct, advised settings, and I tried Cline, Roo, Continue, and their own Qwen Code CLI. Even when I do get it working for a few tasks in a row, I don't have the hardware to run it at a speed comparable to a hosted frontier model, or to manage the same massive context sizes. And buying capable enough hardware costs about as much as many years of paying for top-tier hosted models.