
504 points Terretta | 1 comment | source
NitpickLawyer ◴[] No.45066063[source]
Tested this yesterday with Cline. It's fast, works well with agentic flows, and produces decent code. No idea why this thread is so negative (also got flagged while I was typing this?) but it's a decent model. I'd say it's at or above gpt5-mini level, which is awesome in my book (I've been maining gpt5-mini for a few weeks now, does the job on a budget).

Things I noted:

- It's fast. I tested it in EU tz, so ymmv

- It does agentic in an interesting way. Instead of editing a file whole or in many places, it does many small passes.

- Had a feature take ~110k tokens (parsing html w/ bs4). Still finished the task. Didn't notice any problems at high context.

- When things didn't work first try, it created a new file to test, did all the mocking / testing there, and then once it worked edited the main module file. Nice. GPT5-mini would oftentimes edit working files, then get confused and fail the task.

All in all, not bad. At the price point it's at, I could see it as a daily driver. Even agentic stuff w/ opus + gpt5 high as planners and this thing as an implementer. It's fast enough that it might be worth setting it up in parallel and basically replicate pass@x from research.
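The "run it in parallel and replicate pass@x" idea above can be sketched in a few lines. This is a toy illustration, not anyone's actual setup: `fake_attempt` stands in for "call the model, apply its patch, run the tests", and which attempt passes is hardcoded for the demo.

```python
import concurrent.futures

def run_parallel_attempts(task_fn, n=4):
    """Run n independent attempts and return the first success.

    This mirrors pass@k: if each independent attempt succeeds with
    probability p, at least one of k attempts succeeds with
    probability 1 - (1 - p) ** k.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(task_fn, i) for i in range(n)]
        for fut in concurrent.futures.as_completed(futures):
            result = fut.result()
            if result is not None:  # treat non-None as "tests passed"
                return result
    return None

# Toy stand-in for "generate a patch and validate it":
# only attempt 2 'passes' here.
def fake_attempt(i):
    return f"patch-{i}" if i == 2 else None

print(run_parallel_attempts(fake_attempt, n=4))  # prints patch-2
```

With a cheap, fast implementer model, the cost of k parallel attempts can still come in under one attempt from a frontier model.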

IMO it's good to have options at every level. Having many providers fight for the market is good, it keeps them on their toes, and brings prices down. GPT5-mini is at $2/MTok, this is at $1.5/MTok. This is basically "free", in the grand scheme of things. I don't get the negativity.

replies(10): >>45066728 #>>45067116 #>>45067311 #>>45067436 #>>45067602 #>>45067936 #>>45068543 #>>45068653 #>>45068788 #>>45074597 #
coder543 ◴[] No.45067311[source]
Qwen3-Coder-480B hosted by Cerebras is $2/MTok (both input and output) through OpenRouter.

OpenRouter claims Cerebras is providing at least 2000 tokens per second, which would be around 10x as fast, and the feedback I'm seeing from independent benchmarks indicates that Qwen3-Coder-480B is a better model.

replies(2): >>45067631 #>>45067760 #
sdesol ◴[] No.45067631[source]
As a bit of a side note, I want to like Cerebras, but using any model through OpenRouter that routes to them has led to too many throttling responses. You can't seem to make even a few calls per minute. I'm not sure if Cerebras is throttling OpenRouter or if they are throttling everybody.
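One way to paper over throttling like this on the client side is exponential backoff with jitter. A minimal sketch, assuming your client surfaces a rate-limit response (HTTP 429) as an exception; here a plain `RuntimeError` stands in for that:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus jitter.

    request_fn is any zero-arg callable that raises RuntimeError when
    the provider throttles (stand-in for an HTTP 429 response).
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:
            # Double the wait each retry; jitter avoids thundering herd.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
    raise RuntimeError("still throttled after retries")
```

It doesn't fix the underlying capacity problem, but it turns intermittent 429s into slower responses instead of failures.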

If somebody from Cerebras is reading this, are you having capacity issues?

replies(2): >>45068267 #>>45076815 #
gompertz ◴[] No.45068267[source]
You can get your own key with Cerebras and then use it in OpenRouter. It's a little hidden, but for each provider you can explicitly provide your own key. Then it won't be throttled.
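Related to pinning a provider: OpenRouter's chat completions API accepts provider-routing options in the request body. A hedged sketch of such a payload (the model slug and exact option names here are assumptions, check OpenRouter's docs; the BYOK key itself is configured in the dashboard, not per request):

```python
import json

# Hypothetical request body pinning the Cerebras provider and
# refusing silent fallback to another host.
payload = {
    "model": "qwen/qwen3-coder",  # assumed slug for Qwen3-Coder-480B
    "messages": [{"role": "user", "content": "Refactor this function."}],
    "provider": {
        "order": ["Cerebras"],     # prefer Cerebras
        "allow_fallbacks": False,  # error rather than switch providers
    },
}
print(json.dumps(payload, indent=2))
```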
replies(1): >>45076415 #
omneity ◴[] No.45076415[source]
Why would you go through openrouter in this case?
replies(1): >>45076684 #
theshrike79 ◴[] No.45076684[source]
Easier to switch models