
210 points blackcat201 | 7 comments
textembedding ◴[] No.45769528[source]
125 upvotes with 2 comments is kinda sus
replies(3): >>45769778 #>>45770249 #>>45770284 #
1. muragekibicho ◴[] No.45769778[source]
Lots of model releases are like this. We can only upvote. We can't run the model on our personal computers, nor can we test their 'Efficient Attention' concept on them.

Honestly, it would take 24 hours just to download the 98 GB model if I wanted to try it out (assuming I even had a card with 98 GB of VRAM).
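
For the curious, a back-of-the-envelope sketch in Python (assuming decimal gigabytes and a fully saturated link; real downloads will be slower):

    # Back-of-the-envelope: how long does a 98 GB download take at a given link speed?
    def download_hours(size_gb: float, link_mbit_per_s: float) -> float:
        bits = size_gb * 8e9                          # GB -> bits
        return bits / (link_mbit_per_s * 1e6) / 3600  # seconds -> hours

    for mbit in (9, 100, 1000):
        print(f"{mbit:>4} Mbit/s -> {download_hours(98, mbit):5.1f} h")

At 9 Mbit/s the 98 GB download does indeed come out to roughly 24 hours; at 100 Mbit/s it's closer to 2 hours.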

replies(3): >>45770625 #>>45771629 #>>45771703 #
2. danielbln ◴[] No.45770625[source]
You have a 9 Mbit downlink? I'm not sure you'd be trying much of anything with that internet connection, no offense.
3. Der_Einzige ◴[] No.45771629[source]
People here absolutely can afford the ~$2/hour cloud rental cost for an H100, or even eight of them (OCI has cheap H100 nodes). Most people are too lazy to even try, and thank goodness for that, because I prefer my very high salary as someone who isn't too lazy to spin up a cloud instance.
replies(1): >>45772029 #
4. samus ◴[] No.45771703[source]
We very much can, especially a Mixture-of-Experts model like this one with only 3B activated parameters.

With an RTX 3070 (8 GB VRAM), 32 GB of RAM, and an SSD, I can run such models at speeds tolerable for casual use.
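
Something like this rough llama-cpp-python sketch (the model file and layer count are just illustrative placeholders; Kimi-Linear itself isn't a drop-in here since llama.cpp support is still missing):

    # Minimal sketch of partial GPU offload with llama-cpp-python.
    # Tune n_gpu_layers until VRAM is full; the rest of the weights stay in system RAM.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen3-30b-a3b-q8_0.gguf",  # hypothetical local GGUF file
        n_gpu_layers=20,   # layers that fit in 8 GB VRAM
        n_ctx=4096,        # modest context keeps the KV cache small
    )

    out = llm("Explain mixture-of-experts in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])

The idea is to push as many layers into the 8 GB of VRAM as will fit and let the rest spill into system RAM, with the SSD backing the memory-mapped weights.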

replies(1): >>45772009 #
5. embedding-shape ◴[] No.45772009[source]
How many tok/s are you getting (with any runtime) with either Kimi-Linear-Instruct or Kimi-Linear-Base on your RTX 3070?
replies(1): >>45776165 #
6. embedding-shape ◴[] No.45772029[source]
Not to mention some of us have enough disposable income to buy an RTX Pro 6000 so we can run our stuff locally and finally scale up our model training a little bit.
7. samus ◴[] No.45776165{3}[source]
With Qwen3-30B-A3B (Q8) I'm getting 10-20 t/s on KoboldAI, i.e. llama.cpp under the hood. Faster than I can read, so good enough for hobby use. I expect this model to be significantly faster, but llama.cpp-based software probably doesn't support it yet.
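
If anyone wants to reproduce those numbers, a crude tokens-per-second measurement with llama-cpp-python looks something like this (model path and settings are illustrative; this only counts generated tokens, not prompt processing):

    # Crude tok/s measurement; results vary with quant, offload, and context length.
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="qwen3-30b-a3b-q8_0.gguf", n_gpu_layers=20, n_ctx=4096)

    start = time.perf_counter()
    out = llm("Write a limerick about GPUs.", max_tokens=200)
    elapsed = time.perf_counter() - start

    n_tokens = out["usage"]["completion_tokens"]
    print(f"{n_tokens} tokens in {elapsed:.1f} s -> {n_tokens / elapsed:.1f} tok/s")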