
114 points by cmcconomy | 1 comment
aliljet ◴[] No.42175062[source]
This is fantastic news. I've been using Qwen2.5-Coder-32B-Instruct with Ollama locally and it's honestly such a breath of fresh air. I wonder if any of you have had a moment to try this newer context length locally?

BTW, I can't run this effectively on my 2080 Ti, so I've just loaded up the machine with classic RAM. It's not going to win any races, but as they say, it's not the speed that matters, it's the quality of the effort.
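If anyone else wants to try the longer context through Ollama, a Modelfile along these lines should do it; the model tag and num_ctx value are just placeholders for whatever quant you actually pulled, and I haven't benchmarked this exact setup:

    # Modelfile -- bump the context window for a local Qwen2.5-Coder pull
    FROM qwen2.5-coder:32b-instruct-q4_K_M
    PARAMETER num_ctx 65536

    # build and run the long-context variant
    ollama create qwen2.5-coder-long -f Modelfile
    ollama run qwen2.5-coder-long

Fair warning: a bigger num_ctx also means a much bigger KV cache, so expect it to eat further into that classic RAM.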

replies(3): >>42175226 #>>42176314 #>>42177831 #
notjulianjaynes ◴[] No.42175226[source]
Hi, are you able to use Qwen's 128k context length with Ollama? Using AnythingLLM + Ollama and a GGUF version, I kept getting an error message with prompts longer than 32,000 tokens (summarizing long transcripts).
replies(1): >>42175335 #
syntaxing ◴[] No.42175335[source]
The famous Daniel Han (the same person who made Unsloth and fixed the Gemma/Llama bugs) mentioned something about this on Reddit and offered a fix. https://www.reddit.com/r/LocalLLaMA/comments/1gpw8ls/bug_fix...
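From memory (the link got truncated), the gist matches what Qwen's own model card says about long contexts: you opt into YaRN by adding a rope_scaling block to the model's config.json, roughly:

    "rope_scaling": {
      "factor": 4.0,
      "original_max_position_embeddings": 32768,
      "type": "yarn"
    }

That's the transformers/vLLM side; GGUF files carry the equivalent settings in their own metadata, which is where the Ollama/llama.cpp situation gets more fiddly.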
replies(2): >>42175727 #>>42175742 #
zargon ◴[] No.42175727[source]
After reading a lot of that thread, my understanding is that YaRN scaling is intentionally disabled by default in the GGUFs, because it would degrade outputs for contexts that do fit in 32k. So the only change is enabling YaRN scaling at 4x, which is just a configuration setting. GGUF embeds these settings in the file format for ease of use, but you should be able to override them without downloading an entire duplicate set of weights (12 to 35 GB!). (It looks like llama.cpp's --override-kv option can be used for this, but I haven't tried it yet.)
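Untested, but I'd expect the override to look something like this; the qwen2.* key names follow the arch-prefixed GGUF convention and are a guess on my part (check your file's metadata before trusting them), and the model filename is whatever quant you grabbed:

    # force YaRN on at load time instead of re-downloading the weights
    ./llama-cli -m qwen2.5-coder-32b-instruct-q4_k_m.gguf -c 131072 \
      --override-kv qwen2.rope.scaling.type=str:yarn \
      --override-kv qwen2.rope.scaling.factor=float:4.0 \
      --override-kv qwen2.rope.scaling.original_context_length=int:32768

    # llama.cpp also exposes YaRN directly, which sidesteps guessing key names
    ./llama-cli -m qwen2.5-coder-32b-instruct-q4_k_m.gguf -c 131072 \
      --rope-scaling yarn --yarn-orig-ctx 32768

Either way it's the same weights; the scaling is purely a runtime setting.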
replies(1): >>42175997 #
syntaxing ◴[] No.42175997{3}[source]
Oh super interesting, I didn't know you could override this with a flag in llama.cpp.