
114 points by cmcconomy | 1 comment
aliljet No.42175062
This is fantastic news. I've been using Qwen2.5-Coder-32B-Instruct with Ollama locally, and it's honestly such a breath of fresh air. I wonder if any of you have had a moment to try this newer context length locally?
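
For anyone who wants to poke at the longer context, here's a minimal sketch against Ollama's local HTTP API. The qwen2.5-coder:32b tag and the 32768 window are my assumptions; swap in whatever tag you actually pulled and whatever window you want to test:

    import requests  # Ollama serves an HTTP API on localhost:11434 by default

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:32b",  # assumed tag; use the one you pulled
            "prompt": "Write a binary search in Python.",
            "stream": False,
            # num_ctx sets the context window; memory use grows with it
            "options": {"num_ctx": 32768},
        },
    )
    print(resp.json()["response"])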

BTW, I can't run this effectively on my 2080 Ti, so I've just loaded up the machine with classic RAM. It's not going to win any races, but as they say, it's not the speed that matters, it's the quality of the effort.
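
If VRAM is the bottleneck (11 GB on a 2080 Ti), my understanding is that Ollama's num_gpu option caps how many layers stay on the GPU and spills the rest into system RAM, which sounds like what you're doing. The value 20 below is a placeholder guess, not a recommendation; tune it to your card:

    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen2.5-coder:32b",  # assumed tag, as above
            "prompt": "Explain Python's GIL in two sentences.",
            "stream": False,
            # num_gpu = number of layers kept on the GPU; the rest run from RAM.
            # 20 is a guess for ~11 GB of VRAM -- raise or lower until it fits.
            "options": {"num_gpu": 20},
        },
    )
    print(resp.json()["response"])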

replies(3): >>42175226 >>42176314 >>42177831
1. ipsum2 No.42176314
The long-context model has not been open-sourced.