    114 points cmcconomy | 11 comments
    1. aliljet ◴[] No.42175062[source]
    This is fantastic news. I've been using Qwen2.5-Coder-32B-Instruct with Ollama locally and it's honestly such a breath of fresh air. I wonder if any of you have had a chance to try this newer context length locally?

    BTW, I can't effectively run this on my 2080 Ti, so I've just loaded up the machine with plain old RAM. It's not going to win any races, but as they say, it's not the speed that matters, it's the quality of the effort.
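    For reference, bumping the context window through the ollama Python client looks roughly like this (an untested sketch; the model tag and num_ctx are placeholders for whatever you pulled and whatever fits in your RAM):

        # Untested sketch: ask Ollama for a larger context window than its default.
        import ollama  # pip install ollama

        response = ollama.chat(
            model="qwen2.5-coder:32b",  # whatever tag you pulled locally
            messages=[{"role": "user", "content": "Summarize this file: ..."}],
            options={"num_ctx": 32768},  # raise toward 131072 if your RAM allows
        )
        print(response["message"]["content"])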

    replies(3): >>42175226 #>>42176314 #>>42177831 #
    2. notjulianjaynes ◴[] No.42175226[source]
    Hi, are you able to use Qwen's 128k context length with Ollama? Using AnythingLLM + Ollama and a GGUF version, I kept getting an error message for prompts longer than 32,000 tokens (summarizing long transcripts).
    replies(1): >>42175335 #
    3. syntaxing ◴[] No.42175335[source]
    The famous Daniel Han (the same person who made Unsloth and fixed the Gemma/Llama bugs) mentioned something about this on Reddit and offered a fix. https://www.reddit.com/r/LocalLLaMA/comments/1gpw8ls/bug_fix...
    replies(2): >>42175727 #>>42175742 #
    4. zargon ◴[] No.42175727{3}[source]
    After reading a lot of that thread, my understanding is that YaRN scaling is intentionally disabled by default in the GGUFs, because it would degrade outputs for contexts that do fit in 32k. So the only change is enabling YaRN scaling at 4x, which is just a configuration setting. GGUF embeds these configuration settings in the file format for ease of use, but you should be able to override them without downloading an entire duplicate set of weights (12 to 35 GB!). (It looks like llama.cpp's override-kv option can be used for this, but I haven't tried it yet.)
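    For anyone who wants to try the override route without re-downloading weights, here's an untested sketch using llama-cpp-python (which wraps llama.cpp and exposes the same RoPE/YaRN knobs). The parameter names, constant name, and values below are my best guess from this thread, so check them against your installed version:

        # Untested sketch: enable 4x YaRN scaling at load time instead of
        # downloading a second GGUF with different embedded settings.
        # Note: older llama-cpp-python releases spelled the constant
        # LLAMA_ROPE_SCALING_YARN; adjust for your version.
        from llama_cpp import Llama, LLAMA_ROPE_SCALING_TYPE_YARN

        llm = Llama(
            model_path="qwen2.5-coder-32b-instruct-q5_k_m.gguf",  # your local file
            n_ctx=131072,                                    # 4 * 32768
            rope_scaling_type=LLAMA_ROPE_SCALING_TYPE_YARN,
            rope_freq_scale=0.25,                            # 1 / 4x scaling factor
            yarn_orig_ctx=32768,                             # the model's native context
            n_gpu_layers=20,     # offload as many layers as your VRAM allows
        )

        out = llm("Summarize the following transcript:\n...", max_tokens=512)
        print(out["choices"][0]["text"])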
    replies(1): >>42175997 #
    5. notjulianjaynes ◴[] No.42175742{3}[source]
    Yeah, unfortunately that's the exact model I'm using (the Q5 version). What I've been doing is first loading the transcript into the vector database, and then giving it a prompt that's like "summarize the transcript below: <full text of transcript>". This works surprisingly well, except for one transcript I had of a 3-hour meeting that was, per an online calculator, about 38,000 tokens. Cutting the text up into 3 parts and pretending each was a separate meeting* led to a bunch of hallucinations for some reason.

    *In theory this shouldn't matter much for my purpose, which is summarizing city council meetings that follow a predictable format.
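    In case it helps anyone else, the chunk-then-merge approach looks roughly like this (an untested sketch with the ollama Python client; the chunk size, model tag, and prompts are placeholders, not tuned values):

        # Untested sketch: summarize each chunk, then summarize the summaries.
        import ollama

        def summarize(text: str, num_ctx: int = 32768) -> str:
            resp = ollama.chat(
                model="qwen2.5-coder:32b",
                messages=[{"role": "user",
                           "content": f"Summarize the transcript below:\n{text}"}],
                options={"num_ctx": num_ctx},
            )
            return resp["message"]["content"]

        def summarize_long(transcript: str, chunk_chars: int = 60000) -> str:
            # Split on a fixed character budget (a rough proxy for tokens).
            chunks = [transcript[i:i + chunk_chars]
                      for i in range(0, len(transcript), chunk_chars)]
            partials = [summarize(c) for c in chunks]
            return summarize("\n\n".join(partials))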

    6. syntaxing ◴[] No.42175997{4}[source]
    Oh, super interesting, I didn't know you could override this with a flag in llama.cpp.
    7. ipsum2 ◴[] No.42176314[source]
    The long-context model has not been open-sourced.
    8. lukev ◴[] No.42177831[source]
    I ran a couple of needle-in-a-haystack-type queries with just a 32k context length, and was very much not impressed. It often failed to find facts buried in the middle of the prompt that were stated almost identically to the question being asked.

    It's cool that these models are getting such long contexts, but performance definitely degrades the longer the context gets, and I haven't seen this characterized or quantified very well anywhere.
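    For anyone who wants to reproduce this, a minimal needle-in-a-haystack probe looks something like the following (an untested sketch; the filler text, needle, and pass/fail check are deliberately crude, and the model tag is a placeholder):

        # Untested sketch: bury a fact mid-context and see if the model retrieves it.
        import ollama

        FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # long filler
        NEEDLE = "The secret launch code is 7482."

        middle = len(FILLER) // 2
        haystack = FILLER[:middle] + NEEDLE + " " + FILLER[middle:]

        resp = ollama.chat(
            model="qwen2.5-coder:32b",
            messages=[{"role": "user",
                       "content": haystack + "\n\nWhat is the secret launch code?"}],
            options={"num_ctx": 32768},
        )
        answer = resp["message"]["content"]
        print("PASS" if "7482" in answer else "FAIL", "-", answer[:200])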

    replies(1): >>42179500 #
    9. zackangelo ◴[] No.42179500[source]
    Would you care to share your prompts?

    They posted a haystack benchmark in the blog post that seems too good to be true.

    replies(2): >>42182251 #>>42188720 #
    10. busssard ◴[] No.42182251{3}[source]
    Yeah, when I saw that they have 100% coverage at 1M tokens, I thought it must be a placeholder image for when the actual results come in.

    Because there is no variation, nothing.

    11. lukev ◴[] No.42188720{3}[source]
    I wasn't scientific about it, unfortunately. My searches were natural language, not token-based, though.