
DeepSeek-V3.1

(api-docs.deepseek.com)
776 points | wertyk
danielhanchen ◴[] No.44978800[source]
For local runs, I made some GGUFs! For good perf with the dynamic 2-bit quant (2-bit MoE layers, 6-8-bit for the rest) you need around RAM + VRAM >= 250GB - you can also do SSD offloading, but it'll be slow.

./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"

More details on running + optimal params here: https://docs.unsloth.ai/basics/deepseek-v3.1
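For reference, those flags map onto stock llama.cpp options: -hf pulls that quant from the Hugging Face repo, -ngl 99 offloads up to 99 layers (effectively all of them) to the GPU, --jinja uses the chat template embedded in the GGUF, and the -ot regex keeps the MoE expert tensors in system RAM so only the dense layers need VRAM. The same command with the flags spelled out as comments (assumes llama.cpp has been built in ./llama.cpp):

# Fetch and run the dynamic 2-bit quant:
#   -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL   download this repo/quant from Hugging Face
#   -ngl 99                                      offload up to 99 layers to the GPU
#   --jinja                                      use the Jinja chat template shipped in the GGUF
#   -ot ".ffn_.*_exps.=CPU"                      keep MoE expert FFN tensors in CPU RAM
./llama.cpp/llama-cli -hf unsloth/DeepSeek-V3.1-GGUF:UD-Q2_K_XL -ngl 99 --jinja -ot ".ffn_.*_exps.=CPU"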

pshirshov ◴[] No.44979837[source]
By the way, I'm wondering why unsloth (a goddamn Python library) tries to run apt-get with sudo (and fails on my NixOS). Like how tf are we supposed to use that?
danielhanchen ◴[] No.44980068[source]
Oh hey, I'm assuming this is for conversion to GGUF after a finetune? If you need to quantize to GGUF Q4_K_M, we have to compile llama.cpp, hence the apt-get and compiling llama.cpp from within a Python shell.

There is a way to convert to Q8_0, BF16, F16 without compiling llama.cpp, and it's enabled if you use `FastModel` rather than `FastLanguageModel`.

Essentially I try `sudo apt-get`; if that fails, plain `apt-get`; and if everything fails, it just fails. We need `build-essential cmake curl libcurl4-openssl-dev`.

See https://github.com/unslothai/unsloth-zoo/blob/main/unsloth_z...
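Roughly, that fallback order in plain shell terms (a sketch only, not the actual unsloth-zoo code) would be:

# try with sudo first, then without, then give up with a hint
PKGS="build-essential cmake curl libcurl4-openssl-dev"
sudo apt-get install -y $PKGS \
  || apt-get install -y $PKGS \
  || echo "Automatic install failed: please install $PKGS yourself, then re-run."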

elteto ◴[] No.44980567[source]
Dude, this is NEVER OK. What in the world??? A third-party LIBRARY running sudo commands? That's just insane.

You just fail and print a nice error message telling the user exactly what they need to do, including the exact apt command (or whatever) that they need to run.

danielhanchen ◴[] No.44980675[source]
Yes, I had that at the start, but people kept complaining that they didn't know how to actually run terminal commands, hence the shortcut :(

I was thinking about whether I could do it during the pip install, or via setup.py, which would run the apt-get instead.

As a fallback, for now I'll probably remove the shell executions and just warn the user.

devin ◴[] No.44981039[source]
Don't optimize for these people.
danielhanchen ◴[] No.44981194[source]
Yep, agreed - I primarily thought it was a reasonable "hack", but it's pretty bad security-wise, so apologies again.

The current solution is hopefully in between - i.e. sudo is gone, apt-get runs only after the user agrees by pressing Enter, and if it fails, it tells the user to read the docs on installing llama.cpp.
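A minimal shell sketch of that flow, assuming it boils down to a confirmation prompt plus an unprivileged apt-get (not the actual implementation):

# no sudo: ask first, run apt-get only after the user presses Enter, point at the docs on failure
PKGS="build-essential cmake curl libcurl4-openssl-dev"
read -r -p "unsloth needs $PKGS to build llama.cpp. Press Enter to run apt-get, or Ctrl-C to abort. "
apt-get install -y $PKGS \
  || echo "apt-get failed - please install those packages manually and see the llama.cpp build docs."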

woile ◴[] No.44981511{3}[source]
Don't apologize, you are doing amazing work. I appreciate the effort you put in.

Usually you don't make assumptions about the host OS: just try to find the things you need and, if you can't, fail, ideally with good feedback. If you want to provide the "hack", you can still do it, but ideally behind a flag, `allow_installation` or something like that. That is, if you want your code to reach a broader audience.

danielhanchen ◴[] No.44981713{4}[source]
Thank you! :)