
1311 points msoad | 2 comments
brucethemoose2 ◴[] No.35393393[source]
Does that also mean 6GB VRAM?

And does that include Alpaca models like this? https://huggingface.co/elinas/alpaca-30b-lora-int4

replies(2): >>35393441 #>>35393450 #
sp332 ◴[] No.35393450[source]
According to https://mobile.twitter.com/JustineTunney/status/164190201019... you can probably use the conversion tools from the repo on Alpaca and get the same result.
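
Roughly like this, assuming the repo's conversion tooling is still the convert-pth-to-ggml.py script plus the quantize binary (check the README if that's changed); the paths and the Alpaca checkpoint directory here are just placeholders:

    # Sketch of the llama.cpp conversion + quantization workflow, driven
    # from Python. Script names, arguments, and paths are assumptions
    # based on the repo around this time -- verify against its README.
    import subprocess

    MODEL_DIR = "models/alpaca-7B"  # hypothetical directory holding the .pth weights

    # 1. Convert the PyTorch checkpoint to ggml f16 ("1" selects f16 output).
    subprocess.run(
        ["python3", "convert-pth-to-ggml.py", MODEL_DIR, "1"],
        check=True,
    )

    # 2. Quantize the f16 model down to 4-bit so it fits in far less memory.
    subprocess.run(
        ["./quantize",
         f"{MODEL_DIR}/ggml-model-f16.bin",
         f"{MODEL_DIR}/ggml-model-q4_0.bin",
         "2"],  # "2" selects q4_0 in the version this sketch assumes
        check=True,
    )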

If you want to run larger Alpaca models on a low VRAM GPU, try FlexGen. I think https://github.com/oobabooga/text-generation-webui/ is one of the easier ways to get that going.
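
If you'd rather stay in plain Python, the accelerate-based offloading that transformers exposes does the same basic thing: split the model between a small GPU and CPU RAM. To be clear, this isn't FlexGen's offloading policy, just a rough sketch of the generic GPU/CPU split that text-generation-webui also supports; the model name and memory caps below are made up:

    # Sketch of low-VRAM inference via accelerate's device_map offloading.
    # Not FlexGen itself -- just the generic layer placement transformers
    # offers. Model id and memory limits are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "your-org/alpaca-30b"  # placeholder; point at whatever checkpoint you have
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(
        name,
        torch_dtype=torch.float16,
        device_map="auto",                       # let accelerate place layers
        max_memory={0: "6GiB", "cpu": "48GiB"},  # cap GPU 0 at ~6 GB, spill the rest to RAM
        offload_folder="offload",                # disk spillover if RAM also runs out
    )

    prompt = "### Instruction:\nSay hi.\n### Response:\n"
    inputs = tok(prompt, return_tensors="pt").to(0)  # send inputs to GPU 0
    out = model.generate(**inputs, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))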

replies(3): >>35393841 #>>35396847 #>>35397363 #
1. brucethemoose2 ◴[] No.35393841[source]
Yeah, or DeepSpeed presumably. Maybe torch.compile too.
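
The torch.compile route is about a one-line change (PyTorch >= 2.0); it speeds up the forward pass but doesn't reduce VRAM on its own. Minimal sketch, with a small placeholder model:

    # Minimal torch.compile sketch (PyTorch 2.0+). Speeds up forward passes
    # after a one-time compile; does not shrink the model's memory footprint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "facebook/opt-1.3b"  # small placeholder model for testing
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).cuda()
    model = torch.compile(model)  # JIT-compiles the forward pass on first use

    inputs = tok("Hello,", return_tensors="pt").to("cuda")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))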

I dunno why I thought llama.cpp would support GPUs. shrug

replies(1): >>35395707 #
2. sp332 ◴[] No.35395707[source]
Lots of C++ programs use the GPU. The language is irrelevant.