slacker news
Llama.cpp 30B runs with only 6GB of RAM now (github.com)
1311 points by msoad | 1 comment | 31 Mar 23 20:37 UTC
1. alduin32 | 01 Apr 23 02:22 UTC | No. 35396406 | >>35393284 (OP)
Well, only if you don't count the filesystem cache as "RAM". You still need enough memory so that the kernel can hold all these pages, even if Llama.cpp is not using this memory itself.