I Self-Hosted Llama 3.2 with Coolify on My Home Server (geek.sg)
221 points by whitefables | 1 comment | 16 Oct 24 05:26 UTC
varun_ch | 16 Oct 24 07:27 UTC | No. 41856480
>>41855886 (OP)
I’m curious about how good the performance with local LLMs is on ‘outdated’ hardware like the author’s 2060. I have a desktop with a 2070 Super that could be fun to turn into an “AI server” if I had the time…
replies(7): >>41856521 >>41856558 >>41856559 >>41856609 >>41856875 >>41856894 >>41857543
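One concrete way to answer varun_ch's throughput question is to time a generation against the model server and compute tokens per second. A minimal sketch, assuming the model is served through Ollama (a common backend for Coolify-hosted LLM setups) on its default localhost:11434 port with a llama3.2 tag already pulled; the /api/generate endpoint and the eval_count/eval_duration response fields are standard Ollama, while the host, port, model tag, and prompt are assumptions about the reader's setup.

    import json
    import urllib.request

    # Assumed: a local Ollama instance on its default port.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    payload = {
        "model": "llama3.2",   # assumes the tag was pulled beforehand
        "prompt": "Explain VRAM in one paragraph.",
        "stream": False,       # one JSON reply, including timing stats
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # Ollama reports generation timings in nanoseconds.
    tokens = result["eval_count"]
    seconds = result["eval_duration"] / 1e9
    print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tok/s")

Averaging over a few prompts gives a fair number; once the model fits in VRAM, decode speed on cards like a 2060 or 2070 Super is dominated by memory bandwidth rather than compute.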
1. nubinetwork | 16 Oct 24 08:38 UTC | No. 41856894
>>41856480
I'm happy with a Radeon VII, unless the model is bigger than 16 GB...
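The 16 GB ceiling nubinetwork mentions is the Radeon VII's VRAM, and a back-of-envelope check shows whether a quantized model clears it. A rough sketch, assuming the common rule of thumb that quantized weights take about params × bits/8 bytes plus headroom for KV cache and runtime overhead; the 20% headroom factor and the example model sizes are illustrative assumptions, not measurements.

    # Back-of-envelope VRAM fit check for quantized models.
    # Assumption: weights ~= params * bits/8 bytes, plus ~20% headroom
    # for KV cache and runtime overhead. Real usage varies by runtime.

    def fits_in_vram(params_b: float, bits: int, vram_gb: float,
                     headroom: float = 1.2) -> bool:
        weight_gb = params_b * bits / 8   # billions of params -> GB
        return weight_gb * headroom <= vram_gb

    # Radeon VII: 16 GB of HBM2.
    for params_b, bits, label in [(3, 8, "Llama 3.2 3B @ 8-bit"),
                                  (8, 4, "8B model @ 4-bit"),
                                  (70, 4, "70B model @ 4-bit")]:
        verdict = "fits" if fits_in_vram(params_b, bits, 16.0) else "does not fit"
        print(f"{label}: {verdict} in 16 GB")

By that estimate the 3B Llama 3.2 fits comfortably even unquantized, while anything near 70B needs far more than 16 GB.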