Gemma 3 QAT Models: Bringing AI to Consumer GPUs
(developers.googleblog.com)
602 points by emrah | 1 comment | 20 Apr 25 12:22 UTC
jarbus ◴[20 Apr 25 13:31 UTC] No.43743656 [source] ▶ >>43743337 (OP)
Very excited to see these kinds of techniques. I think getting a 30B-level reasoning model usable on consumer hardware is going to be a game changer, especially if it uses less power.
replies(1): >>43743674

apples_oranges ◴[20 Apr 25 13:34 UTC] No.43743674 [source] ▶ >>43743656
DeepSeek does reasoning on my home Linux PC, but I'm not sure how power-hungry it is.
replies(1): >>43743696

gcr ◴[20 Apr 25 13:38 UTC] No.43743696 [source] ▶ >>43743674
What variant? I'd considered DeepSeek far too large for any consumer GPU.
replies(1): >>43743721

scosman ◴[20 Apr 25 13:43 UTC] No.43743721 [source] ▶ >>43743696
Some people run DeepSeek on CPU. With 37B active params it isn't fast, but it's passable.
replies(1): >>43743842

danielbln ◴[20 Apr 25 14:05 UTC] No.43743842 [source] ▶ >>43743721
Actual DeepSeek, or some Qwen/Llama reasoning fine-tune?
replies(1): >>43744550

scosman ◴[20 Apr 25 15:50 UTC] No.43744550 [source] ▶ >>43743842
Actual DeepSeek. 500 GB of memory and a Threadripper works. Not a standard PC spec, but a common-ish home-brew setup for single-user DeepSeek.
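The 500 GB figure is consistent with a quick back-of-envelope estimate. A minimal sketch, assuming DeepSeek-R1's published parameter counts (671B total, 37B active per token); the quantization bit-widths shown are illustrative, and KV cache / activation memory is ignored:

```python
# Rough memory estimate for hosting DeepSeek-R1 on a single machine.
# 671B total / 37B active are DeepSeek's published MoE figures;
# the bit-widths below are illustrative quantization levels.

TOTAL_PARAMS = 671e9   # every expert must stay resident in RAM
ACTIVE_PARAMS = 37e9   # parameters actually exercised per token

def weight_gb(params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (excludes KV cache and activations)."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{weight_gb(TOTAL_PARAMS, bits):.0f} GB resident, "
          f"~{weight_gb(ACTIVE_PARAMS, bits):.0f} GB touched per token")
```

At 4-bit quantization the full model needs roughly 335 GB just for weights, so 500 GB of system RAM leaves comfortable headroom; because only 37B params are active per token, CPU inference stays slow but usable rather than hopeless.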