Show HN: oLLM – LLM Inference for large-context tasks on consumer GPUs (github.com)
3 points by anuarsh | 1 comment | 28 Aug 25 23:19 UTC
1. [28 Aug 25 23:19 UTC] No.45058122 >>45058121 (OP)