Show HN: We made our own inference engine for Apple Silicon (github.com)
186 points by darkolorin | 15 Jul 25 11:29 UTC | 1 comment
We wrote our inference engine in Rust; it is faster than llama.cpp in all of our use cases. Your feedback is very welcome. It was written from scratch with the idea that you can add support for any kernel and any platform.
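To make the "add any kernel" idea concrete, here is a minimal Rust sketch of one way a pluggable kernel layer can be structured; the `Kernel` trait, `CpuKernel` backend, and `matvec` method are illustrative names of mine, not the repository's actual API:

    // Illustrative sketch of a pluggable-kernel design; names are
    // hypothetical, not the actual API of the linked repository.

    /// A compute kernel the engine can dispatch to.
    trait Kernel {
        fn name(&self) -> &str;
        /// Matrix-vector product y = W x, with W stored row-major (rows x cols).
        fn matvec(&self, w: &[f32], x: &[f32], y: &mut [f32], rows: usize, cols: usize);
    }

    /// Portable fallback; a Metal or NEON backend would implement the same trait.
    struct CpuKernel;

    impl Kernel for CpuKernel {
        fn name(&self) -> &str {
            "cpu-fallback"
        }

        fn matvec(&self, w: &[f32], x: &[f32], y: &mut [f32], rows: usize, cols: usize) {
            assert_eq!(w.len(), rows * cols);
            assert_eq!(x.len(), cols);
            assert_eq!(y.len(), rows);
            for r in 0..rows {
                // Dot product of row r of W with x.
                y[r] = w[r * cols..(r + 1) * cols]
                    .iter()
                    .zip(x)
                    .map(|(a, b)| a * b)
                    .sum();
            }
        }
    }

    fn main() {
        // Registry of available kernels; platform-specific backends would be
        // added here behind #[cfg(...)] gates.
        let kernels: Vec<Box<dyn Kernel>> = vec![Box::new(CpuKernel)];
        let k = &kernels[0];

        let w = [1.0, 2.0, 3.0, 4.0]; // 2x2 matrix, row-major
        let x = [1.0, 1.0];
        let mut y = [0.0f32; 2];
        k.matvec(&w, &x, &mut y, 2, 2);
        println!("{}: {:?}", k.name(), y); // prints: cpu-fallback: [3.0, 7.0]
    }

Under this kind of design, adding a platform means implementing the trait once and registering the backend; the rest of the engine dispatches through the trait object and never needs to change.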
cwlcwlcwlingg | 15 Jul 25 15:06 UTC | No. 44571918
>>44570048 (OP)
Wondering why use Rust rather than C++.
replies(5): >>44572202 >>44573216 >>44574364 >>44574476 >>44576525
khurs | 15 Jul 25 22:27 UTC | No. 44576525
>>44571918
The recommendation from security agencies is to prefer Rust over C++, as there is less risk of memory-safety exploits.
I checked: llama.cpp uses C++ (obviously) and Ollama uses Go.
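To make the memory-safety point concrete, here is a toy Rust example of mine (not from either project) showing the bug class in question: an out-of-bounds write that would be undefined behaviour in C++ is a deterministic panic in Rust, and the checked `get_mut` accessor forces the out-of-range case to be handled explicitly:

    fn main() {
        let mut buf = vec![0u8; 8];
        let idx = 12; // stands in for an attacker-controlled index

        // In C++, writing through a raw pointer or array at this index would
        // be undefined behaviour and could silently corrupt adjacent memory.
        // In Rust, `buf[idx] = 0xff;` would panic with an index-out-of-bounds
        // error instead of corrupting memory.

        // The checked accessor makes the failure case explicit:
        let len = buf.len();
        match buf.get_mut(idx) {
            Some(byte) => *byte = 0xff,
            None => eprintln!("rejected write: index {idx} out of bounds for len {len}"),
        }
    }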