
899 points by georgehill | 4 comments
world2vec | No.36216161
Might be a silly question, but is GGML a similar/competing library to George Hotz's tinygrad [0]?

[0] https://github.com/geohot/tinygrad

replies(2): >>36216187 >>36218539
qeternity | No.36216187
No, GGML is a CPU-optimized tensor library and quantized weight format that is closely linked to Georgi Gerganov's other project, llama.cpp.
replies(2): >>36216244 >>36216266
1. ggerganov | No.36216266
ggml started with a focus on CPU inference, but lately we have been augmenting it with GPU support. Although still in development, it already has partial CUDA, OpenCL, and Metal backend support.
replies(3): >>36216327 >>36216442 >>36219452
2. qeternity | No.36216327
Hi Georgi, thanks for all the work. I've been following and using it since the Llama base weights became available!

I wasn't implying it's CPU-only, just that it started as a CPU-optimized library.

3. freedomben | No.36216442
As a person burned by Nvidia, I can't thank you enough for the OpenCL support.
4. ignoramous | No.36219452
(a novice here who knows a couple of fancy terms)

> ...lately we have been augmenting it with GPU support.

Would you say you'd then be building an equivalent to Google's JAX?

Someone even asked if anyone would build a C++-to-JAX transpiler [0]... I'm wondering if that's something you might implement? Thanks.

[0] https://news.ycombinator.com/item?id=35475675