
157 points by galeos | 1 comment
dailykoder No.42191742
I first read about it a few weeks ago and found it very interesting.

Now that I have done more than enough CPU design inside FPGAs, I wanted to try something new, some computation-heavy work that could benefit from an FPGA. Does anyone here know how feasible it would be to implement something like that on an FPGA? I only have rather small chips (an Artix-7 35T and a PolarFire SoC with 95k logic slices), so I know I won't be able to squeeze a full LLM into them, but something should be possible.

Maybe I should refresh the fundamentals though and start with MNIST. But the question is rather: what is a realistic goal that I could actually reach with these small FPGAs? Performance might be secondary; I am more interested in what's possible in terms of complexity/features on a small device.
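
For a sense of scale, the core BitNet-style kernel is tiny: with ternary weights in {-1, 0, +1}, the matrix-vector product needs no multipliers at all, only adds, subtracts, and skips, which is exactly the kind of thing that maps nicely onto FPGA logic. A minimal sketch in plain C (the int8 weight storage and the missing scaling/packing are simplifications for illustration, not the real on-disk format):

    #include <stdint.h>
    #include <stdio.h>

    /* Ternary (BitNet-style) matrix-vector product: y = W * x with W in {-1,0,+1}.
     * Weights are stored as plain int8 here for clarity; a real FPGA design
     * would pack them at ~1.58 bits and stream them from DDR. No multiplies
     * are needed: each weight either adds, subtracts, or skips the activation. */
    static void ternary_matvec(const int8_t *W, const int8_t *x,
                               int32_t *y, int rows, int cols)
    {
        for (int r = 0; r < rows; ++r) {
            int32_t acc = 0;
            for (int c = 0; c < cols; ++c) {
                int8_t w = W[r * cols + c];
                if (w == 1)       acc += x[c];
                else if (w == -1) acc -= x[c];
                /* w == 0: skip */
            }
            y[r] = acc;  /* per-tensor scaling / activation quantization would follow here */
        }
    }

    int main(void)
    {
        int8_t  W[2 * 3] = { 1, -1, 0,
                             0,  1, 1 };
        int8_t  x[3] = { 3, 5, 7 };
        int32_t y[2];
        ternary_matvec(W, x, y, 2, 3);
        printf("%d %d\n", y[0], y[1]);  /* expect: -2 12 */
        return 0;
    }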

Also, has anyone here compiled OpenCL (or OpenGL?) kernels for FPGAs and can give me a starting point? I was wondering if it's possible to have a working backend for something like tinygrad [1]. I think this would be a good way to learn the different layers of how such frameworks actually work.

- [1] https://github.com/tinygrad/tinygrad
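
For reference, the vendor OpenCL-to-FPGA flows (Intel's FPGA SDK for OpenCL, AMD/Xilinx Vitis) consume ordinary OpenCL C kernels and synthesize them to a bitstream, though they generally target the vendors' larger accelerator cards rather than a bare Artix-7 35T, so they may not help on that board specifically. A minimal kernel of the kind such flows take, as an illustrative sketch only:

    /* Minimal OpenCL C matrix-vector kernel, one output row per work-item.
     * Illustrative only; not tied to any particular vendor toolchain. */
    __kernel void matvec(__global const float *W,
                         __global const float *x,
                         __global float *y,
                         const int cols)
    {
        int row = get_global_id(0);
        float acc = 0.0f;
        for (int c = 0; c < cols; ++c)
            acc += W[row * cols + c] * x[c];
        y[row] = acc;
    }

If I remember right, tinygrad's existing GPU backend is itself OpenCL-based, so reading that backend is probably the quickest map of what a new device backend has to provide.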

verytrivial No.42192595
You gain potential parallelism with an FPGA, so with very small "at the edge" models it could speed things up, right? But the models are always going to be large, so memory bandwidth is going to be the bottleneck unless some very fancy FPGA memory "fabric" is possible. Perhaps for extremely low-latency classification tasks? I'm having trouble picturing that application, though.
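
To put a rough number on the bandwidth point (back-of-envelope only; the figures below are assumptions, not measurements): batch-1 generation has to stream essentially every weight once per token, so bandwidth divided by model size caps tokens/s no matter how much parallelism the fabric offers.

    #include <stdio.h>

    /* Back-of-envelope token-rate bound: generation reads roughly every weight
     * once per token, so tokens/s <= memory bandwidth / model size.
     * All numbers are illustrative assumptions, not measurements. */
    int main(void)
    {
        double model_gb = 7e9 * 1.58 / 8.0 / 1e9; /* ~1.4 GB for a 7B ternary model */
        double ddr_gbs  = 3.2;                    /* e.g. one narrow DDR channel on a small dev board */
        printf("upper bound: ~%.1f tokens/s\n", ddr_gbs / model_gb);
        return 0;
    }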

The code itself is surprisingly small/tight. I've been playing with llama.cpp for the last few days. The CPU-only archive is like 8 MB on gitlab, and there is no memory allocation during runtime. My ancient laptop (as in 2014!) is sweating but producing spookily good output with quantized 7B models.

(I'm mainly commenting to have someone correct me, by the way, since I'm interested in this question too!)

UncleOxidant No.42197224
Lower latency, but also much lower power. This sort of thing would be of great interest to companies running AI datacenters (which is why Microsoft is doing this research, I'd think). Low latency is also quite useful for real-time tasks.

> The code itself is surprisingly small/tight. I've been playing with llama.cpp for the last few days.

Is there a bitnet model that runs on llama.cpp? (Looks like it: https://www.reddit.com/r/LocalLLaMA/comments/1dmt4v7/llamacp...) Which bitnet model did you use?