230 points roryclear | 1 comment | 24 Aug 25 11:34 UTC
This runs YOLOv8 + ByteTrack with tinygrad. Detections (depending on user config) are saved and can be sent to the companion iOS app along with a notification. All video processing is done locally, all footage is encrypted before leaving your computer, and the sending of notifications + videos is optional.
This uses tinygrad, so it runs well on my Apple Silicon Macs and should be able to run on a lot of hardware (or will be able to once I remove other deps).
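For readers wondering what "all footage is encrypted before leaving your computer" can look like in practice, here is a minimal sketch (not the project's actual code) using the cryptography library's symmetric Fernet API: the clip is encrypted on-device and only the ciphertext would ever be uploaded. The detection/tracking side is omitted.

    # Minimal sketch of "footage is encrypted before leaving your computer".
    # Not the project's code; it only illustrates the encrypt-locally step
    # using the cryptography library's symmetric Fernet API.
    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # generated and kept on-device
    fernet = Fernet(key)

    def encrypt_clip(clip_path: Path) -> bytes:
        """Return ciphertext for a saved clip; only this leaves the machine."""
        return fernet.encrypt(clip_path.read_bytes())

    def decrypt_clip(ciphertext: bytes) -> bytes:
        """The companion app would do the equivalent of this with the shared key."""
        return fernet.decrypt(ciphertext)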
Where can I find the list of supported GPUs? Frigate has been able to handle everything I've tried so far, from Nvidia and AMD GPUs to Intel iGPUs.
Maybe my view of Frigate and TensorFlow (assuming Frigate still uses it) is outdated then. I'm referring to tinygrad vs TensorFlow when I say GPU support; of course Google's TensorFlow is best for Google's TPUs. I've had better luck using tinygrad on my personal devices, but I'm biased, as it's been a while since I've used TensorFlow.
This would be a good point of differentiation to make on your GitHub page, or for a technical audience on your website. Frigate is SOTA in many folks' minds, and showing that you use tinygrad over TensorFlow may be a good "modern-ness" signal for that audience.
Edit: another solution in this space shows a list of supported ML runtimes, which would be good info for folks wanting to run on specific hardware. https://github.com/boquila/boquilahub
A supported runtimes list would be nice, but I don't have access to much hardware to test on. I aim to remove most dependencies and support anything that can run tinygrad + ffmpeg.
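A rough sketch of what the "anything that can run tinygrad + ffmpeg" floor could look like, assuming a fixed-resolution source (the project may well do this differently): raw RGB frames are read straight from an ffmpeg pipe, so no OpenCV or other video libraries are needed, and each frame can then be preprocessed and fed to a tinygrad YOLOv8 model.

    # Rough sketch: decode frames with ffmpeg over a pipe, no other video deps.
    # WIDTH/HEIGHT are assumed here; a real version would probe them with ffprobe.
    import subprocess
    import numpy as np

    WIDTH, HEIGHT = 1280, 720  # assumed stream resolution

    def frames(source: str):
        """Yield HxWx3 uint8 RGB frames decoded by ffmpeg from a file or RTSP URL."""
        proc = subprocess.Popen(
            ["ffmpeg", "-i", source, "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
        )
        frame_bytes = WIDTH * HEIGHT * 3
        while True:
            buf = proc.stdout.read(frame_bytes)
            if len(buf) < frame_bytes:
                break
            yield np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, 3)

    # each frame would then be resized/normalized and passed to the tinygrad model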