
16 points by huntdunbar | 1 comment

Hey everyone. This is Hunter, CPO at Luxonis! We built OAK 4 (www.luxonis.com/oak4) to eliminate reliance on the cloud or a host computer in robotics & industrial automation. We brought Jetson Orin-level compute and Yocto Linux directly to our stereo cameras.

This allows you to run full CV pipelines (detection + depth + logic) entirely on-device, with no dependency on a host PC or cloud streaming. We also integrated it with Hub, our fleet management platform, to handle deployments, OTA updates, and collect edge cases ("Snaps") for model retraining.

For this generation, we shipped a Qualcomm QCS8550. This gives the device a CPU, GPU, AI accelerator, and native depth-processing ISP. It achieves 52 TOPS of processing inside an IP67 housing that handles rough weather, shock, and vibration. At 25W peak, the device is designed to run reliably without active cooling.
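As a rough sanity check on those two figures (52 TOPS, 25W peak), here is the implied compute efficiency. This is just arithmetic on the numbers quoted above, not a measured benchmark, and peak TOPS and peak power rarely coincide in practice:

```python
# Back-of-envelope efficiency from the specs quoted above.
# Treat this as a best-case ceiling, not a measured TOPS/W number.
tops = 52        # claimed peak AI throughput
peak_watts = 25  # claimed peak power draw

efficiency = tops / peak_watts
print(f"{efficiency:.2f} TOPS/W")  # 2.08 TOPS/W
```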

Our ML team also released Neural Stereo Depth, running our proprietary LENS (Luxonis Edge Neural Stereo) models directly on the device. Visit www.luxonis.com to learn more!

akouri No.46232152
Congrats on the launch! What kind of models can you run on the device?
replies(2): >>46232980 >>46234059
1. max_mclaughlin No.46234059
You can run most standard CV models in parallel, including our own Neural Stereo depth estimation models (speed depends on the model size you choose).

Here is a look at the performance of some standard models:

YOLOv6-nano: 830 FPS
YOLOEv8-large: 85 FPS
DeepLabV3+: 340 FPS
YOLOv8-large Pose Estimation: 170 FPS
Depth Anything V2: 95 FPS
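To put those throughput numbers in perspective, here is the per-frame time budget each figure implies. Note that FPS measures throughput, not end-to-end latency — a pipelined accelerator can sustain a high frame rate while each individual frame takes longer to traverse the pipeline — so this is a back-of-envelope conversion of the figures quoted above, not Luxonis data:

```python
# Convert the quoted throughput figures (FPS) into per-frame time budgets.
def frame_budget_ms(fps: float) -> float:
    """Milliseconds of compute available per frame at a given throughput."""
    return 1000.0 / fps

# Figures from the comment above.
benchmarks = {
    "YOLOv6-nano": 830,
    "YOLOEv8-large": 85,
    "DeepLabV3+": 340,
    "YOLOv8-large Pose Estimation": 170,
    "Depth Anything V2": 95,
}

for model, fps in benchmarks.items():
    print(f"{model}: {fps} FPS -> {frame_budget_ms(fps):.2f} ms/frame")
```

At 830 FPS, YOLOv6-nano has roughly 1.2 ms per frame, while Depth Anything V2 at 95 FPS has about 10.5 ms — useful when deciding which models can share the accelerator in parallel.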