
151 points ibobev | 5 comments
bob1029 ◴[] No.45653379[source]
I look at cross-core communication as a 100x latency penalty. Everything follows from there. The dependencies in the workload ultimately determine how it should be spread across the cores (or not!). The real elephant in the room is that it's often much faster to just do the whole job on a single core, even if you have 255 others available. Some workloads don't care how clever a scheduler you have in hand: if everything constantly depends on the prior action, you will never get any uplift.
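A toy sketch of that dependency point (my illustration in C++, not the commenter's code): the first loop is a pure serial chain that no scheduler can split, while the second has the independent shape that actually benefits from more cores.

```cpp
#include <cstdint>
#include <cstdio>

// Serially dependent: iteration i cannot start until i-1 finishes,
// so 255 idle cores buy you nothing here, scheduler or no scheduler.
uint64_t chained(uint64_t x, int n) {
    for (int i = 0; i < n; ++i)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL; // each step needs the last
    return x;
}

// Independent: every element stands alone, so this loop can be
// partitioned across as many cores as you like.
void independent(uint64_t* a, int n) {
    for (int i = 0; i < n; ++i)
        a[i] = a[i] * 6364136223846793005ULL + 1442695040888963407ULL;
}

int main() {
    uint64_t a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    independent(a, 8);
    std::printf("%llu %llu\n", (unsigned long long)chained(1, 1000),
                (unsigned long long)a[0]);
}
```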

You see this most obviously (visually) in places like game engines. In Unity, the difference between non-Burst and Burst-compiled code is extreme. By comparison, the difference between single and multi-core for the job system is often irrelevant. If the amount of CPU time spent on each job isn't high enough, the benefit of multicore evaporates. Sending a job to be run on the fleet has a lot of overhead; it has to be worth that one-time 100x latency cost both ways.
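A minimal sketch of that overhead threshold, using std::async as a stand-in for a job system (the function tiny_job and the counts are mine; std::launch::async may spawn a fresh thread per call, which makes the handoff cost deliberately pessimistic):

```cpp
#include <chrono>
#include <cstdio>
#include <future>

static long tiny_job(long x) { return x * 2 + 1; } // far too cheap to be worth dispatching

int main() {
    using clock = std::chrono::steady_clock;
    constexpr int N = 1000;
    long sink = 0;

    auto t0 = clock::now();
    for (int i = 0; i < N; ++i) sink += tiny_job(i); // run inline on this core
    auto t1 = clock::now();
    for (int i = 0; i < N; ++i)
        sink += std::async(std::launch::async, tiny_job, i).get(); // pay the handoff both ways
    auto t2 = clock::now();

    auto us = [](auto d) {
        return std::chrono::duration_cast<std::chrono::microseconds>(d).count();
    };
    std::printf("inline: %lld us, dispatched: %lld us (sink=%ld)\n",
                (long long)us(t1 - t0), (long long)us(t2 - t1), sink);
}
```

On any machine the dispatched loop loses badly; the job body has to grow until its compute time dwarfs the handoff before multicore pays off.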

The GPU is the ultimate example of this. Some workloads benefit dramatically from the incredible parallelism; others are entirely infeasible by comparison. This is at the heart of my problem with the current machine learning research paradigm. Some ML techniques are terrible at running on the GPU, but it seems as if we've convinced ourselves that a GPU is a prerequisite for any kind of ML work. It all boils down to the latency of the compute: getting data in and out of a GPU takes an eternity compared to L1. There are other fundamental problems with GPUs, such as warp divergence, that preclude clever workarounds.

replies(7): >>45660423 #>>45661402 #>>45661430 #>>45662310 #>>45662427 #>>45662527 #>>45667568 #
bsenftner ◴[] No.45660423[source]
Astute points. I've worked on an extremely performant facial recognition system (tens of millions of face compares per second per core) that lives in L1 and does not use the GPU for the FR inference at all, only for the display of the video and the tracked people within. I rarely even bother telling ML/DL/AI people it does not use the GPU, because I'm just tired of the argument that "we're doing it wrong".
replies(4): >>45663377 #>>45663434 #>>45663730 #>>45666183 #
zipy124 ◴[] No.45663377[source]
How are you doing tens of millions of faces per second per core? First of all, assuming a 5 GHz processor, ten million compares a second gives you only 500 cycles per image, which is nowhere near enough to do anything image-related. Second, L1 cache is at most in the hundreds of kilobytes, so the faces aren't all in L1 but must be retrieved from elsewhere...??
replies(4): >>45663801 #>>45663834 #>>45663907 #>>45666117 #
1. Keyframe ◴[] No.45663801[source]
You can't look at it like _that_. Biometrics has its own "things". I don't know what OP is actually doing, but it's probably not classical image processing. Most probably facial features are going through some "form of LGBPHS binarized and encoded which is then fed into an adaptive bloom filter based transform"[0].

The paper quotes 76,800 bits per template (less when compressed); with 64-bit words that's 1,200 64-bit bitwise ops per compare. At 4.5 GHz that's 4.5 billion ops per second / 1,200 ops per comparison, which is ~3.75 million recognitions per second. Give or take some overhead, it's definitely possible.

[0] https://www.christoph-busch.de/files/Gomez-FaceBloomFilter-I...

Cache locality is a thing. Like in raytracing and the old Confucian adage that says "Primary rays cache, secondary trash".
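A minimal sketch of what such a comparison loop could look like, assuming the match step is a plain Hamming distance over 76,800-bit templates (1,200 64-bit words, as in the arithmetic above); the paper's actual Bloom-filter matching step may differ:

```cpp
#include <array>
#include <bit>      // std::popcount, needs C++20
#include <cstdint>

constexpr int kWords = 76800 / 64; // 1,200 x 64-bit words per template

using Template = std::array<uint64_t, kWords>;

// XOR + popcount Hamming distance: ~1,200 cheap bitwise ops per compare,
// which is how millions of compares per second per core becomes plausible.
int hamming(const Template& a, const Template& b) {
    int d = 0;
    for (int i = 0; i < kWords; ++i)
        d += std::popcount(a[i] ^ b[i]);
    return d;
}
```

At 9.6 KB per template, a probe can stay resident in L1 while gallery templates stream through the cache, which fits the locality point above.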

replies(1): >>45664454 #
2. reactordev ◴[] No.45664454[source]
Correct, it’s probably a vector distance or something like that after the Bloom filter. Take the facial points as a vec<T>: you only have a little over a dozen, so it’s going to fit nicely in L1.
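A sketch of that alternative, with the dimension (16 floats, one cache line) purely a guess based on "a little over a dozen" points:

```cpp
#include <array>

constexpr int kDims = 16; // a dozen-odd facial landmarks, padded to 16 floats

// Squared Euclidean distance over a 64-byte vector: probe and candidate
// together occupy two cache lines, so a gallery scan lives entirely in L1.
float dist2(const std::array<float, kDims>& a, const std::array<float, kDims>& b) {
    float d = 0.0f;
    for (int i = 0; i < kDims; ++i) {
        float t = a[i] - b[i];
        d += t * t;
    }
    return d;
}
```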
replies(1): >>45668775 #
3. bsenftner ◴[] No.45668775[source]
NDA prevents me from saying anything beyond this: the compares are on minimal representations of a face template, and those stream through the core's caches.
replies(2): >>45669495 #>>45672890 #
4. reactordev ◴[] No.45669495{3}[source]
Cue the “If I were to build it…” ;)
5. bsenftner ◴[] No.45672890{3}[source]
A public report from the employer about the tech: https://cyberextruder.com/wp-content/uploads/2022/06/Accurac... (I no longer work there.)