58 points by ibobev | 5 comments
1. sheepscreek ◴[] No.44382488[source]
I’m guessing it’s because they’re using all the computing power the GPU has to offer in CUDA mode, as opposed to sharing the GPU with other functions (when in RTX mode).
replies(2): >>44384359 #>>44384406 #
2. colechristensen ◴[] No.44384359[source]
Yup, this is an "assume a spherical cow" situation where it's not dishonest, but you can't draw any real-world conclusions from the experiment unless you happen to be working in a very restricted space.
replies(1): >>44384383 #
3. ChocolateGod ◴[] No.44384383[source]
Wouldn't you, in a real-world scenario, need to make the CUDA cores aware of the game geometry, adding more work on the CPU?
replies(1): >>44388850 #
4. atq2119 ◴[] No.44384406[source]
More likely it's because the scene they're using is completely unrepresentative of what people are interested in: almost no triangles, primarily procedural nodes (for spheres), and in general a fairly simple scene.
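
For context, "procedural nodes" means the spheres are intersected analytically (solving a quadratic per ray) instead of being tessellated into triangles. A minimal CUDA sketch of that test, with hypothetical names, looks like:

    __device__ bool hit_sphere(float3 ro, float3 rd,
                               float3 center, float radius, float* t) {
        // Solve |ro + t*rd - center|^2 = radius^2, a quadratic in t.
        float3 oc = make_float3(ro.x - center.x, ro.y - center.y, ro.z - center.z);
        float a = rd.x*rd.x + rd.y*rd.y + rd.z*rd.z;
        float b = 2.0f * (oc.x*rd.x + oc.y*rd.y + oc.z*rd.z);
        float c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - radius*radius;
        float disc = b*b - 4.0f*a*c;
        if (disc < 0.0f) return false;       // ray misses the sphere
        *t = (-b - sqrtf(disc)) / (2.0f*a);  // nearer of the two roots
        return *t > 0.0f;                    // hit only if it's in front of the ray
    }

That's pure ALU work that CUDA cores are great at, and none of it exercises the triangle/BVH hardware.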
5. touisteur ◴[] No.44388850{3}[source]
Ideally you don't make the CUDA cores aware, but rather the ray-tracing circuitry. RT cores are designed to perform ray-triangle intersections within a BVH. You get the full teraflops and memory bandwidth (or more) if you fit the RT-core computing model.
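
For contrast, here's roughly the per-triangle test a pure CUDA-core path has to run in software for every ray — a sketch of the standard Möller–Trumbore intersection (helper names are mine), which is the kind of operation RT cores execute in fixed-function hardware while walking the BVH:

    __device__ float3 sub3(float3 a, float3 b) {
        return make_float3(a.x - b.x, a.y - b.y, a.z - b.z);
    }
    __device__ float3 cross3(float3 a, float3 b) {
        return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
    }
    __device__ float dot3(float3 a, float3 b) {
        return a.x*b.x + a.y*b.y + a.z*b.z;
    }

    // Moller-Trumbore: does ray (ro, rd) hit triangle (v0, v1, v2)?
    __device__ bool ray_tri(float3 ro, float3 rd,
                            float3 v0, float3 v1, float3 v2, float* t) {
        float3 e1 = sub3(v1, v0), e2 = sub3(v2, v0);
        float3 p  = cross3(rd, e2);
        float det = dot3(e1, p);
        if (fabsf(det) < 1e-8f) return false;  // ray parallel to triangle
        float inv = 1.0f / det;
        float3 s = sub3(ro, v0);
        float u = dot3(s, p) * inv;            // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return false;
        float3 q = cross3(s, e1);
        float v = dot3(rd, q) * inv;           // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = dot3(e2, q) * inv;                // hit distance along the ray
        return *t > 0.0f;
    }

Doing this (plus the BVH node tests) in software burns the same SMs you'd want for shading; the RT cores run it alongside them.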

And in most cases it's OK to spend time in one CPU function (creating and loading the BVH) against the hundreds of thousands of frames you'll be drawing on the GPU.
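
To make the amortization concrete, a toy host-side build might look like the sketch below — hypothetical structures, median split only (production builders use SAH heuristics and the driver's own node formats) — built once, then traversed every frame:

    #include <algorithm>
    #include <vector>

    struct AABB { float lo[3], hi[3]; };
    struct Node { AABB box{}; int left = -1, right = -1, first = 0, count = 0; };

    static AABB merge(const AABB& a, const AABB& b) {
        AABB r;
        for (int k = 0; k < 3; ++k) {
            r.lo[k] = std::min(a.lo[k], b.lo[k]);
            r.hi[k] = std::max(a.hi[k], b.hi[k]);
        }
        return r;
    }

    // Recursive median-split build over primitive bounding boxes; idx holds
    // 0..N-1 on entry. Runs once on the CPU; the flat node array is then
    // uploaded to the GPU and traversed for every ray of every frame.
    int build(const std::vector<AABB>& prims, std::vector<int>& idx,
              std::vector<Node>& nodes, int first, int count) {
        AABB box = prims[idx[first]];
        for (int i = 1; i < count; ++i) box = merge(box, prims[idx[first + i]]);

        int self = (int)nodes.size();
        nodes.emplace_back();
        nodes[self].box = box;

        if (count <= 4) {                 // small leaf: store primitive range
            nodes[self].first = first;
            nodes[self].count = count;
            return self;
        }
        int axis = 0;                     // split along the widest axis
        for (int k = 1; k < 3; ++k)
            if (box.hi[k] - box.lo[k] > box.hi[axis] - box.lo[axis]) axis = k;

        // Partition so the lower half of centroids (by 2x centroid = lo + hi)
        // lands before the median.
        std::nth_element(idx.begin() + first, idx.begin() + first + count / 2,
                         idx.begin() + first + count,
                         [&](int a, int b) {
                             return prims[a].lo[axis] + prims[a].hi[axis]
                                  < prims[b].lo[axis] + prims[b].hi[axis];
                         });
        int l = build(prims, idx, nodes, first, count / 2);
        int r = build(prims, idx, nodes, first + count / 2, count - count / 2);
        nodes[self].left = l;
        nodes[self].right = r;
        return self;
    }

A few milliseconds (or even seconds) spent here is noise against the lifetime of the acceleration structure.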