
490 points jarmitage | 6 comments
1. VyseofArcadia ◴[] No.40681631[source]
Aren't warps already architectural elements of nvidia graphics cards? This name collision is going to muddy search results.
replies(2): >>40682012 #>>40686086 #
2. logicchains ◴[] No.40682012[source]
>Aren't warps already architectural elements of nvidia graphics cards?

Architectural elements of _all_ graphics cards.

replies(1): >>40682322 #
3. VyseofArcadia ◴[] No.40682322[source]
Unsure of how authoritative this is, but this article[0] seems to imply it's a matter of branding.

> The efficiency of executing threads in groups, which is known as warps in NVIDIA and wavefronts in AMD, is crucial for maximizing core utilization.

[0] https://www.xda-developers.com/how-does-a-graphics-card-actu...

replies(1): >>40684297 #
4. logicchains ◴[] No.40684297{3}[source]
ROCm also refers to them as warps https://rocm.docs.amd.com/projects/HIP/en/latest/understand/... :

>The threads are executed in groupings called warps. The amount of threads making up a warp is architecture dependent. On AMD GPUs the warp size is commonly 64 threads, except in RDNA architectures which can utilize a warp size of 32 or 64 respectively. The warp size of supported AMD GPUs is listed in the Accelerator and GPU hardware specifications. NVIDIA GPUs have a warp size of 32.
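Because the warp/wavefront width varies by vendor and architecture as the quoted docs describe, portable host code generally queries it at runtime instead of hard-coding 32. A minimal CUDA sketch (the HIP equivalent just swaps the `cuda*` calls and types for their `hip*` counterparts):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Handle machines without a CUDA-capable GPU gracefully.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA-capable device found\n");
        return 0;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // warpSize is 32 on NVIDIA GPUs; HIP reports 32 or 64 on AMD,
        // depending on the architecture.
        std::printf("device %d (%s): warp size = %d\n",
                    i, prop.name, prop.warpSize);
    }
    return 0;
}
```

Inside a kernel the same value is available through the built-in `warpSize` variable.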

replies(1): >>40687804 #
5. ahfeah7373 ◴[] No.40686086[source]
There is also already WARP in the graphics world:

https://learn.microsoft.com/en-us/windows/win32/direct3darti...

It's basically a software implementation of DirectX.

6. int_19h ◴[] No.40687804{4}[source]
It actually kinda makes some sense when you realize that "warp" is a reference to warp threads in actual weaving: https://en.wikipedia.org/wiki/Warp_and_weft