490 points jarmitage | 4 comments
VyseofArcadia ◴[] No.40681631[source]
Aren't warps already architectural elements of nvidia graphics cards? This name collision is going to muddy search results.
replies(2): >>40682012 #>>40686086 #
1. logicchains ◴[] No.40682012[source]
>Aren't warps already architectural elements of nvidia graphics cards?

Architectural elements of _all_ graphics cards.

replies(1): >>40682322 #
2. VyseofArcadia ◴[] No.40682322[source]
Unsure how authoritative this is, but this article[0] seems to imply it's a matter of branding.

> The efficiency of executing threads in groups, which is known as warps in NVIDIA and wavefronts in AMD, is crucial for maximizing core utilization.

[0] https://www.xda-developers.com/how-does-a-graphics-card-actu...

replies(1): >>40684297 #
3. logicchains ◴[] No.40684297[source]
ROCm also refers to them as warps (https://rocm.docs.amd.com/projects/HIP/en/latest/understand/...):

>The threads are executed in groupings called warps. The amount of threads making up a warp is architecture dependent. On AMD GPUs the warp size is commonly 64 threads, except in RDNA architectures which can utilize a warp size of 32 or 64 respectively. The warp size of supported AMD GPUs is listed in the Accelerator and GPU hardware specifications. NVIDIA GPUs have a warp size of 32.
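Since the warp size differs between vendors and even between AMD generations, it's safer to query it at runtime than to hard-code 32 or 64. A minimal HIP sketch (my own, not from the linked docs; assumes a working ROCm/HIP toolchain and at least one visible device):

    // Print the runtime-reported warp size for each visible device.
    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
            std::fprintf(stderr, "no HIP devices found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            hipDeviceProp_t prop;
            if (hipGetDeviceProperties(&prop, i) != hipSuccess)
                continue;
            // Typically 32 on NVIDIA and 64 on GCN/CDNA; RDNA parts may
            // report 32 or 64 depending on the wavefront mode in use.
            std::printf("device %d (%s): warp size %d\n",
                        i, prop.name, prop.warpSize);
        }
        return 0;
    }

Inside device code the same value is available via the built-in warpSize variable, in both CUDA and HIP.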

replies(1): >>40687804 #
4. int_19h ◴[] No.40687804{3}[source]
4. int_19h ◴[] No.40687804{3}[source]
It actually kinda makes sense when you realize that "warp" is a reference to warp threads in actual weaving: https://en.wikipedia.org/wiki/Warp_and_weft