
23 points | robertvc | 3 comments
subharmonicon No.45154175
TL;DR: in order to get good performance you need to use vendor-specific extensions, which results in the same lock-in Modular has been claiming they will enable you to avoid.
replies(2): >>45154429, >>45156295
totalperspectiv No.45154429
I don't follow your logic. Mojo can target multiple GPU vendors. What is the Modular-specific lock-in?
replies(2): >>45154650, >>45156105
smilekzs No.45154650
Not OP, but I think this could be an instance of leaky abstraction at work. Most of the time you hand-write an accelerator kernel hoping to optimize for runtime performance. If the abstraction/compiler does not fully insulate you from micro-architectural details that affect performance in non-trivial ways (e.g. the memory bank conflicts mentioned in the article), then you still end up with per-vendor implementations, or compile-time if-else blocks all over the place. This is less than ideal, but still arguably better than working with separate vendor APIs or, worse, completely separate toolchains.
replies(1): >>45154893
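To make the "compile-time if-else" point concrete, here is a minimal sketch in plain Python (not Mojo); the `VENDOR` constant and the bank counts are illustrative assumptions, not anything from Modular's API:

```python
# Illustrative only: how a micro-architectural detail (shared-memory
# bank count) leaks into an otherwise portable kernel as per-vendor
# branching. VENDOR and BANKS are assumptions made for this sketch.

VENDOR = "nvidia"  # in Mojo/C++ this would be a compile-time parameter
BANKS = {"nvidia": 32, "amd": 64}  # assumed bank counts per vendor

def padded_tile_cols(cols: int) -> int:
    """Pad a shared-memory tile row so that consecutive rows start in
    different banks, avoiding bank conflicts on column-strided access."""
    banks = BANKS.get(VENDOR, 32)
    # If the row length is a multiple of the bank count, every row's
    # column 0 lands in the same bank; one element of padding staggers them.
    return cols + 1 if cols % banks == 0 else cols
```

The branch on `VENDOR` is exactly the kind of conditional that either the compiler hides for you or you end up scattering through your own kernels.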
whimsicalism No.45154893
Yes, it looks like they have some sort of metaprogramming setup (nicer than C++'s) for doing this: https://www.modular.com/mojo
replies(1): >>45158770
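For readers unfamiliar with the idea, the metaprogramming approach amounts to baking vendor-specific tunables into a specialized kernel variant at compile time rather than branching at runtime. A loose Python analogue (not Mojo syntax; the tile sizes are invented for illustration):

```python
def specialize(vendor: str):
    """Return a kernel variant with vendor-specific tunables baked in --
    a loose analogue of compile-time parameterization. The tile sizes
    below are invented for illustration, not real tuning data."""
    tunables = {"nvidia": {"tile": 128}, "amd": {"tile": 64}}
    tile = tunables.get(vendor, {"tile": 32})["tile"]

    def kernel(n: int) -> int:
        # Number of tiles needed to cover n elements (ceiling division);
        # a real kernel would launch this many workgroups.
        return -(-n // tile)

    return kernel

# Each call produces a distinct, fully specialized function:
nvidia_kernel = specialize("nvidia")
amd_kernel = specialize("amd")
```

The vendor-specific details live in one table; the kernel body itself stays vendor-agnostic, which is roughly the bargain such metaprogramming setups offer.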
totalperspectiv No.45158770
I can confirm, it's quite nice.
replies(1): >>45160567
whimsicalism No.45160567
Just wondering: why do you use Mojo here over Triton or the new Pythonic CuTe/CUTLASS?
replies(1): >>45167931
totalperspectiv No.45167931
Because I was originally writing some very CPU-intensive SIMD stuff, which Mojo is also fantastic for. Once I got that working and running nicely, I decided to try getting the same algorithm running on the GPU since, at the time, they had just open-sourced the GPU parts of the stdlib. It was really easy to get going with.

I have not used Triton/CuTe/CUTLASS though, so I can't really compare against anything other than CUDA.