
311 points by melodyogonna | 2 comments
nromiun:
Weird that there has been no significant adoption of Mojo. It has been quite some time since it was released and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
jb1991:
It says at the top:

> write state of the art kernels

Mojo seems to be competing with C++ for writing kernels. PyTorch and Julia are high-level environments where you don't usually write the kernels yourself.

Alexander-Barth:
Actually, in Julia you can write kernels in a subset of the Julia language:

https://cuda.juliagpu.org/stable/tutorials/introduction/#Wri...
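
For a flavor, here is a minimal sketch along the lines of that tutorial (the function name gpu_add! and the sizes are illustrative, and it assumes CUDA.jl and a CUDA-capable GPU):

    using CUDA

    # The kernel is ordinary Julia; each thread handles one element.
    function gpu_add!(y, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] += x[i]
        end
        return nothing
    end

    N = 2^20
    x = CUDA.fill(1.0f0, N)
    y = CUDA.fill(2.0f0, N)

    # Launch with enough blocks to cover all N elements.
    @cuda threads=256 blocks=cld(N, 256) gpu_add!(y, x)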

With KernelAbstractions.jl you can target both CUDA and ROCm from the same kernel code:

https://juliagpu.github.io/KernelAbstractions.jl/stable/kern...
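
A rough sketch of the pattern from those docs (the add! kernel and array sizes are illustrative): the @index(Global) macro replaces the manual thread/block arithmetic above, and the backend is chosen at launch time.

    using KernelAbstractions

    # A backend-agnostic kernel definition.
    @kernel function add!(y, @Const(x))
        i = @index(Global)
        @inbounds y[i] += x[i]
    end

    # get_backend dispatches on the array type: plain Arrays run on
    # CPU(); CuArray/ROCArray inputs would run on the GPU backends.
    x = ones(Float32, 1024)
    y = ones(Float32, 1024)
    backend = get_backend(y)
    add!(backend)(y, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)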

For Python (or rather a Python-like DSL), there is also Triton (and probably others):

https://pytorch.org/blog/triton-kernel-compilation-stages/

davidatbu:
Chris Lattner's claim (at least with regard to Triton) is that it gets you about 80% of peak performance, and Mojo is aiming for closer to 100%.