
311 points melodyogonna | 5 comments
nromiun No.45138008
Weird that there has been no significant adoption of Mojo. It has been quite some time since its release, and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
1. jb1991 No.45138022
It says at the top:

> write state of the art kernels

Mojo seems to be competing with C++ for writing kernels. PyTorch and Julia are high-level languages where you don't write the kernels.

2. Alexander-Barth No.45138088
Actually, in Julia you can write kernels with a subset of the Julia language:

https://cuda.juliagpu.org/stable/tutorials/introduction/#Wri...

With KernelAbstractions.jl you can target both CUDA and ROCm:

https://juliagpu.github.io/KernelAbstractions.jl/stable/kern...

For Python (or rather, a Python-like DSL), there is also Triton (and probably others):

https://pytorch.org/blog/triton-kernel-compilation-stages/
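These kernel DSLs all share the same basic shape: a scalar function body that each GPU thread runs over its own slice of the data. A minimal pure-Python sketch of the grid-stride vector-add pattern (the same example the CUDA.jl introduction tutorial builds; the sequential `launch` loop here is a hypothetical stand-in for a real parallel GPU launch, and all names are illustrative):

```python
def vector_add_kernel(x, y, out, thread_id, n_threads):
    # Each simulated "thread" starts at its own index and strides by the
    # total thread count, so the threads cover the whole array cooperatively.
    i = thread_id
    while i < len(x):
        out[i] = x[i] + y[i]
        i += n_threads

def launch(x, y, n_threads=4):
    # On a GPU these iterations would run in parallel; here we loop.
    out = [0.0] * len(x)
    for t in range(n_threads):
        vector_add_kernel(x, y, out, t, n_threads)
    return out
```

For example, `launch([1, 2, 3], [10, 20, 30])` returns `[11, 22, 33]`; on real hardware each value of `t` would be a hardware thread index supplied by the runtime.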

3. jakobnissen No.45138136
I think Julia aspires to be performant enough that you can write the kernels in Julia, so Julia is more like Mojo + Python together.

Although I have my doubts that Julia is actually willing to make the compromises that would allow it to go that low-level, i.e. semantic guarantees about allocations and inference, guarantees about certain optimizations, and more.

4. pjmlp No.45138151
You can write kernels in Python using the CUDA and Open API SDKs in 2025; that is one of the adoption problems for Mojo.
5. davidatbu No.45147144
Chris's claim (at least with regard to Triton) is that it gets about 80% of the achievable performance, and they're aiming for closer to 100%.