311 points melodyogonna | 30 comments
1. nromiun ◴[] No.45138008[source]
Weird that there has been no significant adoption of Mojo. It has been quite some time since it got released and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
replies(10): >>45138022 #>>45138094 #>>45138140 #>>45138494 #>>45138853 #>>45138904 #>>45141581 #>>45141912 #>>45142155 #>>45144921 #
2. jb1991 ◴[] No.45138022[source]
It says at the top:

> write state of the art kernels

Mojo seems to be competing with C++ for writing kernels. PyTorch and Julia are high-level languages where you don't write the kernels.

replies(3): >>45138088 #>>45138136 #>>45138151 #
3. Alexander-Barth ◴[] No.45138088[source]
Actually in julia you can write kernels with a subset of the julia language:

https://cuda.juliagpu.org/stable/tutorials/introduction/#Wri...

With KernelAbstractions.jl you can actually target CUDA and ROCm:

https://juliagpu.github.io/KernelAbstractions.jl/stable/kern...

For python (or rather python-like), there is also triton (and probably others):

https://pytorch.org/blog/triton-kernel-compilation-stages/
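
For a flavor of the Triton side of that, a minimal vector-add kernel looks roughly like this (illustrative sketch; assumes the triton and torch packages and a CUDA GPU):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        # each program instance handles one BLOCK-sized chunk of the vectors
        offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    add_kernel[(triton.cdiv(x.numel(), 1024),)](x, y, out, x.numel(), BLOCK=1024)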

replies(1): >>45147144 #
4. fnands ◴[] No.45138094[source]
It's still very much in a beta stage, so it's a little hard to use yet.

Mojo is effectively an internal tool that Modular have released publicly.

I'd be surprised to see any serious adoption until a 1.0 state is reached.

But as the other commenter said, it's not really competing with PyTorch, it's competing with CUDA.

5. jakobnissen ◴[] No.45138136[source]
I think Julia aspires to be performant enough that you can write the kernels in Julia, so Julia is more like Mojo + Python together.

Although I have my doubts that Julia is actually willing to make the compromises that would allow it to go that low level, i.e. semantic guarantees about allocations and inference, guarantees about certain optimizations, and more.

6. pjmlp ◴[] No.45138140[source]
I personally think they overreached.

First of all, some people really like Julia; regardless of how it gets discussed on HN, its commercial use has been steadily growing, and it has GPGPU support.

On the other hand, regardless of the sorry state of JIT compilers on the CPU side for Python, at least NVIDIA and Intel are quite serious about Python DSLs for GPGPU programming on CUDA and oneAPI, so one gets close enough to C++ performance while staying in Python.

So Mojo isn't that appealing in the end.

replies(3): >>45138834 #>>45141743 #>>45141841 #
7. pjmlp ◴[] No.45138151[source]
You can write kernels in Python using the CUDA and oneAPI SDKs in 2025; that is one of the adoption problems regarding Mojo.
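
For example, one common route on the CUDA side today is Numba's CUDA target. A minimal sketch (assumes an NVIDIA GPU and the numba package; names are illustrative):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(out, x, alpha):
        i = cuda.grid(1)  # absolute thread index
        if i < x.size:
            out[i] = alpha * x[i]

    x = np.arange(1 << 20, dtype=np.float32)
    out = np.empty_like(x)
    threads = 256
    blocks = (x.size + threads - 1) // threads
    scale[blocks, threads](out, x, np.float32(2.0))  # host arrays are copied to/from the device
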
8. pansa2 ◴[] No.45138494[source]
Sounds to me like it's very incomplete:

> maybe a year, 18 months from now [...] we’ll add classes

9. nickpsecurity ◴[] No.45138834[source]
Here are some benefits it might try to offer as differentiators:

1. Easy packaging into one executable. Then, making sure that can be reproducible across versions. Getting code from prior AI papers to run can be hard.

2. Predictability vs the Python runtime. Think concurrent, low-latency GCs or low/zero-overhead abstractions.

3. Metaprogramming. There have been macro proposals for Python. Mojo could borrow from D or Rust here.

4. Extensibility in a way where extensions don't get too tied into the internal state of Mojo like they do in Python. I've considered Python-to-C++, Rust, or parallelized-Python schemes many times. The extension interplay is harder to deal with than either Python or C++ itself.

5. Write once, run anywhere, to effortlessly move code across different accelerators. Several frameworks are doing this.

6. Heterogeneous, hot-swappable, vendor-neutral acceleration. That's what I'm calling it when you can use the same code in a cluster with a combination of Nvidia GPUs, AMD GPUs, Gaudi 3s, NPUs, SIMD chips, etc.

replies(1): >>45139344 #
10. singularity2001 ◴[] No.45138853[source]
Is it really released? Last time I checked it was not open sourced. I don't want to rely on some proprietary vaporware stack.
replies(1): >>45139218 #
11. melodyogonna ◴[] No.45138904[source]
It is not ready for general-purpose programming. Modular itself tried offering a Mojo API for their MAX engine, but had to give up because the language was still evolving too rapidly for such an investment.

As per the roadmap[1], I expect to start seeing more adoption once phase 1 is completed.

1. https://docs.modular.com/mojo/roadmap

12. melodyogonna ◴[] No.45139218[source]
It is released but not open source. Modular was aiming to open-source the compiler by Q4 2026; however, Chris now says they may be able to do it considerably sooner, perhaps early 2026 [1].

If you're interested, they think the language will be ready for open source after completing phase 1 of the roadmap[2].

1. https://youtu.be/I0_XvXXlG5w?si=KlHAGsFl5y1yhXnm&t=943

2. https://docs.modular.com/mojo/roadmap

13. pjmlp ◴[] No.45139344{3}[source]
Agree on most points; however, I still can't use it today on Windows, and it needs that unavoidable framework.

Languages on their own have a very hard time gaining adoption.

14. raggi ◴[] No.45141581[source]
I'm on the systems side, and I find some of what Chris and team are doing with Mojo pretty interesting; it could be useful for eradicating a bunch of polyglot FFI mess across the board. I can't invest in it or even start discussions around using it until it's actually open.
replies(1): >>45143092 #
15. dsharlet ◴[] No.45141743[source]
The problem I've seen is this: in order to get good performance, no matter what language you use, you need to understand the hardware and how to use the instructions you want to use. It's not enough to know that you want to use tensor cores or whatever, you also need to understand the myriad low level requirements they have.

Most people that know this kind of thing don't get much value out of using a high level language to do it, and it's a huge risk because if the language fails to generate something that you want, you're stuck until a compiler team fixes and ships a patch which could take weeks or months. Even extremely fast bug fixes are still extremely slow on the timescales people want to work on.

I've spent a lot of my career trying to make high level languages for performance work well, and I've basically decided that the sweet spot for me is C++ templates: I can get the compiler to generate a lot of good code concisely, and when it fails the escape hatch of just writing some architecture specific intrinsics is right there whenever it is needed.

replies(1): >>45142092 #
16. mvieira38 ◴[] No.45141841[source]
> First of all some people really like Julia, regardless of how it gets discussed on HN, its commercial use has been steadily growing

Got any sources on that? I've been interested in learning Julia for a while but don't, because it feels useless compared to Python, especially now with 3.13.

replies(3): >>45142754 #>>45146523 #>>45146841 #
17. ModernMech ◴[] No.45141912[source]
They're not going to see serious adoption before they open source. At this point it's just a rule of programming languages, unless you have the clout to force adoption anyway, and Modular does not. People have been burned too many times by closed-source languages.
18. adgjlsfhk1 ◴[] No.45142092{3}[source]
The counterpoint to this is that having a language with a graceful slide between Python-like flexibility and hand-optimized assembly is really useful. The thing I like most about Julia is that it is very easy to write fast, somewhat sloppy code (e.g. for exploring new algorithms), and then go through and tune it easily for maximal performance, getting as fast as anything out there.
replies(1): >>45143169 #
19. poly2it ◴[] No.45142155[source]
I definitely think the license is a major holdback for the language. Very few individuals, or organisations for that matter, would like to invest in a new closed stack. CUDA is accepted simply because it has been around for such a long time. GPGPU needs a Linux moment.
replies(1): >>45142478 #
20. ◴[] No.45142478[source]
21. adgjlsfhk1 ◴[] No.45142754{3}[source]
What about Python 3.13 is significant for you? If it's multithreading, you should likely be prepared for disappointment. Free threading is ~30% slower than with the GIL, and the first rule of multi-threaded code is to first optimize the hell out of the single-threaded version.
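
A rough sketch of how one might check that trade-off (assumes CPython 3.13; run under both the default build and the free-threaded python3.13t build and compare):

    import sys, time
    from concurrent.futures import ThreadPoolExecutor

    def work(n=2_000_000):
        s = 0
        for i in range(n):
            s += i * i
        return s

    # sys._is_gil_enabled() exists on 3.13+; assume the GIL is on elsewhere
    print("GIL enabled:", getattr(sys, "_is_gil_enabled", lambda: True)())

    t0 = time.perf_counter()
    for _ in range(8):
        work()
    print("sequential:", time.perf_counter() - t0)

    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as ex:
        list(ex.map(lambda _: work(), range(8)))
    print("8 threads: ", time.perf_counter() - t0)
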
replies(1): >>45143147 #
22. bobajeff ◴[] No.45143092[source]
Yeah, I'm in the same boat. I plan to prototype in Python and then speed up the slow bits in a low-level language. I've narrowed my options to C++ and Mojo.

C++ just seems like a safer bet but I'd love something better and more ergonomic.

23. wolvesechoes ◴[] No.45143147{4}[source]
Probably the same stuff as with 3.12 or 3.11 or 3.10: good docs, huge ecosystem, wide knowledge base, detailed reference.
24. wolvesechoes ◴[] No.45143169{4}[source]
> easily for maximal performance and get as fast as anything out there.

Optimizing Julia is much harder than optimizing Fortran or C.

replies(1): >>45149479 #
25. subharmonicon ◴[] No.45144921[source]
The market tends to be pretty efficient for things like these. We’ve seen significant rapid adoption of several different ML solutions over the last decade, yet Mojo languishes. I think that’s a clear sign they aren’t solving the real-world pain points that users are hitting, and are building a rather niche solution that only appeals to a small number of people, no matter how good their execution may be.
26. xgdgsc ◴[] No.45146523{3}[source]
https://www.reddit.com/r/Julia/comments/1efxp0j/comment/lfob...
27. pjmlp ◴[] No.45146841{3}[source]
Of course, because the Internet is where we always have to prove ourselves.

https://info.juliahub.com/industries/case-studies-1/author/j...

replies(1): >>45167878 #
28. davidatbu ◴[] No.45147144{3}[source]
Chris's claim (at least with regard to Triton) is that it delivers 80% of the performance, and they're aiming for closer to 100%.
29. postflopclarity ◴[] No.45149479{5}[source]
for equal LOC, sure. for equal semantics, less true
30. mvieira38 ◴[] No.45167878{4}[source]
I was asking as someone wanting to learn Julia but weighing the industry benefits, not as a devil's advocate.