311 points by melodyogonna | 13 comments

nromiun:
Weird that there has been no significant adoption of Mojo. It has been quite some time since it was released, and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
1. pjmlp:
I personally think they overreached.

First of all, some people really like Julia. Regardless of how it gets discussed on HN, its commercial use has been steadily growing, and it has GPGPU support.

On the other hand, regardless of the sorry state of JIT compilers for Python on the CPU side, at least NVIDIA and Intel are quite serious about Python DSLs for GPGPU programming on CUDA and oneAPI, so one gets close enough to C++ performance while staying in Python.

So Mojo isn't that appealing in the end.

2. nickpsecurity:
Here are some benefits it could offer as differentiators:

1. Easy packaging into a single executable, and making sure that stays reproducible across versions. Getting code from prior AI papers to run can be hard.

2. Predictability vs. the Python runtime. Think concurrent, low-latency GCs or low/zero-overhead abstractions.

3. Metaprogramming. There have been macro proposals for Python; Mojo could borrow from D or Rust here (see the C++ sketch after this list).

4. Extensibility in a way where extensions don't get as tied into the runtime's internal state as they do in Python. I've considered Python-to-C++, Rust, or parallelized-Python schemes many times. The extension interplay is harder to deal with than either Python or C++ by itself.

5. Write once, run anywhere, to effortlessly move code across different accelerators. Several frameworks are doing this.

6. Heterogeneous, hot-swappable, vendor-neutral acceleration. That's what I'm calling it when the same code can run on a cluster with a mix of NVIDIA GPUs, AMD GPUs, Gaudi 3s, NPUs, SIMD chips, etc.
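
For point 3, a minimal C++17 sketch of the flavor of compile-time metaprogramming Mojo could borrow (the factorial table is purely illustrative): the table is computed entirely during compilation, so at runtime the code pays only for an array lookup.

    #include <array>
    #include <cstddef>

    // Build a lookup table at compile time; runtime code only indexes it.
    template <std::size_t N>
    constexpr std::array<unsigned long long, N> factorial_table() {
        std::array<unsigned long long, N> t{};
        t[0] = 1;
        for (std::size_t i = 1; i < N; ++i)
            t[i] = t[i - 1] * i;
        return t;
    }

    int main() {
        constexpr auto table = factorial_table<13>();
        static_assert(table[12] == 479001600ULL, "evaluated at compile time");
    }

Python's macro proposals focus on syntactic rewriting; D, Rust, and C++ additionally offer compile-time evaluation like the above, which is the niche Mojo aims at as well.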

3. pjmlp:
I agree on most points; however, I still can't use it today on Windows, and it comes tied to that unavoidable framework.

Languages on their own have a very hard time gaining adoption.

4. dsharlet:
The problem I've seen is this: to get good performance, no matter what language you use, you need to understand the hardware and how to use the instructions you're targeting. It's not enough to know that you want to use tensor cores or whatever; you also need to understand the myriad low-level requirements they have.

Most people who know this kind of thing don't get much value out of using a high-level language for it, and it's a huge risk: if the language fails to generate the code you want, you're stuck until a compiler team fixes the issue and ships a patch, which could take weeks or months. Even extremely fast bug fixes are still extremely slow on the timescales people want to work on.

I've spent a lot of my career trying to make high-level languages work well for performance, and I've basically decided that the sweet spot for me is C++ templates: I can get the compiler to generate a lot of good code concisely, and when it fails, the escape hatch of writing some architecture-specific intrinsics is right there whenever it's needed.
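
A minimal sketch of that pattern, assuming an x86 target with SSE (axpy here is just an illustrative kernel, not from any particular library):

    #include <cstddef>

    // Generic template: correct for any arithmetic type; the compiler
    // will often auto-vectorize this loop well enough on its own.
    template <typename T>
    void axpy(T a, const T* x, T* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    #if defined(__SSE__)
    #include <xmmintrin.h>

    // Escape hatch: when codegen for float isn't what we want, specialize
    // with architecture-specific intrinsics; callers don't change at all.
    template <>
    void axpy<float>(float a, const float* x, float* y, std::size_t n) {
        const __m128 va = _mm_set1_ps(a);
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            const __m128 vx = _mm_loadu_ps(x + i);
            const __m128 vy = _mm_loadu_ps(y + i);
            _mm_storeu_ps(y + i, _mm_add_ps(vy, _mm_mul_ps(va, vx)));
        }
        for (; i < n; ++i)  // scalar tail
            y[i] += a * x[i];
    }
    #endif

No waiting on a compiler team: the generic path keeps the code concise, and the specialization pins down the exact instructions where it matters.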

5. mvieira38:
> First of all some people really like Julia, regardless of how it gets discussed on HN, its commercial use has been steadily growing

Got any sources on that? I've been interested in learning Julia for a while but haven't, because it feels useless compared to Python, especially now with 3.13.

6. adgjlsfhk1:
The counterpoint is that a language that slides gracefully between Python-like flexibility and hand-optimized assembly is really useful. The thing I like most about Julia is that it's very easy to write fast, somewhat sloppy code (e.g., for exploring new algorithms), but then you can go through and tune it easily for maximal performance and get as fast as anything out there.
7. adgjlsfhk1:
What about Python 3.13 is significant for you? If it's multithreading, you should likely be prepared for disappointment: the free-threaded build is ~30% slower than the GIL build, and the first rule of multithreaded code is to optimize the hell out of the single-threaded version first.
8. wolvesechoes:
Probably the same stuff as with 3.12, 3.11, or 3.10: good docs, a huge ecosystem, a wide knowledge base, and a detailed reference.
9. wolvesechoes:
> easily for maximal performance and get as fast as anything out there.

Optimizing Julia is much harder than optimizing Fortran or C.

10. xgdgsc:
https://www.reddit.com/r/Julia/comments/1efxp0j/comment/lfob...
11. pjmlp:
Of course, because the Internet is where we always have to prove ourselves.

https://info.juliahub.com/industries/case-studies-1/author/j...

12. postflopclarity:
For equal LOC, sure. For equal semantics, less true.
13. mvieira38:
I was asking as someone who wants to learn Julia but is weighing the industry benefits, not as a devil's advocate.