
311 points | melodyogonna | 2 comments
nromiun:
Weird that there has been no significant adoption of Mojo. It has been quite some time since it was released, and everyone is still using PyTorch. Maybe the license issue is a much bigger deal than people realize.
pjmlp:
I personally think they overreached.

First of all, some people really like Julia. Regardless of how it gets discussed on HN, its commercial use has been steadily growing, and it has GPGPU support.

On the other hand, regardless of the sorry state of JIT compilers on the CPU side for Python, at least Nvidia and Intel are quite serious about Python DSLs for GPGPU programming on CUDA and oneAPI, so one gets close enough to C++ performance while staying in Python.
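
Concretely, that kind of Python DSL looks like the sketch below. Numba's CUDA dialect is used as one well-known example (the comment above doesn't name a specific library); it assumes a CUDA-capable GPU and numba installed.

    # Minimal sketch: an elementwise-add CUDA kernel written in Python with Numba.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)          # absolute thread index across the whole grid
        if i < x.shape[0]:        # guard threads past the end of the array
            out[i] = x[i] + y[i]

    n = 1_000_000
    x = np.arange(n, dtype=np.float32)
    y = 2 * x
    out = np.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    add_kernel[blocks, threads](x, y, out)  # Numba copies NumPy arrays to/from the device

The kernel body is plain Python, but it is JIT-compiled for the GPU, which is the "close enough to C++ performance while staying in Python" trade being described.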

So Mojo isn't that appealing in the end.

nickpsecurity:
Here are some benefits it might try to offer as differentiators:

1. Easy packaging into one executable. Then, making sure that is reproducible across versions. Getting code from prior AI papers to run can be hard.

2. Predictability vs. the Python runtime. Think concurrent, low-latency GCs or low/zero-overhead abstractions.

3. Metaprogramming. There have been macro proposals for Python. Mojo could borrow from D or Rust here (a sketch of the current gap follows this list).

4. Extensibility in a way where extensions don't get tied into Mojo's internal state the way they do with Python's. I've considered Python-to-C++, Rust, or parallelized-Python schemes many times. The extension interplay is harder to deal with than either Python or C++ itself.

5. Write once, run anywhere, to effortlessly move code across different accelerators. Several frameworks are doing this (see the sketch after this list).

6. Heterogeneous, hot-swappable, vendor-neutral acceleration. That's what I'm calling it when you can use the same code in a cluster with a combination of Nvidia GPUs, AMD GPUs, Gaudi3s, NPUs, SIMD chips, etc.
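
On point 3, a hedged illustration of the gap: Python has no macros, so the closest stand-in today is runtime code generation (the technique namedtuple and dataclasses use internally), whereas a D- or Rust-style macro system would do the same specialization at compile time.

    # Runtime code generation: Python's nearest approximation of a macro.
    # A real macro system would produce this specialized function at compile time.
    def make_adder(n: int):
        src = f"def adder(x):\n    return x + {n}\n"   # generate source for a fixed n
        ns = {}
        exec(src, ns)                                  # compile it at runtime
        return ns["adder"]

    add5 = make_adder(5)
    print(add5(10))  # 15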
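On point 5, the nearest thing shipping today is backend-dispatched tensor code. PyTorch is used below as one example (the comment doesn't name a framework); the same lines run on whichever accelerator is present.

    # Device-agnostic tensor code: the backend is selected at runtime.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    y = torch.randn(1024, 1024, device=device)
    z = x @ y          # dispatched to the selected backend transparently
    print(z.device)

Point 6 goes further: mixing vendors within one cluster behind the same code, which per-backend device strings like "cuda" only partially deliver.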

pjmlp:
Agree on most points; however, I still can't use it today on Windows, and it requires that unavoidable framework.

Languages on their own have a very hard time gaining adoption.