All the normal marketing words apply: in my opinion it is fast, expressive, and has particularly good APIs for array manipulation.
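For a taste of what I mean by array manipulation, nothing here is exotic, just Base Julia:

```julia
x = 0.0:0.1:1.0                      # a lazy range, no allocation
y = sin.(2π .* x)                    # dot syntax fuses elementwise ops into one loop
A = [i + j for i in 1:3, j in 1:3]   # a 3×3 matrix from a comprehension
total = sum(abs2, y)                 # reductions take a function argument directly
```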
I'll warn you that Julia's ML ecosystem has its biggest competitive advantage on "weird" kinds of ML: things involving lots of custom gradients and kernels, integration with other pieces of a simulation or a diffeq solver, etc.
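As a sketch of the custom-gradient side, this is roughly what hooking a hand-written rule into the AD ecosystem looks like with ChainRulesCore.jl (the function here is made up for illustration):

```julia
using ChainRulesCore

leaky(x) = x > 0 ? x : 0.01x   # toy activation with a known analytic derivative

# Teach every ChainRules-aware AD (Zygote, Diffractor, ...) our derivative:
function ChainRulesCore.rrule(::typeof(leaky), x)
    y = leaky(x)
    leaky_pullback(ȳ) = (NoTangent(), ȳ * (x > 0 ? one(x) : 0.01one(x)))
    return y, leaky_pullback
end

# using Zygote; Zygote.gradient(leaky, -2.0)  # -> (0.01,)
```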
If you just want to throw some tensors around and train an MLP, you'll certainly end up finding more rough edges than you would in PyTorch.
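To be clear, the "just train an MLP" path does exist and is short; a minimal sketch with Flux.jl on toy data (names and hyperparameters are illustrative):

```julia
using Flux

model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
X = rand(Float32, 2, 64)          # 64 samples with 2 features each (columns)
y = sum(X; dims = 1)              # toy regression target

opt_state = Flux.setup(Adam(0.01), model)
for epoch in 1:200
    grads = Flux.gradient(m -> Flux.mse(m(X), y), model)
    Flux.update!(opt_state, model, grads[1])
end
```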
What Julia needs, though: way more thorough tooling for auto-generated docs, well integrated with the package management tooling and with the wider web package ecosystem. Julia attracts really cutting-edge research and researchers writing code. They often don't have time to write docs, and with good enough tooling, that shouldn't have to matter.
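To be fair, the building blocks exist today: Documenter.jl will already build a site from docstrings with a few lines of config (the package name below is a stand-in); it's the integration and the zero-effort defaults that are missing:

```julia
# docs/make.jl
using Documenter, MyPackage   # MyPackage is a placeholder for your own package

makedocs(
    sitename = "MyPackage.jl",
    modules  = [MyPackage],
    pages    = ["Home" => "index.md"],
)
```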
Julia could definitely use some work in the areas discussed in this podcast: not so much the high-level interfaces but the low-level ones. That's really hard, though!
Combine that with the fact that cutting-edge applied math packages are often automatically compatible with the autodiff and GPU array backends, even when the library authors never thought about that, and it's a recipe for a lot of interesting possibilities.
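The classic demonstration: a numeric routine written with no thought of AD, differentiated out of the box because it's generic over its element type (ForwardDiff.jl here; the same mechanism, generic code plus multiple dispatch, is what makes packages GPU-compatible for free):

```julia
using ForwardDiff

# Babylonian square root, written as if AD didn't exist
function babylonian(x; n = 10)
    t = (1 + x) / 2
    for _ in 2:n
        t = (t + x / t) / 2
    end
    return t
end

babylonian(2.0)                          # ≈ 1.4142135623730951
ForwardDiff.derivative(babylonian, 2.0)  # ≈ 0.3535533... = 1/(2√2), no code changes
```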
In practice, the Julia package ecosystem is weak, and correctness is generally not a high priority. But the language itself is great, if you're willing to do a lot of the work yourself.