The strong AI focus seems to be a sign of the times, and not actually something that makes sense imo.
Are you sure about that? Mojo was always talked about as "the language for ML/AI", but I'm not sure whether it was announced before the current hype cycle started; it must be 2-3 years old at this point, right?
Swift has some nice features. However, the super slow compilation times and cryptic error messages really erase any gains in productivity for me.
- "The compiler is unable to type-check this expression in reasonable time?" On an M3 Pro? What the hell!?
- To find an error in SwiftUI code I sometimes need to comment everything out block by block to narrow it down and find the culprit. We're getting laughs from Kotlin devs.
It has been Mojo's explicit goal from the start. It has its roots in the time Chris Lattner spent at Google working on the compiler stack for TPUs.
It was explicitly designed to be Python-like because that is where (almost) all the ML/AI work is happening.
https://techcrunch.com/2023/08/24/modular-raises-100m-for-ai...
You don’t raise $130M at a $600M valuation to build boring old dev infrastructure that is sorely needed but won’t generate any revenue, because no one is willing to pay for general-purpose programming languages in 2025.
You raise $130M to be the programming foundation of next-gen AI. VCs wrote some big friggen checks for that pitch.
https://www.cocoawithlove.com/blog/2016/07/12/type-checker-i...
    let a: Double = -(1 + 2) + -(3 + 4) + -(5)
It still fails on a very recent version of Swift (6.1.2), at least in my test.
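For what it's worth, the workaround the diagnostic itself suggests (breaking the expression up into distinct sub-expressions) does apply here; a minimal sketch:

    // Splitting the expression into typed sub-expressions keeps the
    // solver from searching the combined overload space all at once.
    let x: Double = -(1 + 2)
    let y: Double = -(3 + 4)
    let z: Double = -(5)
    let a: Double = x + y + z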
Chris Lattner has mentioned being more of an engineer than a mathematician. But a responsible, competent engineer who is designing a programming language with a complex type system absolutely has to be at least proficient in university-level mathematics and the relevant theory, or delegate: get computer scientists to find and triple-check the relevant aspects of the budding programming language.
That language designers have to be careful about the asymptotic time complexity of a type system and its type checking was widely known well before Swift was first created. Some people like to diss mathematics, but this stuff can have severe and widespread practical engineering consequences. I don't expect everyone to master everything, but a budding language designer should at least realize that there may be important issues here, and mitigate them by, for instance, having one or more experts check the relevant aspects.
> It still fails on a very recent version of Swift (6.1.2), at least in my test.
FWIW, the situation with this expression (and others like it) has improved recently:
- 6.1 fails to type check in ~4 seconds
- 6.2 fails to type check in ~2 seconds (still bad obviously, but it's doing the same amount of work in less time)
- latest main successfully type checks in 7ms. That's still a bit too slow though, IMO. (edit: it's just first-time deserialization overhead; if you duplicate the expression multiple times, subsequent instances type check in <1ms).
These days, though, the type checker is not where most of Swift's compile time is spent; usually it's the various SIL and LLVM optimization passes. While the front end could take care to generate less redundant IR up front, this seems like a generally unavoidable issue with “zero cost abstraction” languages, where the obvious implementation strategy is to spit out a ton of IR, inline everything, and then reduce it to nothing by transforming the IR.
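As a toy illustration of that strategy (my sketch, not from the thread): the front end emits IR for the generic function, the closure, and the call, and the optimizer is then expected to specialize, inline, and fold all of it away.

    // A minimal sketch of the "zero-cost abstraction" pattern: lots of
    // IR up front, which SIL/LLVM passes specialize, inline, and
    // constant-fold, ideally down to `let n = 2`.
    func applyTwice<T>(_ f: (T) -> T, _ x: T) -> T {
        f(f(x))
    }
    let n = applyTwice({ $0 + 1 }, 0)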
That’s really only true if you have overloading though! Without overloading there are no disjunction choices to attempt, and if you also have principal typing it makes the problem of figuring out diagnostics easier, because each expression has a unique most general type in isolation (so your old CSDiag design would actually work in such a language ;-) )
But perhaps a language where you have to rely on generics for everything instead of just overloading a function to take either an Int or a String is a bridge too far for mainstream programmers.
It feels like a much better design point overall.
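To make the contrast concrete, a small sketch (hypothetical `describe` functions, my example rather than anything from the thread):

    // With overloading: two declarations share a name, and the
    // constraint solver must try each candidate (a disjunction choice).
    func describe(_ x: Int) -> String { "an Int: \(x)" }
    func describe(_ x: String) -> String { "a String: \(x)" }

    // The generics-only alternative: one declaration constrained by a
    // protocol, so there is no overload set to search.
    func describeAny<T: CustomStringConvertible>(_ x: T) -> String {
        "some value: \(x)"
    }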
My original reply was just to point out that constraint solving, in the abstract, can be a very effective and elegant approach to these problems. There’s always a tradeoff, and it all depends on the combination of other features that go along with it. For example, without bidirectional inference, certain patterns involving closures become more awkward to express. You can have that, without overloading, and it doesn’t lead to intractability.
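For example (my sketch of the kind of pattern I take that to mean): with bidirectional inference, the expected type flows into a closure from context, so its parameter types can be omitted; without it, you would have to write them out.

    // Context supplies the closure's parameter type here:
    let doubled = [1, 2, 3].map { $0 * 2 }
    // Without bidirectional inference you'd spell it out yourself:
    let doubledExplicit = [1, 2, 3].map { (x: Int) -> Int in x * 2 }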
In my opinion, constraint solving would be a bad design point for Mojo, and I regret Swift using it. I'm not trying to say that constraint solving is bad for all languages and use-cases.