So if you want to encode matrix multiplication, then you'll always have to write `mat1 #* mat2`. This feels like a hack, and isn't all that elegant, but it'd be clear that every usage of such an operator is a disguised function call. (And according to what Andrew Kelley said, it's all about not hiding function calls or complex operations in seemingly innocent 'overloaded' symbols.)
If you want to take this one step further you'd probably have to allow users to define infix functions, which would be its own can of worms.
Honestly, I am not particularly happy with any of these ideas, but I can't think of anything better either!
But I wish they added:
1. The ability to declare completely pure functions that have no loops except those which can be evaluated at compile time; something with similar constraints to eBPF, in other words. These could be useful in many contexts (see the sketch after this list).
2. The ability to explicitly import overloaded operators, which can only be implemented using these pure, guaranteed-to-finish-in-a-fixed-number-of-cycles functions.
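Zig's comptime already covers a corner of item 1: you can force a call to run at compile time, and the compiler's branch quota bounds how much looping it may do. A minimal sketch (the function name is my own):

    const std = @import("std");

    // An ordinary function with a loop; nothing comptime-specific about it.
    fn sumToN(n: u32) u32 {
        var total: u32 = 0;
        var i: u32 = 1;
        while (i <= n) : (i += 1) total += i;
        return total;
    }

    // Forcing the call to happen at compile time: the loop must terminate
    // within the compiler's branch quota, or compilation fails.
    const answer = comptime sumToN(100);

    test "evaluated during compilation" {
        try std.testing.expectEqual(@as(u32, 5050), answer);
    }

It doesn't give you a runtime-callable function with those guarantees, though, which is the part that would need a new language feature.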
Then you'd get operator overloading that can be used to implement any kind of mathematical function, but not all the crazy DSL-like stuff that is outside the scope of Zig. (I have nothing against that; I've done crazy stuff in Ruby myself, but it's not suitable for Zig.)
So I would ask you this: what portion of your program suffers from a lack of user-defined infix operators, and how big a problem is it overall? Even if the problem turns out to be worth fixing in the language, it often makes sense to wait some years and then prioritise the various problems that have been reported. Zig's simplicity and its no-overload (not just operator overloads!) single dispatch are among its greatest features, and are meant to be among its greatest draws.
This removes the need for operator overloading for a vector type, which covers most use cases of operator overloading; in fact, I often think it's the only legitimate one.
I don't get to use `*` for matrix multiplication, but I have found I do not mind using a function for this.
I have only been playing with this in small toy programs that don't do much serious linear algebra, and I haven't looked at the asm I'm generating with this approach, but I have been enjoying it so far!
https://www.godbolt.org/z/7zbxnncv6
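For reference, here is a minimal sketch of that style (not the code behind the link; the names and the column-major layout are my own), with @Vector providing the element-wise operators and matrix multiplication staying a visible function call:

    const std = @import("std");

    const Vec4 = @Vector(4, f32); // element-wise +, -, * come for free
    const Mat4 = [4]Vec4; // four columns, column-major

    fn mul(a: Mat4, b: Mat4) Mat4 {
        var out: Mat4 = undefined;
        inline for (0..4) |j| {
            var acc: Vec4 = @splat(0.0);
            inline for (0..4) |k| {
                acc += a[k] * @as(Vec4, @splat(b[j][k]));
            }
            out[j] = acc;
        }
        return out;
    }

    pub fn main() void {
        const id = Mat4{
            Vec4{ 1, 0, 0, 0 },
            Vec4{ 0, 1, 0, 0 },
            Vec4{ 0, 0, 1, 0 },
            Vec4{ 0, 0, 0, 1 },
        };
        const m = mul(id, id);
        std.debug.print("{d}\n", .{m[0][0]});
    }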
...maybe one day there will also be a @Matrix builtin.
https://www.godbolt.org/z/v8Ta8hEbv
Zig is exactly the kind of language where you'd want to build a performance-oriented primitive like that, but AFAICT the language doesn't let you do it ergonomically.
@Matrix makes less sense because when it gets big, where are you getting memory from?
And those operators wouldn't have any precedence.
> If you want to take this one step further you'd probably have to allow users to define infix functions, which would be its own can of worms.
As long as these infix functions are preceded by a recognizable operator ("#" in your example), I think that this would be fine.
vec2..4 and matching matrix types up to 4x4 is basically also what's provided in GPU shading languages as primitive types, and personally I would prefer such a set of "SIMD-y" primitive types for Zig (maybe a bit more luxurious than @Vector, e.g. with things like component swizzling syntax - basically what this Clang extension offers: https://clang.llvm.org/docs/LanguageExtensions.html#vectors-...).
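For what it's worth, Zig can express swizzles today via @shuffle, just nowhere near as ergonomically as `v.zyx` in a shading language. A rough sketch:

    const Vec3 = @Vector(3, f32);

    // Swizzle v.zyx by hand: the mask picks lanes 2, 1, 0 from `v`.
    // The second operand is unused here, so it may be undefined.
    fn zyx(v: Vec3) Vec3 {
        return @shuffle(f32, v, undefined, @Vector(3, i32){ 2, 1, 0 });
    }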
#{
    m3 = m1 * m2 + m3;
    m3 += m4;
}
Basically, pure syntactic sugar to help the author express intent without having to add a bunch of line-chatter.

Speaking of operator overloading, I really wish C++ (anyone!) had a `.` prefix for operator overloading which basically says "this is more arguments for the highest-precedence operator in the current expression":
a := b * c .+ d;
Which translates to: a := fma(b, c, d)
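Amusingly, Zig already exposes exactly that fused operation as a builtin; it's just spelled as a call rather than `.+`:

    fn fusedMulAdd(b: f32, c: f32, d: f32) f32 {
        // Computes b * c + d with a single rounding step.
        return @mulAdd(f32, b, c, d);
    }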
I don’t have much game dev experience, though, outside of simple games that use libraries like raylib to just move and draw stuff. Maybe once things get complicated enough, they all end up like Bevy.
You've dramatically overstated your case, since that's true of every Lisp-like language.
Lisp is a perfectly suitable language for developing mathematics in; see SICM [0] for details.
If you want to see SICM in action, the Emmy Computer Algebra System [1] [2] [3] [4] is a Clojure project that ported SICM to both Clojure and Clerk notebooks (like Jupyter notebooks, but better for programmers).
[0] https://mitpress.mit.edu/9780262028967/structure-and-interpr...
[1] Emmy project: https://emmy.mentat.org/
[2] Emmy source code: https://github.com/mentat-collective/emmy
[3] Emmy implementation talk (2017): "Physics in Clojure" https://www.youtube.com/watch?v=7PoajCqNKpg
[4] Emmy notebooks talk (2023): "Emmy: Moldable Physics and Lispy Microworlds": https://www.youtube.com/watch?v=B9kqD8vBuwU
In C++ (EVE, Vc, Highway, xsimd, stdlib), you can specify the ABI of a vector, which allows you to make platform-specific optimizations in multifunctions. Or you can write vector code of a lowest-common-denominator width (like 16 bytes, rather than specifying the number of lanes), which runs the same on NEON and SSE2. Or you can write SIMD that is automatically natively optimized for just a single platform. These features are available in every notable C++ SIMD library, and they're basically indispensable for serious performance code.
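For comparison, the closest Zig gets to the "native width" style today is choosing the lane count at compile time through std.simd; a sketch (the fallback of 4 lanes is my own choice):

    const std = @import("std");

    // Ask the standard library for the target's preferred SIMD width.
    const lanes = std.simd.suggestVectorLength(f32) orelse 4;
    const VecN = @Vector(lanes, f32);

    fn sum(xs: []const f32) f32 {
        var acc: VecN = @splat(0.0);
        var i: usize = 0;
        while (i + lanes <= xs.len) : (i += lanes) {
            const chunk: VecN = xs[i..][0..lanes].*; // [lanes]f32 coerces to VecN
            acc += chunk;
        }
        var total = @reduce(.Add, acc);
        while (i < xs.len) : (i += 1) total += xs[i]; // scalar tail
        return total;
    }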
https://odin-lang.org/docs/overview/#swizzle-operations
Matrix types are also built in:
https://odin-lang.org/docs/overview/#matrix-type
I’ve thought for a little while that Odin could be a secret weapon for game dev and similar pieces of software.
Additionally, for efficient math code you often want vector/matrix types in AoSoA fashion: for example, Vec3<Float8>, storing an AVX lane for each X/Y/Z component. I want vector/matrix operations to work on SIMD lanes, not just on scalar types, and Zig currently can't support math operators on these kinds of types.
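Concretely, the shape in question is easy to declare in Zig, but arithmetic on it has to go through named functions, even though the lane math inside still gets operators (the names here are illustrative):

    const Lane = @Vector(8, f32); // one AVX-width lane per component

    const Vec3x8 = struct {
        x: Lane,
        y: Lane,
        z: Lane,

        // No operator overloading on structs, so this can't be `a + b`.
        fn add(a: Vec3x8, b: Vec3x8) Vec3x8 {
            return .{ .x = a.x + b.x, .y = a.y + b.y, .z = a.z + b.z };
        }

        // Eight dot products at once, one per SIMD lane.
        fn dot(a: Vec3x8, b: Vec3x8) Lane {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }
    };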
https://github.com/floooh/sokol-odin
It's a very enjoyable language!
https://cljdoc.org/d/org.mentat/emmy/0.30.0/doc/data-types/m...
This is the complaint I was responding to. Here is that code in Clojure (a Lisp):
// What the GP claims is bad for doing math:
plus(a,b)
minus(a,b)
assign(a,b) // <= I have no idea what this does, or has to do with math.
// Let's actually use the original math operators, but with function notation:
+(a,b)
-(a,b)
// And here's the Clojure/Lisp syntax for the same:
(+ a b)
(- a b)
Lisp doesn't have "operators", so it doesn't have "operator overloading." What it does have is multi-dispatch, so yeah, the implementation of `+` can depend on the (dynamic) types of both `a` and `b`. That's a good thing: it means that the `+` and `-` tokens aren't hard-coded to whatever the language designer decided they should be in year 0, with whatever precedence and evaluation rules they picked at the time.

The point I'm making is that you absolutely DO NOT need to have special-cased, infix math operators to "do math" in a reasonable, readable way. SICM is proof, and Emmy is a breeze to work with. And it turns out there are a lot of advantages in NOT hard-coding your infix operators and precedence rules into the syntax of the language.
I am not familiar with Emmy, but I'm guessing that the usual workflow will involve an interactive shell with many calls to `render` to display expressions in infix notation, so that you can better check that the expression you typed is actually what you meant to type.
The infix notation, although arbitrary and not as logically simple as other notations, is almost universal in the math-speaking world. Most mathematicians and engineers have years of experience staring at infix expressions on blackboards, disseminate new knowledge in this notation, and do new calculations in it.
In reading your reply, I think that maybe some tooling that could auto-insert corresponding infix-style comments above the AST-like function-call syntax could be a solution for writing such code in Zig.
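Something like this, where the comment is what the tool would insert and `add`/`mul` stand in for whatever hypothetical library functions are in use:

    fn mul(a: f32, b: f32) f32 {
        return a * b;
    }
    fn add(a: f32, b: f32) f32 {
        return a + b;
    }

    fn example(b: f32, c: f32, d: f32) f32 {
        // a = (b * c) + d   <- auto-inserted infix rendering
        return add(mul(b, c), d);
    }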
<33
In pretty much all languages, operators are just sugar for calling a method. There is no difference other than an easier-to-read syntax.
In Rust, for example, doing a + b is exactly the same as doing a.add(b).
In Python, it's exactly the same as doing a.__add__(b).
In C++, my understanding is that it's sugar for a.operator+(b) or operator+(a, b).
I think there are some arguments against operator overloading but "spooky action at a distance" doesn't seem to be a very good one to me.
I agree that adding too many features can make a language too large and bloated. However, I disagree that every addition does this. For example, adding features that make it easier to write math code is not necessarily a bad thing. In fact, it is a good thing, as it can make programming more accessible to a wider range of people.
Additionally, math is often used in fields that require high performance, such as computer graphics, game development, computer vision, robotics, machine learning, natural language processing (NLP), mathematical modeling, and all kinds of scientific computing (computational physics, computational chemistry, computational biology, ...). As a result, low-level programming languages are often used to implement the core code in these fields. As you can see, math is essential to many fields.