Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of work to be handled by the GC at runtime. It's great for its intended use case.
When you have things like macros, advanced type systems, and robustness guarantees you want enforced at build time, then you have to pay for that.
A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers, and they generate exactly one object file, so the linker has almost no work to do.
Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.
Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its type-checking work, the backend and assembler still have a lot to churn through.
The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough, especially because its use case is often applications where "good enough" is good enough (i.e., I/O-heavy applications).
You can see that with "gccgo". Slower to compile, faster to run.
I can believe that, but even so, it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom assembly compiled to suit your application. Thus there is a lot more code generation going on, and that is caused by the trade-offs they've made with the type system.
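A minimal sketch of the difference (assuming the libc crate for the FFI side): qsort is one pre-compiled routine called through a function pointer, while sort() is monomorphised into your crate.

    use libc::{c_int, c_void};

    // Comparator passed to libc's qsort; every comparison is an indirect call.
    unsafe extern "C" fn cmp_i32(a: *const c_void, b: *const c_void) -> c_int {
        let (a, b) = unsafe { (*(a as *const i32), *(b as *const i32)) };
        a.cmp(&b) as c_int
    }

    fn main() {
        let mut v = [3i32, 1, 2];
        // One pre-compiled qsort lives in libc; no new code is generated here.
        unsafe {
            libc::qsort(v.as_mut_ptr() as *mut c_void, v.len(),
                        std::mem::size_of::<i32>(), Some(cmp_i32));
        }
        // A fresh copy of the sort is monomorphised for i32 in this crate,
        // with the comparison inlined: more codegen, faster at runtime.
        v.sort();
    }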
Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.
FWIW, modern C++ (like Boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
I was all excited to run the experiment showing "cargo check; mrustc; cc" is 100x faster, but I think at best the multiple is going to be pretty small.
This has tradeoffs: increased ABI stability at the cost of longer compile times.
For pure computational workloads, it'll be faster. However, anything with heavy allocation will suffer, as apparently gccgo's GC and GC-related optimizations aren't as good as those of the standard gc toolchain.
I’d like to see tooling for this to pinpoint bottlenecks - it’s not always obvious what’s making builds slow.
Since fast compilation was a goal, every part of the design was examined through a rough "can this be a horrible bottleneck?" filter, and discarded if so. For example, the import (package) system was designed to avoid the horrible, inefficient mess of C++. It's obvious that you never want to compile the same package more than once, and that you need to support parallel package compilation. These may be blindingly obvious, but if you don't think about compilation speed at design time, you'll get this wrong and will never be able to fix it.
As far as optimizations vs. compile speed goes, it's just a simple case of diminishing returns. Since Rust has maximum possible performance as a goal, it's forced to go well into diminishing-returns territory, sacrificing a ton of compile speed for minor performance improvements. Go has far more modest performance goals, so it can get 80% of the possible performance for only 20% of the compile cost. Rust can't afford to relax its stance because it's competing with languages like C++, and to some extent C, that are willing to go to any length to squeeze out an extra 1% of performance.
But not having to is a win, as the monomorphised sorts are just much faster at runtime than having to do an indirect call for each comparison.
Could you expand on that, please? Every time you run a dynamically linked program, it is linked at runtime (unless it explicitly avoids linking unnecessary stuff by dlopening things lazily, which pretty much never happens). If it is fine to link on every program launch, linking at build time should not be a problem at all.
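For the rare lazy-dlopen case, here's roughly what it looks like in Rust (a sketch assuming the libloading crate and a Linux libm):

    use libloading::{Library, Symbol};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Nothing is linked until we explicitly ask for it at runtime.
        let lib = unsafe { Library::new("libm.so.6")? };
        let cos: Symbol<unsafe extern "C" fn(f64) -> f64> =
            unsafe { lib.get(b"cos")? };
        println!("{}", unsafe { cos(0.0) });
        Ok(())
    }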
If you want to have link time optimization, that's another story. But you absolutely don't have to do that if you care about build speed.
If it improves compile time, that sounds like a bug in the compiler or the design of the language itself.
DLLs got their start when early windowing systems didn't quite fit on the workstations of the era in the late 80s / early 90s.
In about 4 minutes both Microsoft and GNU were like, "let me get this straight, it will never work on another system and I can silently change it whenever I want?" Debian went along because it gives distro maintainers degrees of freedom they like and don't bear the costs of.
Fast forward 30 years and Docker is too profitable a problem to fix by the simple expedient of calling a stable kernel ABI on anything, and don't even get me started on how penetrated everything but libressl and libsodium are. Protip: TLS is popular with the establishment because even Wireshark requires special settings and privileges for a user to see their own traffic, security patches my ass. eBPF is easier.
Dynamic linking moves control from users to vendors and governments at ruinous cost in performance, props up bloated industries like the cloud compute and Docker industrial complex, and should die in a fire.
Don't take my word for it; swing by cat-v.org sometime and see what the authors of Unix have to say about it.
I'll save the rant about how rustc somehow manages to be slower than clang++ and clang-tidy combined for another day.
Compilation speed depends on what you do with a language. "Fast" is not an absolute, and for most people it depends heavily on community habits. Rust habits tend to favor extreme optimizability and/or extreme compile-time guarantees, and that's obviously going to be slower than simpler code.
There are multiple caveats to providing this to users (we can't assume that macro invocations are idempotent, so the new behavior would have to be opt-in, and this only benefits incremental compilation), but it's on our radar.
Nah. Slow type checking in Swift is primarily caused by the fact that functions and operators can be overloaded on type.
Separately-compiled generics don't introduce any algorithmic complexity and are actually good for compile time, because you type-check the generic definition once instead of re-checking every template expansion.
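A quick Rust illustration of that: the trait bound lets the body be checked once at the definition, not per use.

    // Type-checked once against the PartialOrd bound; each later
    // instantiation only needs codegen, not a re-check of the body.
    fn largest<T: PartialOrd>(items: &[T]) -> &T {
        let mut max = &items[0]; // panics on an empty slice
        for item in items {
            if item > max {
                max = item;
            }
        }
        max
    }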
I suspect this leaks into both compile-time and run-time costs.
In fact, if there was anything remotely controversial about the bunch of extremely specific, extremely falsifiable claims I made, one imagines your rebuttal would have mentioned at least one.
I said inflammatory things (Docker is both arsonist and fireman, at ruinous cost), but they're fucking true. That Alpine in the Docker jank? Links musl!
But people should make an informed choice, and there isn't any noble or high minded or well-meaning reason to try to shout that information down.
Don't confidently assert falsehoods unless you're prepared to have them refuted. You're entitled to peddle memes and I'm entitled to reply with corrections.
The Go and Dlang compilers were designed by people who are really good at compiler design, and that's why they're freaking fast. They designed the language around the compiler constraints and at the same time managed to make the language intuitive to use. For example, Dlang has no macros and no unnecessary symbol lookup for the ambiguous >>.
Because of these design decisions, both Go and Dlang are anomalies for fast compilation. Dlang in particular is notably more powerful and expressive than C++ and Rust, even with its unique hybrid of GC and non-GC compilation.
In the automotive industry, it's considered a breakthrough, game-changing achievement to have a fast transmission that shifts seamlessly between automatic and manual, such as the one found in the latest Koenigsegg hypercar [1]. In the programming industry, however, nobody seems to care. Walter Bright, the designer of Dlang, has a background in mechanical engineering, and it shows.
[1] Engage Shift System: Koenigsegg new hybrid manual and automatic gearbox in CC850:
https://www.topgear.com/car-news/supercars/heres-how-koenigs...
Not all programmers, of course - if you look at std there are many places that split types into generic and non-generic parts so the compiler will reuse as much code as possible, but it does come at the cost of additional complexity. Worse, if you aren't already aware of why they are doing it, the language does a marvellous job of hiding the reason that complexity is there. I'd wager a novice Rust programmer is as befuddled by it as a JavaScript programmer coming across his first free() call in C.
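For the curious, the std trick usually looks like this (a sketch modelled on how std::fs::read is written; the read_file name is made up):

    use std::path::Path;

    pub fn read_file<P: AsRef<Path>>(path: P) -> std::io::Result<Vec<u8>> {
        // The non-generic body is compiled exactly once...
        fn inner(path: &Path) -> std::io::Result<Vec<u8>> {
            std::fs::read(path)
        }
        // ...while only this thin shim is monomorphised per caller type P.
        inner(path.as_ref())
    }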
I have this dream of a language like Rust that makes the trade-off plain, so the programmer is always aware of "this is a zero-cost abstraction - you're just proving via the type system that you're doing the right thing" versus "I'm going to have to generate a lot of code for this". Then go a step further and put the types and source you want to export to other libraries in a special ELF section in the .so, so you don't need the source to link against it. Then go another step further and make the programmer using the .so explicitly instantiate anything that does require a lot of generated code, so he is aware of what is happening.
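Today's Rust does let you spell the two sides of that trade-off; it just doesn't shout about it. A sketch:

    // One compiled copy, indirect call per invocation: "cheap to compile".
    fn apply_dyn(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
        f(x)
    }

    // One compiled copy per closure type F, usually inlined:
    // "I'm going to generate a lot of code for this".
    fn apply_generic<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
        f(x)
    }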
That said, I don't think it would help the compile time problem in most cases. C++ already does something close by forcing you to put exported stuff in .h files, and they ended up with huge .h files and slow compiles anyway.
Nevertheless, doing that would make for a Rust-like language that, unlike Rust, supported an ecosystem of precompiled libraries just like C does. Rust is so wedded to transparent monomorphisation that it looks near impossible now.
Go got its famous compile times because, for a decade, a new generation educated in scripting languages and used to badly configured C and C++ projects took for innovation what was actually a return to old values in compiler development.
Unfortunately that doesn't seem to ever be a scenario cargo will support out of the box.
I think lazy linking is the default even if you don't use dlopen, i.e. every symbol gets bound upon first use. Of course that has the drawback that the program can crash due to missing/incompatible libraries in the middle of its work.
Anyway, while what you said is theoretically half-true, a fairly large number of libraries are not designed/encapsulated well. This means almost all of their symbols are exported dynamically, so the idea that there are only "a few public exported symbols" is unfortunately false.
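For contrast, a well-encapsulated library keeps its dynamic symbol table small. In Rust, for instance, a cdylib only exports what you explicitly mark (a sketch, assuming crate-type = ["cdylib"]):

    // Exported: ends up in the .so's dynamic symbol table.
    #[no_mangle]
    pub extern "C" fn public_entry(x: i32) -> i32 {
        helper(x)
    }

    // Not exported: hidden, free to be inlined or changed.
    fn helper(x: i32) -> i32 {
        x * 2
    }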
However, something almost no one ever mentions is that ELF was actually designed to allow dynamic libraries to be fairly performant. It isn't something I would recommend, as it breaks many assumptions on Unices, but (while you still don't get the benefits of LTO) you can achieve code generation almost equivalent to static linking by using something like "-fno-semantic-interposition -Wl,-Bsymbolic,-z,now". MaskRay has a good explanation of it: https://maskray.me/blog/2021-05-16-elf-interposition-and-bsy...