    302 points by Bogdanp | 17 comments
    taylorallred ◴[] No.44390996[source]
    So there's this guy you may have heard of called Ryan Fleury who makes the RAD debugger for Epic. The whole thing is made with 278k lines of C and is built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case-study that compilation can be incredibly fast and makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.
    replies(18): >>44391046 #>>44391066 #>>44391100 #>>44391170 #>>44391214 #>>44391359 #>>44391671 #>>44391740 #>>44393057 #>>44393294 #>>44393629 #>>44394710 #>>44395044 #>>44395135 #>>44395226 #>>44395485 #>>44396044 #>>44401496 #
    1. dhosek ◴[] No.44391170[source]
    Because Rust and Swift are doing much more work than a C compiler would? The analysis necessary for the borrow checker is not free, likewise with a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking of things beyond basic syntax, so you can call foo(char) with an int argument and do other unholy things.
    replies(5): >>44391210 #>>44391240 #>>44391254 #>>44391268 #>>44391426 #
    2. drivebyhooting ◴[] No.44391210[source]
    That's not a good example. A call like foo(int) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn't letting the compiler cut corners.
    3. steveklabnik ◴[] No.44391240[source]
    The borrow checker is usually a blip on the overall graph of compilation time.

    The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.

    replies(1): >>44392271 #
    4. taylorallred ◴[] No.44391254[source]
    These languages do more at compile time, yes. However, I learned from Ryan's Discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner, generics, etc.) but I have my doubts that those things cause the most slowdown.
    5. Thiez ◴[] No.44391268[source]
    This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.
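
    As a rough illustration (a toy example with made-up names, not from any real codebase), the two functions below should compile to essentially the same machine code in a release build, but the iterator version hands LLVM a pile of range, adapter, and closure code to inline and strip away first:

      // "Zero-cost" iterator chain: the front end emits a Range, a Filter
      // adapter, and a closure, all of which LLVM has to inline and boil
      // down to a plain loop.
      pub fn sum_even(n: u32) -> u32 {
          (0..n).filter(|x| x % 2 == 0).sum()
      }

      // The hand-written loop gives the backend far less to clean up.
      pub fn sum_even_loop(n: u32) -> u32 {
          let mut total = 0;
          let mut i = 0;
          while i < n {
              if i % 2 == 0 {
                  total += i;
              }
              i += 1;
          }
          total
      }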
    replies(1): >>44391818 #
    6. jvanderbot ◴[] No.44391426[source]
    If you'd like the rust compiler to operate quickly:

    * Make no nested types - these slow compile times a lot

    * Include no crates, or ones that emphasize compiler speed

    C is still v. fast though. That's why I love it (and Rust).

    replies(1): >>44394947 #
    7. vbezhenar ◴[] No.44391818[source]
    Is it possible to generate less junk? Sounds like the compiler developers took some shortcuts that could be improved over time.
    replies(3): >>44392001 #>>44392115 #>>44394849 #
    8. rcxdude ◴[] No.44392001{3}[source]
    Probably, but it's the kind of thing that needs a lot of fairly significant overhauls in the compiler architecture to really move the needle on, as far as I understand.
    9. zozbot234 ◴[] No.44392115{3}[source]
    You can address the junk problem manually by having generic functions delegate as much of their work as possible to non-generic or "less" generic functions (where a "less" generic function is one that depends only on a known subset of type traits, such as size or alignment). Delegating this way can help the compiler generate fewer redundant copies of your code, even if it can't avoid monomorphization altogether.
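
    A minimal sketch of that pattern (hypothetical names, assuming the generic part is just a conversion): keep the generic function as a thin shim and push the real work into a non-generic function that is compiled only once.

      // Generic entry point: a copy is instantiated for every T that
      // callers use, but it only does the cheap conversion.
      pub fn hash_bytes<T: AsRef<[u8]>>(data: T) -> u64 {
          hash_bytes_impl(data.as_ref())
      }

      // Non-generic worker holding the actual logic, compiled once.
      fn hash_bytes_impl(bytes: &[u8]) -> u64 {
          bytes
              .iter()
              .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
      }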
    replies(1): >>44394609 #
    10. kimixa ◴[] No.44392271[source]
    While the borrow checker is one big difference, it's certainly not the only thing the rust compiler offers on top of C that takes more work.

    Stuff like inserting bounds checking puts more work on the optimization passes and codegen backend, as it simply has to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check is unnecessary, that calculation isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself really makes much of a difference, as the parser isn't normally high on the profiled times either.

    Generally it provides stricter checks that are normally punted to a linter tool in the C/C++ world - and nobody has accused clang-tidy of being fast :P
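
    For a sense of what that looks like at the source level (a toy example, names made up): indexing a slice makes the front end emit a bounds check and panic path for every access, which later passes then have to prove away or carry through to codegen, whereas iterating never emits them at all.

      // Each v[i] comes with a bounds check and a panic branch in the
      // front end's output, even though the optimizer can usually prove
      // i < v.len() and remove them.
      pub fn sum_indexed(v: &[u32]) -> u32 {
          let mut total = 0;
          for i in 0..v.len() {
              total += v[i];
          }
          total
      }

      // Iterating directly doesn't emit the checks in the first place.
      pub fn sum_iter(v: &[u32]) -> u32 {
          v.iter().sum()
      }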

    replies(1): >>44395387 #
    11. andrepd ◴[] No.44394609{4}[source]
    Isn't something like this blocked on the lack of specialisation?
    replies(1): >>44395868 #
    12. LtdJorge ◴[] No.44394849{3}[source]
    Well, zero-cost abstractions are still abstractions. It's not junk per se, but things that will be optimized out if the IR has enough information to safely do so: basically lots of extra metadata to actually prove to LLVM that these things are safe.
    13. windward ◴[] No.44394947[source]
    >Make no nested types

    I wouldn't like it that much

    14. simonask ◴[] No.44395387{3}[source]
    It truly is not about bounds checks. Index lookups are rare in practical Rust code, and the amount of code generated from them is minuscule.

    But it _is_ about the sheer volume of stuff passed to LLVM, as you say, which comes from a couple of places, mostly related to monomorphization (generics), but also many calls to tiny inlined functions. Incidentally, this is also what makes many "modern" C++ projects slow to compile.

    In my experience, similarly sized Rust and C++ projects seem to see similar compilation times. Sometimes C++ wins due to better parallelization (translation units in Rust are crates, not source files).
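
    A toy illustration of the monomorphization cost (made-up names): one generic definition in the source becomes a separate copy of the function for every concrete type it's instantiated with, and every copy is handed to LLVM.

      use std::fmt::Display;

      // One definition in the source...
      fn describe<T: Display>(value: T) {
          println!("value = {value}");
      }

      fn main() {
          // ...but three separate instantiations reach LLVM:
          // describe::<i32>, describe::<f64>, and describe::<&str>.
          describe(1_i32);
          describe(2.5_f64);
          describe("three");
      }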

    15. dwattttt ◴[] No.44395868{5}[source]
    I believe the specific advice they're referring to has been stable for a while. You take your generic function & split it into a thin generic wrapper, and a non-generic worker.

    As an example, say your function takes anything that can be turned into a String. You'd write a generic wrapper that does the ToString step, then change the existing function to just take a String. That way when your function is called, only the thin outer function is monomorphised, and the bulk of the work is a single implementation.

    It's not _that_ commonly known, as it only becomes a problem for a library that becomes popular.

    replies(1): >>44397255 #
    16. estebank ◴[] No.44397255{6}[source]
    To illustrate:

      // Thin generic wrapper: this is the only part monomorphized per S.
      fn foo<S: Into<String>>(s: S) {
          // Non-generic worker, compiled just once.
          fn inner(s: String) { ... }
          inner(s.into())
      }
    replies(1): >>44399834 #