597 points pizlonator | 12 comments
crawshaw ◴[] No.45134578[source]
It is great that Fil-C exists. This is the sort of technique that is very effective for real programs, but that developers are convinced does not work. Existence proofs cut through long circular arguments.
replies(2): >>45134840 #>>45135366 #
johncolanduoni ◴[] No.45134840[source]
What do the benchmarks look like? My main concern with this approach would be that the performance envelope would eliminate it for the use-cases where C/C++ are still popular. If throughput/latency/footprint are too similar to using Go or what have you, there end up being far fewer situations in which you would reach for it.
replies(1): >>45134852 #
pizlonator ◴[] No.45134852[source]
Some programs run as fast as they do normally. That's admittedly not super common, but it happens.

Some programs have a ~4x slowdown. That's also not super common, but it happens.

Most programs are somewhere in the middle.

> for the use-cases where C/C++ are still popular

This is a myth. 99% of the C/C++ code you are using right now is not perf sensitive. It's written in C or C++ because:

- That's what it was originally written in and nobody bothered to write a better version in any other language.

- The code depends on a C/C++ library and there doesn't exist a high quality binding for that library in any other language, which forces the dev to write code in C/C++.

- C/C++ provides the best level of abstraction (memory and syscalls) for the use case.

Great examples are things like shells and text editors, where the syscalls you want to use are exposed with the highest fidelity in libc; if you wrote your code in any other language, you'd be constrained by that language's standard library and its limited (and perpetually outdated) view of those syscalls.
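
To make that concrete, here is a minimal, hypothetical sketch (mine, not from the thread) of the kind of libc surface an editor leans on: putting the terminal into raw mode with termios, where every flag is exposed directly.

    /* Hypothetical sketch: an editor toggling raw mode via termios.
       Every flag below is plain libc; wrapper libraries in other
       languages often expose only a subset of these. */
    #include <termios.h>
    #include <unistd.h>

    static struct termios saved_termios;

    int enter_raw_mode(void) {
        struct termios raw;
        if (tcgetattr(STDIN_FILENO, &saved_termios) == -1) return -1;
        raw = saved_termios;
        raw.c_lflag &= ~(ECHO | ICANON | ISIG | IEXTEN);  /* no echo, no line buffering */
        raw.c_iflag &= ~(IXON | ICRNL | BRKINT | INPCK | ISTRIP);
        raw.c_oflag &= ~(OPOST);                          /* no output post-processing */
        raw.c_cc[VMIN] = 0;
        raw.c_cc[VTIME] = 1;                              /* 100 ms read timeout */
        return tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
    }

    int leave_raw_mode(void) {
        return tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved_termios);
    }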

replies(8): >>45134950 #>>45135063 #>>45135080 #>>45135102 #>>45135517 #>>45136755 #>>45137524 #>>45143638 #
johncolanduoni ◴[] No.45135102[source]
While there are certainly other reasons C/C++ get used in new projects, I think claiming 99% isn't performance- or footprint-sensitive is way overstating it. There are tons of embedded use cases where a GC is not going to fly just from a code-size perspective, let alone latency. That's where I've most often seen C (not C++) used for new programs. Also, if Chrome gets 2x slower I'll finally switch back to Firefox. That's tens of millions of lines of performance-sensitive C++ right there.

That actually brings up another question: how would trying to run a JIT like V8 inside Fil-C go? I assume there would have to be some bypass/exit before jumping to generated code - would there need to be other adjustments?

replies(7): >>45135144 #>>45135158 #>>45135395 #>>45135400 #>>45135515 #>>45136267 #>>45138618 #
1. pizlonator ◴[] No.45135158[source]
Most C/C++ code, old or new, runs on a desktop or server OS where you have lots of perf breathing room. That’s my experience. And that’s frankly your experience too, if you use Linux, Windows, or Apple’s OSes.

> how would trying to run a JIT like V8 inside Fil-C go?

You’d get a Fil-C panic. Fil-C wouldn’t allow you to PROT_EXEC lol
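
For context (my own illustrative sketch, not Fil-C's actual diagnostic): this is the kind of writable+executable mapping a JIT like V8 depends on, and the part a Fil-C build would refuse.

    /* Hypothetical sketch of the core JIT pattern: map a page
       PROT_WRITE|PROT_EXEC, copy generated machine code in, jump to it.
       Per the comment above, Fil-C panics rather than allow PROT_EXEC. */
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*jit_fn)(void);

    int run_jitted(const unsigned char *code, size_t len) {
        void *page = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return -1;
        memcpy(page, code, len);
        return ((jit_fn)page)();  /* the usual (non-portable) cast JITs rely on */
    }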

replies(2): >>45135232 #>>45135330 #
2. addaon ◴[] No.45135232[source]
> Most C/C++ code, old or new, runs on a desktop or server OS where you have lots of perf breathing room. That’s my experience. And that’s frankly your experience too, if you use Linux, Windows, or Apple’s OSes.

What if I also use cars, and airplanes, and dishwashers, and garage doors, and dozens of other systems? At what point does most of the code I interact with /not/ have lots of breathing room? Or does the embedded code that makes the modern world run not count as "programs"?

replies(2): >>45135279 #>>45135425 #
3. pizlonator ◴[] No.45135279[source]
You have a good point!

First of all, I’m not advocating that people use Fil-C in places where it makes no sense. I wouldn’t want my car’s control system to use it.

Car systems are big if they have 100 million lines of code, maybe a billion. But your desktop OS is at something like 10 billion and growing! Throw in the code that runs on the servers you rely on and we might be at 100 billion lines of C or C++.

4. johncolanduoni ◴[] No.45135330[source]
Thanks for telling me what my experience is, but I can think of plenty of C/C++ code on my machine that would draw ire from ~all its users if it got 2x slower. I already mentioned browsers, but I would also be pretty miffed if any of these CPU-bound programs got 2x slower:

* Compilers (including clang)

* Most interpreters (Python, Ruby, etc.)

* Any simulation-heavy video game (and some others)

* VSCode (guess I should've stuck with Sublime)

* Any scientific computing tools/libraries

Sure, I probably wouldn't notice if zsh or bash got 2x slower, and cp will be IO-bound anyway. But if someone made a magic clang pass that made most programs 2x faster, they'd be hailed as a once-in-a-generation genius, not blown off with "who really cares about C/C++ performance anyway?". I'm not saying there's no place for trading these overheads for making C/C++ safer, but treating performance as a niche use-case for C/C++ is ludicrous.

replies(4): >>45135422 #>>45136378 #>>45136399 #>>45149655 #
5. pjmlp ◴[] No.45135422[source]
Many compilers are bootstrapped.

Ruby is partially written in Rust nowadays.

VSCode uses plenty of Rust and .NET AOT in its extensions, alongside C++ and, more recently, WebAssembly, which is why it is the only Electron garbage with acceptable performance.

Unity and Unreal account for a great share of games, with plenty of C#, Blueprints, Verse, and a GC for C++.

6. pjmlp ◴[] No.45135425[source]
Some of that is thankfully running Ada.
replies(1): >>45139590 #
7. zelphirkalt ◴[] No.45136378[source]
The question is whether one would really notice a factor-of-2 slowdown in a browser. For example, if it takes some imaginary 2ms to close a tab, would one notice if it now took 4ms? And for page rendering, the bottleneck might be retrieving those pages.
replies(2): >>45136830 #>>45137704 #
8. spacechild1 ◴[] No.45136399[source]
I would like to add:

* DAWs and audio plugins

* video editors

Audio plugins in particular need to run as fast as possible because they share the tiny time budget of a few milliseconds with dozens or even hundreds of other plugin instances. If everything is suddenly 2x slower, some projects simply won't run in realtime anymore.
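
To put rough, purely illustrative numbers on that budget (assuming a 128-sample buffer at 48 kHz and a project that already fills its callback):

    128 samples / 48000 Hz      ≈ 2.67 ms per audio callback
    2.67 ms / 100 instances     ≈ ~27 µs of CPU per plugin instance
    2x slower across the board  -> ~5.3 ms of work per 2.67 ms callback, i.e. dropouts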

9. saagarjha ◴[] No.45136830{3}[source]
Yes, people will absolutely notice. There are plenty of interactions that take 500ms that will now take a second.
10. const_cast ◴[] No.45137704{3}[source]
2-4 ms? No. The problem is that many web applications are already extremely slow and bogged down in the browser. 500 ms to 1 s? Yes, people will definitely notice. Although that only really applies to React applications that do too much; network latency isn't affected.
11. addaon ◴[] No.45139590{3}[source]
Not in my case.
12. pizlonator ◴[] No.45149655[source]
I’m already living on a Fil-C compiled CPython. It doesn’t matter.

And a Fil-C compiled text editor. Not VSCode, but still.

I absolutely do think you could make the browser 5x slower (in CPU time, not in IO time) and you wouldn’t care. For example, Lockdown Mode really doesn’t change your UX. Or using a browser on a 5x slower computer. You barely notice.

And most of the extant C++ code doesn’t fit into any of the categories you listed.