Some programs have a ~4x slowdown. That's also not super common, but it happens.
Most programs are somewhere in the middle.
> for the use-cases where C/C++ are still popular
This is a myth. 99% of the C/C++ code you are using right now is not perf sensitive. It's written in C or C++ because:
- That's what it was originally written in and nobody bothered to write a better version in any other language.
- The code depends on a C/C++ library and there doesn't exist a high quality binding for that library in any other language, which forces the dev to write code in C/C++.
- C/C++ provides the best level of abstraction (memory and syscalls) for the use case.
Great examples are things like shells and text editors, where the syscalls you want are exposed at the highest level of fidelity in libc. If you wrote your code in any other language, you'd be constrained by that language's standard library's limited (and perpetually outdated) view of those syscalls.
I am curious though. What assumptions do you think I'm making that you think are invalid?
(Browser performance is like megapixels or megahertz … a number that marketing nerds can use to flex, but that is otherwise mostly irrelevant)
When I say 99% of the C code you use, I mean “use” as a human using a computer, not “use” as a dependency in your project. I’m not here to tell you that your C or C++ project should be compiled with Fil-C. I am here to tell you that most of the C/C++ programs you use as an end user could be compiled with Fil-C and you wouldn’t experience a degraded experience if that happened.
A 4x slowdown may be absolutely irrelevant for software that spends most of its time waiting on IO, which I would wager a good chunk of user-facing software does. If it has an event loop and does a 0.5 ms calculation once every second, doing the same calculation in 2 ms is simply not noticeable.
For compilers, it may not make as much sense (not even due to performance reasons, but simply because a memory issue taking down the program would still be "well-contained", and memory leaks would not matter much as it's a relatively short-lived program to begin with).
And then there are the truly CPU-bound programs, but seriously, how often do you [1] see your CPU maxed out for long durations on your desktop PC?
[1] not you, pizlonator, just joining the discussion replying to you
They most commonly run continuously, reacting to different events.
You can't do IO work that depends on CPU work ahead of time, and neither can you do CPU work that depends on IO. You have a bunch of complicated interdependencies between the two, and the execution time is heavily constrained by this directed graph. No matter how efficient your data manipulation algorithm is, it doesn't help if you still have to wait for the data to load from the web or from a file.
Draw a Gantt chart and, sure, sum the execution times. My point is that due to the interdependencies you will have a longest lane (the critical path), and no matter what you do with the small CPU parts, you can only marginally affect the whole.
It gets even funnier with parallelism (so far this was just concurrency), where a similar concept is named Amdahl's law.
And I would even go so far as to claim that whatever you win with C, you often lose several-fold by going with a simpler parallelism model for fear of segfaults, where a higher-level language would let you parallelize fearlessly.
Wait - what was that part about Amdahl's law?
Also, segfaults are unrelated to parallelism.
My point is that you often see a simpler algorithm/data structure in C for fear of a memory issue/not getting some edge case right.
What part are you disagreeing with? That parallel code has more gotchas, which make a footgun-y language even more prone to failures?