Of course, the article doesn't mention lambdas.
I just had a PR on an old C++ project, and spending 8 years in the web ecosystem has raised the bar on my tooling expectations.
Rust is particularly sweet to work with in that regard.
Smart pointers are neat, but they are not a solution for memory safety. Just using standard containers and iterators, or utilities like string_view, can lead to lots of footguns.
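The classic footgun, as a minimal sketch (the name() function here is mine, purely for illustration):

#include <string>
#include <string_view>

// A string_view does not own its data, so this compiles cleanly:
std::string_view name() {
    std::string s = "temporary";
    return s;  // implicit conversion to a view of a string destroyed at scope exit
}
// Any use of name()'s result is a use-after-free; at best you get a warning.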
When I made a meme about C++ [1] I was purposeful in choosing the iceberg format. To me it's not quite satisfying to say that C++ is merely complex or vast. A more fitting word would be "arcane", "monumental" or "titanic" (get it?). There's a specific feeling you get when you're trying to understand what the hell is an xvalue, why std::move doesn't move or why std::remove doesn't remove.
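For instance, both misnomers in a few lines (my sketch):

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

int main() {
    std::string s = "hello";
    auto&& r = std::move(s);  // std::move moves nothing: it's just a cast to an rvalue reference
    (void)r;                  // s is untouched until a move constructor/assignment actually runs

    std::vector<int> v{1, 2, 1, 3};
    // std::remove removes nothing: it shifts the kept elements forward and returns
    // the new logical end; erase() does the actual removal (the erase-remove idiom).
    v.erase(std::remove(v.begin(), v.end(), 1), v.end());
}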
The Forrest Gump C++ is another meme that captures this feeling very well (not by me) [2].
What it comes down to is developer experience (DX), and C++ has a terrible one. From the syntax all the way up to package management, a C++ developer feels stuck in a time before they were born. At least we have a lot of time to think about all that while our code compiles. But that might just be the price for all the power it gives you.
You know, not sure I even agree with the memory leaks part. If you define a memory leak very narrowly as forgetting to free a pointer, this is correct. But in my experience working with many languages including C/C++, forgotten pointers are almost never the problem. You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects or bursty memory allocation patterns. And these occur in all languages.
If I'm writing a small utility or something the Makefile typically looks something like this:
CC=clang
PACKAGES=libcurl libturbojpeg
CFLAGS=-Wall -pedantic --std=gnu17 -g $(shell pkg-config --cflags $(PACKAGES))
LDLIBS=$(shell pkg-config --libs $(PACKAGES))
all: imagerunner
imagerunner: imagerunner.o image_decoder.o downloader.o

You could also inherit a massive codebase old enough to need a prostate exam that was written by many people who wanted to prove just how much of the language spec they could use.
If selecting a job mostly under the Veil of Ignorance, I'll take a large legacy C project over C++ any day.
Running unit tests with the address sanitizer and UB sanitizer enabled goes a long way towards addressing most memory safety bugs. The kind of C++ you write then is a far cry from what the haters complain about with bad old VC6 era C++.
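For example, a contrived use-after-free like this (my sketch) is reported the moment a test executes it, full stack trace included:

// Build the tests with e.g.: clang++ -fsanitize=address,undefined -g test.cpp
int main() {
    int* p = new int(42);
    delete p;
    return *p;  // ASan aborts here: heap-use-after-free
}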
Nitpick, I guess, but Windows 1.0 was released in November 1985:
> you can write perfectly fine code without ever needing to worry about the more complex features of the language
Not really because of undefined behaviour. You must be aware of and vigilant about the complexities of C++ because the compiler will not tell you when you get it wrong.
I would argue that Rust is at least in the same complexity league as C++. But it doesn't matter because you don't need to remember that complexity to write code that works properly (almost all of the time anyway, there are some footguns in async Rust but it's nothing on C++).
> Now is [improved safety in Rust rewrites] because of Rust? I’d argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original codebase.
A factor, sure. The biggest? Doubtful. It isn't only Rust's safety that helps here, it's its excellent type system.
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
Somehow managed to fit two fallacies in one sentence!
1. The fallacy of the grey - no language is perfect therefore they are all the same.
2. "I don't make mistakes."
> Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or safety issues.
Not true. As I said already Rust's very strong type system helps to make applications less buggy even ignoring memory safety bugs.
> Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers
lol
> Avoid boost like the plague.
Cool, so the ecosystem isn't confusing but you have to avoid one of the most popular libraries. And Boost is fine anyway. It has lots of quite high quality libraries, even if they do love templates too much.
> Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use.
Uh huh, and what would you recommend instead of Boost ICL?
I guess it's a valiant attempt but this is basically "in defense of penny farthings" when the safety bicycle was invented.
And out of all the tools and architecture I work with, C++ has been some of the least problematic. The STL is well-formed and easy to work with, creating user-defined types is easy, it's fast, and generally it has few issues when deploying. If there's something I need, there's a very high chance a C or C++ library exists to do what I need. Even crossing multiple major compiler versions doesn't seem to break anything, with rare exceptions.
The biggest problem I have with C++ is how easy it is to get very long compile times, and how hard it feels like it is to analyze and fix that on a 'macro' (whole project) level. I waste ungodly amounts of time compiling. I swear I'm going to be on deaths door and see GCC running as my life flashes by.
Some others that have been not-so-nice:
* Python - Slow enough to be a bottleneck semi-frequently, hard to debug especially in a cross-language environment, frequently has library/deployment/initialization problems, and I find it generally hard to read because of the lack of types, significant whitespace, and that I can't easily jump with an IDE to see who owns what data. Also pip is demon spawn. I never want to see another Wheel error until the day I die.
* VSC's IntelliSense - My god IntelliSense is picky. Having to manually specify every goddamn macro, one at a time in two different locations just to get it to stop breaking down is a nightmare. I wish it were more tolerant of having incomplete information, instead of just shutting down completely.
* Fortran - It could just be me, but IDEs struggle with it. If you have any global data it may as well not exist as far as the IDE is concerned, which makes dealing with such projects very hard.
* CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
COBOL sticks around 66 years after its first release. Fortran is 68 years old and is still enormously relevant. Much, much more software was written in newer languages and has become so complex that replacements have become practically impossible (Fuchsia hasn't replaced Linux in Google products, Wayland isn't ready to replace X11, etc.)
You can do much better in CMake if you put some effort into cleaning it up - I have little hope anyone will do this though. We have a hard time getting developers to clean up messes in production code and that gets a lot more care and love.
Because it's a re-write, you already know all the requirements. You know what works and what doesn't. You know what kind of data should be laid out and how to do it.
Because of that, a fresh re-write will often erase bugs (including memory ones) that were present originally.
Sure, there are still Fortran codebases. But I can hardly imagine Fortran still playing a big role 68 years from now.
If you can restrict yourself to using the 'good' parts then it can be OK, but it's pulling in a huge dependency for very little gain these days.
Borrowing from stack is super useful when your lambda also lives in the stack; stack escaping is a problem, but it can be made harder by having templates take Fn& instead of const Fn& or Fn&&; that or just a plain function pointer.
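A sketch of the Fn& idea (for_each_line is hypothetical; the point is that a non-const lvalue reference can't bind a temporary):

#include <cstdio>

template <typename Fn>
void for_each_line(Fn& fn) {  // Fn&, not const Fn& or Fn&&
    fn("line 1");
    fn("line 2");
}

int main() {
    int count = 0;
    auto counter = [&](const char* line) { ++count; std::puts(line); };
    for_each_line(counter);              // OK: a named lambda living on the caller's stack
    // for_each_line([](const char*){}); // error: a temporary can't bind to Fn&
}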
> Just use whatever parts of the language you like without worrying about what's most performant!
It's not about performant. It's about understanding someone else's code six months after they've been fired, and thus restricting what they can possibly have done. And about not being pervasively unsafe.
> "I don’t think C++ is outdated by any stretch of the imagination", "matter of personal taste".
Except of course for header files, forward declarations, Make, the true hell of C++ dependency management (there's an explicit exhortation not to use libraries near the bottom), a thousand little things like string literals actually being byte pointers no matter how thoroughly they're almost compatible with std::string, etc. And of course the pervasive unsafety. Yes, it sure was last updated in 2023, the number of ways of doing the same thing has been expanded from four to five but the module system still doesn't work.
> You can write unsafe code in Python! Rewriting always makes the code more safe whether it's in Rust or not!
No. Nobody who has actually used Rust can reasonably arrive at this opinion. You can write C++ code that is sound; Rust-fluent people often do. The design does not come naturally just because of the process of rewriting, this is an entirely ridiculous thing to claim. You will make the same sorts of mistakes you made writing it fresh, because you are doing the same thing as you were when writing it fresh. The Rust compiler tells you things you were not thinking of, and Rust-fluent people write sound C++ code because they have long since internalized these rules.
And the crack about Python is just stupid. When people say 'unsafe' and Rust in the same sentence, they are obviously talking about UB, which is a class of problem a cut above other kinds of bugs in its pervasiveness, exploitability, and ability to remain hidden from code review. It's 'just' memory safety that you're controlling, which according to Microsoft is 70% of all security related bugs. 70% is a lot! (plus thread safety, if this was not mentioned you know they have not bothered using Rust)
In fact the entire narrative of 'you'll get it better the second time' is nonsense, the software being rewritten was usually written for the first time by totally different people, and the rewriters weren't around for it or most of the bugfixes. They're all starting fresh, the development process is nearly the same as the original blank slate was - if they get it right with Rust, then Rust is an active ingredient in getting it right!
> Just use smart pointers!
Yes, let me spam angle brackets on every single last function. 'Write it the way you want to write it' is the first point in the article, and here is the exact 'write it this way' that it was critiquing. And you realistically won't do it on every function, so it is just a matter of time until one of the functions you use regular references with creates a problem.
The author is arguing that the main reason rewriting a C++ codebase in Rust makes it more memory-safe is not because it was done in Rust, but because it benefits from lessons learned and knowledge about the mistakes made during the first iteration. He acknowledges Rust will also play a part, but that it's minor compared to the "lessons learned" factor.
I'm not sure I buy the argument, though. I think rewrites usually introduce new bugs into the codebase, and if it's not the exact same team doing the rewrite, then they may not be familiar with decisions made during the first version. So the second version could have as many flaws, or worse.
My viewpoint on the language is that there are certain types of engineers who thrive in the complexity that is easy to arrive at in a C++ code base. These engineers are undoubtedly very smart, but, I think, lack a sense of aesthetics that I can never get past. Basically, the r/atbge of programming languages (Awful Taste But Great Execution).
Even the Go authors themselves on Go's website display a process of debugging memory usage that looks identical to a workflow you would have done in C++. So, like, what's the point? Just use C++.
I really do think Go is nice, but at this point I would relegate it to the workplace where I know I am working with a highly variable team of developers who in almost all cases will have a very poor background in debugging anything meaningful at all.
What exactly do you mean by a "Wheel error"? Show me a reproducer and a proper error message and I'll be happy to help to the best of my ability.
By and large, the reason pip fails to install a package is because doing so requires building non-Python code locally, following instructions included in the package. Only in rare cases are there problems due to dependency conflicts, and these are usually resolved by creating a separate environment for the thing you're trying to install — which you should generally be doing anyway. In the remaining cases where two packages simply can't co-exist, this is fundamentally Python's fault, not the installer's: module imports are cached, and quite a lot of code depends on the singleton nature of modules for correctness, so you really can't safely load up two versions of a dependency in the same process, even if you hacked around the import system (which is absolutely doable!) to enable it.
As for finding significant whitespace (meaning indentation used to indicate code structure; it's not significant in other places) hard to read, I'm genuinely at a loss to understand how. Python has types; what it lacks is manifest typing, and there are many languages like this (including Haskell, whose advocates are famous for explaining how much more "typed" their language is than everyone else's). And Python has a REPL, the -i switch, and a built-in debugger in the standard library, on top of not requiring the user to do the kinds of things that most often need debugging (i.e. memory management). How can it be called hard to debug?
Yes, this is a serious flaw in the author's argument. Does he think the exact same team that built version 1.0 in C++ is the one writing 2.0 in Rust? Maybe that happens sometimes, I guess, but to draw a general lesson from that seems weird.
This... doesn't really hold water. You have to learn about what the insane move semantics are (and the syntax for move ctors/operators) to do fairly basic things with the language. Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood. Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
> Here’s a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using too many advanced features of the language unless you really need to.
This has me scratching my head a bit. In spite of C++ being nearly a superset of C, they are very different languages, and idiomatic C++ doesn't look very much like C. In fact, I'd argue that most of the stuff C++ adds to C allows you to write code that's much cleaner than the equivalent C code, if you use it the intended way. The one big exception I can think of is template metaprogramming, since the template code can be confusing, but if done well, the downstream code can be incredibly clean.
There's an even bigger problem with this recommendation, which is how it relates to something else talked about in the article, namely "safety." I agree with the author that modern C++ can be a safe language, with programmer discipline. C++ offers a very good discipline to avoid resource leaks of all kinds (not just memory leaks), called RAII [1]. The problem here is that C++ code that leverages RAII looks nothing like C.
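For example, a minimal RAII wrapper (my sketch, not from the article) releases its resource on every exit path, exceptions included, and indeed looks nothing like C:

#include <cstdio>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }  // released automatically, on every path
    File(const File&) = delete;           // no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};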
Stepping back a bit, I feel there may be a more fundamental fallacy in this "C++ is Hard to Read" section in that the author seems to be saying that C++ can be hard to read for people who don't know the language well, and that this is a problem that should be addressed. This could be a little controversial, but in my opinion you shouldn't target your code to the level of programmers who don't know the language well. I think that's ultimately neither good for the code nor good for other programmers. I'm definitely not an expert on all the corners of C++, but I wouldn't avoid features I am familiar with just because other programmers might not be.
The observation is that the second implementation of a successful system is often much less successful, overengineered, and bloated, due to programmer overconfidence.
On the other hand, I am unsure of how frequently the second-system effect occurs or the scenarios in which it occurs either. Perhaps it is less of a concern when disciplined developers are simply doing rewrites, rather than feature additions. I don't know.
Ignore the fact that C++ having more keywords makes some valid C code ill-formed as C++ (`int class;`).
void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
C++20 does have C99's designated initializers now, which helps in some cases, but that was a pain for a long time.
Conversion between enums and integers is very strict in C++.
`char * message = "Hello"` is valid C but not C++ (since you cannot mutate the pointed to string, it must be `const` in C++)
C99 introduced variadic macros that didn't become standard C++ until 2011.
C doesn't allow for empty structs. You can do it in C++, but sizeof(EmptyStruct) is 1. And if C lets you get away with it in some compilers, I'll bet it's 0.
Anyway, all of these things and likely more can ruin your party if you think you're going to compile C code with a C++ compiler.
Also don't forget if you want code to be C callable in C++ you have to use `extern "C"` wrappers.
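To make that concrete, here's a sketch rolling a few of the above into one file: it compiles as C (say, cc -std=c99 demo.c) but a C++ compiler rejects every marked line:

#include <stdlib.h>

int class;                       /* fine in C; 'class' is a keyword in C++ */

void demo(void) {
    int *p = malloc(sizeof *p);  /* fine in C; C++ requires a cast from void* */
    char *s = "hello";           /* fine in C; C++ requires const char* */
    (void)s;
    free(p);
}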
Rc::Weak does the same thing in Rust, but I rarely see anyone use it.
Also, avoid using C++ classes while you're at it.
I recently had to go back to writing C++ professionally after a many-year hiatus. We code in C++23, and I got a book to refresh me on the basics as well as all the new features.
And man, doing OO in C++ just plain sucks. Needing to know things like copy and swap, and the Rule of Three/Five/Zero. Unless you're doing trivial things with classes, you'll need to know these things. If you don't need to know those things, you might as well stick to structs.
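A sketch of what a non-trivial class drags in (Buffer is hypothetical; any type owning a raw resource needs all five special members, here via copy-and-swap):

#include <algorithm>
#include <cstddef>
#include <utility>

class Buffer {
    std::size_t n_ = 0;
    int* data_ = nullptr;
public:
    explicit Buffer(std::size_t n) : n_(n), data_(new int[n]{}) {}
    ~Buffer() { delete[] data_; }                    // 1: destructor
    Buffer(const Buffer& o) : Buffer(o.n_) {         // 2: copy constructor
        std::copy(o.data_, o.data_ + n_, data_);
    }
    Buffer(Buffer&& o) noexcept { swap(*this, o); }  // 3: move constructor
    Buffer& operator=(Buffer o) noexcept {           // 4+5: copy- and move-assignment
        swap(*this, o);
        return *this;
    }
    friend void swap(Buffer& a, Buffer& b) noexcept {
        std::swap(a.n_, b.n_);
        std::swap(a.data_, b.data_);
    }
};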
Now I'll grant C++23 is much nicer than C++03 (just import std!). I was so happy to hear about std::optional, only to find out how fairly useless it is compared to pretty much every language that has implemented a "Maybe" type. Why add the feature if the compiler is not going to protect you from dereferencing without checking?
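For example (find_port() is hypothetical):

#include <iostream>
#include <optional>

std::optional<int> find_port() { return std::nullopt; }

int main() {
    auto port = find_port();
    // Nothing forces a check; dereferencing an empty optional is plain UB:
    // std::cout << *port << '\n';   // no compiler error, no guaranteed crash
    // .value() at least throws instead of silently misbehaving:
    try { std::cout << port.value() << '\n'; }
    catch (const std::bad_optional_access&) { std::cout << "no port\n"; }
}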
I don't think move semantics are really that bad personally, and some languages move by default (isn't that Rust's whole thing?).
What I don't like is the implicit ambiguous nature of "What does this line of code mean out of context" in C++. Good luck!
I have hope for cppfront/Cpp2. https://github.com/hsutter/cppfront
(oh and I think you can write a whole book on the different ways to initialize variables in C++).
The result is you might be able to use C++ to write something new, and stick to a style that's readable... to you! But it might not make everyone else who "knows C++" instantly able to work on your code.
- Use a build system like make, you can't just `c++ build`
- Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
- Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
- Oh also understand the compiler doesn't actually output what you want, you also need a linker
- That linker also doesn't know where to find things, so you need the external tool to use it
- Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
Now you can see why things like IDEs became default tools for teaching students how to write C and C++, because there's no "open a text editor and then `c++ build file.cpp` to get output" for anything except hello world examples.
CLOS seems pretty good, but then again I'm a bit inexperienced. Bring back Dylan!
Rust's move semantics are good! C++'s have a lot of non-obvious footguns.
> (oh and I think you can write a whole book on the different ways to initialize variables in C++).
Yeah. Default init vs value init, etc. Lots of footguns.
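E.g. (a small sketch):

int main() {
    int a;         // default-initialized: indeterminate value; reading it is UB
    int b{};       // value-initialized: guaranteed 0
    int c = int(); // also 0
    int d();       // surprise: declares a function, not a zero-initialized int
    (void)a;
    return b + c;
}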
Why go through all the trouble of making a better array, only to require the user to call a special .at() function to get range checking, rather than the other way around? I promptly went into my standard library and reversed that decision, because if I'm going to the trouble of using a C++ array class, it had better damn well give me a tiny bit of additional protection. The .at() call should have been the version that reverted to C array behavior, without the bounds checking.
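For reference, the behavior in question (sketch):

#include <array>
#include <cstddef>

std::array<int, 3> a{1, 2, 3};

int unchecked(std::size_t i) { return a[i]; }   // i >= 3: undefined behavior, silently
int checked(std::size_t i) { return a.at(i); }  // i >= 3: throws std::out_of_range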
And it's these kinds of decisions, repeated over and over. I get it's a committee. Some of the decisions won't be the best, but by 2011 everyone had already been complaining about memory safety issues for 15+ years, and there wasn't enough will on the committee to recognize that a big reason for using C++ over C was the language's ability to protect against some of the sharper edges of C?
Problem is, if you’re using C++ for anything serious, like the aforementioned game development, you will almost certainly have to use the existing libraries; so you’re forced to match whatever coding style they chose to use for their codebase. And in the case of Unreal, the advice “stick to the STL” also has to be thrown out since Unreal doesn’t use the STL at all. If you could use vanilla, by-the-books C++ all the time, it’d be fine, but I feel like that’s quite rare in practice.
Re Matlab: I still see it thriving in the industry, for better or worse. Many engineers just seem to love it. I haven't seen many users of Julia yet. Where do you see those? I think that Julia deserves a fair chance, but it just doesn't have a presence in the fields I work in.
By contrast, my experience with C++ to Rust rewrites is that the inability of Rust to express some useful and common C++ constructs causes the software architecture to diverge to the point where you might as well just be rewriting it from scratch because it is too difficult to track the C++ code.
This wasn't possible when they were added to the language, and wasn't really transparent until C++17 or so, but it has grown to be a useful safety feature.
I think Rust is probably doing the majority of the work unless you’re writing everything in unsafe. And why would you? Kinda defeats the purpose.
[1] Not me making this up - I started getting into guns and this is what people say.
Unless you use the C++20 [[no_unique_address]] attribute, in which case it is 0 (if used correctly).
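Strictly, sizeof of the empty type itself is still 1; the attribute lets an empty member occupy no storage. A sketch:

#include <cstdio>

struct Empty {};

struct Plain  { Empty e; int x; };                        // e takes a byte (plus padding)
struct Packed { [[no_unique_address]] Empty e; int x; };  // e may share x's address

int main() {
    std::printf("%zu %zu\n", sizeof(Plain), sizeof(Packed));  // typically "8 4"
}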
> That’s how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe. It’s not because of Rust, it’s because they took the time to rethink and redesign...
If they got the program to work at all in Rust, it would be memory-safe. You can't claim that writing in a memory-safe language is a "minor" factor in why you get memory safety. That could never be proven or disproven.
As for significant whitespace, the problem is that I'm often dealing with files with several thousand lines of code and heavily nested functions. It's very easy to lose track of scope in that situation. Am I in the inner loop, or this outer loop? Scrolling up and down, up and down to figure out where I am. Feels easier to make mistakes as well.
It works well if everything fits on one screen, it gets harder otherwise, at least for me.
As for types, I'm not claiming it's unique to Python. Just that it makes working with Python harder for me. Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
As for debugging, it's great if you have pure Python. Mix other languages in and suddenly it becomes pain. There's no way to step from another language into Python (or vice-versa), at least not cleanly and consistently. This isn't always true for compiled->compiled. I can step from C++ into Fortran just fine.
Meanwhile in Rust you can freely borrow from the stack in closures, and the borrow checker ensures that you'll not screw up. That's what (psychological) safety feels like.
> I'm often dealing with files with several thousand lines of code and heavily nested functions.
This is the problem. Also, a proper editor can "fold" blocks for you.
> Being able to see the type of data at a glance tells me a LOT about what the code is doing and how it's doing it - and Python doesn't let me see this information.
If you want to use annotations, you can, and have been able to since 3.0. Since 3.5 (see https://peps.python.org/pep-0484/; it's been over a decade now), there's been a standard for understanding annotations as type information, which is recognized by multiple different third-party tools and has been iteratively refined ever since. It just isn't enforced by the language itself.
> Mix other languages in and suddenly it becomes pain.... This isn't always true for compiled->compiled.
Sure, but then you have to understand the assembly that you've stepped into.
I can't fix that. I just work here. I've got to deal with the code I've got to deal with. And for old legacy code that's sprawling, I find braces help a LOT with keeping track of scope.
>Sure, but then you have to understand the assembly that you've stepped into.
Assembly? I haven't touched raw assembly since college.
How exactly are they more helpful than following the line of the indentation that you're supposed to have as a matter of good style anyway? Do you not have formatting tools? How do you not have a tool that can find the top of a level of indentation, but do have one that can find a paired brace?
>Assembly? I haven't touched raw assembly since college.
How exactly does your debugger know whether the compiled code it stepped into came from C++ or Fortran source?
Not sure how relevant "in order to use a tool, you need to learn how to use the tool" is as a complaint.
Or from the other side: not sure what I should think about the quality of the work produced by people who don't want to learn relatively basic skills... it does not take two PhDs to understand how to use pkg-config.
I'm not defending TFA, I'm saying if you're going to reject the argument you must quote it in full, without leaving the main part.
Yeah, sorry, but no, ask some long-term developers about how this often goes.
We live in a special time when general processing efficiency has always been increasing. The future is full of domain-specific hardware (enabling the continued use of COBOL code written for slower mainframes). Maybe this will be a half measure like CUDA, or your C++ will just be a thin wrapper around a makeYoutube() ASIC.
Of course if there is a breakthrough in general purpose computing or a new killer app it will wipe out all those products which is why they don't just do it now
Now you mostly get an error to the effect of "constraint foo not satisfied by type bar" at the point of use that tells you specifically what needs to change about the type or value to satisfy the compiler.
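E.g., with a homemade Hashable concept (my sketch):

#include <concepts>
#include <cstddef>
#include <functional>

template <typename T>
concept Hashable = requires(T t) {
    { std::hash<T>{}(t) } -> std::convertible_to<std::size_t>;
};

template <Hashable T>
void insert(const T&) {}

struct NoHash {};

int main() {
    insert(42);          // fine: std::hash<int> exists
    // insert(NoHash{}); // error: 'NoHash' does not satisfy 'Hashable'
}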
Frankly the idea that your compiler driver should not be a basic build system, package manager, and linker is an idea best left in the 80s where it belongs.
Only if you have full control over what others are writing. In reality, you're going to read lots and lots of "clever" code. And I'm saying this as a person who has written a good amount of template metaprogramming code. Even for me, some code took hours to understand, and I was usually able to cut 90% of it after that.
I wish I didn’t have to know about std::launder but I do
In C and C++ no such thing exists. It is walking in a minefield. It is worse with C++ because they piled on so much stuff that nobody knows off the top of their head how a variable is initialized. The initialization rules are insane: https://accu.org/journals/overload/25/139/brand_2379/
So if you are doing peaky memory stuff with complex partially self-initializing code in C++, there are so many ways of blowing yourself and your entire team up without knowing which bit of code you committed years ago caused it.
C++ designated initializers are slightly different in that the initialization order must match the declared member order. That is not required in C.
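For example:

struct Point { int x, y, z; };

Point a = {.x = 1, .y = 2, .z = 3};  // OK in C99 and in C++20
// Point b = {.z = 3, .x = 1};       // OK in C, ill-formed in C++ (out of declaration order)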
Even if we take this claim at face value, isn’t that great?
Memory safety is a HUGE source of bugs and security issues. So the author is hand-waving away a really really good reason to use Rust (or other memory safe by default language).
Overall I agree this seems a lot like "I like C++ and I'm good at it so it's fine" with justifications created from there.
On legacy code bases, sure. C++ rules in legacy C++ codebases. That’s kind of a given isn’t it? So that’s not a benefit. Just a fact.
Your move.
Problem 1: You might fail to initialize an object in memory correctly.
Solution 1: Constructors.
Problem 2: Now you cannot preallocate memory as in SLAB allocation since the constructor does an allocator call.
Solution 2: Placement new
Problem 3: Now the type system has led the compiler to assume your preallocated memory cannot change since you declared it const.
Solution 3: std::launder()
If it is not clear what I mean about placement new and const needing std::launder(), see this:
https://miyuki.github.io/2016/10/21/std-launder.html
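A condensed sketch of the whole chain (adapted in spirit from that post; Widget is hypothetical):

#include <new>

struct Widget { const int id; };  // const member invites constant folding

alignas(Widget) unsigned char storage[sizeof(Widget)];

int reuse() {
    Widget* w = new (storage) Widget{1};  // placement new into preallocated memory
    w->~Widget();
    new (storage) Widget{2};              // a new object now lives at the same address
    // Without std::launder the compiler may still fold the old id (1) here:
    return std::launder(reinterpret_cast<Widget*>(storage))->id;
}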
C has a very simple solution that avoids this chain. Use structured programming to initialize your objects correctly. You are not going to escape the need to do this with C++, but you are guaranteed to have to consider a great many things in C++ that would not have needed consideration in C since C avoided the slippery slope of syntactic sugar that C++ took.
However, it's definitely wrong to say that the typical tools are "non-portable". The UNIX-style C++ toolchains work basically anywhere, including Windows, although I admit some of the tools require MSys/Cygwin. You can definitely use GNU Makefiles with pkg-config using MSys2 and have a fine experience. Needless to say, this also works on Linux, macOS, FreeBSD, Solaris, etc. More modern tooling like CMake and Ninja work perfectly fine on Windows and don't need any special environment like Cygwin or MSys, can use your MSVC installation just fine.
I don't really think applying the mantra of Rust package management and build processes to C++ is a good idea. C++'s toolchain is amenable to many things that Rust and Cargo aren't. Instead, it'd be better to talk about why C++ sucks to use, and then try to figure out what steps could be taken to make it suck less. Like:
- Building C++ software is hard. There's no canonical build system, and many build systems are arcane.
This one really might be a tough nut to crack. The issue is that creating yet another system is bound to just cause xkcd 927. As it is, there are many popular ways to build, including GNU Make, GNU Autotools + Make, Meson, CMake, Visual Studio Solutions, etc.
CMake is the most obvious winner right now. It has achieved de facto standard support. It works on basically any operating system, and IDEs like CLion and Visual Studio 2022 have robust support for CMake projects.
Most importantly, building with CMake couldn't be much simpler. It looks like this:
$ cmake -B .build -S .
...
$ cmake --build .build
...
And you have a build in .build. I think this is acceptable. (A one-step build would be simpler, but this is definitely more flexible; I think it is very passable.)

This does require learning CMake, and CMakeLists files are definitely a bit ugly and sometimes confusing. Still, they are pretty practical, and rather easy to get started with, so I think it's a clear win. CMake is the de facto way to go here.
- Managing dependencies in C++ is hard. Sometimes you want external dependencies, sometimes you want vendored dependencies.
This problem's even worse. CMake helps a little here, because it has really robust mechanisms for finding external dependencies. However, while robust, the mechanism is definitely a bit arcane; it has two modes, the legacy Find scripts mode, and the newer Config mode, and some things like version constraints can have strange and surprising behavior (it differs on a lot of factors!)
But sometimes you don't want to use external dependencies, like on Windows, where it just doesn't make sense. What can you really do here?
I think the most obvious thing to do is use vcpkg. As the name implies, it's Microsoft's solution to source-level dependencies. Using vcpkg with Visual Studio and CMake is relatively easy, and it can be configured with a couple of JSON files (and there is a simple CLI that you can use to add/remove dependencies, etc.) When you configure your CMake build, your dependencies will be fetched and built appropriately for your targets, and then CMake's find package mechanism can be used just as it is used for external dependencies.
CMake itself is also capable of vendoring projects within itself, and it's absolutely possible to support all three modalities of manual vendoring, vcpkg, and external dependencies. However, for obvious reasons this is generally not advisable. It's really complicated to write CMake scripts that actually work properly in every possible case, and many cases need to be prevented because they won't actually work.
All of that considered, I think the best existing solution here is CMake + vcpkg. When using external dependencies is desired, simply not using vcpkg is sufficient and the external dependencies will be picked up as long as they are installed. This gives an experience much closer to what you'd expect from a modern toolchain, but without limiting you from using external dependencies which is often unavoidable in C++ (especially on Linux.)
- Cross-compiling with C++ is hard.
In my opinion this is mostly not solved by the "de facto" toolchains. :)
It absolutely is possible to solve this. Clang is already better off than most of the other C++ toolchains in that it can handle cross-compiling with selecting cross-compile targets at runtime rather than build time. This avoids the issue in GCC where you need a toolchain built for each target triplet you wish to target, but you still run into the issue of needing libc/etc. for each target.
Both CMake and vcpkg technically do support cross-compilation to some extent, but I think it rarely works without some hacking around in practice, in contrast to something like Go.
If cross-compiling is a priority, the Zig toolchain offers a solution for C/C++ projects that includes both effortless cross-compiling as well as an easy to use build command. It is probably the closest to solving every (toolchain) problem C++ has, at least in theory. However, I think it doesn't really offer much for C/C++ dependencies yet. There were plans to integrate vcpkg for this I think, but I don't know where they went.
If Zig integrates vcpkg deeply, I think it would become the obvious choice for modern C++ projects.
I get that by not having a "standard" solution, C++ remains somewhat of a nightmare for people to get started in, and I've generally been doing very little C++ lately because of this. However I've found that there is actually a reasonable happy path in modern C++ development, and I'd definitely recommend beginners to go down that path if they want to use C++.
Alternative libraries like Qt are more coherent and better thought out.
There are a lot of problems, but having to carefully construct the build environment is a minor one time hassle.
Then repeated foot guns going off, no toes left, company bankrupt and banking system crashed, again
Why performance-critical domains? Does C++ have a performance edge over Rust?
It is even if you do
> But here’s the thing: all programming languages are unsafe if you don’t know what you’re doing.
But here's the thing, that's not a good argument because...
> will just make it a lot harder to have memory leaks or safety issues.
... in reality there's no "just" about it. "Just" making it a lot harder to have those issues means it's safer.
One of the most common complaints is the lack of a package manager. I think this stems from a fundamental misunderstanding of how the ecosystem works. Developers accustomed to language-specific dependency managers like npm or pip find it hard to grasp that for C++, the system's package manager (apt, dnf, brew) is the idiomatic way to handle dependencies.
Another perpetual gripe is that C++ is bad because it is overly complex and baroque, usually from C folks like Linus Torvalds[1]. It's pretty ironic, considering that the very compiler they use for C (GCC) is written in C++ and not in C.
[1]: Torvalds' comment on C++ <https://harmful.cat-v.org/software/c++/linus>
I once encountered this situation with C# code written by an undergraduate, rewrote it from scratch in C++, and got a better result. In hindsight, the result would have been even better in C, since I spent about 80% of my time fighting with C++ while trying to use every language feature possible. I had just graduated from college, and my code, while better, did a number of things wrong too (although far fewer, to my credit). I look back at it in hindsight and think less is more when it comes to language features.
I actually am currently maintaining that codebase at a health care startup (I left shortly after it was founded and rejoined not that long ago). I am incrementally rewriting it to use a C subset of C++ whenever I need to make a change to it. At some point, I expect to compile it as C and put C++ behind me.
Perhaps AI will get reliable enough to pore over these double-digit million LOC codebases and convert them flawlessly, but that looks like it's decades off at this point.
There is nothing you can do in C++ that you cannot do in C, due to Turing completeness. Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD's sys/tree.h, illumos' libuutil or glib for some easy to use balanced binary search trees in C.
Unfortunately, many languages allow `string + int`, which is quite problematic. Java is to blame for some of this.
And C++ is even worse, since literals are `const char[]`, which decays to a pointer.
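For instance (sketch):

#include <iostream>

int main() {
    // No concatenation happening: the literal decays to const char*, so + 1
    // is pointer arithmetic that silently drops the first character.
    std::cout << "Hello" + 1 << '\n';  // prints "ello"
}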
Languages okay by my standard but not yours include: Python, Ruby.
I've observed the existence in larger projects of "build engineers" whose sole job is to keep the project building on a regular cadence. These jobs predominantly seem to exist in C++ land.
These languages are not among the top contenders for new projects. They're a legacy problem, and are kept alive only by a slowly shrinking number of projects. It may take a while to literally drop to zero, but it's a path of exponential decay towards extinction.
C++ has strong arguments for sticking around as a legacy language for several too-big-to-rewrite C++ projects, but it's becoming less and less attractive for starting new projects.
C++ needs a better selling point than being a language that some old projects are stuck with. Without growth from new projects, it's only a matter of time until it's going to be eclipsed by other languages and relegated to shrinking niches.
Okay, but is that actually a good idea? Merely saying that something is idiomatic isn't a counterargument to an allegation that the ecosystem has converged on a bad idiom.
For software that's going to be distributed through that same package manager, yes, sure, that's the right way to handle dependencies. But if you're distributing your app in a format that makes the dependencies self-contained, or not distributing it at all (just running it on your own machines), then I don't see what you gain from letting your operating system decide which versions of your dependencies to use. Also this doesn't work if your distro doesn't happen to package the dependency you need. Seems better to minimize version skew and other problems by having the files that govern what versions of dependencies to use (the manifest and lockfile) checked into source control and versioned in lockstep with the application code.
Also, the GCC codebase didn't start incorporating C++ as an implementation language until eight years after Linus wrote that message.
Merely parsing C++ code requires a higher time complexity than parsing C code (linear time parsers cannot be used for C++), which is likely where part of the long compile times originate. I believe the parsing complexity is related to templates (and the headers are full of them), but there might be other parts that also contribute to it. Having to deal with far more abstractions is likely another part.
That said, I have been incrementally rewriting a C++ code base at a health care startup into a subset of C with the goal of replacing the C++ compiler with a C compiler. The closer the codebase comes to being C, the faster it builds.
On Windows and OSX it's even easier - if you're okay writing only for those platforms.
It's more difficult to learn, and it seems convoluted for people coming from Python and Javascript, but there are a lot of advantages to not having package management and build tooling tightly integrated with the language or compiler, too.
It's really not about being hard to grasp. Once you need a different dependency version than the system provides, you can't easily do it. (Apart from manual copies) Even if the library has the right soname version preventing conflicts (which you can do in C, but not really C++ interfaces), you still have multiple versions of headers to deal with. You're losing features by not having a real package manager.
While this is technically true, a more satisfying rationale is provided by Stroustrup here[0].
> Many common things have ways of being done in C that work equally well or even better. For example, you can use balanced binary search trees in C without type errors creating enormous error messages from types that are sentences if not paragraphs long. Just grab BSD’s sys/tree.h, illumnos’ libuutil or glib for some easy to use balanced binary search trees in C.
Constructs such as sys/tree.h[1] replicate the functionality of C++ classes and templates via the C macro processor. While they are quite useful, asserting that macro-based definitions provide the same type safety as C++ types is simply not true.
As to the whether macro use results in "creating enormous error messages" or not, that depends on the result of the textual substitution. I can assure you that I have seen reams of C compilation error messages due to invalid macro definitions and/or usage.
Before C++ added it we relied on undefined behavior that the compilers agreed to interpret in the necessary way if and only if you made the right incantations. I’ve seen bugs in the wild because developers got the incantations wrong. std::launder makes it explicit.
For the broader audience because I see a lot of code that gets this wrong, std::launder does not generate code. It is a compiler barrier that blocks constant folding optimizations of specific in-memory constants at the point of invocation. It tells the compiler that the constant it believes lives at a memory address has been modified by an external process. In a C++ context, these are typically restricted to variables labeled ‘const’.
This mostly only occurs in a way that confuses the compiler if you are doing direct I/O into the process address space. Unless you are a low-level systems developer it is unlikely to affect you.
Maybe GNU Emacs has a larger percentage remaining intact; at least it retains some architectural idiosyncrasies from the 1980s.
As for Fortran, modern Fortran is a pretty nice and rich language, very unlike the Fortran-77 I wrote at high school.
Find an IDE or extension which provides the nesting context on top of the editor. I think VS Code has it built in these days.
Either way, it's hard not to draw parallels between all the drama in US politics and the arguments about language choice sometimes; it feels like both sides lack respect for the other, and it makes things unnecessarily tense.
Your choice: do you have the most senior engineers spend time sporadically maintaining the build system, perhaps declaring fires to try to pay off tech debt, or hire someone full time, perhaps cheaper and with better expertise, dedicated to the task instead?
CI is an orthogonal problem but that too requires maintenance - do you maintain it ad-hoc or make it the official responsibility for someone to keep maintained and flexible for the team’s needs?
I think you think I’m saying the task is keeping the build green whereas I’m saying someone has to keep the system that’s keeping the build green going and functional.
There are many high-level C++ applications that would probably be best implemented in a modern GC language. We could skip the systems language discussion entirely because it is weird that we are using one.
There are also low-level applications like high-performance database kernels where the memory management models are so different that conventional memory safety assumptions don’t apply. Also, their performance is incredibly tightly coupled to the precision of their safety models. It is no accident that these have proven to be memory safe in practice; they would not be usable if they weren’t. A lot of new C++ usage is in these areas.
Rust to me slots in as a way to materially improve performance for applications that might otherwise be well-served by Java.
Shai-Hulud malware attack: Tinycolor and over 40 NPM packages compromised (stepsecurity.io)
935 points by jamesberthoty 16 hours ago | 730 comments
Maybe obstreperous dependency management ends up being the winning play in 2025 :)
For example:
#include <iostream>
#define SQL(statement) #statement
int main (int ac, const char *av[])
{
const char *select = SQL(select * from some_table);
std::cout << select << std::endl;
return 0;
}

The scenario you are describing does not make sense for the commonly accepted industry definition of "build system." It would make sense if, instead, the description was "application", "product", or "system."
Many software engineers use and interpret the phrase "build system" to be something akin to make[0] or similar solution used to produce executable artifacts from source code assets.
0 - https://man.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...
I don't know what IDE GP might be using, but mixed-language debuggers for native code are pretty simple as long as you just want to step over. Adding support for Fortran to, say, Visual Studio wouldn't be a huge undertaking. The mechanism to detect where to put the cursor when you step into a function is essentially the same as for C and C++. Look at the instruction pointer, search the known functions for an address that matches, and jump to the file and line.
It seems to me that the people/committees who built C++ just spent decades inventing new and creative ways for developers to shoot themselves in the foot. Like, why does the language need to offer a hundred different ways to accomplish each trivial task (and 98 of them are bad)?
Let no one accuse the committee of being unresponsive.
Because the point was not to make an array type that's safe by default, but rather to make an array type that behaves like an object, and can be returned, copied, etc. I mean, I agree with you, I think operator[]() should range-check by default, but you're simply misunderstanding the rationale for the class.
The same applies to many of the other baseless complaints I'm seeing here. Learn to use your tools, fools.
I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
What happens in practice is people end up writing their own insecure code instead of using someone else's insecure code. Of course, we can debate the tradeoffs of one or the other!
> Countless companies have cited how they improved their security or the amount of reported bugs or memory leaks by simply rewriting their C++ codebases in Rust. Now is that because of Rust? I’d argue in some small part, yes.
Just delete this. Even an hour's familiarity with Rust will give you a visceral understanding that "Rewrites of C++ codebases to Rust always yield more memory-safe results than before" is absolutely not because "any rewrite of an existing codebase is going to yield better results". If you don't have that, skip it, because it weakens the whole piece.
> Unless you are a low-level systems developer it is unlikely to affect you.
Making new data structure is common. Serializing classes into buffers is common.
It's true that Rust makes it much harder to leak memory compared to C and even C++, especially when writing idiomatic Rust -- if nothing else, simply because Rust forces the programmer to think more deeply about memory ownership.
But it's simply not the case that leaking memory in Rust requires unsafe blocks. There's a section in the Rust book explaining this in detail[1] ("memory leaks are memory safe in Rust").
[1] https://doc.rust-lang.org/book/ch15-06-reference-cycles.html
I feel like I always hear this argument for continuing to use C++.
I, on the other hand, want a language that doesn't make me feel like I'm walking a tightrope with every line of code I write. Not sure why people can't just admit the humans are not robots and will write incorrect code.
Some lvalue move copy constructor double rainbow, and you’re left wondering wtf
Arithmetic addition and sequence concatenation are very very different.
——
Scala got this right as well (except strings, Java holdover)
Concatenation is ++
It's hard enough to get programmers to care enough about how their code affects build times. Modules make it impossible for them to care, and will lead to horrible problems when building large projects.
std::launder is a tool for object instances that magically appear where other object instances previously existed but are not visible to the compiler. The typical case is some kind of DMA like direct I/O. The compiler can’t see this at compile time and therefore assumes it can’t happen. std::launder informs the compiler that some things it believes to be constant are no longer true and it needs to update its priors.
In any case, if you want safety and performance, use Rust.
This is in big part also because of the committee, which prefers a hundred-line template monster like `can_this_call_that_v` to a language feature, probably thinking that by not including something in the language standard and offloading it to the library they are doing a good job.
(NaN + 0.0) != 0.0 + NaN
Inf + -Inf != Inf
I suspect the algebraists would also be pissed if you took away their overloads for hypercomplex numbers and other exotic objects.
But a shared_ptr manages at least 3 things: control block lifetime, pointee lifetime, and the lifetime of the underlying storage. The weak pointer shares ownership of the control block but not the pointee. As I understand it, this is because the weak_ptr needs to modify the control block to try and lock the pointer, and to do so it must ensure the control block's lifetime has not ended. (It manages the control block's lifetime by maintaining a weak count in the control block, but that is not really why it shares ownership.)
As a bonus trivia, make_shared uses a single allocation for both the control block and the owned object's storage. In this case weak pointers share ownership of the allocation for the pointee in addition to the control block itself. This is viewed as an optimization except in the case where weak pointers may significantly outlive the pointee and you think the "leaked" memory is significant.
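A sketch of that caveat:

#include <memory>

int main() {
    auto sp = std::make_shared<int>(42);  // one allocation: control block + int together
    std::weak_ptr<int> wp = sp;
    sp.reset();  // the int is destroyed here...
    // ...but the combined allocation cannot be freed until wp is destroyed too,
    // since the int's storage and the control block are a single block.
}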
Quoting cppreference [0]:
If any std::weak_ptr references the control block created by std::make_shared after the lifetime of all shared owners ended, the memory occupied by T persists until all weak owners get destroyed as well, which may be undesirable if sizeof(T) is large.
[0] https://en.cppreference.com/w/cpp/memory/shared_ptr/make_sha...

Whereas the other way around, porting a C++ program to Rust without knowing Rust is challenging initially (to understand the borrow checker) but orders of magnitude easier to maintain.
Couple that with easily being able to `cargo add` dependencies and good language server features, and the developer experience in Rust blows C++ out of the water.
I will grant that change is hard for people. But when working on a team, Rust is such a productivity enhancer that it should be a no-brainer for anyone considering this decision.
You wish.
These jobs exist for companies with large monorepos in other languages too and/or when you have many projects.
Plenty of stuff to handle in big companies (directory ownership, Jenkins setup, in-company dependency management and release versioning, developer experience in general, etc.)
But just keeping track of all the features and the exotic ways they interact is a full time job. There are people who have dedicated entire lives to understanding even a tiny corner of the language, and they still don't manage.
Not worth the effort for me, there are other languages.
I think this is one of the worst (and most often repeated arguments) about C++. C and C++ are inherently unsafe in ways that trip up _all_ developers even the most seasoned ones, even when using ALL the modern C++ features designed to help make C++ somewhat safer.
Not any less than other parts of the language. If you capture by reference you need to mind your lifetimes. If you need something more dynamic then capture by copy and use pointers as needed. It's unfortunate that the developer who introduced that bug you mentioned didn't keep that in mind, but this is not a problem that lambdas introduced; it's been there all along. The exact same thing would've happened if they had stored a reference to a dynamic object in another dynamic object. If the latter lives longer than the former, you get a dangling reference.
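The classic shape of it, lambda or not (a minimal sketch):

#include <functional>

std::function<int()> make_counter() {
    int count = 0;
    return [&] { return ++count; };  // captures a local by reference
}   // count dies here; the escaped lambda now holds a dangling reference

// Calling make_counter()() is UB, exactly like returning a reference to a local.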
>In any case, if you want safety and performance, use Rust.
Personally, I prefer performance and stability. I've already had to fix broken dependencies multiple times after a new rustc version was released. Wake me up when the language is done evolving on a monthly basis.
you don't need to understand what an overloaded operator is doing any more than you have to understand the implementation of every function you call, recursively
The funniest thing happened when I needed to compile a C file as part of a little Rust project, and it turned out one of the _easiest_ ways I've experienced of compiling a tiny bit of C (on Windows) was to put it inside my Rust crate and have cargo do it via a C compiler crate.
I've been a software developer for nearly 2 decades at this point, contributed to several rewrites and oversaw several rewrites of legacy software.
From my experience I can assure you that rewriting a legacy codebase to modern C++ will yield a better and safer codebase overall.
There are multiple factors that contribute to this, one of which is what I refer to as "lessons learnt": if you have a stable team of developers maintaining a legacy codebase, they will know where the problematic areas are and will be able to avoid re-creating them in a rewrite.
An additional factor to consider is that a lot of legacy C++ codebases cannot be upgraded to use modern language features like smart pointers. The value smart pointers provide in a full rewrite cannot be overstated.
Then there's also a factor that is a bit anecdotal, which is that I find there are fewer C++ devs in general than there were 15 years ago, but those that stayed / survived are generally better and more experienced, with very few enthusiastic juniors coming in.
I'm sorry you did not enjoy the article, though, but thank you for giving it your time and reading it; that part I really appreciate.
Maybe you can do that. But you are probably working in a team. And inevitably someone else in your team thinks that operator overloading and template metaprogramming are beautiful things, and you have to work with their code. I speak from experience.
Can't agree there. Why wouldn't they be usable if they weren't memory safe?
Can you give me an example of this mythical "memory safe in practice" database?
Not Postgresql at least: https://www.postgresql.org/support/security/
You don't want std::launder for any of that. If you must create object instances from random preexisting bytes, you want std::bit_cast or https://en.cppreference.com/w/cpp/memory/start_lifetime_as.h...
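For the bytes-to-value case, a minimal C++20 sketch of the std::bit_cast route (the float-to-bits example is mine):

#include <bit>
#include <cstdint>

std::uint32_t bits_of(float f) {
    // Value-preserving byte copy; the two types must have the same size.
    return std::bit_cast<std::uint32_t>(f);
}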
However, if I may raise my counterpoint: I like to have a rule that C++ should be written mostly as if you were writing C, as much as possible, until you need some of its additional features and complexities.
Problem is when somebody on the team does not share this view though, that much is true :)
Modern database kernels are memory-bandwidth bound. Micro-managing the memory is a core mechanic as a consequence. It is difficult to micro-manage memory with extreme efficiency if it isn’t implicitly safe. Companies routinely run formal model checkers like TLA+ on these implementations. It isn’t a rando spaffing C++ code.
I’ve used PostgreSQL a lot but no one thinks of it as highly optimized.
> You're gonna be dealing with issues involving "peaky" memory usage e.g. erroneously persistent references to objects
I use Rust in a company in a team who made the C++ -> Rust switch for many system services we provide on our embedded devices. I use Rust daily. I am aware that leaking is actually safe.
How do you define “need” for extra features? C and C++ can fundamentally both do the same thing so if you’re going to write C style C++, why not just write C and avoid all of C++’s foot guns?
The best attitude in programmers (regardless of the language) is the awareness that "my code probably contains embarrassing bugs, I just haven't found them yet". Act accordingly.
There are of course lots of valid reasons to continue to use C/C++ on projects where it is already used, and there are a lot of such projects. Rewrites are disruptive, time consuming, expensive, and risky.
It is true that there are ways in C++ to mitigate some of these issues. Mostly this boils down to using tools, libraries, and avoiding some of the more dark corners of the language and standard library. And if you have a large legacy code base, adopting some of these practices is prudent.
However, a lot of this stuff boils down to discipline and skill. You need to know what to use and do, and why. And then you need to be disciplined enough to stick with that. And hope that everybody around you is equally skilled and disciplined.
However, for new projects, there usually are valid alternatives. Even performance and memory are not the arguments they used to be. Rust seems to be building a decent reputation for combining compile time safety with performance and robustness; often beating C/C++ implementations of things where Rust is used to provide a drop in replacement. Given that, I can see why major companies are reluctant to take on new C/C++ projects. I don't think there are many (or any) upsides to the well documented downsides.
* The author is confusing memory safety with other kinds of safety. This is evident from the fact that they say you can write unsafe code in GC languages like Python and JavaScript. unsafe != memory unsafe. Rust only gives you memory safety; it won't magically fix all your bugs.
* The slippery slope trick. I've seen this so often: people say that because Rust has the unsafe keyword, it's the same as C/C++. The reason it's not is that in C/C++ you don't have any idea where to look for undefined behaviour. In Rust, at least the code points you to the unsafe blocks. The difference is one of degree, which for practical purposes makes a huge difference.
As for why not just go for C: you can write C++ fully as if it were C; you cannot ever turn C into C++.
(Note: I'm not saying it is deeply flawed, just that this particular way of using it suggests so).
I work on large C++ projects with 1-2 dozen third party C and C++ library dependencies, and they're all built from source (git submodules) as part of one CMake build.
It's not easy but it is fairly simple.
Rust is a systems language but it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models, among many other well-known quirks as a systems language. Other verifiable safety models exist that don’t have these issues. C++, for better or worse, can deal with this stuff in a straightforward way.
If by scientific ecosystems you mean people making prototypes for papers, then yes. But in commercial, industrial setting there is still no alternative for many of Matlab toolboxes, and as for Julia, as cool as it is, you need to be careful to distinguish between real usage and vetted marketing materials created by JuliaSim.
These are mostly inconsequential when using code other people write. It is trivial to mix C and C++ object files, and where the differences (in headers) do matter, they can be ifdefed away.
> void * implicit casting in C just works, but in C++ it must be an explicit cast (which is kind of funny considering all the confusing implicit behavior in C++).
This makes sense because void* -> T* is a downcast. I find the C behavior worse.
> enums and conversion between integers is very strict in C++.
As it should be, but unscoped enums are promoted to integers the same way they are in C.
> `char * message = "Hello"` is valid C but not C++
Code smell anyway; you can and should use char[] in both languages.
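A quick sketch of the difference, for the record:

char msg[] = "Hello";      // array initialized with a copy; safe to modify
const char *p = "Hello";   // fine in both C and C++
// char *q = "Hello";      // rejected by C++; compiles in C, but writing
                           // through q is undefined behavior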
You didn't mention the difference in inline semantics, which IMO has more impact than what you cited.
This is a strength not a weakness because it allows you to choose your build system independently of the language. It also means that you get build systems that can support compiling complex projects using multiple programming languages.
> Understand that C++ compilers by default have no idea where most things are, you have to tell them exactly where to search
This is a strength not a weakness because it allows you to organize your dependencies and their locations on your computer however you want and are not bound by whatever your language designer wants.
> Use an external tool that's not your build system or compiler to actually inform the compiler what those search paths are
This is a strength not a weakness because you are not bound to a particular way of how this should work.
> Oh also understand the compiler doesn't actually output what you want, you also need a linker
This is a strength not a weakness because now you can link together parts written in different programming languages which allows you to reuse good code instead of reinventing the universe.
> That linker also doesn't know where to find things, so you need the external tool to use it
This is a strength not a weakness for the reasons already mentioned above.
> Oh and you still have to use a package manager to install those dependencies to work with pkg-config, and it will install them globally. If you want to use it in different projects you better hope you're ok with them all sharing the same version.
This is a strength not a weakness because you can have fully offline builds including ways to distribute dependencies to air-gapped systems and are not reliant on one specific online service to do your job.
Also, all of this is a non-issue if you use a half-modern build system. Conflating the language, compiler, build system and package manager is one of the main reasons why I stay away from "modern" programming languages. You are basically arguing against the Unix philosophy of having different tools that work together, with each tool focusing on one specific task. This allows different tools to evolve independently and for alternatives to exist, rather than a single tool that has to fit everyone.
In a complete tangent I think that "smart guns" that only let you shoot bullseye targets, animals and designated un-persons are not far off.
But C++ doesn't have that problem. Sure, a separate operator would have been cleaner (but | is already used for bitwise or) but I have never seen any bug that resulted from it and have never felt it to be an issue when writing code myself.
You really should not have global data. Modules are the way to go and have been since Fortran90.
> CMake - I'm amazed it works at all. It looks great for simple toy projects and has the power to handle larger projects, but it seems to quickly become an ungodly mess of strange comments and rules that aren't spelled out - and you have no way of stepping into it and seeing what it's doing. I try to touch it as infrequently as possible. It feels like C macros, in a bad way.
I like how you wrote my feelings so accurately :D
It's like a well-equipped workshop: just because you have access to a chainsaw but do not need to use it to build a table does not mean it's a bad workshop.
C is very barebones; languages like C++, C#, Rust and so on are not. Just because you don't need all of their features does not make those languages inherently bad.
Great question or in this case counter-counter point though.
The first two are already used for bitwise and logical or and the third isn't available in ASCII so I still think overloading + was a reasonable choice and doesn't cause any actual problems IME.
Massive cope, there's no excuse for the lack of decent infrastructure. I mean, the C++ committee for years said explicitly that they don't care about infrastructure and build systems, so it's not really surprising.
C and C++ have an answer to the dependency problem, you just have to learn how to do it. It's not rocket science, but you have to learn something. Modern languages remove this barrier, so that people who don't want to learn can still produce stuff. Good for them.
Most templates are much easier to read in comparison.
That's exactly my point: if you think that calling `cmake --build build` is "magic", then maybe you don't have the right profile to use C++ in the first place, because you will have to learn some harder concepts there (like... pointers).
To be honest, I find it hard to understand how a software developer can write code and still consider that command line instructions are "magic incantations". To me it's like saying that calling a function like `println("Some text, {}, {}", some_parameter, some_other_parameter)` is a "magic incantation". Calling a function with parameters counts as "the basics" to me.
You get to choose between 25 flint-bladed axes, some of which are coated in modern plastic, when you really want a chainsaw.
Exactly: it makes many things nicer to use than the language package managers, e.g. when maintaining a Linux distribution.
But people generally don't know how one maintains a Linux distribution, so they can't really see the use-case, I guess.
But in industrial settings where multiple groups share and change libs, something like debpkg may be used. You add caching and you can go quite deep quickly, especially after bolting on CI/CD.
One must cop to the fact that a go build or zig build is just fundamentally better.
Yes! I believe this is powerful: if CMake is used properly, it does not have to know where the dependencies come from, it will just "find" them. So they could be installed on the system, or fetched by a package manager like vcpkg or conan, or just built and installed manually somewhere.
> Cross-compiling with C++ is hard.
Just wanted to mention the dockcross project here. I find it very useful (you just build in a docker container that has the toolchain setup for cross-compilation) and it "just works".
To me skill and effort is misplaced and wasted when it's spent on manually checking invariants that a compiler could check better automatically, or implementing clever workarounds for language warts that no longer provide any value.
Removal of busywork and pointless obstacles won't make smart programmers dumb and lazy. It allows smart programmers to use their brainpower on bigger more ambitious problems.
Executables with debug symbols contain the names of the source files they were built from. Your debugger understands the debug symbols, or you can use tools like `addr2line` to find the source file and line number of an instruction in an executable.
The debugger does not need to understand the source language. It's possible to cross language boundaries in just vanilla GDB, for example.
Which is a recurring theme in C++: the default behavior is unsafe (in order to be faster), and there is a method to do the safe thing. Which is exactly the opposite of what it should be.
You are allowed to use a lot of `unsafe` if you really need to. How much `unsafe` do you use in C++?
> it is uncomfortable with core systems-y things like DMA because it breaks lifetime and ownership models,
Sure, it means it can't prove memory safety. But that just takes you back to parity with C++. It feels bad in Rust because normally you can do way better than that, but this isn't an argument for C++.
Most arguments in the article boil down to "c++ has the reputation of X, which is partly true, but you can avoid problems with discipline". Amusingly, this also applies to assembly. This is _exactly_ why I don't want to code in c++ anymore: I don't want the constant cognitive load to remember not to shoot myself in the foot, and I don't want to spend time debugging silly issues when I screw up. I don't want the outdated tooling, compilation model and such.
Incidentally, I've also been coding in Rust for 5 years or so, and I'm always amazed that code that compiles actually works as intended and I can spend time on things that matter.
Going back to c++ makes me feel like a caveman coder, every single time.
That said, I still think it's a rather weak argument, even if we do accept that the rewrite will do most of the bug removal, since we aren't stupid and will move to smart pointers, more STL usage and for-each loops. "Most" is not "all".
AUR stands for "Arch User Repository". It's not the official system repository.
> I'm getting the impression that C/C++ cultists love it whenever there's an npm exploit
I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
My problem with language package managers is that people love them precisely because they don't want to learn how to deal with dependencies. Which is actually the problem: if I pull a random Rust library, it will itself pull many transitive dependencies. I recently compared two implementations of the same standard (C++ vs Rust): in C++ it had 8 dependencies (I can audit that myself). In Rust... it had 260 of them. 260! I won't even read through all those names.
"It's too hard to add a dependency in C++" is, in my opinion, missing the point. In C++, you have to actually deal with the dependency. You know it exists, you have seen it at least once in your life. The fact that you can't easily pull 260 dependencies you have never heard about is a feature, not a bug.
I would be totally fine with great tooling like cargo, if it looked like the problem of random third-party dependencies was under control. But it is not. Not remotely.
> Do these cultists just not use dependencies?
I choose my dependencies carefully. If I need a couple functions from an open source dependency I don't know, I can often just pull those two functions and maintain them myself (instead of pulling the dependency and its 10 dependencies).
> Are they just [probably inexpertly] reinventing every wheel?
I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Would it make me more of an expert if I was pulling, running and distributing random code from the Internet without having the smallest clue about who wrote it?
Do I need to complain about how hard CMake is and compare a command line to a "magic incantation" to be considered an expert?
Granted this is probably a novice-level problem.
I agree with the discipline aspect. C++ has a lot going against it. But despite everything it will continue to be mainstream for a long time, and by the looks of it not in the way of COBOL but more like C.
It's important to remember Rust's borrow checker was computationally infeasible 15 years ago. C & C++ are much older than that, and they come from an era where variable name length affected compilation time.
It's easy to publicly shame people who do hard things for a long time in the light of newer tools. However, many of the people who like these languages have been using them for longer than the languages we champion today have existed.
I personally like Go in these days for its stupid simplicity, but when I'm going to do something serious, I'll always use C++. You can fight me, but never pry C++ from my cold, dead hands.
For the record, I don't like C & C++ because they are hard. I like them because they provide a more transparent window into the processor, which is a glorified, hardware-implemented PDP-11 emulator.
Last, we shall not forget that all processors are C VMs, anyway.
Not for temporaries initialized from a string constant. That would create a new array on the stack which is rarely what you want.
And for globals this would preclude the data backing your string from being shared with other instances of the same string (suffix) unless you use non-standard compiler options, which is again undesirable.
In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix) but that has problems with C interoperability.
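For illustration, a minimal sketch of the sv route (C++17):

#include <string_view>

using namespace std::string_view_literals;

constexpr auto greeting = "Hello"sv;  // view over the literal's static storage
static_assert(greeting.size() == 5);  // no copy, no allocation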
You need to know the operator overload semantics for a particular use case? It is not exactly hidden lore, there are even man pages (libstdc++-doc, man 3 std::ostream) or just use std::println.
You are stuck instantiating std::vector? Then you will be stuck in any language anyway.
C++ giving you the ability to create your own containers that equal the standard library is a bonus, it doesn't make those containers harder to use.
Perhaps if C++ had a decent standardized package manager, the Python package system could reuse it? ;p
I use C++ daily, and it’s an overcomplicated language. The really good thing about Rust or Zig is that (mostly) everything is explicit, and that’s a big win in my opinion.
In defense of C++, I can only say that lots of interesting projects in the world are written in it.
I think you're setting the bar a little too high. Rust's borrow-checking semantics draw on much earlier research (for example, Cyclone had a form of region-checking in 2006); and Turbo Pascal was churning through 127-character identifiers on 8088s in 1983, one year before C++ stream I/O was designed.
EDIT: changed Cyclone's "2002" to "2006".
This idea is some 10yrs behind. And no, thinking that C is "closer to the processor" today is incorrect
It makes you think it is close which in some sense is even worse
> You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features of the language.
It is incredibly funny how this argument has been used for literally decades, but in reality you don't see simple, readable, or maintainable code; instead, most of the C++ codebases out there are an absolute mess. This argument reminds me of something...
We have autoconf/automake checking if you're on a big endian PDP8 or if your compiler has support for cutting edge features like "bool"
About string literals, the C23 standard states:
It is unspecified whether these arrays are distinct provided their elements have the appropriate values. If the program attempts to modify such an array, the behavior is undefined.
therefore `char *foo = "bar";` is very bad practice (compared to using const char). I assumed you wanted a mutable array of char initializable from a string literal, which is provided by std::string and char[] (depending on use case).
> In modern C++ you probably want to convert to a string_view asap (ideally using the sv literal suffix)
I know as much
That is simply not true. You can write a lot of C++ code without even touching move stuff. Hell, we've been fine without move semantics for the last 30 years :P
> Overloaded operators like operator*() and operator<<() are widely used in the standard library so you're forced to understand what craziness they're doing under the hood
Partially true. operator*() is used through the standard library a lot, because it nicely wraps pointer semantics. Still, you don't have to know about implementation details, as they depend on how the standard library implements the underlying containers.
AFAIK operator<<() is mainly (ab)used by streams. And you can freely skip that part; many C++ developers find them unnecessarily slow and complex.
> Basic standard library datatypes like std::vector use templates, so you're debugging template instantiation issues whether you write your own templated code or not.
As long as you keep things simple, errors are going to be simple. The problem with "modern C++" is that people overuse these new features without fully comprehending their pros and cons, simply because they look cool.
I take the "use it if we can/want/is forced to" and "improve it if you want and can" approaches. Or else, leave it be.
return std::launder(static_cast<T*>(std::memmove(p, p, sizeof(T))));
trick until they properly implement it. For MMIO, reinterpret_cast from integer is most likely fine.

The main reason I don't want to use C/C++ is the header files. You have to write everything in a header file and then in an implementation file. Every time you want to change a function you need to do this at least twice. And you don't even get fast compilation speed compared to some languages, because your headers will #include some library that is immense, and then every header that includes that header will have transitive header dependencies; to solve this you use precompiled headers, which you might have to set up manually depending on what IDE you are using.
It's all too painful.
Like, sure, you don’t have to understand cout’s implementation of operator <<, but you have to know a) that it’s overloadable in the first place, b) that overloads can be arbitrary functions on arbitrary types (surprising if coming from languages that support more limited operator overloading), and c) probably how to google/go-to-documentation on operators for a type to see what bit-shifting a string into stdio does.
That’s … a lot more to learn than, say, printf.
Sure, << for stream output is pretty unintuitive and silly. But what about pipes for function chaining/composition (many languages overload thus), or overriding call to do e.g. HTML element wrapping, or overriding * for matrices multiplied by simple ints/vectors?
Reasonable minds can and do differ about where the line is in many of those cases. And because of that variability of interpretation, we get extremely hard-to-understand code. As much as I have seen value in overloading at times, I'm forced to agree that it should probably not exist at all.
Also depending on how AI assisted tooling evolves, I think it is not only C and C++ that will become a niche.
I already see this happening with the amount of low-code/no-code augmented with AI workflows, that are currently trending on SaaS products.
Akshually[1] ...
> And no, thinking that C is "closer to the processor" today is incorrect
THIS thinking is about 5 years out of date.
Sure, this thinking you exhibit gained prominence and got endlessly repeated by every critic of C who once spent a summer doing a C project in undergrad, but it's been more than 5 years that this opinion was essentially nullified by
Okay, if C is "not close to the processor", what's closer?
Assembler? After all, if everything else is "just as close as C, but not closer", then just what kind of spectrum are you measuring on, that has a lower bound which none of the data gets close to?

You're repeating something that was fashionable years ago.
===========
[1] There's always one. Today, I am that one :-)
You can't sensibly talk about C and C++ as a single language. One is the most simple language there is, most of the rules to which can be held in the head of a single person while reading code.
The other is one of the most complex programming languages to ever have existed, in which even world-renowned experts lose their facility for the language after a short break from it.
#include <iostream>
#include <bitset>

class Bits {
    unsigned value;
public:
    explicit Bits(unsigned v) : value(v) {}

    // Shift operator ^ : positive = left, negative = right
    Bits operator^(int shift) const {
        if (shift > 0) {
            return Bits(value << shift);
        } else if (shift < 0) {
            return Bits(value >> -shift);
        } else {
            return *this; // no shift
        }
    }

    friend std::ostream& operator<<(std::ostream& os, const Bits& b) {
        return os << std::bitset<8>(b.value); // print 8 bits
    }
};

int main() {
    Bits x(0b00001111);
    std::cout << "x = " << x << "\n";
    std::cout << "x ^ 2 = " << (x ^ 2) << " (shift left 2)\n";
    std::cout << "x ^ -2 = " << (x ^ -2) << " (shift right 2)\n";
}

Most people don't write C, nor use the C compiler, even when writing C. You use C++ and the C++ compiler. For (nearly) all intents and purposes, C++ has subsumed and replaced C. Most of the time when someone says something is "written in C" it actually means it's C++ without the ++ features. It's still C++ on the C++ compiler.
Actual uses of actual C are pretty esoteric and rare in the modern era. Everything else is varying degrees of C++.
These assignment semantics work how real life works. If I give you this Rubik's Cube now you have the Rubik's Cube and I do not have it any more. This unlocks important optimisations for non-trivial objects which have associated resources, if I can give you a Rubik's Cube then we don't need to clone mine, give you the clone and then destroy my original which is potentially much more work.
C++ 98 didn't have such semantics, and it had this property called RAII which means when a local variable leaves scope we destroy any values in that variable. So if I have a block of code which makes a local Rubik's Cube and then the block ends the Rubik's Cube is destroyed, I wrote no code to do that it just happens.
Thus for compatibility, C++ got this terrible "C++ move" where when I give you a Rubik's Cube, I also make a new hollow Rubik's Cube which exists just to say "I'm not really a Rubik's Cube, sorry, that's gone" and this way, when the local variable goes out of scope the destruction code says "Oh, it's not really a Rubik's Cube, no need to do more work".
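You can see that hollow state directly with std::unique_ptr, where the standard does guarantee the moved-from pointer is left null:

#include <cassert>
#include <memory>

int main() {
    auto a = std::make_unique<int>(42);
    auto b = std::move(a);  // hand the Rubik's Cube to b
    assert(b && *b == 42);
    assert(a == nullptr);   // a is the hollow not-really-a-Cube left behind
}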
Yes, there is a whole book about initialization in C++: https://www.cppstories.com/2023/init-story-print/
For trivial objects, moving is not an improvement, the CPU can do less work if we just copy the object, and it may be easier to write code which doesn't act as though they were moved when in fact they were not - this is obviously true for say an integer, and hopefully you can see it will work out better for say an IPv6 address, but it's often better for even larger objects in some cases. Rust has a Copy marker trait to say "No, we don't need to move this type".
LLVM IR is closer. Still higher level than Assembly
The problem is thus:
char a,b,c; c = a+b;
Could not be more different between x86 and ARM
MSVC [1] and GCC [2] also have built-in static analyzers available via cl /analyze or g++ -fanalyzer these days.
There is also cppcheck [3], include-what-you-use [4] and a whole bunch more.
If you can, run all of them on your code.
[0] https://clang-analyzer.llvm.org/
[1] https://learn.microsoft.com/en-us/cpp/build/reference/analyz...
[2] https://gcc.gnu.org/onlinedocs/gcc/Static-Analyzer-Options.h...
[3] https://cppcheck.sourceforge.io/
[4] https://github.com/include-what-you-use/include-what-you-use
So your reasoning for repeating the once-fashionable statement is because "an intermediate representation that no human codes in is closer than the source code"?
(Broad, general, YMMV statement): The general C++ arc for an embedded developer looks like this:
1.) discover exceptions are way too expensive in embedded. So is RTTI.
2.) So you turn them off and get a gimped set of C++ with no STL.
3.) Then you just go back to C.
I dunno; the flaw is not really comparable, is it? The skill and discipline required to write C bug-free is orders of magnitude less than the skill and discipline required to write C++.
Unless you read GGP's post to mean a flaw different to "skill and discipline required".
Yes.
> Most of the time when someone says something is "written in C" it actually means it's C++ without the ++ features.
Those "someone's" have not written a significant amount of C. Maybe they wrote a significant amount of C++.
The cognitive load when dealing with C++ code is in no way comparable to the cognitive load required when dealing with C code, outside of code-golfing exercises which is as unidiomatic as can be for both languages.
Let's say I have matrices, and I've overloaded * for multiplying a matrix by a matrix, and a matrix by a vector, and a matrix by a number. And now I write
a = b * c;
If I'm trying to understand this as one of a series of steps of linear algebra that I'm trying to make sure are right, that is far more comprehensible than

a = mat_mult(b, c);

because it uses math notation, and that's closer to the way linear algebra is written.

But if I take the exact same line and try to understand exactly which functions get called, because I'm worried about numerical stability or performance or something, then the first approach hides the details and the second one is easier to understand.
This is always the way it goes with abstraction. Abstraction hides the details, so we can think at a higher level. And that's good, when you're trying to think at the higher level. When you're not, then abstraction just hides what you're really trying to understand.
My background is games and I've been heavily in Unreal lately. The language feels modern enough with smart pointers and such. Their standard library equivalent is solid.
The macros still feel very hacky and, ironically, Unreal actually does its own prepass over the source to parse for certain macros.... kind of shows that it's not a good language feature if that's needed. BUT the syntax used fits right into the language, so it feels idiomatic enough.
Templates are as powerful as they are just a mess to read.
Does anything come close to the speed and flexibility of the language? I think the biggest reason C++ sticks around is momentum but beyond that nothing _really_ replaces the messy but performance critical nature of it.
In Rust, if you have unsafe code, the onus is on you to ensure its soundness at the module level. And yes, that's harder than writing the corresponding C++, but it makes the safe code using that abstraction a lot easier to reason about. And if you don't have unsafe code (which is possible for a lot of problems), you won't need to worry about UB at all. Imagine never needing to keep all the object lifetimes in your head because the compiler does it for you.
IIRC borrow checking usually doesn't consume that much compilation time for most crates - maybe a few percent or thereabouts. Monomorphization can be significantly more expensive and that's been much more widely used for much longer.
Okay... and? The point being made was that the issue of package managers remains: do you really think users are auditing all those "lib<slam-head-on-keyboard>" dependencies that they're forced to install? Whether they install those dependencies from the official repository or from homebrew, or nix, or AUR, or whatever, is immaterial, the developer washed their hands of this, instead leaving it to the user who in all likelihood knows significantly less than the developers to be able to make an informed decision, so they YOLO it. Third-party repositories would not exist if they had no utility. But this is why Debian is so revered: they understand this dynamic and so maintain repositories that can be trusted. Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
> "It's too hard to add a dependency in C++"
It's not hard to add a dependency. I actually prefer the dependencies-as-git-submodules approach to package managers: it's explicit and you know what you're getting and from where. But using those dependencies is a different story altogether. Don't you just love it when one or more of your dependencies has a completely different build system to the others? So now you have to start building dependencies independently, whose artefacts are in different places, etc., etc. This shouldn't be a problem.
> I, for one, do not love it when there is an exploit in a language package manager.
Oh please, I believe that about as much as ambulance chasers saying they don't love medical emergencies. Otherwise, why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm as if anyone is actually asking for that, instead of, say, what Zig or Go has? It's because of the cultism, and every npm exploit further entrenches it.
Maybe we're thinking of different things, but I don't think C++ has owning references, modern or not? There's regular references (&) which are definitely not owning, and owning pointers (unique_ptr and friends), but neither of those quite match Rust's &.
std::vector needs launder.
The core of the borrow checker was being formulated in 2012[1], which is 13 years ago. No infeasibility then. And it's based on ideas that are much older, going back to the 90s.
Plus, you are vastly overestimating the expense of borrow checking, it is very fast, and not the reason for Rust's compile times being slow. You absolutely could have done borrow checking much earlier, even with less computing power available.
1: https://smallcultfollowing.com/babysteps/blog/2012/11/18/ima...
> After all if everything else is "Just as close as C, but not closer", then just what kind of spectrum are you measuring on
The claim about C being "close to the machine" means different things to different people. Some people literally believe that C maps directly to the machine, when it does not. This is just a factual inaccuracy. For the people that believe that there's a spectrum, it's often implied that C is uniquely close to the machine in ways that other languages are not. The pushback here is that C is not uniquely so. "just as close, but not closer" is about that uniqueness statement, and it doesn't mean that the spectrum isn't there.
Problem 2 happens only when doing SLAB allocations - say, 1% of the time when using a C++ class. (Might be more or less, depending on what problem space you're in.)
Problem 3 happens only if you are also declaring your allocated stuff const - say, maybe 20% of the time?
So, while not perfect, each solution solves most of the problem for most of the people. Complaining about std::launder is complaining that solution 2 wasn't perfect; it's not in any way an argument that solution 1 wasn't massively better than problem 1.
Here's the document I believe your parent is referring to: https://docs.google.com/document/d/e/2PACX-1vSt2VB1zQAJ6JDMa...
The claim in the article:
> Yes, C++ can be made safer; in fact, it can even be made memory safe.
The claim from this document:
> We attempted to represent ownership and borrowing through the C++ type system, however the language does not lend itself to this. Thus memory safety in C++ would need to be achieved through runtime checks.
It doesn't use "owning reference" anywhere.
And that's the problem. In other languages that have a Maybe type, it's a compile time check. If your code is not handling the "empty" case, it will simply fail to compile.
I honestly don't see any value in std::optional compared to the behavior pre-std::optional. What does it bring to the table for pointers, for example?
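To make the disagreement concrete, a small sketch (my example): the type says "maybe empty", but nothing forces the check, unlike a real Maybe type.

#include <optional>

std::optional<int> lookup(bool found) {
    if (found) return 42;
    return std::nullopt;
}

int main() {
    auto v = lookup(false);
    return *v; // compiles fine; undefined behavior at runtime, no enforced check
}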
> Matter of personal taste, I guess, C++ is still one of the most widely used programming languages with a huge ecosystem of libraries and tools. It’s used in a wide range of applications, from game development to high-performance computing to embedded systems. Many of the most popular and widely used software applications in the world are written in C++.
> I don’t think C++ is outdated by any stretch of the imagination;
The second paragraph in this quote has zero connection to the first and the third paragraphs.
> C++ has a large ecosystem built over the span of 40 years or so, with a lot of different libraries and tools available.
Yes, exactly: it's outdated.
> the simple rule of thumb is to use the standard library wherever possible; it’s well-maintained and has a lot of useful features.
That's got to be the funniest joke in this whole article. First of all, no, its API is not really that well thought out and it took several language standards to finally make smart pointers and tuples truly convenient to use; and which implementation of "the standard library" do you even mean, by the way? There are several implementations of it, you know, of very varying quality.
And then there is an argument against using Boost in this article which, hilariously, can be just as well applied to C++ itself. Don't use it unless you have to! There are languages that are more modern and easier to use!
> Fact is, if you wanna get into something like systems programming or game development then starting with Python or JavaScript won’t really help you much. You will eventually need to learn C or C++.
The key word is eventually. You don't start learning to e.g. play guitar on a cheap, half-broken piece of wood because you'll spend more time on fighting the instrument and fiddling with it than actually learning how to play it.
> New standards (C++20, C++23) keep modernizing the language, ensuring it stays competitive with younger alternatives. If you peel back the layers of most large-scale systems we rely on daily, you’ll almost always find C++ humming away under the hood.
Notice the dishonesty of placing these two sentences together: it seems to imply (with plausible deniability) that those "large-scale systems we rely on daily" are written in "modern" C++. No, they are absolutely not.
C++ template metaprogramming still remains extremely powerful. Projects like CUTLASS, etc could not be written to give best performance in as ergonomic a way in Rust.
There is a reason why the ML infra community mostly goes with Python-like DSL's, or template metaprogramming frameworks.
Last I checked there are no alternatives at scale for this.
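A toy sketch (names and shapes are mine, nothing like real CUTLASS) of why this style matters: tile sizes as template parameters give the optimizer compile-time loop bounds to unroll and vectorize.

template <int TileM, int TileN>
void scale_tile(float (&tile)[TileM][TileN], float s) {
    // Bounds are compile-time constants, so these loops can be fully
    // unrolled; a runtime-sized version could not be.
    for (int i = 0; i < TileM; ++i)
        for (int j = 0; j < TileN; ++j)
            tile[i][j] *= s;
}

// Each distinct <TileM, TileN> pair is specialized at compile time:
//     float buf[4][8];
//     scale_tile(buf, 2.0f);  // deduces TileM=4, TileN=8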
Legacy code, just have to deal with it. This code predates F90.
It definitely has a lot of flaws, but in practice most of them have solutions or workarounds, and on a day-to-day basis most C++ programmers aren't struggling with this stuff.
Dynamic linking shifts responsibility for the linked libraries over to the user and their OS, and if it's an Arch user using AUR they are likely very interested in assuming that risk for themselves. 99.9% of Linux users are using Debian or Ubuntu with apt for all these libs, and those maintainers do pay a lot of attention to libraries.
I used the [[gnu::cleanup]] attribute macro (as in N3434) since it was simple and worked with the current default GCC on CE, but based on TS 25755 the implementation of defer and its optimisation should be almost trivial, and some compilers have already added it. Oh, and the polyfills don't support the braceless `defer free(p);` syntax for simple defer statements, so there goes the full compatibility story...
While there are existing areas where C diverged, as other features such as case ranges (N3370, and maybe N3601) are added that C++ does not have parity with, C++ will continue to drift further away from the "superset of C" claim some of the 'adherents' have clung to for so long. Of course, C has adopted features and syntax from C++ (C2y finally getting if-declarations via N3356 comes to mind), and some features are still likely to get C++ versions (labelled breaks come to mind, via N3355, and maybe N3474 or N3377, with C++ following via P3568), so the (in)compatibility story is simply going to continue getting more nuanced and complicated over time, and we should probably get this illusion of compatibility out of our collective culture sooner rather than later.
Maybe they did, 5 years (or more) ago when that essay came out. it was wrong even then, but repeating it is even more wrong.
> This is just a factual inaccuracy.
No. It's what we call A Strawman Argument, because no one in this thread claimed that C was uniquely close to the hardware.
Jumping in to destroy the argument when no one is making it is almost textbook example of strawmanning.
Well, you're talking about languages that don't have standards, they have a reference implementation.
IOW, no language has standards for processor intrinsics; they all have implementations that support intrinsics.
C++ can be unsafe even when you know what you're doing, since it is quite easy to get something wrong by accident: an index off-by-one can mean out-of-bounds access to an array, which can mean anything really. So, it's not that "all languages" are like that. That seems like a "moving the goalpost" type of logical fallacy.
And I say that as a person who writes C++ for fun and profit (well, salary) and has wasted many an hour on earning my StackOverflow C++ gold badge :-)
The post also includes other arguments, which I find weak, regarding C++ being dated. It has changed and has seen many improvements, but those have been almost exclusively _additions_, not removals or changes. Which means that the rickety old stuff is basically all still there. And then there is the ABI stability issue, which is not exactly about being old, but is about sticking to what's older and never (?) changing it.
Bottom line for me: C++ is useful and flexible but has many warts and some pitfalls. I'd still use it over Rust for just about anything (bias towards my experience here), but if a language came along with similar design goals to C++; a robust public standardization and implementation community; less or none of the poor design choices of C; nicer built-in constructs as opposed to having to pull yourself up by the bootstraps using the standard library; etc - I would consider using that. (And no, that language is not D.)
Whenever someone asks you about std::move, or the various value categories, I have a ready-made answer for that: https://stackoverflow.com/a/27026280/1593077
but it's true that when a user first sees `std::move(x)` with no argument saying _where_ to move it to, they either get frustrated or understand they have to get philosophical :-)
And my point in providing a concrete example was to show where a decision was made to prioritize unsafe behavior in a known problematic area, when they could just as well have made a half dozen other decisions which would have solved a long-standing problem rather than just perpetuating it with some new syntactic sugar.
The whole point of pkg-config is to tell the compiler where those packages are.
I mean yeah, that's the point of having a tool like that. It's fine that the compiler doesn't know that, because its job is turning source into executables, not being the OS glue.
I'm not sure "having a linker" is a weakness? What are we talking about?
It is true that you need to use the package manager to install the dependencies. This is more effort than having a package manager download them for you automatically, but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages. It's a bit of a philosophical argument as to what is the better solution.
The argument that it is too hard for students seems a bit overblown. The instructions for getting this up and running are:
1. apt install build-essential
2. extract the example files (Makefile and c file), cd into the directory
3. type "make"
4. run your program with ./programname
I'd argue that is fewer steps than setting up almost any IDE. The Makefile is 6 lines and is easy to adapt to any similar size project. The only major weakness is headers, in which case you can do something like:

HEADERS=headerA.h headerB.h headerC.h

file1.o: $(HEADERS)
file2.o: $(HEADERS)
file3.o: $(HEADERS)
If you change any header it will trigger a full rebuild, but on C projects this is fine for a long time. It's just annoying that you have to create a new entry for every C file you add to the project instead of being able to tell make to add that to every object automatically. I suspect there is a very arcane way to do this, but I try to keep it as simple as possible.

The thing is, languages like Rust only make this easier within their controlled "garden". But with C and C++, you build in the "world outside the garden" to begin with, where you are not guaranteed that everyone has prepared everything for you. So it's harder, and you may need third-party tools or some elbow grease, or both. The upside is that when rustaceans or go-phers and such wander outside their respective gardens, most of them are completely lost and have no idea what to do; but C and C++ people are kinda-sorta at home there already.
Also: If you are on platforms which support, say, CMake - then the multi-platform C++ project is not even that painful.
That said, making std::array::operator[]() range-checking would have been worse, because it would have been the only overload that did that. Could they have, in the same version, made all the overloads range-checking? Maybe, I don't know.
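For what it's worth, the checked spelling already exists as .at(); a minimal sketch:

#include <array>
#include <cstddef>
#include <stdexcept>

int get_or_minus_one(const std::array<int, 3>& a, std::size_t i) {
    // a[i] would be undefined behavior when i is out of range;
    // a.at(i) throws std::out_of_range instead.
    try { return a.at(i); } catch (const std::out_of_range&) { return -1; }
}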
The problem, for me, with overloaded operators in something like C++ is that it frequently feels like an afterthought.
Doing "overloaded operators" in Lisp (CLOS + MOP) has much better "vibes" to me than doing overloaded operators in C++ or Scala.
1. Makefiles are for build systems; they are not C++. 2. Even for building C++ - in 2017, there was no need to write bespoke Makefiles, or any Makefiles. You could, and should, have written CMake; and your CMake files would be usable and relevant today.
> Meanwhile my gradle setups have been almost unchanged since that time
... but, typically, with far narrower applicability.
Other than hardcore embedded guys and/or folks dealing with legacy C code, I and most folks I know almost always use C++ in various forms, i.e. "C++ as a better C", "object-oriented C++ with no template shenanigans", "generic programming in C++ with templates and no OO", "template metaprogramming magic", "use any subset of C++ from C++98 to C++23", etc. And of course you can mix and match all of the above as needed.
C++'s multi-paradigm support is so versatile that I don't know why folks on HN keep moaning about its complexity; it is the price you pay for the power you get. It is the only language that I can program in for itty-bitty MCUs all the way to large complicated distributed systems on multiple servers, plus I can span everything from applications to systems to bare-metal programming.
I just created a subsystem for a performance-intensive application -- a caching layer for millions or even billions of objects. The implementation encompasses over 1,000 LOC, but the header only includes <stdint.h>. There are about 5 forward struct declarations and maybe a dozen functions in that API.
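A sketch of what such a header can look like (every name here is hypothetical, not the actual API): forward-declared opaque structs plus a handful of free functions.

#include <stdint.h>

struct cache;  /* opaque; the layout lives in the implementation file */

struct cache *cache_create(uint64_t max_objects);
int           cache_put(struct cache *c, uint64_t key, const void *val);
const void   *cache_get(const struct cache *c, uint64_t key);
void          cache_destroy(struct cache *c);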
To a degree it might be Stockholm syndrome, but I feel like having had to work around a lot of C's shortcomings, I actually learned quite a lot that helps me in architecting bigger systems now. Turns out a lot of the flexibility and ease that you get from more modern languages mostly allows you to code more sloppily, but being sloppy only works for smaller systems.
So you do understand my point about AUR. AUR is like adding a third-party repo to your Debian configuration. So it's not a good example if you want to talk about official repositories.
Debian is a good example (it's not the only distribution that has that concept), which proves my point and not yours: this is better than unchecked repositories in terms of security.
> Whereas the solution C/C++ cultists seem to implicitly prefer is having no repositories because dependencies are, at best, a slippery slope.
Nobody says that ever. Either you make up your cult just to win an argument, or you don't understand what C/C++ people say. The whole goddamn point is to have a trusted system repository, and if you need to pull something that is not there, then you do it properly.
Which is better than pulling random stuff from random repositories, again.
> I actually prefer the dependencies-as-git-submodules approach
Oh right. So you do it wrong, it's good to know and it will answer your next complaint:
> Don't you just love it when one or more of your dependencies has a completely different build system to the others
I don't give a damn because I handle dependencies properly (not as git submodules). I don't have a single project where the dependencies all use the same build system. It's just not a problem at all, because I do it properly. What do I do then? Well exactly the same as what your system package manager does.
> this shouldn't be a problem.
I agree with you. Call it a footgun if you wish, you are the one pulling the trigger. It isn't a problem for me.
> why are any and all comments begging for a first-party package manager immediately swamped with strawmans about npm
Where did I do that?
> It's because of the cultism, and every npm exploit further entrenches it.
It's because npm is a good example of what happens when it goes out of control. Pip has the same problem, and Rust as well. But npm seems to be the worst, I guess because it's used by more people?
For example, if a language has non-nullable types, then you get this information locally for free everywhere, even from 3rd party code. When the language doesn't track it, then you need a linter that can do symbolic execution, construct call graphs, data flows, find every possible assignment, and still end up with a lot of unknowns and waste your time on false positives and false negatives.
Linters can't fix language semantics that create dead-ends for static analysis. It's not a matter of trying harder to make a better linter. If a language doesn't have clear-enough aliasing, immutability, ownership, thread-safety, etc. then a lot of analysis falls apart. Recovering required information from arbitrary code may be literally impossible (Rice's theorem), and getting even approximate results quickly ends up requiring whole-program analysis and prohibitively expensive algorithms.
And it's not even an either-or choice. You can have robust checks for fundamental invariants built into the language/compiler, and still use additional linters for detecting less clear-cut issues.
With a pure virtual interface you can at least track down the execution path as long as you can spot where the object is created, but with template black magic? Good luck. Static dispatch with all those type traits and SFINAE practically makes it impossible to know before running it. Concepts were supposed to solve this, but they won't automatically solve all the problems lurking in legacy code.
> but on the other hand you don't end up in a situation where you need virtual environments for every application because they've all downloaded slightly different versions of the same packages.
The real downside here is that if you need two different programs with two different versions of packages, you're stuck. This is often mitigated by things like foo vs foo2, but I have been in a situation where two projects both rely on different versions of foo2, and cannot be unified. The per-project dependency strategy handles this with ease, the global strategy cannot.
A big difference between the Ada mandate and this current push is that the current effort is not to go to one language, but to a different category of languages (specifically, "memory safe" or ones with stronger guarantees of memory safety). That leaves it much more open than the Ada mandate did. This would be much more palatable for contractors compared to the previous mandate.
In C, casting a `void *` is a code smell, I feel.
The most confusing one is how the meaning of `const` differs between C and C++; I'm pretty certain the C `const` keyword is broken compared to `const` in C++.
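One illustration (my example, and not exhaustive): a const-qualified int is a constant expression in C++ but not in C.

const int N = 10;

int table[N];  // OK in C++; invalid in C at file scope

int pick(int x) {
    switch (x) {
        case N: return 1;  // OK in C++; constraint violation in C
        default: return 0;
    }
}

// At file scope, `const int N` also has internal linkage in C++
// but external linkage in C.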
> I am not a C/C++ cultist at all, and I actually don't like C++ (the language) so much (I've worked with it for years). I, for one, do not love it when there is an exploit in a language package manager.
If you do neither of those things then did it ever occur to you that this might not be about YOU?
> I find it ironic that when I explain that my problem is that I want to be able to audit (and maintain, if necessary) my dependencies, the answer that comes suggests that I am incompetent and "inexpertly" doing my job.
Yeah, hi, no you didn't explain that. You're probably mistaking me for someone else in some other conversation you had. The only comment of yours prior to mine in the thread is you saying "I can use pkg-config just fine." And again, you're assuming that I'm calling YOU incompetent. But okay, I'm sure your code never has bugs, never has memory issues, is never poorly designed or untested, that you can whip out an OpenGL alternative or whatever in no time and have it be just as stable and battle-tested, and to say otherwise must be calling you incompetent. That makes total sense.
> AUR stands for "Arch User Repository". It's not the official system repository.
> So it's not a good example if you want to talk about official repositories.
I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
--
Overall, getting bored of this, though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny. Have a nice day.
"ABI: Now or never" by Titus Winters addresses some perf leaks C++ had years ago, which it can't fix (if it retains its ABI promise). They're not big but they accumulate over time and the whole point of that document was to explain what the price is if (unlike Rust) you refuse to take steps to address it.
Rust has some places where it can't match C++ perf, but unlike that previous set Rust isn't obliged to keep one hand tied behind its back. So this gently tips the scales further towards Rust over time.
Worse, attempts to improve C++ safety often make its performance worse. There is no equivalent activity in Rust, they already have safety. So these can heap more perf woes on a C++ codebase over time.
Extremely clear at the "call site" what's going on.
Write disciplined, readable C, use valgrind and similar tools, and reap unequalled performance and maintainability.
I guess forth as well... hmmm
And I think you're downplaying many of the ones I mentioned, but I think this level of "importance" is subjective to the task at hand and one's level of frustrations.
I would argue that it's reasonable to say that creating a robust data structure library at the level of the STL shouldn't be that arcane.
With the old and proprietary toolchains involved, I would bet dollars to doughnuts that there's a 50% odds of C++11 being the latest supported standard. In that context, modern C++ is the trendy language.
> Ignore the fact that having more keywords in C++ precludes the legality of some C code being C++. (`int class;`)
Your very first example reverses the definitions of superset and subset. "C++ is a superset of C" implies that C++ will have at least as many, if not more, keywords than C.
Other examples make the same mistake.
The C++ "move" is basically Rust's core::mem::take - we don't just move the T from inside our box, we have to also replace it, in this case with the default, None, and in C++ our std::unique_ptr now has no object inside it.
But while Rust can carefully move things which don't have a default, C++ has to have some "hollow" moved-from state because it doesn't have destructive move.
I think what's meant is that Rust's type system only removes one specific kind of unsafety, but if you're clueless you can still royally screw things up, in any language. No type system can stop you from hosing a database by doing things in the wrong order, say. Whether trading <insert any given combination of things Rust does that you don't like> for that additional safety is worth it is IMO a more interesting question than whether it exists at all.
Personally, I mostly agree with you. I don't much care for traits, or the lack of overloading and OO, or how fast Rust is still evolving, and wish I could have Rust's safety guarantees in a language that was more like C++. It really feels like you could get 90% of the way there without doing anything too radical, just forbidding a handful of problematic features; a few off the top of my head: naked pointers, pointer arithmetic, manual memory management, not checking array accesses by default, not initializing variables by default, allowing switches to be non-exhaustive.
Take the short-circuiting boolean operators || and &&. You can overload these in C++ but you shouldn't, because the overloaded versions silently lose short-circuiting. Bjarne just didn't have a nice way to express that, so it's not provided.
So while the expression `foo(a) && bar(b)` won't execute function bar [when foo is "falsy"] if these functions return an ordinary type without the overload, once they do enable overloading, both functions are always executed and only then are the results handed to the overloaded operator.
Edited: Numerous tweaks because apparently I can't boolean today.
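To make the hazard concrete, a small sketch (types and names mine, not from the comment above):

    #include <iostream>

    struct Wrapped { bool value; };

    // User-provided overload: both operands are fully evaluated
    // before this function is even called.
    bool operator&&(Wrapped lhs, Wrapped rhs) { return lhs.value && rhs.value; }

    bool foo_b() { std::cout << "foo "; return false; }
    bool bar_b() { std::cout << "bar "; return true; }
    Wrapped foo() { std::cout << "foo "; return {false}; }
    Wrapped bar() { std::cout << "bar "; return {true}; }

    int main() {
        foo_b() && bar_b();   // built-in &&: prints "foo " only
        std::cout << '\n';
        foo() && bar();       // overloaded &&: both always run, prints "foo bar "
        std::cout << '\n';
    }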
Today I wouldn't recommend building Skype in any language except Rust. But the Skype founders Ahti Heinla, Jaan Tallinn and Priit Kasesalu found exactly the right balance of C and C++ for the time.
I also wrote a few lines of code in that dialect of C++ (no exceptions), and it didn't feel much different from modern C++ (exceptions there are really fatal errors).
And regarding embedded: the same codebase was embedded in literally all the ubiquitous TVs of the time, even DECT phones. I bet there are only a few (if any) application codebases of significant size to have been deployed at that scale.
Start by not calling everybody disagreeing with you a cultist, next time.
> I said system package, not official repository. I don't know why you keep insisting on countering an argument I did not make. Yes, system packages can be installed from unofficial repositories. I don't know how I could've made this clearer.
It's not that it is unclear, it's just that it doesn't make sense. When we compare npm to a system package manager in this context, the thing we compare is whether or not it is curated. Agreed, I was maybe not using the right words (I should have said curated package managers vs non-curated package managers), but it did not occur to me that it was unclear, because comparing npm to a system package manager makes no sense otherwise. It's all just installing binaries somewhere on disk.
AUR is much like npm in that it is not curated. So if you find that it is a security problem: great! We agree! If you want to pull something from AUR, you should read its PKGBUILD first. And if it pulls tens of packages from AUR, you should think twice before you actually install it. Just like if someone tells you to do `curl https://some_website.com/some_script.sh | sudo sh`, no matter how convenient that is.
Most Linux distributions have a curated repository, which is the default for the "system package manager". Obviously, if users add custom, not curated repositories, it's a security problem. AUR is a bad example because it isn't different from npm in that regard.
> though the part where you harp on about doing dependencies properly compared to me and not elaborating one bit is very funny
Well I did elaborate at least one bit, but I doubt you are interested in more details than what I wrote: "What do I do then? Well exactly the same as what your system package manager does."
I install the dependencies somewhere (just like the system package manager does), and I let my build system find them. It could be with CMake's `find_package`, it could be with pkg-config, whatever knows how to find packages. There is no need to install the dependencies in the place where the system package manager installs stuff: it can go anywhere you want. And you just tell CMake or pkg-config or Meson or whatever you use to look there, too.
Using git submodules is just a bad idea for many reasons, including the fact that you need all of them to use the same build system (which you mentioned), or that a clean build usually implies rebuilding the dependencies (for nothing) or that it doesn't work with package managers (system or not). And usually, projects that use git submodule only support that, without offering a way to use the system package(s).
the word "only" doesn't really belong in that sentence, because these are very common in root-cause analysis of flaws by the "Common Weakness Enumeration" initiative:
https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
and having said that - I agree with you back :-) ... in fact, I think this is basically "the plan" for C++ regarding security: They'll make some static analysis warnings be considered errors for parts of your code marked "safe", and let them fly in areas marked "unsafe".
If the C++ committee can make that stick - in the public discourse and in US government circles, I guess - then they will have essentially "eaten Rust's lunch". Because Rust is quite restrictive, somewhat of a moving target, and kind of fussy w.r.t. use on older systems. If you take away its main selling point of safety-by-default, then there would probably not be enough motivation to drop C++, decades of backwards compatibility, and a huge amount of C++ and C libraries, in favor of Rust.
And this would not be the first time C++ is eating the lunch of a potential successor/competitor language; D comes to mind.
The bug I saw happened a few years ago, and convinced me to switch to Rust where it simply cannot happen.
Python is up there (down there?) with Windows as a poster child for popularity does not imply quality
If it stayed in its lane as a job control language, and they used semantic versioning then it would be OK.
But the huge balls of spaghetti Python code that must be run in a virtual environment because of version conflicts drive me mental.
It's "great" mainly in the sense of being very large, and making your code very lage - and slow to build. I would not recommend it unless you absolutely must have some particular feature not existing elsewhere.
Here's a long list of C++ unit testing frameworks: https://en.wikipedia.org/wiki/List_of_unit_testing_framework...
And you might consider:
* doctest: https://github.com/doctest/doctest
* snitch: https://github.com/snitch-org/snitch
* ut/micro-test: https://github.com/boost-ext/ut
1. A somewhat exaggerated claim. It reduced that need, and only for when you can assume everything is C++20 or later.
2. Even to the extent the need for TMP was obviated in principle - it will take decades for TMP to go away in popular libraries and in people's application code. At that point, maybe, we would stop seeing these endless compilation artifacts.
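On point 1, a toy sketch of what that reduction looks like (my example, not from the comment): a constraint that once needed enable_if-style TMP can now be a C++20 concept.

    #include <concepts>
    #include <type_traits>

    // Pre-C++20, SFINAE/TMP style:
    template <typename T,
              typename = std::enable_if_t<std::is_integral_v<T>>>
    T twice_old(T x) { return x + x; }

    // C++20, concepts style: same constraint, far less machinery.
    template <std::integral T>
    T twice(T x) { return x + x; }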
BJARNE ME DEFEND
https://monarchies.fandom.com/wiki/In_my_defens_God_me_defen...
See Embedded C++ - https://en.wikipedia.org/wiki/Embedded_C%2B%2B
Apple's IO Kit (all kernel drivers on macOS/iOS/iPadOS/watchOS) is a great example of what you're talking about. Billions of devices deployed with code built on this pattern.
That said, in the embedded world, when you get down to little 32-bit or 16-bit microcontrollers, not amd64 or aarch64 systems with lots of RAM, pure C is very prevalent. Many people don't find much value in classes when they are writing bare-metal code that primarily twiddles bits in registers, and they also can't or don't want to pay the overhead for things like vtables when they are very RAM-constrained (e.g. 64 kbyte of RAM is not that uncommon in embedded).
So, I disagree with the idea that "actual uses of C are esoteric" from the post - it's very prevalent in the embedded space still. Just want people to think about it from another use case :).
The classic example of a big pure-C project at scale is the Linux kernel.
Ask Linus what he thinks of C++. His opinions are his own (EDIT: I actually like C++ a lot, please don't come at me with pitchforks! :)), I merely repost for entertainment value (from a while back):
https://lwn.net/Articles/249460/
Maybe a simpler example: go find a BSP (board support package) for the micro of your choice. It's almost certain that all of the example code will be in C, not C++. They may or may not support building with g++, but C is the lingua franca of embedded devs.
The people coming from GC languages have the right expectations about the language taking care of lifetimes for them. I expect nothing less than technical excellence from my tooling.
>I expect nothing less than technical excellence from my tooling.
Good luck with that.
This specific claim seems like just gratuitously rewriting history.
I can get how you'd feel C (and certain dialects of C++) are "closer to the metal" in a certain sense: C supports very few abstractions, and with fewer abstractions there are fewer "things" between you and "the metal". But this is as far as it goes. C does not represent - by any stretch of the imagination - an accurate model of the computation or memory of a modern CPU. It does stay close to the PDP-11, but calling modern CPUs "glorified hardware emulators of PDP-11" is just preposterous.
The PDP-11 was an in-order CISC processor with no virtual memory, cache hierarchy, branch prediction, symmetric multiprocessing or SIMD instructions. Some modern CPUs (namely the x86/x64 family) do emulate a CISC ISA on top of something that is probably more RISC-like, but that's as far as we can say they are trying to behave like a PDP-11 (and even then, the intention was to behave like a first-gen Intel Pentium).
You'd do very well as a culture war pundit. Clearly I wasn't describing a particular kind of person, no, clearly I'm just talking about everyone I disagree with /s
For the power and flexibility that C++ gives you, it is worth one's time to get familiar with and learn to use its complexity.
For me, base CMake is pretty easy by now, but I'd rather troubleshoot a makefile than some obscure third-party CMake module that doesn't do what I want. Plain old makefiles are very hackable, for better or worse [1]. It's easy to solve problems with make (in bespoke ways), and at the same time this is the big issue, causing lots of custom solutions of varying correctness.
[1]: Make is easy the same way C is easy.
Not that widely. You must be thinking of the IO streams part of the library. Yes, it's rather poor in many respects. But you don't have to use it! We have perfectly nice variadic printing functions these days!
    #include <print>   // C++23

    auto number = 42;
    std::println("Hello, {}! The answer is {}", "world", number);

Plus - the "obscure third party modules" have been getting less obscure and more standard-ish. In 2017 it was already not bad, today it's better.
Exactly this. Regardless of safety, expressiveness, control, or whatever argument someone pulls from their hat to defend C++, the value of a solid dependency manager cannot be overstated.
Oh the horror!
This is a strawman argument. Just because pip and npm are a mess and security liabilities does not make the C++ situation less bad. A fair comparison would be with languages that got their act together and use cargo, maven or nuget.
Linus is also not alone with his opinion in favouring Rust over C++. I would be hard pressed to use his persona in a negative case.
I've seen some seriously bad legacy code bases. In fact, I've spent many years of my career hired specifically to redesign and rewrite software. Either legacy code or code that wasn't written by professional software engineers in the first place (e.g. engineers of other fields, or scientists with various specializations).
And thus, I also observe that many people aren't aware how much software (in production) was created by people for whom creating software is a secondary or tertiary interest and skill at best.
And yes, typically it gets much better when rewritten later and by a pro.
Not all software needs that, it doesn't always work. But it needs to happen quite often for the software to stay maintainable and extensible with new features. In a lot of these cases only the original author would even dare to touch it. Good for short term job security, bad for the company.
And it isn't a requirement for the original authors to be involved (though that helps a lot usually).
I myself rewrite my own code quite often and tend to refactor and iterate on it a lot. The first shot is rarely good. But it gets good after a while.
In contrast, the standard C++ stream types have used operator<< overloading for more than 25 years. glog/gtest assertions continue to use it.
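For instance, the conventional stream-insertion overload (toy type mine), which is exactly the shape glog/gtest assertions expect:

    #include <ostream>

    struct Point { int x, y; };

    // Idiomatic operator<< overload: returns the stream so calls chain.
    std::ostream& operator<<(std::ostream& os, const Point& p) {
        return os << '(' << p.x << ", " << p.y << ')';
    }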
Well, git submodules are strictly inferior and you know it: you even complained about the fact that it is a pain when some dependencies use different build systems.
You choose a solution that does not work, and then you blame the tools.
>> The scenario you are describing does not make sense for the commonly accepted industry definition of "build system."
> I’m not sure why you’re dismissing it as something else without knowing any of the details or presuming I don’t know what I’m talking about.
My apologies for what I wrote giving the impression of being dismissive or implying an assessment of your knowledge. This was not my intent and instead was my expression of incredulity for a build definition requiring 20 engineers to maintain. Perhaps I misinterpreted the "cooks" responsible for build definition maintenance as being all of those 20 engineers. If so, I hope you can see how someone not involved in your project could reach this conclusion based on the quote above.
Still and all, if this[0] is the Bazel build tool you reference and its use is such that:

> With that many coooks[sic], you have patches on top of patches of your build system where everyone does the bare minimum to meet the near term task only and it devolves into a mess no one wants to touch over enough time.

Then the questions I would ask of the project members/stakeholders are:

1 - Does using Bazel reduce build definition maintenance versus other build tools such as Make/CMake/etc.?

2 - Does the engineering team value reproducible build definitions as much as production and test source artifacts?

3 - If not, why not?
EDIT: To clarify the rationale behind questions #2 and #3:
Build definitions are production code, because if the system cannot be built, then it cannot be released.
Test suites are production code, because if tests fail, then the build should fail and the system cannot be released.
What I am saying is that using a dependency is formalised for build systems. Be it npm, cargo, gradle, meson, cmake, you name it.
In cargo, you add a line to a toml file that says "please fetch this dependency, install it somewhere you understand, and then use it from this somewhere". What is convenient here is that you as a user don't need to know about those steps (how to fetch, how to install, etc.). You can use Rust without Cargo and do everything manually if you need to; it's just that cargo comes with the "package manager" part included.
In C/C++, the build systems don't come with the package manager included. It does not mean that there are no package managers. On the contrary, there are tons of them, and the user can choose the one they want to use. Be it the system package manager, a third-party package manager like conan or vcpkg, or doing it manually with a shell/python script. And I do mean the user, not the developer. And because the user may choose the package manager they want, the developer must not interfere otherwise it becomes a pain. Nesting dependencies into your project with git submodules is a way to interfere. As a user, I absolutely hate those projects that actually made extra work to make it hard for me to handle dependencies the way I need.
How do we do that with CMake? By using find_package and/or pkg-config. In your CMakeLists.txt, you should just say `find_package(OpenSSL REQUIRED)` (or whatever it is) and let CMake find it the standard way. If `find_package` doesn't work, you can write a find module (that e.g. uses pkg-config). A valid shortcut IMO is to use pkg-config directly in CMakeLists for very small projects, but find modules are cleaner and actually reusable. CMake will search in a bunch of locations on your system. So if you want to use the system OpenSSL, you're done here, it just works.
If you want to use a library that is not on the system, you still do `find_package(YourLibrary)`, but by default it won't find it (since it's not on the system). In that case, as a user, you configure the CMake project with `CMAKE_PREFIX_PATH`, saying "before you look on the system, please look into these paths I give you". So `cmake -DCMAKE_PREFIX_PATH=/path/where/you/installed/dependencies -Bbuild -S.`. And this will not only just work, but it means that your users can choose the package manager they want (again: system, third-party like conan/vcpkg, or manual)! It also means that your users can choose to use LibreSSL or BoringSSL instead of OpenSSL, because your CMakeLists does not hardcode any of that! Your CMakeLists just says "I depend on those libraries, and I need to find them in the paths that I use for the search".
Whatever you do that makes CMake behave like a package manager (and I include CMake features like the FetchContent stuff) is IMO a mistake, because it won't work with dependencies that don't use CMake, and it will screw (some of) your users eventually. I talk about CMake, but the same applies for other build systems in the C/C++ world.
People then tend to say "yeah I am smart, but my users are stupid and won't know how to install dependencies locally and point CMAKE_PREFIX_PATH to them". To which I answer that you can offer instructions to use a third-party package manager like conan or vcpkg, or even write helper scripts that fetch, build and install the dependencies. Just do not do that inside the CMakeLists, because it will most certainly make it painful for your users who know what they are doing.
Is it simpler than what cargo or npm do? No, definitely not. Is it more flexible? Totally. But it is the way it is, and it fucking works. And whoever calls themselves a C/C++ developer and cannot understand how to use the system package manager or conan/vcpkg, and set CMAKE_PREFIX_PATH, needs to learn it. I won't say it's incompetence, but it's like being a C++ developer and not understanding how to use a template. It's part of the tools you must learn to use.
People will spend half a day debugging a stupid mistake in their code, but somehow can't apprehend that dealing with a dependency is also part of the job. In C/C++, it's what I explained above. With npm, properly dealing with dependencies means checking the transitive dependencies and being aware of what is being pulled. The only difference is that C/C++ makes it hard to ignore it and lose control over your dependencies, whereas npm calls it a feature and people love it for that.
I don't deny that CMake is not perfect: the syntax is generally weird, and writing find modules is annoying. But that is not an excuse to make a mess at every single step of the process. And people who complain about CMake usually write horrible CMakeLists and could benefit from learning how to do it properly. I don't love CMake, I just don't have to complain about it everywhere I can, because I can make it work and it's not that painful.
However, I also got confused, and just substituted "pointer" for "reference" in my head. References, apart from smart pointers, are indeed a problem for memory safety.
C/C++ developers clearly want a build system and package manager, hence all this fragmentation, but I can't for the life of me understand why that fragmentation is preferable. For all the concern about supply-chain attacks on npm, why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)? And why is global installation preferable? Have we learnt nothing? There's a reason why Python has venv now; why Maven and Gradle have wrappers; etc. Projects being able to build themselves to a specification without requiring the host machine to reconfigure itself to suit the needs of this one project, is a bonus, not a drawback. Devcontainers should not need to be a thing.
If anything, this just reads like Sunk Cost Fallacy: that "it just works" therefore we needn't be too critical, and anyone who is or who calls for change just needs to git gud. It reminds me of the never-ending war over memory safety: use third-party tools if you must but otherwise just git gud. It's this kind of mindset that has people believing that C/C++'s so-called build systems are just adhering to "there should be some artificial friction when using dependencies to discourage over-use of dependencies", instead of being a Jenga tower of random tools with nothing but gravity holding it all together.
If it were up to me, C/C++ would get a more fleshed-out version of Zig's build system and package manager, i.e., something unified, simple, with no central repository, project-local, exact, and explicit. You want SQLite? Just refer to the SQLite git repository at a specific commit and the build system will sort it out for you. Granted, it doesn't have an official build.zig, so you'll need to write your own, or trust a premade one... but that would also be true if you installed SQLite through conan or vcpkg.
I don't feel particularly antipathetic towards notions of a first-party build system and package manager. I find it undeniably better to have a first-party build system instead of the fragmentation that exists in C/C++. On the other hand, I don't feel like asking a 20-year-old project to leave autotools just because I asked for it. Or forcing people to install Python because I think Meson is cool.
As for the package manager, one issue is security: is it (even partly) curated or not? I could imagine npm offering a curated repo and a non-curated repo. But there is also a cultural thing there: it is considered normal to have zero control over the dependencies (by this I mean that if the developer has not heard of the dependencies they are pulling, then they're not under control). Admittedly it is not a tooling problem, it's a culture problem. Though the tooling allows this culture to be the norm.
When I add a C/C++ dependency to my project, I do my shopping: I go check the projects, I check how mature they are, I look into the codebase, I check who has control over it. Sometimes I will depend on the project, sometimes I will choose to fork it in order to have more control. And of course, if I can get it from the curated list offered by my distro, that's even better.
> C/C++ developers clearly want a build system and package manager, hence all this fragmentation
One thing is legacy: it did not exist before, many tools were created, and now they exist. The fact that the ecosystem had the flexibility to test different things (which surely influenced the modern languages) is great. In a way, having a first-party tool makes it harder to get that. And then there are examples like Swift, which slowly converged towards SwiftPM. But at the time CocoaPods and Carthage were invented, SwiftPM was not a thing.
Also devs want a build system and package manager, but they don't necessarily all want the same one :-). I don't use third-party package managers for instance, instead I build my dependencies manually. Which I find gives me more control, also for cross-compiling. Sometimes I have specific requirements, e.g. when building a Linux distribution (think e.g. Yocto or buildroot). And I don't usually want to depend on Python just for the sake of it, and Conan is a Python tool.
> why is it preferable that people trust random third-party package managers and their random third-party repackages of libraries (eg: SQLite on conan and vcpkg)?
It's not. Trusting a third-party package manager is actually exactly the same as trusting npm. It's more convenient, but less secure. However it's better when you can rely on a curated repository (like what Linux distributions generally provide). Not everything can be curated, but there is a core. Think OpenSSL for instance.
> And why is global installation preferable?
For those dependencies that can be curated, there is a question of security. If all the programs on your system link the same system OpenSSL, then it's super easy to update that OpenSSL when there is a security issue. And in situations where what you ship is a Linux system, there is no point in not doing it. So there are situations where it is preferable. If everything is statically linked and you have a critical fix for a common library, you need to rebuild everything.
> If it were up to me
Sure, if we were to rebuild everything from scratch... well, we wouldn't do it in C/C++ in the first place, I'm pretty sure. But my Linux distribution exists, has a lot of merits, and I don't find it very nice when people try to enforce their preferences. I am fine if people want to use Flatpak, cargo, pip, nix, their system package manager, something else, or a mix of all that. But I like being able to install packages on my Gentoo system the way I like, potentially modifying them with a user patch. I like being able to choose whether I link statically or dynamically (on my Linux I like to link at least some libraries, like OpenSSL, dynamically; if I build an Android apk, I like to statically link the dependencies).
And I feel like I am not forcing anyone into doing what I like to do. I actually think that most people should not use Gentoo. I don't prevent anyone from using Flatpak or pulling half the Internet with docker containers for everything. But if they come telling me that my way is crap, I will defend it :-).
> I am somewhat at a loss.
I guess I was not trying to say "C/C++ is great, there is nothing to change". I just think it's not all crap, and I see where it all comes from and why we can't just throw everything away. There are many things to criticise, but many times I feel like criticisms are uninformed and just relying on the fact that everybody does that. Everybody spits on CMake, so it's easy to do it as well. But more often than not, if I start talking to someone who said that they cannot imagine how someone could design something as bad as CMake, they themselves write terrible CMakeLists. Those who can actually use CMake are generally a lot more nuanced.
For me it's just about rules/discipline: commit working code with passing unit tests. Everyone is responsible for fixing stuff. You break something, you fix it.
this assertion is known to be disproven. seL4 is a fully memory-safe (with even more safety baked in besides) major systems-programming behemoth, written in C + annotations, where the analysis is conducted in a sidecar.
to obtain extra safety (but still not as safe as seL4) in Rust, you must add a sidecar in the form of MIRI. nobody proposes folding MIRI into Rust itself.
now, it is true that seL4 is a pain in the ass to write, compile and check, but there is a lot of design space in the unexplored spectrum between Rust, Rust+MIRI, and seL4.
No you don't. You write the guarantees of your public interface into the header file. When you start to put code in there, it stops being a header file.
> because your headers will #include some library that is immense and then every header that includes that header will have transitive header dependencies
Your approach is what leads to this problem. Your header files should be tiny and composed only of, well, headers. Also, almost all header files should have include guards, so including one more than once should be a no-op.[1] Nothing stops you from including implementation files.
[1] If your compiler doesn't have that optimization: any compiler complex enough to support precompiled header files can also implement it.
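A minimal sketch of the guard pattern (file and names hypothetical):

    /* widget.h -- interface only; safe to include any number of times */
    #ifndef WIDGET_H
    #define WIDGET_H

    struct Widget;                      // opaque: the definition lives in widget.cpp
    Widget* widget_create();
    void    widget_destroy(Widget* w);

    #endif // WIDGET_H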
C, C++ compilers by default expect libraries to be installed in the correct place, which would mean that you don't need to specify flags at all.
Also nothing stops you from writing:
    #include </home/john/../../opt/my/stupid/path/last-project/final/finalv2/work/symlink-somewhere-else/foo/quick-test/school/c-tutorial/v9/1-introduction/header-file.h>

The thing is, void * was introduced exactly to represent this use-case without casting. The type for "some amount of raw bytes: check which type, then cast" is char *. void * is for when you know exactly which type it has and you just need to pass it through a generic interface, which isn't supposed to know or touch the type. The other use case is for when the object doesn't have a type yet, like the result of malloc.
Previously everything was typed char *; then void * was introduced to separate the cases: cast from char *, don't cast from void *. Now in C++ both are the same again.
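A small sketch of the pass-through use (names mine; written as C++, where the cast is required again - in C the conversion from void * is implicit):

    #include <cstdlib>

    // qsort never inspects the element type; it just hands the bytes
    // back to our comparator through void *.
    int cmp_int(const void* a, const void* b) {
        const int* x = static_cast<const int*>(a);  // C++ makes us cast again
        const int* y = static_cast<const int*>(b);
        return (*x > *y) - (*x < *y);
    }

    int main() {
        int v[] = {3, 1, 2};
        std::qsort(v, 3, sizeof v[0], cmp_int);
    }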
I think it shouldn't be a skill issue because a true professional should learn how to do it :-).
My build configs are systematically shorter than the bad ones.
Also I feel like many people really try to have CMake do everything, and as soon as you add custom functions in CMake, IMO you're doing it wrong. I have seen this pattern many times where people wrap CMake behind a Makefile, presumably because they hate having to run two commands (configure/build) instead of one (make). And then instead of having to deal with a terrible CMakeLists, they have to deal with a terrible CMakeLists and a terrible Makefile.
It's okay for the build instructions to say: "first you build the dependencies (or use a package manager for that), second you run this command to generate the protobuf files, and third you build the project". IMO if a developer cannot run 3 commands instead of one, they have to reflect on their own skills instead of blaming the tools :-).
Therein lies the issue, in my opinion: I do not believe that someone should have to be a "true professional" to be able to use a language or its tooling. This is just "git gud" mentality, which as we all [should] know [by now] cannot be relied upon. It's like that "So you're telling me I have to get experience before I get experience?" meme about entry-level jobs: if you need to "git gud" before you can use C/C++ and its tooling properly, all that means is that they'll be writing appalling code and build configs in the mean time. That's bad. Take something like AzerothCore: I'd wager that most of its mods were made by enthusiasts and amateurs. I think that's fine, or at least should be, but I'm keenly aware that C/C++ and its tooling do not cater to, nor even really accommodate amateurs (jokey eg: https://www.youtube.com/watch?v=oTEiQx88B2U). That's bad. Obviously, this is heading into the realm of "what software are you trusting unwisely", but with languages like Rust, the trust issue doesn't often include incompetence, more-so just malice: I do not tend to fear that some Rust program has RCE-causing memory issues because someone strlen'd something they shouldn't.
Not at all. I'm not saying that one should be an architect on day one. I'm saying that one should learn the basics on day one.
Learning how to install a package on a system and understanding that it means that a few files were copied in a few folders is basic. Anyone who cannot understand that does not deserve to be called a "software engineer". It has nothing to do with experience.
Except that C/C++ have entirely incongruous sets of basics compared to modern languages, with which people coming to C/C++ for the first time are likely to have a passing familiarity (unless it's their first language, of course). Yes, cmake configs can be pretty concise when only dealing with system packages, but this assumes that developers will want to do that rather than replicate the project-localness ideal, which complicates cmake configs. We're approaching this from entirely different places, and it is reminding me of the diametrically-opposed comments on this post (https://news.ycombinator.com/item?id=45328247) about READMEs.