This is my favourite take so far from this post:
“Google’s own data from September 2024 shows that Android’s memory safety vulnerabilities dropped from 76% to 24% over just six years — not by retrofitting safety features onto existing C++ code, but by writing new code in memory-safe languages (Rust, Kotlin, Java). Google’s security blog makes a fascinating observation: vulnerabilities have a half-life. Code that’s five years old has 3.4x to 7.4x lower vulnerability density than new code, because bugs get found and fixed over time. The implication is striking — if you just stop writing new unsafe code, the overall vulnerability rate drops exponentially without touching a single line of existing C++.”
If these stats hold up, starting the transition away is perhaps the best first step. Actively seeking out bad C++ practices in the existing code will probably pay quiet dividends as well.
The preceding paragraph is this:
These are massive, legacy-heavy codebases where much of the code predates modern C++ practices. Code written with raw new/delete, C-style arrays, char* string manipulation, manual buffer management — the full catalogue of pre-C++11 antipatterns. And there’s a conflation problem: the studies report “C/C++” as a single category. The Windows kernel is largely C, not C++. Android’s HAL and native layer are heavily C. Modern C++ with RAII, smart pointers, and containers is a fundamentally different beast than malloc/free C code, but the statistic treats them as one.
And then, after the paragraph you quote, the author just blasts along to the conclusion that code should be rewritten in memory-safe languages like Rust, Kotlin and Java, without even touching on their own observation that modern C++ facilities do make a difference to the prevalence of bugs.
What is the reduction in bug rate when rewriting legacy C++ with raw memory management vs. modern C++ with RAII and reference counted pointers? We don’t know and they don’t want to ask, because it would challenge their main thesis.
I don’t disagree that modern C++ safety still relies on the programmer making the right choices, whereas with a truly memory-safe language the compiler makes those decisions for you. But sidestepping the question completely is disingenuous, and it makes those of us who actually care about the specifics (C++ programmers, the very people they’re trying to convince to retrain) incredibly suspicious of the whole argument.
modern C++ facilities do make a difference to the prevalence of bugs.
This is true, but just saying “write modern C++!” doesn’t actually work in practice. First, there are a ton of footguns that even best-practice C++ doesn’t avoid. Using std::shared_ptr? Great, you’re probably going to avoid memory leaks. Null pointer dereference? Not so much. And what’s the modern C++ way to avoid integer overflow?

Second, it’s pretty much impossible to completely avoid raw pointers and the like even if you’re trying, and good luck getting your colleagues to actually try. I can’t even get mine to write proper commit messages. You need a machine forcing them to do it properly. Something they can’t opt out of (or at least where opting out isn’t the easy, lazy option).
So yeah, it’s better to use modern C++ and it is an improvement, but not enough to change the conclusion that you should just use Rust instead.
Is the COBOL committee still working under the assumption that it currently is and will always be the dominant language?
The title is clickbait, but the article is well written.
It is tearing apart some points made in a talk (which I didn’t watch). The talk seems to focus on C++26 features (given that you are using C++) while the article argues why you still shouldn’t use C++ in the first place, despite the improvements. Mainly because the memory safety features are opt-in. There is also discussion about the CrowdStrike incident, and how it was more of a cultural problem than a language problem.
The article has some good points, but it read as pure LLM slop to me.
Was reminded of the meme: Heartbreaking: The worst person you know just made a great point.
Maybe my LLM detector needs an update, but only the headline triggered it. The article did the opposite for me.
Anyway, the author checks out: old github profile, etc. Works in high frequency trading, which I despise because I think it is make-work, moving money around a millisecond before anyone else has a chance, a huge technical effort with zero benefit to society compared to slower trading. I’ll file it together with adtech and bitcoin. But the article is not about that. And working in high frequency trading sure makes you qualified to talk about C++ or FPGAs or anything close-to-the-metal. So, author background checks out. Verdict: not slop.

