If AI is that good, there should be an explosion of Open Source projects of good quality.
Neither of those is happening.
And if you don't need one, why write one? If there is no specific use case in mind, how do you even determine what dimension "good" is measured on?
It might be that the gp is smart enough to code without a crutch.
I don't want to speak for the person you replied to, but I think that their main point is... are they?
I see lots of articles about huge increases in productivity, but I think it's fair to argue that we've yet to see the huge increases in useful products that would surely (we hope) result from that if it were true.
People should realize that denying that AI can boost productivity in coding makes it look like they don't know how to use it, or believe in some conspiracy that no one is actually benefitting and it's all market hype from tech bros.
Most of the Internet infra depends on libxml2; major vendors like Juniper and Cisco use it. To my knowledge Android uses it as well.
Naturally, with the advancement of AI, one would expect an XML library to be the first thing to rewrite, given that the library is in the critical path literally everywhere.
I've shadowed people who believe AI is helping them, and it seems to me that some of them don't notice how much effort they're spending while others don't bother to correct the 80% version once tests are passing.
Otherwise it sounds like "many people have had their lives changed by {insert philosophical/religious movement}, so if you're not finding it true you should look into what's wrong with you."
First of all, nobody is writing and open sourcing their own XML parser in 2025, so that's hyperbole.
Second, the boilerplate to use most XML libraries can be copy/pasted out of their docs. So where is AI saving you time here? The prompting and other BS is a waste of time and just looks silly, and you still have to read and understand the code. At best it seems like breaking even.
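To be concrete about the kind of boilerplate I mean, it's a handful of lines. Here's a minimal sketch using Python's standard-library xml.etree.ElementTree (the sample document is made up):

    import xml.etree.ElementTree as ET

    # Parse a small document from a string (made-up sample data).
    doc = ET.fromstring("<config><server host='example.org' port='8080'/></config>")

    # Find an element and read its attributes.
    server = doc.find("server")
    print(server.get("host"), server.get("port"))

That's roughly what the docs already hand you; there isn't much for an AI to save you.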
I get that people are anxious, worried, and are going through the "cycles of grief", but do you really think that in another 2 years, let alone 5, it won't be able to code a good XML library? We are just going to have to see how things go, because they are clearly going to go, whether we want them to or not.
And what does open source and the quality of projects have to do with it? There were bad open source projects before GPT's release.
One thing I've noticed about the super-AI enthusiasts on HN is that not a single one of them ever has a comment linking to a repo of work they've made with it.
I check. I actually always do, because I'm really keen to learn how to use these magical super-AI workflows. I've watched streams, replicated CLAUDE.md files, tried all the context tricks.
I'm not even saying AI doesn't help, it's great for getting me over the blank page writer's block. It's just not great at much else.
So I've just checked your comments and not only do you not have any examples of your super-duper AI skills, but it looks like you've been in the industry less than a year, graduating from a PhD last year?
You also admit it took you a week trying to debug a problem before an AI fixed it for you. Because you'd missed some parentheses in an algo.
I'm not trying to shame you, but that does signal your inexperience. If you'd made the code well and easy to test, you should have spotted your bad algo quickly.
So is it that we're all bad at using AI? Or is it that AI benefits inexperienced programmers more?
"Ignore your own direct experience, only research papers matter" is certainly a take.
The beautiful thing about the current generation of tools is that they are so incredibly cheap relative to historical tools intended to improve engineering productivity. You can't just run out and pick up CASE tools for less than ~$CAR to ~$HOUSE. A pro subscription to whichever AI tool you want to try is $20.
Ignore research, try them, if you have success, use them. There's no dogma here. Just empiricism.
The bad algo was a scaling problem for one equation. That particular equation wasn't some y = mx + b thing; it was the result of a discontinuous Galerkin finite element scheme that I wrote from scratch. The actual equation was one that I found after about 2 pages of hand-written derivations with high-level math. Not really a coding issue, just an algebra issue after really intense manipulations of partial differential equations.
The fact that AI found that problem, a problem that could only be found by someone able to do complex manipulations of PDEs, is incredible to me. Perhaps I didn't tell the story well in the previous comment, but it isn't like I didn't know Python syntax and AI held my hand.
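To give a flavor of the kind of bug I mean (this is a made-up toy in Python, not my actual code): a misplaced parenthesis can silently change a scaled average into something else, and the code still runs and looks plausible.

    a = 3.0                     # advection speed (arbitrary toy value)
    u_left, u_right = 1.0, 3.0  # states on either side of a cell interface

    # Intended central flux: average the two states, then scale.
    flux_correct = 0.5 * a * (u_left + u_right)   # 6.0

    # "Missed parentheses" version: only u_left gets scaled.
    flux_buggy = 0.5 * a * u_left + u_right       # 4.5

    print(flux_correct, flux_buggy)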
I don't post repos because I keep my hacker news life separate from my personal life, and my repos are tied to my name.
Most major software companies are demanding that their employees use AI, so you should be able to look at any open repo from Google, Microsoft, Facebook, etc for examples of AI use in code.
To replace libxml2 across these ecosystems you would need it to be API-, ABI-, and probably bug-compatible with a decrepit old C library. That's not something anyone or anything can write from just the XML spec.
And solvers are actually a simpler aspect of the project I am working on. It also includes (or rather aims to include) an optimizing compiler with DAE-to-ODE reduction, advanced numerical debugging, etc.
This is why these discussions are pointless - AI works well for some people in some contexts, for others not so much, yet both sides extrapolate their experience as universal.
It seems like developers used to always joke about how much they used Stack Exchange (even senior devs). Now there are suddenly so many people who claim to never need any help and can just smoothly bust out beautiful code all day long.
Maybe you just think you're being more productive ;)
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
For basically everything you program, you need to have a really solid understanding of what data structures you will use, and solid general knowledge of the methods you want to implement.
I claim that, as a conservative estimate, at least 90% (likely more than 95%) of what I code at work (and even more of what I code privately) is of this kind.