If o3 can design it, that means it’s using open source schedulers as a reference. Did you think about opening up a few open source projects to see how they were doing things during those two weeks you spent designing?
We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.
Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
Your anecdotal example isn't any more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts could over a month, surely we're gonna reach AGI by the end of the decade” would have been.
AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromise the performance of the system[0].
The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.
The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.
[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".
We get paid to solve problems; sometimes the solution is to know an existing pattern or open source implementation and use it. Arguably it usually is: we seldom have to invent new architectures, DSLs, protocols, or OSes from scratch, but even those are patterns one level up.
Whatever the AI is doing on the inside doesn't matter: this was it solving a problem.
The only real complexity in software is describing it. There is no evidence that the tools are ever going to help with that. Maybe some kind of device attached directly to the brain could sidestep the parts that get in the way, but that assumes some part of the brain is more efficient than it appears when accessed through the pathways we normally use. It could also be that the brain is just fatally flawed.
So I find your assessment pretty accurate, if only depressing.
I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.
An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.
Maybe, but I'm not completely convinced by this.
Prior to ChatGPT, there would be times where I would like to build a project (e.g. implement Raft or Paxos), I write a bit, find a point where I get stuck, decide that this project isn't that interesting and I give up and don't learn anything.
What ChatGPT gives me, if nothing else, is a slightly competent rubber duck. It can give me a hint as to why something isn't working like it should, and it's the slight push I need to power through the project; and since I actually finish the project, I almost certainly learn more than I would have before.
I've done this a bunch of times now, especially when I am trying to implement something directly from a paper, which I personally find can be pretty difficult.
It also makes these things more fun. Even when I know the correct way to do something, there can be lots of tedious stuff that I don't want to type, like really long if/else chains (when I can't easily avoid them).
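To give a made-up example of the sort of boilerplate I mean (not from any real project):

    # A long, mechanical if/else chain: the shape is obvious,
    # the typing is tedious, and an LLM happily fills it in.
    def http_status_text(code: int) -> str:
        if code == 200:
            return "OK"
        elif code == 201:
            return "Created"
        elif code == 301:
            return "Moved Permanently"
        elif code == 400:
            return "Bad Request"
        elif code == 404:
            return "Not Found"
        elif code == 500:
            return "Internal Server Error"
        # ...and so on for every status code the service actually uses
        else:
            return "Unknown"

None of this is hard; it's just tedious, and having it typed for me keeps the fun parts fun.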
E.g. pop songs with no original chord progressions or melodies, and hackneyed lyrics are still copyrighted.
Plagiarized and uncopyrightable code is radioactive; it can't be pulled into FOSS or commercial codebases alike.
The argument went that the main reason the now-ancient push for code reuse failed to deliver anything close to its hypothetical maximum benefit was that copyright got in the way. Result: tons and tons of wheel-reinvention, like, to the point that most of what programmers do day to day is reinvent wheels.
LLMs essentially provide fine-grained contextual search of existing code, while also stripping copyright from whatever they find. Ta-da! Problem solved.
I wonder how many programmers have assembly code skill atrophy?
Few people will weep the death of the necessity to use abstract logical syntax to communicate with a computer. Just like few people weep the death of having to type out individual register manipulations.
Personal projects are fun for the same reason that they're easy to abandon: there are no stakes to them. No one yells at you for doing something wrong, you're not trying to satisfy a stakeholder, you can develop into any direction you want. This is good, but that also means it's easy to stop the moment you get to a part that isn't fun.
Using ChatGPT to help unblock myself makes it easier for me to not abandon a project when I get frustrated. Even when ChatGPT's suggestions aren't helpful (which is often), it can still help me understand the problem by trying to describe it to the bot.
"I just used o3 to design a distributed scheduler that scales to 1M+ sxchedules a day. It was perfect, and did better than two weeks of thought around the best way to build this."
Anyone with 10 years in distributed systems at FAANG doesn’t need two weeks to design a distributed scheduler handling 1M+ schedules per day; that’s a solved problem in 2025 and basically a joke at that scale. That alone makes this person’s story questionable, and his comment history only adds to the doubt.
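Back-of-the-envelope: 1M+ schedules a day averages out to roughly 12 firings per second. A single database table polled by one worker loop covers that with a huge margin. A deliberately naive sketch with a made-up schema, just to show the scale we're talking about (not anyone's actual design):

    # Deliberately naive single-node scheduler: one SQL table, one poll loop.
    # 1,000,000 schedules/day is about 11.6 firings/second on average.
    import sqlite3
    import time

    db = sqlite3.connect("schedules.db")  # stand-in for any SQL database
    db.execute("""CREATE TABLE IF NOT EXISTS schedules (
        id INTEGER PRIMARY KEY, payload TEXT, period REAL, next_run REAL)""")

    def run_job(payload: str) -> None:
        print("running", payload)  # placeholder: real work or a hand-off to a queue

    while True:
        now = time.time()
        due = db.execute(
            "SELECT id, payload, period FROM schedules WHERE next_run <= ?", (now,)
        ).fetchall()
        for job_id, payload, period in due:
            run_job(payload)
            db.execute("UPDATE schedules SET next_run = ? WHERE id = ?",
                       (now + period, job_id))
        db.commit()
        time.sleep(1)  # even 1-second polling keeps pace with ~12 jobs/second

A production system adds leader election, retries, jitter, and so on, but none of that changes how small the raw throughput is.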
Even if LLMs make "plain English" programming viable, programmers still need to write, test, and debug lists of instructions. "Vibe coding" is different; you're telling the AI to write the instructions and acting more like a product manager, except without any of the actual communication skills that a good manager has to develop. And without any of the search and learning that I mentioned before.
For that matter, a lot of chatbots don't do learning either. Chatbots can sort of search a problem space, but they only remember the last 20-100k tokens. We don't have a way to encode tokens that fall out of that context window into some longer-term weights. Most of their knowledge comes from the information they learned from training data - again, cheated from humans, just like humans can now cheat off the AI. This is a recipe for intellectual stagnation.
I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?
This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.
Assembly is just programming. It's a particularly obtuse form of programming in the modern era, but ultimately it's the same fundamental concepts as you use when writing JavaScript.
Do you learn more about what the hardware is doing when using assembly vs JavaScript? Yes. Does that matter for the creation and maintenance of most software? Absolutely not.
AI changes that: you don't need to know any computer science concepts to produce certain classes of program with AI now, and if you can keep prompting it until you get what you want, you may never need to exercise the conceptual parts of programming at all.
That's all well and good until you suddenly do need to do some actual programming, but it's been months/years since you last did that and you now suck at it.
I also think that once robots are around, they will be yet another huge multiplier, but this time in the real world. Sure, the robot won't be as good as a human initially, but so what? You can use it to do so much more. Maybe I'll actually bother buying a run-down house and renovating it myself. If I know that I can just tell the robot to paint all the walls, and possibly even do it three times with different paint, then I feel far more confident that it won't be an untenable risk and bother.
For others following along: the comment history is mostly talking about how software engineering is dead because AI is real this time, with a few diversions to fixate on how overpriced university pedigrees are.
I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.
Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.
With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner rather than later.