If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software.
Have it your way, but the current workflow of proompting/context engineering requires plenty of hand-holding with test coverage and a whole lot of token burn to allow agentic loops to pass tests.
If you claim to be a vibe coder proompter with no understanding of how anything works under the hood and claim to build things using English as a programming language, I'd like to see your to-do app.
Traditional programming also requires iteration, testing, and debugging, so I don't see what argument you're making there.
Then when you invoke 'token burn', the question is whether developer time costs more than compute time. Developer salaries aren't dropping while compute costs are. Or whether writing and reading syntax saves more time than pure natural language. I used to spend six figures a month on contracting out work to programmers. Now I spend thousands. I used to wait days for PRs; now the wait is in seconds, minutes and hours.
And these aren't to-do apps, these are distributed, fault tolerant, load tested, fully observable and auditable, compliance controlled systems.
But when you say English as a programming language, you're implying that we have bypassed its ambiguity. If this were actually possible, we would have an English compiler, and before you suggest LLMs are compilers, they require context. Yes, you can produce code from English, but it's entirely non-deterministic, and they also fool you into thinking that because they can reproduce in-training material, they will be just as competent at something actually novel.
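To put that concretely: send the identical English "program" through the model twice and the generated code routinely differs. A minimal sketch, assuming the OpenAI Python client; the model name, prompt and temperature are placeholders, not anyone's actual workflow:

    # Illustrative sketch only: the same English "spec" sent twice, outputs compared.
    # Assumes the OpenAI Python client; model name and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Write a Python function that returns the 10 largest files in a directory."

    outputs = []
    for _ in range(2):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        outputs.append(response.choices[0].message.content)

    print("identical output:", outputs[0] == outputs[1])  # typically False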
Your point about waiting on an engineer for a PR is actually moot. What is the goal? Ship a prototype? Build maintainable software? If it's the latter, agents may cost less but they don't remove your personal cognitive load. Because you can't actually let the agent develop truly unattended, you still have to review, validate and approve. And if it's hot garbage you need to spin it all over and hope it works.
So even if you are saving on a single engineer's cost, you have to count your personal cost of babysitting this "agent". Assuming that you are designing the entire stack, this can go better, but if you "forget the code even exists" and let the model also architect your stack for you, then you are likely just wasting token money on proofs of concept rather than creating a real product.
I also find it interesting that so many cult followers love to dismiss other humans in favor of this technology as if it already provides all the attributes that humans possess. As far as I'm concerned, cognitive load can still only be truly decreased by having an engineer who understands your product and can champion it forward. Understanding the goal and the mission in real meaningful ways.
I said I'm doing everything a programmer does except writing syntax. So your argument about English being "ambiguous" misses the point. {⍵[⍋⍵]}⍨?10⍴100 is extremely precise to an APL programmer but completely ambiguous to everyone else. Meanwhile "generate 10 random integers from 1 to 100 and sort them in ascending order" is unambiguous to both humans and LLMs. The precision comes from clear specification, not syntax.
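For anyone who doesn't read APL, here is the same specification rendered in Python, purely as an illustrative sketch; the point is that the precision lives in the specification, not the notation:

    # "Generate 10 random integers from 1 to 100 and sort them in ascending order."
    import random

    numbers = sorted(random.randint(1, 100) for _ in range(10))
    print(numbers)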
You're conflating oversight with "babysitting." When you review and validate code, that's normal engineering process whether it comes from humans or AI. If anything, managing human developers involves actual babysitting: handling office politics, mood swings, sick days, ego management, motivation issues, and interpersonal conflicts. CTOs or managers spend significant time on the human element that has nothing to do with code quality. You're calling technical review "babysitting" while ignoring that managing humans involves literal people management.
You've created a false choice between "prototype" and "production software" as if natural language programming can only produce one or the other. The architectural thinking isn't missing, it's just expressed in natural language rather than syntax. System design, scalability patterns, and business requirements understanding are all still required.
Your assumption that "cognitive load can only be decreased by an engineer who understands your product" ignores that someone can understand their own product better than contractors. You're acting like the goal is to "dismiss humans" when it's about finding more efficient ways to build software. I'd gladly hire other natural language developers with proper vetting, and I actually have plans to do so. And to be sure, I would rather hire the natural language developer who also knows syntax over one who doesn't, all else being equal. Emphasis on all else being equal.
The core issue is you're defending traditional methods on principle rather than engaging with whether the outcomes can actually be achieved differently.
You're calling implementation "trivial" while simultaneously arguing I should keep doing it manually. If it's trivial, why waste time on it? If it's not trivial, then automating it is obviously valuable. You can't have it both ways.
The speed difference isn't just about typing faster, it's about iteration speed. I can test ideas, refine approaches, and pivot architectural decisions in minutes and hours instead of days or weeks. When you're thinking through complex system design, that rapid feedback loop changes everything about how you solve problems.
This is like asking "why use a compiler when you could write assembly?" Higher-level abstractions aren't about reducing rigor, they're about focusing that rigor where it actually matters: on the problem domain, not the implementation mechanics.
You're defending a process based on principle rather than outcomes. I'm optimizing for results.
If you are arguing for some sort of euphoria of getting lines of code from your presumably rigorous requirements much faster, carry on. This goes both ways though: if you are claiming to be extremely rigorous in your process, I find it curious that you are wrestling with language syntax. Are you unfamiliar with the language you're developing with?
If you know the language and have gone as far as defining the problem and solution in testable terms, the implementation should indeed be trivial. Writing the code yourself, and gaining a deeper understanding of the implementation by owning that part of the process, comes at the price of more time spent in the codebase. Offloading it to the model can be quicker, but it comes with the drawback that you will be less familiar with your own project.
The question of "how do I implement this?" is an engineering question, not a "please implement this solution I wrote in English."
You may feel like the implementation mechanics are divorced from the problem domain, but I find that to hardly be the case: on most projects I've worked on, the implementation often informed the requirements and vice versa.
Abstractions are usually adopted when they are equivalent to the process they are abstracting. You may see capability, and indeed models are capable, but they aren't yet as reliable as the thing you allege them to be abstracting.
I think the new workflows feel faster, and may indeed be in many instances, but there is no free lunch.
You're also conflating syntax with implementation. Implementation is the logic, algorithms, and architectural decisions. Syntax is just the notation system for expressing that implementation. When you talk about 'implementation informing requirements,' you're describing the feedback loop of discovering constraints, bottlenecks, and design insights while building systems. That feedback comes from running code and testing behavior, not from typing semicolons. You're essentially arguing that the spelling of your code provides architectural insights, which is absurd.
The real issue here is that you're questioning optimization as if it indicates incompetence. It's like asking why a professional chef uses a food processor instead of chopping everything by hand. The answer isn't incompetence: it's optimization. I can spend my mental energy on architecture, system design, and problem-solving instead of semicolon placement and bracket matching.
By all means, spend your time as you wish! I know some people have a real emotional investment in the craft of writing syntax. Chop, chop, chop!
Also, are you actually using agents or just chatting with a bot and copy-pasting snippets? If you write requirements and let the agent toil until it eventually passes the tests you wrote, that's what I assume you're doing... Oh wait, are you also asking the agents to write the tests?
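To be concrete about the workflow I mean, here's a minimal sketch, assuming pytest and a hypothetical ledger module: you write the tests as the specification, and the agent iterates on the implementation until they pass.

    # Hypothetical example: human-written tests as the agent's specification.
    # `ledger.apply_payment` does not exist yet; the agent's job is to write it
    # so that these tests pass.
    import pytest
    from ledger import apply_payment

    def test_payment_reduces_balance():
        assert apply_payment(balance=100.00, payment=40.00) == 60.00

    def test_overpayment_is_rejected():
        with pytest.raises(ValueError):
            apply_payment(balance=100.00, payment=140.00)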
Here is the thing: if you wrote the code or had the LLM do it for you, who is reviewing it? If you are reviewing it, how is that eliminating actual cognitive load? If you're not reviewing it, and just taking "all tests passed" as the threshold for production, or worse yet, you have an agent code review it for you, then I'm actually suggesting incompetence.
Now, if you are thoroughly reviewing everything and writing your own tests, then congrats, you're not incompetent. But if you're suggesting this is somehow reducing cognitive load, maybe that's true for you, in a "your truth" kind of way. If you simply prefer code reviewing as opposed to code writing, have it your way.
I'm not sure whether you're joining the crowd that says this process makes them 100x more productive in coding tasks, but I find that claim dubious and hilarious.
Yes, I still need to verify that the generated code implements my architectural intent correctly, but that's pattern recognition and evaluation, not generation. It's the difference between proofreading a translation versus translating from scratch. Both require language knowledge, but reviewing existing code for correctness is cognitively lighter than simultaneously managing syntax, debugging, and logic creation.
You are treating all cognitive overhead as equivalent, which is why you can't understand how automating the mechanical parts could be valuable. It's a fundamental category error on your part.
I'm talking about the entire stack of development, from the architecture to the actual implementation. These are intertwined, and assuming they somehow live separately is a significant oversight on your part. You have claimed English is the programming language.
Also, on the topic of conflating: you seem to think that LLMs have become de facto pre-compilers for English as a programming language. How do they do that exactly? In what ways do they compare and contrast with compilers?
You have only stated this as a fact, but what evidence do you have in support of it? As far as I can gather, no one is claiming LLMs are deterministic, so please support your claims to the contrary. Or are you a magician?
You also seem to shift away from any pitfalls of agentic workflows by claiming to be doing all the due diligence whilst also claiming this is easier or faster for you. I sense perhaps that you are of the "lol, nothing matters" class of developers, reviewing some but not all of the work. This will indeed make you faster, but like I said earlier, it's not a cost-free decision.
For individual developers, this is a big deal. You may not have time to wear all the hats at once, so writing the code yourself may be all the code review you have time for. Getting code back from an LLM and reviewing it may feel faster, but like I said, unless it's correct, it's not actually saving time. Maybe it feels that way, but we aren't talking about feelings or vibes, we are talking about delivery.
You've conflated "architectural feedback from running code" with "architectural feedback from typing syntax." I am explicitly saying implementation feedback comes from "running code and testing behavior, not from typing semicolons", yet you keep insisting that the mechanical act of typing syntax somehow provides architectural insights.
You've also conflated "intertwined" with "inseparable." Yes, architecture and implementation inform each other, but that feedback loop comes from executing code and observing system behavior, not from the physical act of typing curly braces. I get the exact same architectural insights from reviewing, testing, and iterating on generated code as I would from hand-typing it.
Most tellingly, you've conflated the process of writing code with the value of understanding code. I'm not eliminating understanding: I'm eliminating the mechanical overhead while maintaining all the strategic thinking. The cognitive load of understanding system design, debugging performance bottlenecks, and architectural trade-offs remains exactly the same whether I typed the implementation or reviewed a generated one.
Your entire argument rests on the false premise that wisdom somehow emerges from keystroke mechanics rather than from reasoning about system behavior. That's like arguing that handwriting essays makes you a better writer than typing them: confusing the delivery mechanism with the intellectual work.
So yes, I understand what conflating means. The question is: do you?
If all that you are really doing is writing your code in English and asking the LLM to re-write it for you in your language of choice (probably JS), then end of discussion. But your tone really implies you're a big fan of the vibes of automation this gives.
Your repeated accusations of "conflating" are a transparent attempt to deflect from the hollowness of your own arguments. You keep yapping about me conflating things. It's ironic because you are the one committing this error by treating the process of software engineering as a set of neatly separable, independent tasks.
You've built your entire argument on a fragile, false dichotomy between "strategic" and "mechanical" work. This is a fantasy. The "mechanical" act of implementation is not divorced from the "strategic" act of architecture. The architectural insights you claim to get from "running code and testing behavior" are a direct result of the specific implementation choices that were made. You don't get to wave a natural language wand, generate a black box of code, and then pretend you have the same deep understanding as someone who has grappled with the trade-offs at every level of the stack.
Implementation informs architecture, and vice versa. By offloading the implementation, you are severing a critical feedback loop and are left with a shallow, surface-level understanding of your own product.
Your food processor and compiler analogies are fundamentally flawed because they compare deterministic tools to a non-deterministic one. A compiler or food processor doesn't get "creative." An LLM does. Building production systems on this foundation isn't "transformative"; it's reckless.
You've avoided every direct question about your actual workflow because there is clearly no rigor there. You're not optimizing for results; you're optimizing for the feeling of speed while sacrificing the deep, hard-won knowledge that actually produces robust, maintainable software. You're not building, you're just generating.