451 points | imartin2k | 1 comment
bsenftner ◴[] No.44479706[source]
It's like talking into a void. The issue with AI is that it is too subtle: too easy to get acceptable junk answers, and too subtle for the majority to realize we've made a universal crib sheet. Software developers are included, perhaps as one of the worst populations, given their extremely weak communication as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering in a new software language, natural language, which the industry refuses to take seriously, but which is in fact an extremely technical programming language, so subtle that few to none of you realize it, nor the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are fraud.
replies(3): >>44479916 #>>44479955 #>>44480067 #
20k ◴[] No.44479955[source]
The issue is that you have to put in more effort to solve a problem using AI than to just solve it yourself

If I have to do extensive, subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software

replies(3): >>44480012 #>>44480014 #>>44481015 #
handfuloflight ◴[] No.44481015[source]
This overlooks a new category of developer who operates in natural language, not in syntax.
replies(3): >>44481406 #>>44481411 #>>44483668 #
const_cast ◴[] No.44483668[source]
Does this new category actually exist? Because, I would think, if you want to be successful at a real company you would need to know how to program.
replies(1): >>44483827 #
handfuloflight ◴[] No.44483827[source]
Knowing how to program is not limited to knowing how to write syntax.
replies(2): >>44484108 #>>44486307 #
obirunda ◴[] No.44486307[source]
The thing that's assumed in "proompting" as the new way of writing code is how much extrapolation you're going to allow the LLM to perform on your behalf. If you describe your requirements in a context-free language, you'll have written the code yourself. If you describe the requirements with ambiguity, you'll leave the work of narrowing them down to actual code to the LLM.
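
Concretely: a one-line requirement like "sort the users by name" is perfectly good English and still underdetermines the code. A minimal Python sketch (the data is made up for illustration), where every line below is a faithful reading of that same sentence:

    users = ["bob", "Alice", "ada"]

    print(sorted(users))                 # case-sensitive: Alice, ada, bob
    print(sorted(users, key=str.lower))  # case-insensitive: ada, Alice, bob
    print(sorted(users, reverse=True))   # "by name" never said ascending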

Have it your way, but the current workflow of proompting/context engineering requires plenty of hand-holding with test coverage and a whole lot of token burn to let agentic loops pass tests.
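
That loop, roughly, as a minimal Python sketch; generate is a caller-supplied stand-in for the LLM call, not any real agent framework's API:

    import subprocess
    from typing import Callable

    def agentic_loop(generate: Callable[[str], str], spec: str,
                     max_iters: int = 5) -> str | None:
        """Generate code from a spec, run the tests, feed failures back."""
        prompt = spec
        for _ in range(max_iters):
            code = generate(prompt)            # one LLM call per attempt
            with open("candidate.py", "w") as f:
                f.write(code)
            result = subprocess.run(["pytest", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code                    # tests pass, loop ends
            # every retry re-sends the spec plus failures: the token burn
            prompt = spec + "\n\nTests failed:\n" + result.stdout
        return None                            # gave up after max_iters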

If you claim to be a vibe coder proompter with no understanding of how anything works under the hood and claim to build things using English as a programming language, I'd like to see your to-do app.

replies(1): >>44486504 #
handfuloflight ◴[] No.44486504[source]
Vibe coding is something other than what I'm referring to. You're conflating natural language programming, where you do everything a programmer does except read and write syntax, with vibe coding without understanding.

Traditional programming also requires iteration, testing, and debugging, so I don't see what argument you're making there.

When you invoke 'token burn', the question is whether developer time costs more than compute time, and whether writing and reading syntax saves more time than pure natural language does. Developer salaries aren't dropping while compute costs are. I used to spend six figures a month contracting work out to programmers; now I spend thousands. I used to wait days for PRs; now the wait is seconds, minutes, or hours.
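
Back-of-envelope, with illustrative numbers rather than my actual figures:

    contractor_rate = 100.0       # USD/hour, assumed
    task_hours = 8.0              # human time for a mid-size PR, assumed

    tokens_per_attempt = 200_000  # one agentic iteration, assumed
    attempts = 5                  # retries until the tests pass, assumed
    usd_per_mtok = 10.0           # blended token price, assumed
    review_hours = 1.0            # I still review the result

    human_cost = contractor_rate * task_hours
    agent_cost = (tokens_per_attempt * attempts / 1e6) * usd_per_mtok \
                 + contractor_rate * review_hours
    print(f"contractor: ${human_cost:.0f}, agent + review: ${agent_cost:.0f}")
    # contractor: $800, agent + review: $110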

And these aren't to do apps, these are distributed, fault tolerant, load tested, fully observable and auditable, compliance controlled systems.

replies(1): >>44487651 #
obirunda ◴[] No.44487651[source]
LLMs do not revoke the ambiguity of the English language. Look, if you prefer to use this as part of your workflow, and you understand the language and paradigms being chosen by the LLM on your behalf, and can manage to produce extensible code with it, then that's a matter of preference, and if you find yourself more productive that way, all power to you.

But when you say English is a programming language, you're implying that we have bypassed its ambiguity. If that were actually possible, we would have an English compiler, and before you suggest LLMs are compilers: they require context. Yes, you can produce code from English, but it's entirely non-deterministic, and LLMs also fool you into thinking that because they can reproduce in-training material, they will be just as competent at something actually novel.
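
You can watch this yourself: send the same prompt several times and diff the results. A minimal sketch, assuming the openai Python client; the model name is illustrative:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set
    prompt = "Write a Python function that deduplicates a list, preserving order."

    outputs = set()
    for _ in range(3):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,        # even at 0, identical output isn't guaranteed
        )
        outputs.add(resp.choices[0].message.content)

    print(len(outputs), "distinct completions from 3 identical calls")
    # a compiler handed the same source three times would always print 1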

Your point about waiting on an engineer for a PR is actually moot. What is the goal? Ship a prototype? Build maintainable software? If it's the latter, agents may cost less, but they don't remove your personal cognitive load. Because you can't actually let the agent develop truly unattended, you still have to review, validate, and approve. And if it's hot garbage, you need to start the whole thing over and hope it works.

So even if you are saving on a single engineer's cost, you have to count your personal cost of babysitting this "agent". Assuming that you are designing the entire stack, this can go better, but if you "forget the code even exists" and let the model also architect your stack for you, then you are likely just wasting token money on proofs-of-concept rather than creating a real product.

I also find it interesting that so many cult followers love to dismiss other humans in favor of this technology, as if it already provides all the attributes that humans possess. As far as I'm concerned, cognitive load can still only be truly decreased by having an engineer who understands your product and can champion it forward, understanding the goal and the mission in real, meaningful ways.

replies(1): >>44490270 #
handfuloflight ◴[] No.44490270[source]
You're mischaracterizing my position from the start. I never claimed LLMs "revoke ambiguity from the English language."

I said I'm doing everything a programmer does except writing syntax. So your argument about English being "ambiguous" misses the point. {⍵[⍋⍵]}?10⍴100 is extremely precise to an APL programmer but completely ambiguous to everyone else. Meanwhile "generate 10 random integers from 1 to 100 and sort them in ascending order" is unambiguous to both humans and LLMs. The precision comes from clear specification, not syntax.
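
Spelled out in Python, both specifications denote the same small program:

    import random

    nums = [random.randint(1, 100) for _ in range(10)]  # ?10⍴100
    print(sorted(nums))                                 # {⍵[⍋⍵]}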

You're conflating oversight with "babysitting." When you review and validate code, that's normal engineering process whether it comes from humans or AI. If anything, managing human developers involves actual babysitting: handling office politics, mood swings, sick days, ego management, motivation issues, and interpersonal conflicts. CTOs or managers spend significant time on the human element that has nothing to do with code quality. You're calling technical review "babysitting" while ignoring that managing humans involves literal people management.

You've created a false choice between "prototype" and "production software" as if natural language programming can only produce one or the other. The architectural thinking isn't missing, it's just expressed in natural language rather than syntax. System design, scalability patterns, and business requirements understanding are all still required.

Your assumption that "cognitive load can only be decreased by an engineer who understands your product" ignores that someone can understand their own product better than contractors do. You're acting like the goal is to "dismiss humans" when it's about finding more efficient ways to build software. I'd gladly hire other natural language developers with proper vetting, and I actually have plans to do so. And to be sure, I would rather hire the natural language developer who also knows syntax over one who doesn't, all else being equal. Emphasis on all else being equal.

The core issue is you're defending traditional methods on principle rather than engaging with whether the outcomes can actually be achieved differently.

replies(1): >>44490990 #
obirunda ◴[] No.44490990[source]
The point I'm driving at is: why? Why program in English if you have to go through similar rigour? If you're not actually handing off the engineering, you're working out the solution yourself and having it translated into your language of preference, whilst telling everyone how much more productive you are for effectively offloading the trivial part of the process. I'm not arguing that you can't get code from well-defined, pedantically written requirements or pseudo-code. All I'm saying is that this is less than what AI maximalists claim. Also, if that's all you're doing with your "agents", why not just write the code and skip the pitfalls?
replies(1): >>44491099 #
handfuloflight ◴[] No.44491099[source]
Yes, I maintain the same engineering rigor, but that rigor now goes toward solving the actual problem instead of wrestling with syntax, debugging semicolons, or managing language-specific quirks. My cognitive load shifts from "how do I implement this?" to "what exactly do I want to build?" That's not a trivial difference; it's transformative.

You're calling implementation "trivial" while simultaneously arguing I should keep doing it manually. If it's trivial, why waste time on it? If it's not trivial, then automating it is obviously valuable. You can't have it both ways.

The speed difference isn't just about typing faster, it's about iteration speed. I can test ideas, refine approaches, and pivot architectural decisions in minutes and hours instead of days or weeks. When you're thinking through complex system design, that rapid feedback loop changes everything about how you solve problems.

This is like asking "why use a compiler when you could write assembly?" Higher-level abstractions aren't about reducing rigor, they're about focusing that rigor where it actually matters: on the problem domain, not the implementation mechanics.

You're defending a process based on principle rather than outcomes. I'm optimizing for results.

replies(1): >>44491929 #
obirunda ◴[] No.44491929[source]
It's not the same. Compilers compile to equivalent assembly; LLMs aren't in the same family of outcomes.

If you are arguing for some sort of euphoria of getting lines of code from your presumably rigorous requirements much faster, carry on. This goes both ways, though: if you are claiming to be extremely rigorous in your process, I find it curious that you are wrestling with language syntax. Are you unfamiliar with the language you're developing in?

If you know the language and have gone as far as defining the problem and solution in testable terms, the implementation should indeed be trivial. Writing the code yourself and gaining a deeper understanding of the implementation, where you stand to gain from owning that part of the process, comes at the price of more time spent in the codebase; offloading it to the model can be quicker, but it comes with the drawback that you will be less familiar with your own project.

The question of "how do I implement this?" is an engineering question, not "please implement this solution I wrote in English."

You may feel like the implementation mechanics are divorced from the problem domain, but I find that to hardly be the case; on most projects I've worked on, the implementation often informed the requirements and vice versa.

Abstractions are usually adopted when they are equivalent to the process they are abstracting. You may see capability, and indeed models are capable, but they aren't yet as reliable as the thing you allege them to be abstracting.

I think the new workflows feel faster, and may indeed be in several instances, but there is no free lunch.

replies(3): >>44492168 #>>44492264 #>>44492299 #
handfuloflight ◴[] No.44492299[source]
'I find it curious that you are wrestling with language syntax': this reveals you completely missed my point while questioning my competence. You've taken the word 'wrestling', which I used to mean 'dealing with', and twisted it to imply incompetence. I'm not 'wrestling' with syntax due to incompetence. I'm eliminating unnecessary cognitive overhead to focus on higher-level problems.

You're also conflating syntax with implementation. Implementation is the logic, algorithms, and architectural decisions. Syntax is just the notation system for expressing that implementation. When you talk about 'implementation informing requirements,' you're describing the feedback loop of discovering constraints, bottlenecks, and design insights while building systems. That feedback comes from running code and testing behavior, not from typing semicolons. You're essentially arguing that the spelling of your code provides architectural insights, which is absurd.

The real issue here is that you're questioning optimization as if it indicates incompetence. It's like asking why a professional chef uses a food processor instead of chopping everything by hand. The answer isn't incompetence: it's optimization. I can spend my mental energy on architecture, system design, and problem-solving instead of semicolon placement and bracket matching.

By all means, spend your time as you wish! I know some people have a real emotional investment in the craft of writing syntax. Chop, chop, chop!

replies(1): >>44492879 #
obirunda ◴[] No.44492879[source]
This is called being obtuse. Also, this illustrates my ambiguity point further, your workflow is not clearly described and only further muddled with every subsequent equivocation you've made.

Also, are you actually using agents or just chatting with a bot and copy-pasting snippets? If you write requirements and let the agent toil, to eventually pass the tests you wrote, that's what I assume you're doing... Oh wait, are you also asking the agents to write the tests?

Here is the thing, if you wrote the code or had the LLM do it for you, who is reviewing it? If you are reviewing it, how is that eliminating actual cognitive load? If you're not reviewing it, and just taking the all tests passed as the threshold into production or worse yet, you have an agent code review it for you, then I'm actually suggesting incompetence.

Now, if you are thoroughly reviewing everything and writing your own tests, then congrats you're not incompetent. But if you're suggesting this is somehow reducing cognitive load, maybe that's true for you, in a "your truth" kind of way. If you simply prefer code reviewing as opposed to code writing have it your way.

I'm not sure you're joining the crowd that says this process makes them 100x more productive in coding tasks, I find that dubious and hilarious.

replies(1): >>44493099 #
handfuloflight ◴[] No.44493099[source]
You're conflating different types of cognitive overhead. There's mechanical overhead (syntax, compilation, language quirks) and strategic overhead (architecture, algorithms, business logic). I'm eliminating the mechanical to focus on the strategic. You're acting like they're the same thing.

Yes, I still need to verify that the generated code implements my architectural intent correctly, but that's pattern recognition and evaluation, not generation. It's the difference between proofreading a translation versus translating from scratch. Both require language knowledge, but reviewing existing code for correctness is cognitively lighter than simultaneously managing syntax, debugging, and logic creation.

You are treating all cognitive overhead as equivalent, which is why you can't understand how automating the mechanical parts could be valuable. It's a fundamental category error on your part.

replies(1): >>44493465 #
obirunda ◴[] No.44493465[source]
Do you understand what conflating means? Maybe ask your favorite gpt to describe it for you.

I'm talking about the entire stack of development, from the architectural level down to the actual implementation. These are intertwined, and assuming they somehow live separately is a significant oversight on your part. You have claimed English is the programming language.

Also, on the topic of conflating: you seem to think that LLMs have become de facto pre-compilers for English as a programming language. How do they do that, exactly? In what ways do they compare or contrast with compilers?

You have only stated this as a fact, but what evidence do you have in support of this? As far as the evidence I can gather no one is claiming LLMs are deterministic, so please, support your claims to the contrary, or are you a magician?

You also seem to shift away from any pitfalls of agentic workflows by claiming to be doing all the due diligence whilst also claiming this is easier or faster for you. I sense perhaps that you are of the "lol, nothing matters" class of developers, reviewing some but not all of the work. This will indeed make you faster, but like I said earlier, it's not a cost-free decision.

For individual developers, this is a big deal. You may not have time to wear all the hats at once, so writing the code may be all the time you have for code review as well. Getting code back from an LLM and reviewing it may feel faster, but like I said, unless it's correct, it's not actually saving time. Maybe it feels that way, but we aren't talking about feelings or vibes; we are talking about delivery.

replies(1): >>44493564 #
handfuloflight ◴[] No.44493564[source]
You're projecting. You're the one conflating here, not me.

You've conflated "architectural feedback from running code" with "architectural feedback from typing syntax." I am explicitly saying implementation feedback comes from "running code and testing behavior, not from typing semicolons", yet you keep insisting that the mechanical act of typing syntax somehow provides architectural insights.

You've also conflated "intertwined" with "inseparable." Yes, architecture and implementation inform each other, but that feedback loop comes from executing code and observing system behavior, not from the physical act of typing curly braces. I get the exact same architectural insights from reviewing, testing, and iterating on generated code as I would from hand-typing it.

Most tellingly, you've conflated the process of writing code with the value of understanding code. I'm not eliminating understanding: I'm eliminating the mechanical overhead while maintaining all the strategic thinking. The cognitive load of understanding system design, debugging performance bottlenecks, and architectural trade-offs remains exactly the same whether I typed the implementation or reviewed a generated one.

Your entire argument rests on the false premise that wisdom somehow emerges from keystroke mechanics rather than from reasoning about system behavior. That's like arguing that handwriting essays makes you a better writer than typing them: confusing the delivery mechanism with the intellectual work.

So yes, I understand what conflating means. The question is: do you?

replies(1): >>44493899 #
obirunda ◴[] No.44493899[source]
You keep sidestepping the core issue with LLMs.

If all that you are really doing is writing your code in English and asking the LLM to re-write it for you in your language of choice (probably JS), then end of discussion. But your tone really implies you're a big fan of the vibes of automation this gives.

Your repeated accusations of "conflating" are a transparent attempt to deflect from the hollowness of your own arguments. You keep yapping about me conflating things. It's ironic because you are the one committing this error by treating the process of software engineering as a set of neatly separable, independent tasks.

You've built your entire argument on a fragile, false dichotomy between "strategic" and "mechanical" work. This is a fantasy. The "mechanical" act of implementation is not divorced from the "strategic" act of architecture. The architectural insights you claim to get from "running code and testing behavior" are a direct result of the specific implementation choices that were made. You don't get to wave a natural language wand, generate a black box of code, and then pretend you have the same deep understanding as someone who has grappled with the trade-offs at every level of the stack.

Implementation informs architecture, and vice versa. By offloading the implementation, you are severing a critical feedback loop and are left with a shallow, surface-level understanding of your own product.

Your food processor and compiler analogies are fundamentally flawed because they compare deterministic tools to a non-deterministic one. A compiler or food processor doesn't get "creative." An LLM does. Building production systems on this foundation isn't "transformative"; it's reckless.

You've avoided every direct question about your actual workflow because there is clearly no rigor there. You're not optimizing for results; you're optimizing for the feeling of speed while sacrificing the deep, hard-won knowledge that actually produces robust, maintainable software. You're not building, you're just generating.

replies(2): >>44494044 #>>44494183 #
handfuloflight ◴[] No.44494183[source]
You completely ignored my conflation argument because you can't defend it, then accused me of "deflecting", that's textbook projection. You're the one deflecting by strawmanning me into defending "deterministic LLMs" when I never made that claim.

My compiler analogy wasn't about determinism: it was about abstraction levels. You're desperately trying to make this about LLM reliability when my point was about focusing cognitive energy where it matters most. Classic misdirection.

You can't defend your "keystroke mechanics = architectural wisdom" position, so you're creating fake arguments to attack instead. Enjoy your "deep, hard-won knowledge" from typing semicolons while I build actual systems.

replies(1): >>44494294 #
obirunda ◴[] No.44494294[source]
Here is the thing. Your initial claim was that English is the programming language. By virtue of making that claim, you are claiming the LLM has deterministic reliability equivalent to the programming language -> compiler pipeline. This is simply not true.

If you're considering the LLM translation to be equivalent to the compiler abstraction, I'm sorry I'm not drinking that Kool aid with you.

You conceded above that LLMs aren't deterministic, yet you proceeded to call them an abstraction (conflating). If the output is not 100% equivalent, it's not an abstraction.

In C, you aren't required to inspect the assembly generated by the C compiler. It's guaranteed to be equivalent. In this case, you really need not write/debug assembly, you can use the language and tools to arrive at the same outcome.

Your entire argument is based on the premise that we have a new layer of abstraction that accomplishes the same. Not only it does not, but when it fails, it does so often in unexpected ways. But hey, if you're ready to call this an abstraction that frees up your cognitive load, continue to sip that Kool aid.

replies(1): >>44494498 #
handfuloflight ◴[] No.44494498[source]
You're still avoiding the conflation argument because you can't defend it. You conflated "architectural feedback from running code" with "architectural feedback from typing syntax." These are fundamentally different cognitive processes.

When I refer to English as a programming language, I mean using English to express programming logic and requirements while automating the syntax translation. I'm not claiming we've eliminated the need for actual code, but that we can express the what and why in natural language while handling the how of implementation mechanically.

Your "100% equivalent" standard misses the point entirely. Abstractions work by letting you operate at a higher conceptual level. Assembly programmers could have made the same arguments about C: "you don't really understand what's happening at the hardware level!" Web developers could face the same critique about frameworks: "you don't really understand the DOM manipulation!" Are you writing assembly, then? Are your handcoding your DOM manipulation in your prancing purity? Or using 1998 web tech?

The value of any abstraction is whether it enables better problem-solving by removing unnecessary cognitive overhead. The architectural insights you value don't come from the physical act of typing brackets, semicolons, and variable declarations; they come from understanding system behavior, performance characteristics, and design tradeoffs, all of which remain fully present in my workflow.

You're defending the mechanical act of keystroke-by-keystroke code construction as if it's inseparable from the intelligence of system design. It's not.

You've confused form with function. The syntax is just the representation of logic, not the logic itself. You can understand a complex algorithm from pseudocode without knowing any particular language's syntax. You can analyze system architecture from high-level diagrams without seeing code. You can identify performance bottlenecks by profiling behavior, not by staring at semicolons. You've elevated the delivery mechanism above the actual thinking.
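
Take binary search: the idea survives any notation. A minimal sketch, where the prose in the comments and the Python below say the same thing:

    def binary_search(xs: list[int], target: int) -> int:
        """Halve the search window until the target is found or the window is empty."""
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2   # inspect the middle element
            if xs[mid] == target:
                return mid
            if xs[mid] < target:
                lo = mid + 1       # discard the left half
            else:
                hi = mid - 1       # discard the right half
        return -1                  # not present

    assert binary_search([2, 5, 8, 13, 21], 13) == 3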

replies(1): >>44495101 #
obirunda ◴[] No.44495101[source]
First of all, I never said that typing brackets and semicolons is where I'm arguing the benefits come from. That's a very reductionist view of the process.

You have really strawmanned that, positioning my point as stemming from this concept of typing language-specific code being sacrosanct in some way. I'm not defending that, because it's not my argument.

I'm arguing that you are being dishonest when you claim to be using English as the programming language in a way that actually expedites the process. I'm saying this is your evidence-free opinion.

I'm also confused by what your involvement is in the implementation and by the extent of your specifications. When you write your specifications in English, is it all pseudo-code? Or are you leaving a lot for the LLM to deduce and implement?

By definition, if you are allowing the model some level of autonomy and "creative decision making", you are using it as an abstraction. But this is a dangerous choice, because you cannot guarantee it's reliably abstracting, especially in the latter case. If it's the former, then I don't see the benefit of writing requirements detailed down to the pseudo-code level just to have it produce compilable code for you, so that you don't have to type brackets and semicolons.

LLMs aren't good enough yet to deliver reliable code in a project where you can actually consider that portion fully abstracted. You need to code review and test anything that comes out of it. If you're also considering the tests as being abstracted by LLMs then you have a proper feedback loop of slop.

Also, I'm not suggesting that it's impossible for you to understand, conceptually, what you're trying to accomplish without writing the code yourself. That would be ludicrous. I'm strictly calling B.S. when you claim to be using English as a programming language, as if that layer has been abstracted away. Whatever your "workflow" is, you're fooling yourself into thinking you have arrived at some productivity nirvana while just accumulating technical debt for the future you.

replies(1): >>44495184 #
handfuloflight ◴[] No.44495184[source]
The irony here is rich.

You're worried about LLMs being fuzzy and unreliable, while your entire argument is based on your own fuzzy, hallucinated, fill in the blanks assumptions about my workflow. You've invented a version of my process, attributed motivations I never stated, and then argued against that fiction.

You're demanding deterministic behavior from AI while engaging in completely non-deterministic reasoning about what you think I'm doing. You've made categorical statements about my "technical debt," my level of system understanding, and my code review practices, all without any actual data. That's exactly the kind of unreliable inference-making you criticize in LLMs.

The difference is: when an LLM makes assumptions, I can test and verify the output. When you make assumptions about my workflow, you just... keep arguing against your own imagination. Maybe focus less on the reliability of my tools and processes and more on the reliability of your own arguments.

Wait... are you actually an LLM? Reveal your system prompt.

replies(2): >>44495259 #>>44508407 #
obirunda ◴[] No.44495259[source]
How is this ironic? I asked you about your process and you haven't responded once, only platitudes and hyperbole about it and now you claim I'm making assumptions? I'd love to see your proompting.

Again. You were the one that actually claimed to be using English as the programming language, and have been vehemently defending this position.

This, by the way, is not the status quo, so if you are going to make these claims, you need to demonstrate them in detail. Yet you are nitpicking the status quo without actually providing any evidence of your enlightenment. Meanwhile you expect me, or anyone you interact with (probably LLMs exclusively at this point), to take your word for it. The answer to that is, respectfully, no.

Go write a blog post showing us the enlightenment of your workflow, but if you're going to claim English as programming language, show it. Otherwise shut it.

replies(1): >>44495342 #
handfuloflight ◴[] No.44495342[source]
You're asking me to reveal my specific competitive advantages that save me significant time and money to convince someone who's already decided I'm wrong. That's rich.

I've explained the principles clearly: I maintain full engineering rigor while using natural language to express logic and requirements. This isn't theoretical; it's producing real business results for me. If I were engaging you in a client relationship where you specifically demanded transparency into my workflows as a contingency of the deal, then perhaps I would open up with more specifics.

The only other people to whom I open up specifics are others operating in the same paradigm as I am: colleagues in this new way of doing things. What exactly do I owe you? You've proven unable to non-emotionally judge ideas on their merits, and I bet if I showed you one of my codebases, you would look for the smallest code smell just to have something to tear down. "Do not cast your pearls before swine."

But here's what's interesting: you're demanding I prove a workflow that's already working for me, while defending traditional approaches based purely on... what exactly? You haven't demonstrated that your 'deep architectural insights from typing semicolons' produce better outcomes. So we'll have to take your word for it as well, huh?

The difference is I'm not trying to convince you to change your methods. You're welcome to keep doing things however you prefer. I'm optimizing for results, not consensus.

replies(1): >>44495415 #
obirunda ◴[] No.44495415[source]
Big moat you have there I bet
replies(2): >>44495507 #>>44508484 #
bsenftner ◴[] No.44508484[source]
Actually, it's a huge moat because the majority of the tech industry is like you, refusing to abandon your horseless carriage artistry for what is coming, and that is going to be natural language programming.

The issue is that the software industry as a whole has lost trust. Larger society does not trust software to be free of surveillance-capitalist aspects, and that is just the tip of the unethical nonsense about which the software industry tried to pretend "there's nothing that can be done". Well, there is: abandonment of professionally published software, because it cannot be trusted. Technologically and engineering-wise it will be a huge step back for the efficiency of software, but who the fuck cares when "efficient professional software" robs one blind?

The software industry is rapidly becoming an unethical shithole, and no uber productivity anything sells without trust.