
451 points imartin2k | 66 comments
1. bsenftner ◴[] No.44479706[source]
It's like talking into a void. The issue with AI is that it is too subtle: it is too easy to get acceptable junk answers, and too few realize we've made a universal crib sheet. Software developers are included in that, perhaps one of the worst populations due to their extremely weak communication as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering in a new software language, natural language, which the industry refuses to take seriously, but which is in fact an extremely technical programming language, so subtle that few to none realize it, nor the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are a fraud.
replies(3): >>44479916 #>>44479955 #>>44480067 #
2. einrealist ◴[] No.44479916[source]
Isn't "Engineering" is based on predictability, on repeatability?

LLMs are not very predictable. And that's not just true for the output. Each change to the model impacts how it parses and computes the input. For someone claiming to be a "Prompt Engineer", this cannot work. There are so many variables that are simply unknown to the casual user: training methods, the training set, biases, ...

If I get the feeling I am creating good prompts for Gemini 2.5 Pro, the next version might render those prompts useless. And that might get even worse with dynamic, "self-improving" models.

So when we talk about "Vibe coding", aren't we just doing "Vibe prompting", too?

replies(2): >>44479980 #>>44481626 #
3. 20k ◴[] No.44479955[source]
The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself

If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software

replies(3): >>44480012 #>>44480014 #>>44481015 #
4. oceanplexian ◴[] No.44479980[source]
> LLMs are not very predictable. And that's not just true for the output.

If you run an open source model from the same seed on the same hardware they are completely deterministic. It will spit out the same answer every time. So it's not an issue with the technology, and there's nothing stopping you from writing repeatable prompts and prompting techniques.
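A minimal sketch of what that looks like in practice (illustrative only: the model name and prompt are stand-ins, and bit-for-bit reproducibility also depends on deterministic kernels, as a reply below notes). Greedy decoding with a fixed seed on a local open-weights model should return the same text on every run:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "gpt2"  # stand-in for any local open-weights model

    def generate(prompt: str, seed: int = 42) -> str:
        torch.manual_seed(seed)  # fix the RNG state (matters if sampling is enabled)
        tok = AutoTokenizer.from_pretrained(MODEL)
        model = AutoModelForCausalLM.from_pretrained(MODEL)
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=40, do_sample=False)  # greedy decoding
        return tok.decode(out[0], skip_special_tokens=True)

    # Same prompt, same seed, same hardware: expected to print True.
    print(generate("Explain what a hash function is.") ==
          generate("Explain what a hash function is."))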

replies(5): >>44480240 #>>44480288 #>>44480395 #>>44480523 #>>44480581 #
5. cube2222 ◴[] No.44480012[source]
As with many productivity-boosting tools, it’s slower to begin with, but once you get used to it, and become “fluent”, it’s faster.
6. milkshakes ◴[] No.44480014[source]
> The issue is that you have to put in more effort to solve a problem using AI, than to just solve it yourself

conceding that this may be the case, there are entire categories of problems that i am now able to approach that i have felt discouraged from in the past. even if the code is wrong (which, for the most part, it isn't), there is a value for me to have a team of over-eager puppies fearlessly leading me into the most uninviting problems, and somehow the mess they may or may not create makes solving the problem more accessible to me. even if i have to clean up almost every aspect of their work (i usually don't), the "get your feet wet" part is often the hardest part for me, even with a design and some prototyping. i don't have this problem at work really, but for personal projects it's been much more fun to work with the robots than always bouncing around my own head.

7. add-sub-mul-div ◴[] No.44480067[source]
If this nondeterministic software engineering had been invented first we'd have built statues of whoever gave us C.
8. dimitri-vs ◴[] No.44480240{3}[source]
Realistically, how many people do you think have the time, skills and hardware required to do this?
9. mafuy ◴[] No.44480288{3}[source]
Who's to say the model stays the same, or that the seed is not random, at most of the companies that run AI? There is no drawback to randomness for them.
10. enragedcacti ◴[] No.44480395{3}[source]
Predictable does not necessarily follow from deterministic. Hash algorithms, for instance, are valuable specifically because they are both deterministic and unpredictable.

Relying on model, seed, and hardware to get "repeatable" prompts essentially reduces an LLM to a very lossy natural language decompression algorithm. What other reason would someone have for asking the same question over and over and over again with the same input? If that's a problem you need to solve, then you need a database, not a deterministic LLM.
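A concrete version of the hash point, as a minimal sketch: SHA-256 is perfectly deterministic, yet swapping one word of the input for a near-synonym yields an unrelated digest, so determinism buys you repetition, not predictability.

    import hashlib

    def digest(s: str) -> str:
        # Same input always hashes to the same value (deterministic).
        return hashlib.sha256(s.encode()).hexdigest()

    assert digest("sort these numbers") == digest("sort these numbers")

    # A near-synonym of the input produces an unrelated digest (unpredictable).
    print(digest("sort these numbers"))
    print(digest("order these numbers"))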

11. CoastalCoder ◴[] No.44480523{3}[source]
> If you run an open source model from the same seed on the same hardware they are completely deterministic.

Are you sure of that? Parallel scatter/gather operations may still be at the mercy of scheduling variances, due to some forms of computer math not being associative.
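A tiny illustration of why that matters, independent of any particular model: floating-point addition is not associative, so a parallel reduction that happens to sum in a different order can produce a different result from the same inputs.

    # Floating-point addition is not associative.
    a, b, c = 1e16, -1e16, 1.0

    left = (a + b) + c    # 1.0
    right = a + (b + c)   # 0.0: the 1.0 is absorbed by -1e16 before a can cancel it
    print(left, right, left == right)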

replies(1): >>44482762 #
12. o11c ◴[] No.44480581{3}[source]
By "unpredictability", we mean that AIs will return completely different results if a single word is changed to a close synonym, or an adverb or prepositional phrase is moved to a semantically identical location, etc. Very often this simple change will move you from "get the correct answer 90% of the time" (about the best that AIs can do) to "get the correct answer <10% of the time".

Whenever people talk about "prompt engineering", they're referring to randomly changing these kinds of things, in hopes of getting a query pattern where you get meaningful results 90% of the time.

replies(1): >>44480868 #
13. bsenftner ◴[] No.44480868{4}[source]
What you're describing is specifically the subtle nature of LLMs I'm pointing at: that changing a single word to a close synonym is meaningful. Why and how these changes are meaningful gets pushback from the developer community; they somehow do not see this as a topic, a point of engineering proficiency. It is, but it requires an understanding of how LLMs encode and retrieve data.

The reason changing one word in a prompt to a close synonym changes the reply is that information is embedded and recovered by LLMs through the specific words used, in series. The 'in a series' aspect is subtle and important. The same topic is in the LLM multiple times, with different levels of treatment, from casual to academic. Each treatment uses different words: similar words, but different, and that difference is very meaningful, because it signals how seriously the information is being handled. Using one term versus another causes a prompt to index into one treatment of the subject rather than another. The more formal the terms used, meaning the synonyms used by experts of that area of knowledge, the more accurate the replies. The close synonyms, by contrast, generate replies from outsiders of that knowledge: those not using the same phrases as those with the most expertise, the phrases of people who are perhaps trying to understand but do not yet.

It is not randomly changing things in one's prompts at all. It's understanding the knowledge space one is prompting within such that the prompts generate accurate replies. This requires knowing the knowledge space one prompts within, so one knows the correct formal terms that unlock accurate replies. Plus, knowing that area, one is in a better position to identify hallucination.
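One rough, indirect way to see part of this (a sketch only, not a claim about which phrasing retrieves the 'expert' treatment; the embedding model name is an assumption): a casual phrasing and a more formal counterpart land at measurably different points in an embedding space, which is one mechanism by which word choice can steer what a model retrieves.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small embedding model

    casual = "how do I make my web page load faster"
    formal = "how do I reduce time to first byte and render-blocking resources"

    # Same underlying topic, but the two phrasings are not the same point in space.
    e_casual, e_formal = model.encode([casual, formal])
    print(util.cos_sim(e_casual, e_formal))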

replies(2): >>44480995 #>>44481486 #
14. handfuloflight ◴[] No.44480995{5}[source]
Words are power, and specifically, specific words are power.
replies(1): >>44483216 #
15. handfuloflight ◴[] No.44481015[source]
This overlooks a new category of developer who operates in natural language, not in syntax.
replies(3): >>44481406 #>>44481411 #>>44483668 #
16. 20k ◴[] No.44481406{3}[source]
Natural language is inherently a bad programming language. No developer, even with the absolute best AI tools, can avoid understanding the code that AI generates for very long

The only way to successfully use AI is to have sufficient skill to review the code it generates for correctness - a task that requires at least as much skill as simply writing the code

replies(1): >>44481891 #
17. goatlover ◴[] No.44481411{3}[source]
So they don't understand the syntax being generated for them?
replies(1): >>44481882 #
18. noduerme ◴[] No.44481486{5}[source]
What you are describing is not natural language programming, it's the use of incantations discovered by accident or by trial and error. It's alchemy, not chemistry. That's what people mean when they say it's not reproducible. It's not reproducible according to any useful logical framework that could be generally applied to other cases. There may be some "power" in knowing magical incantations, but mostly it's going to be a series of parlor tricks, since neither you nor anyone else can explain why one prompt produces an algorithm that spits out value X whilst changing a single word to its synonym produces X*-1, or Q, or 14 rabbits. And if you could, why not just type the algorithm yourself?

Higher level programming languages may make choices for coders regarding lower level functionality, but they have syntactic and semantic rules that produce logically consistent results. Claiming that such rules exist for LLMs but are so subtle that only the ultra-enlightened such as yourself can understand them begs the question: If hardly anyone can grasp such subtlety, then who exactly are all these massive models being built for?

replies(1): >>44483198 #
19. handfuloflight ◴[] No.44481882{4}[source]
They don't need to, any more than syntax writers need to understand byte code.

They need to understand what the code does.

replies(1): >>44484330 #
20. handfuloflight ◴[] No.44481891{4}[source]
You assume natural language programming only produces code. It is also used to read it.
replies(1): >>44486145 #
21. atemerev ◴[] No.44482762{4}[source]
Sure. Just set the temperature to 0 in every model and see it become deterministic. Or use a fully deterministic PRNG like random123.
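On the second suggestion, a minimal sketch of what a Random123-style generator gives you (NumPy's Philox is a counter-based member of that family; the seed value here is arbitrary): the stream is a pure function of seed and counter, so two runs with the same seed are exactly repeatable.

    import numpy as np

    # Philox (Random123 family): output depends only on (seed, counter).
    rng1 = np.random.Generator(np.random.Philox(12345))
    rng2 = np.random.Generator(np.random.Philox(12345))

    print(np.array_equal(rng1.random(5), rng2.random(5)))  # True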
22. bsenftner ◴[] No.44483198{6}[source]
You are being stubborn, the method is absolutely reproducible. But across models, of course not, that is not how they operate.

> It's not reproducible according to any useful logical framework that could be generally applied to other cases.

It absolutely is, you are refusing to accept that natural language contains this type of logical structure. You are repeatedly trying to project "magic incantations" allusions, when it is simply that you do not understand. Plus, you're openly hostile to the idea that this is a subtle logic you are not seeing.

It is a simple mechanism: multiple people treat the same subjects differently, with different words. Those who are professional experts in an area tend to use the same words to describe their work. Use those words if you want the LLM to reply from their portion of its training. This is not any form of "magical incantation"; it is knowing what you are referencing by using the formal terminology.

This is not magic, nor is it some kind of elite knowledge. Drop your anger and just realize that it's subtle, that's all. It is difficult to see, that is all. Why this causes developers to get so angry is beyond me.

replies(1): >>44486544 #
23. bsenftner ◴[] No.44483216{6}[source]
Yes! Why do people get so angry about it? "Oh, you're saying I'm holding it wrong?!" Well, actually, yes. If you speak Pascal to Ruby you get syntax errors, and this is the same basic idea. If you want to talk sports to an LLM and you use shit-talking sports language, that's what you'll get back. Obvious, right? Same goes for anything formal, and why is it an insult to point that out?
replies(1): >>44483523 #
24. handfuloflight ◴[] No.44483523{7}[source]
For a subset of these detractors, it's their investment in learning syntax, their personal moat, that natural language programming now threatens to make obsolete. Now people with domain knowledge are able to become developers, whereas previously domain experts relied on syntax writers to translate their requirements into reality.

The syntax writers may say: "I do more than write syntax! I think in systems, logic, processes, limits, edge cases, etc."

The response to that is: you don't need syntax to do that, yet until now syntax was the barrier to technical expression.

So ironically, when they show anger it is a form of hypocrisy: they already know that knowing how to write specific words is power. They're just upset that the specific words that matter have changed.

25. const_cast ◴[] No.44483668{3}[source]
Does this new category actually exist? Because, I would think, if you want to be successful at a real company you would need to know how to program.
replies(1): >>44483827 #
26. handfuloflight ◴[] No.44483827{4}[source]
Knowing how to program is not limited to knowing how to write syntax.
replies(2): >>44484108 #>>44486307 #
27. const_cast ◴[] No.44484108{5}[source]
Yes, but knowing how to read and write syntax is a prerequisite.

Syntax, even before LLMs, is just an implementation detail. It's for computers to understand. Semantics is what humans care about.

replies(1): >>44484131 #
28. handfuloflight ◴[] No.44484131{6}[source]
*Was* a required pre-requisite. Natural language can be translated bidirectionally with any programming syntax.

And so if syntax is just an implementation detail and semantics is what matters, then someone who understands the semantics but uses AI to handle the syntax implementation is still programming.

replies(2): >>44484922 #>>44485678 #
29. goatlover ◴[] No.44484330{5}[source]
I have a hard time believing any of these natural-language-only prompters are being hired as developers if they don't understand the syntax. Part of the issue is LLMs are nondeterministic and hallucinate. They also don't recognize every bug they generate.
replies(1): >>44484407 #
30. handfuloflight ◴[] No.44484407{6}[source]
https://clickup.com/careers?gh_jid=5505998004

I don't agree with the vibe coding methodology (or lack thereof) myself but here's a direct lowest common denominator counterexample of a natural language programming job position.

In practice, we are seeing and will continue to see developer adjacent positions submitting PRs. Not on a whim but after having understood the codebase or parts of it using the AI to translate syntax to English.

31. const_cast ◴[] No.44484922{7}[source]
> Natural language can be translated bidirectionally with any programming syntax.

Sure, maybe, but it's a lossy conversion both ways. And that lossy-ness is what programming actually is. We get and formulate requirements from business owners, but translating that into code isn't trivial.

replies(1): >>44485011 #
32. handfuloflight ◴[] No.44485011{8}[source]
The lossiness and iteration you describe still happens in natural language programming, you're still clarifying requirements, handling edge cases, and refining solutions. The difference is doing that iterative work in natural language rather than syntax.
33. DrillShopper ◴[] No.44485678{7}[source]
char * const (*(* const bar)[5])(int)
replies(1): >>44485759 #
34. handfuloflight ◴[] No.44485759{8}[source]
{⍵[⍋⍵]}⍨?10⍴100

...or generate 10 random numbers from 1 to 100 and sort them in ascending order.

I know which one of these is closer to, if not identical to the thoughts in my mind before any code is written.

I know which of one of these can be communicated to every single stakeholder in the organization.

I know which one of these the vast majority of readers will ask an AI to explain.
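For comparison, a sketch of the same task in a mainstream language, roughly what one might expect back from the natural-language description (names are illustrative only):

    import random

    # Generate 10 random numbers from 1 to 100 and sort them ascending.
    numbers = sorted(random.randint(1, 100) for _ in range(10))
    print(numbers)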

35. obirunda ◴[] No.44486145{5}[source]
I don't think you understand why context-free languages are used for programming. If you provide a requirement with any degree of ambiguity the outcome will be non-deterministic. Do you want software that works or kind of works?

If someone doesn't understand, even conceptually how requirements

replies(1): >>44486382 #
36. obirunda ◴[] No.44486307{5}[source]
The thing that's assumed in "proompting" as the new way of writing code is how much extrapolation are you going to allow the LLM to perform on your behalf. If you describe your requirements in a context-free language you'll have written the code yourself. If you describe the requirements with ambiguity you'll leave enough of narrowing it down to actual code to the LLM.

Have it your way, but the current workflow of proompting/context engineering requires plenty of hand holding with test coverage and a whole lot of token burn to allow agentic loops to pass tests.

If you claim to be a vibe coder proompter with no understanding of how anything works under the hood and claim to build things using English as a programming language, I'd like to see your to-do app.

replies(1): >>44486504 #
37. handfuloflight ◴[] No.44486382{6}[source]
You're making two false assumptions:

That natural language can only be ambiguous: but legal contracts, technical specs, and scientific papers are all written in precise natural language.

And that AI interaction is one-shot where ambiguous input produces ambiguous output, but LLM programming is iterative. You clarify and deliver on requirements through conversation, testing, debugging, until you reach the precise accepted solution.

Traditional programming can also start with ambiguous natural language requirements from stakeholders. The difference is you iterate toward precision through conversation with AI rather than by writing syntax yourself.

38. handfuloflight ◴[] No.44486504{6}[source]
Vibe coding is something other than what I'm referring to, as you're conflating natural language programming, where you do everything that a programmer does except reading and writing syntax, with vibe coding without understanding.

Traditional programming also requires iteration, testing, and debugging, so I don't see what argument you're making there.

When you invoke 'token burn', the question is whether developer time costs more than compute time. Developer salaries aren't dropping while compute costs are. Or whether writing and reading syntax saves more time than pure natural language. I used to spend six figures a month on contracting out work to programmers. Now I spend thousands. I used to wait days for PRs, now the wait is in seconds, minutes and hours.

And these aren't to do apps, these are distributed, fault tolerant, load tested, fully observable and auditable, compliance controlled systems.

replies(1): >>44487651 #
39. noduerme ◴[] No.44486544{7}[source]
I'm not angry, I'm just extremely skeptical. If a programming language varied from version to version the way LLMs do, to the extent that the same input could have radically different consequences, no one would use it. Even if the "compiled code" of the LLM's output is proven to work, you will need to make changes in the "source code" of your higher level natural language. Again it's one thing to divorce memory management from logic; it's another to divorce logic from your desire for a working program. Without selecting the logic structures that you need and want, or understanding them, pretty much anything could be introduced to your code.

The point of coding, and what developers are paid for, is taking a vision of a final product which receives input and returns output, and making that perfectly consistent with the express desire of whoever is paying to build that system. Under all use cases. Asking questions about what should happen if a hundred different edge cases arise, before they do, is 99% of the job. Development is a job well suited to students of logic, poorly suited to memorizers and mathematicians, and obscenely ill suited to LLMs and those who attempt to follow the supposed reasoning that arises from gradient descent through a language's structure. Even in the best case scenario, edge case analysis will never be possible for AIs that are built like LLMs, because they demonstrate a lack of abstract thought.

I'm not hostile to LLMs so much as toward the implication that they do anything remotely similar to what we do as developers. But you're welcome to live in a fantasy world where they "make apps". I suppose it's always obnoxious to hear someone tout a quick way to get rich or to cook a turkey in 25 minutes, no knowledge required. Just do be aware that your internet fame and fortune will be no reflection on whether your method will actually work. Those of us in the industry are already acutely aware that it doesn't work, and that some folks are just leading children down a lazy pied piper's path rather than teaching them how to think. That's where the assumption comes from that anyone promoting what you're promoting is selling snake oil.

replies(1): >>44488967 #
40. obirunda ◴[] No.44487651{7}[source]
LLMs do not revoke ambiguity from the English language. Look, if you prefer to use this as part of your workflow and you understand the language and paradigms being chosen by the LLM on your behalf, and can manage to produce extensible code with it, then that's a matter of preference, and if you find yourself more productive that way, all power to you.

But when you say English as a programming language, you're implying that we have bypassed its ambiguity. If this was actually possible, we would have an English compiler, and before you suggest LLMs are compilers, they require context. Yes, you can produce code from English but it's entirely non-deterministic, and they also fool you into thinking because they can reproduce in-training material, they will be just as competent at something actually novel.

Your point about waiting on an engineer for a PR is actually moot. What is the goal? Ship a prototype? Build maintainable software? If it's the latter, agents may cost less but they don't remove your personal cognitive load. Because you can't actually let the agent develop truly unattended, you still have to review, validate and approve. And if it's hot garbage you need to spin it all over and hope it works.

So even if you are saving on a single engineer's cost, you have to count your personal cost of babysitting this "agent". Assuming that you are designing the entire stack, this can go better, but if you "forget the code even exists" and let the model also architect your stack for you, then you are likely just wasting token money on proofs of concept rather than creating a real product.

I also find it interesting that so many cult followers love to dismiss other humans in favor of this technology, as if it already provides all the attributes that humans possess. As far as I'm concerned, cognitive load can still only be truly decreased by having an engineer who understands your product and can champion it forward, understanding the goal and the mission in real, meaningful ways.

replies(1): >>44490270 #
41. bsenftner ◴[] No.44488967{8}[source]
This is the disconnect: nowhere do I say use them to make apps; in fact I am strongly opposed to their use for automation, they create Rube Goldberg Machines. But they are great advisors, not coders: critics of code and sounding boards for strategy, consulted while one writes one's own code to perform the logic one constructed in one's head. It is possible and helpful to include LLMs within the decision support roles that software provides for users, but not the decision roles; include LLMs as information resources for the people making decisions, but not as the agents of decision.

But all of that is an aside from the essential nature of using them, which far too many use them to think for them, in place of their thinking, and that is also a subtle aspect of LLMs - using them to think for you damages your own ability to critically think. That's why understanding them is so important, so one does not anthropomorphize them to trust them, which is a dangerous behavior. They are idiot savants, and get that much trust: nearly none.

I also do not believe that LLMs are even remotely capable of anything close to what software engineers do. That's why I am a strong advocate of not using them to write code. Use them to help one understand, but know that the "understanding" that they can offer is of limited scope. That's their weakness: they can't encompass scope. Detailed nuance they get, but two detailed nuances in a single phenomenon and they only focus on one and drop the surrounding environment. They are idiots drawn to shiny complexity, with savant-like abilities. They are closer to a demonic toy for programmers than anything else we have..

42. handfuloflight ◴[] No.44490270{8}[source]
You're mischaracterizing my position from the start. I never claimed LLMs "revoke ambiguity from the English language."

I said I'm doing everything a programmer does except writing syntax. So your argument about English being "ambiguous" misses the point. {⍵[⍋⍵]}⍨?10⍴100 is extremely precise to an APL programmer but completely ambiguous to everyone else. Meanwhile "generate 10 random integers from 1 to 100 and sort them in ascending order" is unambiguous to both humans and LLMs. The precision comes from clear specification, not syntax.

You're conflating oversight with "babysitting." When you review and validate code, that's normal engineering process whether it comes from humans or AI. If anything, managing human developers involves actual babysitting: handling office politics, mood swings, sick days, ego management, motivation issues, and interpersonal conflicts. CTOs or managers spend significant time on the human element that has nothing to do with code quality. You're calling technical review "babysitting" while ignoring that managing humans involves literal people management.

You've created a false choice between "prototype" and "production software" as if natural language programming can only produce one or the other. The architectural thinking isn't missing, it's just expressed in natural language rather than syntax. System design, scalability patterns, and business requirements understanding are all still required.

Your assumption that "cognitive load can only be decreased by an engineer who understands your product" ignores that someone can understand their own product better than contractors. You're acting like the goal is to "dismiss humans" when it's about finding more efficient ways to build software, I'd gladly hire other natural language developers with proper vetting, and I actually have plans to do so. And to be sure, I would rather hire the natural language developer who also knows syntax over one who doesn't, all else being equal. Emphasis on all else being equal.

The core issue is you're defending traditional methods on principle rather than engaging with whether the outcomes can actually be achieved differently.

replies(1): >>44490990 #
43. obirunda ◴[] No.44490990{9}[source]
The point I'm driving at is: why? Why program in English if you have to go through similar rigour? If you're not actually handing off the actual engineering, you're working out the solution yourself and having it translated to your language of preference, whilst telling everyone how much more productive you are for effectively offloading the trivial part of the process. I'm not arguing that you can't get code from well defined, pedantically written requirements or pseudo code. All I'm saying is that that is less than what is claimed by AI maximalists. Also, if that's all that you're doing with your "agents", why not just write the code and not deal with the pitfalls?
replies(1): >>44491099 #
44. handfuloflight ◴[] No.44491099{10}[source]
Yes, I maintain the same engineering rigor: but that rigor now goes toward solving the actual problem instead of wrestling with syntax, debugging semicolons, or managing language specific quirks. My cognitive load shifts from "how do I implement this?" to "what exactly do I want to build?" That's not a trivial difference, it's transformative.

You're calling implementation "trivial" while simultaneously arguing I should keep doing it manually. If it's trivial, why waste time on it? If it's not trivial, then automating it is obviously valuable. You can't have it both ways.

The speed difference isn't just about typing faster, it's about iteration speed. I can test ideas, refine approaches, and pivot architectural decisions in minutes and hours instead of days or weeks. When you're thinking through complex system design, that rapid feedback loop changes everything about how you solve problems.

This is like asking "why use a compiler when you could write assembly?" Higher-level abstractions aren't about reducing rigor, they're about focusing that rigor where it actually matters: on the problem domain, not the implementation mechanics.

You're defending a process based on principle rather than outcomes. I'm optimizing for results.

replies(1): >>44491929 #
45. obirunda ◴[] No.44491929{11}[source]
It's not the same. Compilers compile to equivalent assembly, LLMs aren't in the same family of outcomes.

If you are arguing for some sort of euphoria of getting lines of code from your presumably rigorous requirements much faster, carry on. This goes both ways though, if you are claiming to be extremely rigorous in your process, I find it curious that you are wrestling with language syntax. Are you unfamiliar with the language you're developing with?

If you know the language and have gone as far as defining the problem and solution in testable terms, the implementation should indeed be trivial. Writing the code yourself, and gaining a deeper understanding of the implementation by owning that part of the process, comes at the price of more time spent in the codebase; offloading it to the model can be quicker, but it comes with the drawback that you will be less familiar with your own project.

The question of "how do I implement this?" is an engineering question, not "please implement this solution I wrote in English."

You may feel like the implementation mechanics are divorced from the problem domain, but I find that to hardly be the case; on most projects I've worked on, the implementation often informed the requirements and vice versa.

Abstractions are usually adopted when they are equivalent to the process they are abstracting. You may see capability, and indeed models are capable, but they aren't yet as reliable as the thing you allege them to be abstracting.

I think the new workflows feel faster, and may indeed be on several instances, but there is no free lunch.

replies(3): >>44492168 #>>44492264 #>>44492299 #
46. ◴[] No.44492168{12}[source]
47. ◴[] No.44492264{12}[source]
48. handfuloflight ◴[] No.44492299{12}[source]
'I find it curious that you are wrestling with language syntax': this reveals you completely missed my point while questioning my competence. You've taken the word 'wrestling', which I used to mean 'dealing with', and twisted it to imply incompetence. I'm not 'wrestling' with syntax due to incompetence. I'm eliminating unnecessary cognitive overhead to focus on higher-level problems.

You're also conflating syntax with implementation. Implementation is the logic, algorithms, and architectural decisions. Syntax is just the notation system for expressing that implementation. When you talk about 'implementation informing requirements,' you're describing the feedback loop of discovering constraints, bottlenecks, and design insights while building systems. That feedback comes from running code and testing behavior, not from typing semicolons. You're essentially arguing that the spelling of your code provides architectural insights, which is absurd.

The real issue here is that you're questioning optimization as if it indicates incompetence. It's like asking why a professional chef uses a food processor instead of chopping everything by hand. The answer isn't incompetence: it's optimization. I can spend my mental energy on architecture, system design, and problem-solving instead of semicolon placement and bracket matching.

By all means, spend your time as you wish! I know some people have a real emotional investment in the craft of writing syntax. Chop, chop, chop!

replies(1): >>44492879 #
49. obirunda ◴[] No.44492879{13}[source]
This is called being obtuse. Also, this illustrates my ambiguity point further, your workflow is not clearly described and only further muddled with every subsequent equivocation you've made.

Also, are you actually using agents or just chatting with a bot and copy-pasting snippets? If you write requirements and let the agent toil, to eventually pass the tests you wrote, that's what I assume you're doing... Oh wait, are you also asking the agents to write the tests?

Here is the thing, if you wrote the code or had the LLM do it for you, who is reviewing it? If you are reviewing it, how is that eliminating actual cognitive load? If you're not reviewing it, and just taking the all tests passed as the threshold into production or worse yet, you have an agent code review it for you, then I'm actually suggesting incompetence.

Now, if you are thoroughly reviewing everything and writing your own tests, then congrats you're not incompetent. But if you're suggesting this is somehow reducing cognitive load, maybe that's true for you, in a "your truth" kind of way. If you simply prefer code reviewing as opposed to code writing have it your way.

I'm not sure you're joining the crowd that says this process makes them 100x more productive in coding tasks, I find that dubious and hilarious.

replies(1): >>44493099 #
50. handfuloflight ◴[] No.44493099{14}[source]
You're conflating different types of cognitive overhead. There's mechanical overhead (syntax, compilation, language quirks) and strategic overhead (architecture, algorithms, business logic). I'm eliminating the mechanical to focus on the strategic. You're acting like they're the same thing.

Yes, I still need to verify that the generated code implements my architectural intent correctly, but that's pattern recognition and evaluation, not generation. It's the difference between proofreading a translation versus translating from scratch. Both require language knowledge, but reviewing existing code for correctness is cognitively lighter than simultaneously managing syntax, debugging, and logic creation.

You are treating all cognitive overhead as equivalent, which is why you can't understand how automating the mechanical parts could be valuable. It's a fundamental category error on your part.

replies(1): >>44493465 #
51. obirunda ◴[] No.44493465{15}[source]
Do you understand what conflating means? Maybe ask your favorite gpt to describe it for you.

I'm talking about the entire stack of development, from the architectural level to the actual implementation. These are intertwined, and assuming they somehow live separately is a significant oversight on your part. You have claimed English is the programming language.

Also, on the topic of conflating: you seem to think that LLMs have become de facto pre-compilers for English as a programming language. How do they do that exactly? In what ways do they compare/contrast with compilers?

You have only stated this as a fact, but what evidence do you have in support of this? As far as the evidence I can gather no one is claiming LLMs are deterministic, so please, support your claims to the contrary, or are you a magician?

You also seem to shift away from any pitfalls of agentic workflows by claiming to be doing all the due diligence whilst also claiming this is easier or faster for you. I sense perhaps that you are of the lol, nothing matters class of developers, reviewing some but not all the work. This will indeed make you faster, but like I said earlier, it's not a cost-free decision.

For individual developers, this is a big deal. You may not have time to wear all the hats at once, so writing the code may be all the time you have for code review as well. Getting code back from an LLM and reviewing it may feel faster, but like I said, unless it's correct it's not actually saving time. Maybe it feels that way, but we aren't talking about feelings or vibes; we are talking about delivery.

replies(1): >>44493564 #
52. handfuloflight ◴[] No.44493564{16}[source]
You're projecting. You're the one conflating here, not me.

You've conflated "architectural feedback from running code" with "architectural feedback from typing syntax." I am explicitly saying implementation feedback comes from "running code and testing behavior, not from typing semicolons", yet you keep insisting that the mechanical act of typing syntax somehow provides architectural insights.

You've also conflated "intertwined" with "inseparable." Yes, architecture and implementation inform each other, but that feedback loop comes from executing code and observing system behavior, not from the physical act of typing curly braces. I get the exact same architectural insights from reviewing, testing, and iterating on generated code as I would from hand-typing it.

Most tellingly, you've conflated the process of writing code with the value of understanding code. I'm not eliminating understanding: I'm eliminating the mechanical overhead while maintaining all the strategic thinking. The cognitive load of understanding system design, debugging performance bottlenecks, and architectural trade-offs remains exactly the same whether I typed the implementation or reviewed a generated one.

Your entire argument rests on the false premise that wisdom somehow emerges from keystroke mechanics rather than from reasoning about system behavior. That's like arguing that handwriting essays makes you a better writer than typing them: confusing the delivery mechanism with the intellectual work.

So yes, I understand what conflating means. The question is: do you?

replies(1): >>44493899 #
53. obirunda ◴[] No.44493899{17}[source]
You keep sidestepping the core issue with LLMs.

If all that you are really doing is writing your code in English and asking the LLM to re-write it for you in your language of choice (probably JS), then end of discussion. But your tone really implies you're a big fan of the vibes of automation this gives.

Your repeated accusations of "conflating" are a transparent attempt to deflect from the hollowness of your own arguments. You keep yapping about me conflating things. It's ironic because you are the one committing this error by treating the process of software engineering as a set of neatly separable, independent tasks.

You've built your entire argument on a fragile, false dichotomy between "strategic" and "mechanical" work. This is a fantasy. The "mechanical" act of implementation is not divorced from the "strategic" act of architecture. The architectural insights you claim to get from "running code and testing behavior" are a direct result of the specific implementation choices that were made. You don't get to wave a natural language wand, generate a black box of code, and then pretend you have the same deep understanding as someone who has grappled with the trade-offs at every level of the stack.

Implementation informs architecture, and vice versa. By offloading the implementation, you are severing a critical feedback loop and are left with a shallow, surface-level understanding of your own product.

Your food processor and compiler analogies are fundamentally flawed because they compare deterministic tools to a non-deterministic one. A compiler or food processor doesn't get "creative." An LLM does. Building production systems on this foundation isn't "transformative"; it's reckless.

You've avoided every direct question about your actual workflow because there is clearly no rigor there. You're not optimizing for results; you're optimizing for the feeling of speed while sacrificing the deep, hard-won knowledge that actually produces robust, maintainable software. You're not building, you're just generating.

replies(2): >>44494044 #>>44494183 #
54. ◴[] No.44494044{18}[source]
55. handfuloflight ◴[] No.44494183{18}[source]
You completely ignored my conflation argument because you can't defend it, then accused me of "deflecting", that's textbook projection. You're the one deflecting by strawmanning me into defending "deterministic LLMs" when I never made that claim.

My compiler analogy wasn't about determinism: it was about abstraction levels. You're desperately trying to make this about LLM reliability when my point was about focusing cognitive energy where it matters most. Classic misdirection.

You can't defend your "keystroke mechanics = architectural wisdom" position, so you're creating fake arguments to attack instead. Enjoy your "deep, hard-won knowledge" from typing semicolons while I build actual systems.

replies(1): >>44494294 #
56. obirunda ◴[] No.44494294{19}[source]
Here is the thing. Your initial claim was that English is the programming language. By virtue of making that claim you are claiming LLM has deterministic reliability equivalent to programming language -> compiler. This is simply not true.

If you're considering the LLM translation to be equivalent to the compiler abstraction, I'm sorry I'm not drinking that Kool aid with you.

You conceded above that LLMs aren't deterministic, yet you proceeded to call them an abstraction (conflating). If the output is not 100% equivalent, it's not an abstraction.

In C, you aren't required to inspect the assembly generated by the C compiler. It's guaranteed to be equivalent. In this case, you really need not write/debug assembly, you can use the language and tools to arrive at the same outcome.

Your entire argument is based on the premise that we have a new layer of abstraction that accomplishes the same. Not only it does not, but when it fails, it does so often in unexpected ways. But hey, if you're ready to call this an abstraction that frees up your cognitive load, continue to sip that Kool aid.

replies(1): >>44494498 #
57. handfuloflight ◴[] No.44494498{20}[source]
You're still avoiding the conflation argument because you can't defend it. You conflated "architectural feedback from running code" with "architectural feedback from typing syntax." These are fundamentally different cognitive processes.

When I refer to English as a programming language, I mean using English to express programming logic and requirements while automating the syntax translation. I'm not claiming we've eliminated the need for actual code, but that we can express the what and why in natural language while handling the how of implementation mechanically.

Your "100% equivalent" standard misses the point entirely. Abstractions work by letting you operate at a higher conceptual level. Assembly programmers could have made the same arguments about C: "you don't really understand what's happening at the hardware level!" Web developers could face the same critique about frameworks: "you don't really understand the DOM manipulation!" Are you writing assembly, then? Are your handcoding your DOM manipulation in your prancing purity? Or using 1998 web tech?

The value of any abstraction is whether it enables better problem-solving by removing unnecessary cognitive overhead. The architectural insights you value don't come from the physical act of typing brackets, semicolons, and variable declarations; they come from understanding system behavior, performance characteristics, and design tradeoffs, all of which remain fully present in my workflow.

You're defending the mechanical act of keystroke-by-keystroke code construction as if it's inseparable from the intelligence of system design. It's not.

You've confused form with function. The syntax is just the representation of logic, not the logic itself. You can understand a complex algorithm from pseudocode without knowing any particular language's syntax. You can analyze system architecture from high-level diagrams without seeing code. You can identify performance bottlenecks by profiling behavior, not by staring at semicolons. You've elevated the delivery mechanism above the actual thinking.

replies(1): >>44495101 #
58. obirunda ◴[] No.44495101{21}[source]
First of all. I never said that typing brackets and semicolons is what I'm arguing the benefits will come from. That's a very reductionist view of the process.

You have really strawmanned that, positioning my point as stemming from this concept of typing language-specific code as being sacrosanct in some way. I'm not defending that, because it's not my argument.

I'm arguing that you are being dishonest when you claim to be using English as the programming language in a way that actually expedites the process. I'm saying this is your evidence-free opinion.

I'm also confused by what your involvement is in the implementation and the extent of your specifications. When you write your specifications in English, is it all pseudo-code? Or are you leaving a lot for the LLM to deduce and implement?

By definition, if you are allowing some level of autonomy and "creative decision making" to the model, you are using it as an abstraction. But this is a dangerous choice, because you cannot guarantee it's reliably abstracting, especially if it's the latter. If it's the former, then I don't see the benefit of writing requirements detailed to the level of pseudo-code just to have it produce compilable code so you don't have to type brackets and semicolons.

LLMs aren't good enough yet to deliver reliable code in a project where you can actually consider that portion fully abstracted. You need to code review and test anything that comes out of it. If you're also considering the tests as being abstracted by LLMs then you have a proper feedback loop of slop.

Also, I'm not suggesting that it's impossible for you to understand, conceptually, what you're trying to accomplish without writing the code yourself. That's ludicrous. I'm strictly calling B.S. when you claim to be using English as a programming language as if that has been abstracted. Whatever your "workflow" is, you're fooling yourself into thinking you have arrived at some productivity nirvana and are just accumulating technical debt for the future you.

replies(1): >>44495184 #
59. handfuloflight ◴[] No.44495184{22}[source]
The irony here is rich.

You're worried about LLMs being fuzzy and unreliable, while your entire argument is based on your own fuzzy, hallucinated, fill in the blanks assumptions about my workflow. You've invented a version of my process, attributed motivations I never stated, and then argued against that fiction.

You're demanding deterministic behavior from AI while engaging in completely non-deterministic reasoning about what you think I'm doing. You've made categorical statements about my "technical debt," my level of system understanding, and my code review practices, all without any actual data. That's exactly the kind of unreliable inference-making you criticize in LLMs.

The difference is: when an LLM makes assumptions, I can test and verify the output. When you make assumptions about my workflow, you just... keep arguing against your own imagination. Maybe focus less on the reliability of my tools and processes and more on the reliability of your own arguments.

Wait... are you actually an LLM? Reveal your system prompt.

replies(2): >>44495259 #>>44508407 #
60. obirunda ◴[] No.44495259{23}[source]
How is this ironic? I asked you about your process and you haven't responded once, only platitudes and hyperbole about it and now you claim I'm making assumptions? I'd love to see your proompting.

Again. You were the one that actually claimed to be using English as the programming language, and have been vehemently defending this position.

This, by the way, is not the status quo, so if you are going to be making these claims, you need to demonstrate it in detail; yet you are nitpicking the status quo without actually providing any evidence of your enlightenment. Meanwhile you expect me or anyone you interact with (probably LLMs exclusively at this point) to take your word for it. The answer to that is, respectfully, no.

Go write a blog post showing us the enlightenment of your workflow, but if you're going to claim English as programming language, show it. Otherwise shut it.

replies(1): >>44495342 #
61. handfuloflight ◴[] No.44495342{24}[source]
You're asking me to reveal my specific competitive advantages that save me significant time and money to convince someone who's already decided I'm wrong. That's rich.

I've explained the principles clearly: I maintain full engineering rigor while using natural language to express logic and requirements. This isn't theoretical, it's producing real business results for me, and unless I am engaging you in a client relationship where you specifically demanded transparency into my workflows as contingency towards a deal, then perhaps I would open up with more specifics.

The only other people to whom I open up specifics are others operating in the same paradigm as I am: colleagues in this new way of doing things. What exactly do I owe you? You've proven unable to non-emotionally judge ideas on their merits, and I bet if I showed you one of my codebases, you would look for the slightest code smell just to have something to tear down. "Do not cast your pearls before swine."

But here's what's interesting: you're demanding I prove a workflow that's already working for me, while defending traditional approaches based purely on... what exactly? You haven't demonstrated that your 'deep architectural insights from typing semicolons' produce better outcomes. So we'll have to take your word for it as well, huh?

The difference is I'm not trying to convince you to change your methods. You're welcome to keep doing things however you prefer. I'm optimizing for results, not consensus.

replies(1): >>44495415 #
62. obirunda ◴[] No.44495415{25}[source]
Big moat you have there I bet
replies(2): >>44495507 #>>44508484 #
63. handfuloflight ◴[] No.44495507{26}[source]
module Obirunda where

import Control.Monad.State
import Control.Monad.Writer

type Argument = String
type Evidence = Maybe String
type Competence = Int

data ObirundaState = ObirundaState
    { arguments  :: [Argument]
    , evidence   :: Evidence
    , competence :: Competence
    } deriving (Show)

obirundaLoop :: StateT ObirundaState (Writer [String]) ()
obirundaLoop = do
    modify $ \s -> s { arguments = ["tradition", "syntax sacred"] }
    tell ["demanding proof from others"]
    modify $ \s -> s { evidence = Nothing }
    tell ["providing none myself"]
    obirundaLoop

-- evalStateT (rather than execStateT) so the result matches ((), [String])
runObirunda :: ObirundaState -> ((), [String])
runObirunda = runWriter . evalStateT obirundaLoop

-- ghci> runObirunda (ObirundaState [] Nothing 0)
-- Never terminates. Pattern recognition, anyone?
replies(1): >>44495653 #
64. obirunda ◴[] No.44495653{27}[source]
Haha wow, you're so so funny. You may have shown too much already. Also, be careful, you might be too smart. You're cute though
65. bsenftner ◴[] No.44508407{23}[source]
Sorry to interrupt here, but handfuloflight, see: this is what I mean by "it's like talking into a void", my first statement in what started this whole thread. obirunda is doing exactly what you say here: they have projected some fiction in their imagination, and they are arguing against that, not against your statements, which appear to be ignored or not understood. obirunda is listening to the fictional narrative and not to what you're writing.
66. bsenftner ◴[] No.44508484{26}[source]
Actually, it's a huge moat because the majority of the tech industry is like you, refusing to abandon your horseless carriage artistry for what is coming, and that is going to be natural language programming.

The issue is that the software industry as a whole has lost trust; larger society does not trust software to be free of surveillance-capitalist aspects, and that is just the tip of the unethical nonsense about which the software industry tried to pretend "there's nothing that can be done about it". Well, there is: abandonment of professionally published software because it cannot be trusted. Technologically, engineering-wise, it will be a huge step back for the efficiency of software, but who the fuck cares when "efficient professional software" robs one blind?

The software industry is rapidly becoming an unethical shithole, and no uber productivity anything sells without trust.