
451 points | imartin2k | 1 comment
bsenftner ◴[] No.44479706[source]
It's like talking into a void. The issue with AI is that it is too subtle: it is too easy to get acceptable junk answers, and too few people realize we've made a universal crib sheet. Software developers are included, and are perhaps one of the worst populations, given how extremely weak their communication is as a community. To be repeatedly successful with AI, one has to exert mental effort to prompt it effectively, but pretty much nobody is willing to even consider that. Attempts to discuss the language aspects of using an LLM get ridiculed as 'prompt engineering is not engineering' and dismissed, while that is exactly what it is: prompt engineering in a new software language, natural language, which the industry refuses to take seriously. It is in fact an extremely technical programming language, so subtle that few if any realize it, or the power embodied by it within LLMs. They are incredible, and they are subtle, to the degree that the majority think they are a fraud.
replies(3): >>44479916 #>>44479955 #>>44480067 #
20k ◴[] No.44479955[source]
The issue is that you have to put in more effort to solve a problem using AI than to just solve it yourself

If I have to do extensive subtle prompt engineering and use a lot of mental effort to solve my problem... I'll just solve the problem instead. Programming is a mental discipline - I don't need help typing, and if using an AI means putting in more brainpower, it's fundamentally failed at improving my ability to engineer software

replies(3): >>44480012 #>>44480014 #>>44481015 #
handfuloflight ◴[] No.44481015[source]
This overlooks a new category of developer who operates in natural language, not in syntax.
replies(3): >>44481406 #>>44481411 #>>44483668 #
const_cast ◴[] No.44483668[source]
Does this new category actually exist? Because, I would think, if you want to be successful at a real company you would need to know how to program.
replies(1): >>44483827 #
handfuloflight ◴[] No.44483827[source]
Knowing how to program is not limited to knowing how to write syntax.
replies(2): >>44484108 #>>44486307 #
obirunda ◴[] No.44486307[source]
The thing that's assumed in "proompting" as the new way of writing code is how much extrapolation you are going to allow the LLM to perform on your behalf. If you describe your requirements in a context-free language, you'll have written the code yourself. If you describe the requirements with ambiguity, you leave the job of narrowing them down to actual code to the LLM.

Have it your way, but the current workflow of proompting/context engineering requires plenty of hand holding with test coverage and a whole lot of token burn to allow agentic loops to pass tests.

If you claim to be a vibe coder proompter with no understanding of how anything works under the hood, yet build things using English as a programming language, I'd like to see your to-do app.

replies(1): >>44486504 #
handfuloflight ◴[] No.44486504[source]
Vibe coding is something other than what I'm referring to, as you're conflating natural language programming, where you do everything that a programmer does except reading and writing syntax, with vibe coding without understanding.

Traditional programming also requires iteration, testing, and debugging, so I don't see what argument you're making there.

Then when you invoke 'token burn', the question is whether developer time costs more than compute time, or whether writing and reading syntax saves more time than pure natural language. Developer salaries aren't dropping while compute costs are. I used to spend six figures a month on contracting out work to programmers. Now I spend thousands. I used to wait days for PRs; now the wait is in seconds, minutes and hours.

And these aren't to do apps, these are distributed, fault tolerant, load tested, fully observable and auditable, compliance controlled systems.

replies(1): >>44487651 #
obirunda ◴[] No.44487651[source]
LLMs do not revoke ambiguity from the English language. Look, if you prefer to use this as part of your workflow, understand the language and paradigms being chosen by the LLM on your behalf, and can manage to produce extensible code with it, then that's a matter of preference, and if you find yourself more productive that way, all power to you.

But when you say English as a programming language, you're implying that we have bypassed its ambiguity. If that were actually possible, we would have an English compiler, and before you suggest LLMs are compilers: they require context. Yes, you can produce code from English, but the process is entirely non-deterministic, and LLMs also fool you into thinking that because they can reproduce in-training material, they will be just as competent at something actually novel.

Your point about waiting on an engineer for a PR is actually moot. What is the goal? Ship a prototype? Build maintainable software? If it's the latter, agents may cost less but they don't remove your personal cognitive load. Because you can't actually let the agent develop truly unattended, you still have to review, validate and approve. And if it's hot garbage you need to spin it all over and hope it works.

So even if you are saving on a single engineer's cost, you have to count your personal cost of babysitting this "agent". Assuming that you are designing the entire stack, this can go better, but if you "forget the code even exists" and let the model also architect your stack for you, then you are likely just wasting token money on proofs of concept rather than creating a real product.

I also find it interesting that so many cult followers love to dismiss other humans in favor of this technology, as if it already provides all the attributes that humans possess. As far as I'm concerned, cognitive load can still only be truly decreased by having an engineer who understands your product and can champion it forward. Understanding the goal and the mission in real, meaningful ways.

replies(1): >>44490270 #
handfuloflight ◴[] No.44490270[source]
You're mischaracterizing my position from the start. I never claimed LLMs "revoke ambiguity from the English language."

I said I'm doing everything a programmer does except writing syntax. So your argument about English being "ambiguous" misses the point. {⍵[⍋⍵]}⍨?10⍴100 is extremely precise to an APL programmer but completely opaque to everyone else. Meanwhile "generate 10 random integers from 1 to 100 and sort them in ascending order" is unambiguous to both humans and LLMs. The precision comes from clear specification, not syntax.
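
For anyone who doesn't read APL, a rough Python equivalent of that one-liner:

  import random
  # ten random integers from 1 to 100, sorted in ascending order
  print(sorted(random.randint(1, 100) for _ in range(10)))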

You're conflating oversight with "babysitting." When you review and validate code, that's normal engineering process whether it comes from humans or AI. If anything, managing human developers involves actual babysitting: handling office politics, mood swings, sick days, ego management, motivation issues, and interpersonal conflicts. CTOs or managers spend significant time on the human element that has nothing to do with code quality. You're calling technical review "babysitting" while ignoring that managing humans involves literal people management.

You've created a false choice between "prototype" and "production software" as if natural language programming can only produce one or the other. The architectural thinking isn't missing, it's just expressed in natural language rather than syntax. System design, scalability patterns, and business requirements understanding are all still required.

Your assumption that "cognitive load can only be decreased by an engineer who understands your product" ignores that someone can understand their own product better than contractors. You're acting like the goal is to "dismiss humans" when it's about finding more efficient ways to build software. I'd gladly hire other natural language developers with proper vetting, and I actually have plans to do so. And to be sure, I would rather hire the natural language developer who also knows syntax over one who doesn't, all else being equal. Emphasis on all else being equal.

The core issue is you're defending traditional methods on principle rather than engaging with whether the outcomes can actually be achieved differently.

replies(1): >>44490990 #
obirunda ◴[] No.44490990[source]
The point I'm driving at is: why? Why program in English if you have to go through similar rigour? If you're not actually handing off the actual engineering, you're supplying the solution and having it translated into your language of preference, whilst telling everyone how much more productive you are for effectively offloading the trivial part of the process. I'm not arguing that you can't get code from well defined, pedantically written requirements or pseudo code. All I'm saying is that that is less than what is claimed by AI maximalists. Also, if that's all you're doing with your "agents", why not just write the code and not deal with the pitfalls?
replies(1): >>44491099 #
handfuloflight ◴[] No.44491099[source]
Yes, I maintain the same engineering rigor, but that rigor now goes toward solving the actual problem instead of wrestling with syntax, debugging semicolons, or managing language-specific quirks. My cognitive load shifts from "how do I implement this?" to "what exactly do I want to build?" That's not a trivial difference, it's transformative.

You're calling implementation "trivial" while simultaneously arguing I should keep doing it manually. If it's trivial, why waste time on it? If it's not trivial, then automating it is obviously valuable. You can't have it both ways.

The speed difference isn't just about typing faster, it's about iteration speed. I can test ideas, refine approaches, and pivot architectural decisions in minutes and hours instead of days or weeks. When you're thinking through complex system design, that rapid feedback loop changes everything about how you solve problems.

This is like asking "why use a compiler when you could write assembly?" Higher-level abstractions aren't about reducing rigor, they're about focusing that rigor where it actually matters: on the problem domain, not the implementation mechanics.

You're defending a process based on principle rather than outcomes. I'm optimizing for results.

replies(1): >>44491929 #
obirunda ◴[] No.44491929[source]
It's not the same. Compilers compile to equivalent assembly; LLMs aren't in the same family of outcomes.

If you are arguing for some sort of euphoria from getting lines of code out of your presumably rigorous requirements much faster, carry on. This goes both ways, though: if you are claiming to be extremely rigorous in your process, I find it curious that you are wrestling with language syntax. Are you unfamiliar with the language you're developing with?

If you know the language and have gone as far as defining the problem and solution in testable terms, the implementation should indeed be trivial. The choice to write the code yourself, and gain a deeper understanding of the implementation by owning that part of the process, comes at the price of more time spent in the codebase. Offloading it to the model can be quicker, but it comes with the drawback that you will be less familiar with your own project.

The question of "how do I implement this?" is an engineering question, not "please implement this solution I wrote in English."

You may feel like the implementation mechanics are divorced from the problem domain, but I find that to hardly be the case; on most projects I've worked on, the implementation often informed the requirements and vice versa.

Abstractions are usually adopted when they are equivalent to the process they are abstracting. You may see capability, and indeed models are capable, but they aren't yet as reliable as the thing you allege them to be abstracting.

I think the new workflows feel faster, and may indeed be faster in several instances, but there is no free lunch.

replies(3): >>44492168 #>>44492264 #>>44492299 #