121 points tylerg | 35 comments
1. zahlman ◴[] No.43659511[source]
Okay, but like.

If you do have that skill to communicate clearly and describe the requirements of a novel problem, why is the AI still useful? Actually writing the code should be relatively trivial from there. If it isn't, that points to a problem with your tools/architecture/etc. Programmers, in my experience, are on average far too tolerant of boilerplate.

replies(5): >>43659634 #>>43659667 #>>43659773 #>>43660939 #>>43661579 #
2. simonw ◴[] No.43659634[source]
Once you've got to a detailed specification, LLMs are a lot faster at correctly typing code than you are.
replies(3): >>43659854 #>>43660451 #>>43662888 #
3. larve ◴[] No.43659667[source]
Useful boilerplate:

- documentation (reference, tutorials, overviews)
- tools
- logging and log analyzers
- monitoring
- configurability
- unit tests
- fuzzers
- UIs
- and not least: lots and lots of prototypes and iterating on ideas

All of these are "trivial" once you have the main code, but they are incredibly valuable, and LLMs do a fantastic job.

replies(1): >>43660553 #
4. geor9e ◴[] No.43659773[source]
>Actually writing the code should be relatively trivial

For you, maybe. This statement assumes years of grueling training to become fluent in a foreign programming language. And I can't type at 1000 tokens/s personally - sometimes I just want to press the voice-dictation key, blab for five seconds, and move on to something actually interesting.

replies(1): >>43659865 #
5. zahlman ◴[] No.43659854[source]
In your analysis, do you account for the time taken to type a detailed specification with which to prompt the LLM?

Or the time to review the code - whether by manual fixes, or iterating with the prompt, or both?

replies(1): >>43659983 #
6. zahlman ◴[] No.43659865[source]
>This statement assumes years of grueling training to become bilingual in a foreign programming language

...So, less experienced programmers are supposed to be happy that they can save time with the same technology that will convince their employers that a human isn't necessary for the position?

(And, frankly, I've overall quite enjoyed the many years I've put into the craft.)

replies(1): >>43660712 #
7. simonw ◴[] No.43659983{3}[source]
No, just the time spent typing the code.
replies(2): >>43660176 #>>43660546 #
8. recursivegirth ◴[] No.43660176{4}[source]
Time to first iteration is a huge metric that no one is tracking.
replies(1): >>43661083 #
9. layer8 ◴[] No.43660451[source]
As a developer, typing speed is rarely the bottleneck.
replies(1): >>43661689 #
10. zahlman ◴[] No.43660546{4}[source]
I'm sure curiosity will get the better of me eventually, but as it stands I'm still unconvinced. Over the years I've ingrained a strong sense that just fixing things myself is easier than clearly explaining in text what needs to be done.
11. zahlman ◴[] No.43660553[source]
I was referring specifically to boilerplate within the code itself. But sure, I can imagine some uses.
12. geor9e ◴[] No.43660712{3}[source]
You're seeing this entirely from the perspective of people who do programming as their job. I'm seeing it from the perspective of the other 99% of society. It feels really good that they're no longer gatekept by the rigid and cryptic interfaces that prevented them from really communicating with their computer, just because it couldn't speak their native tongue.
replies(3): >>43661007 #>>43661426 #>>43662725 #
13. MBCook ◴[] No.43660939[source]
Exactly. This same point was mentioned on Accidental Tech Podcast last week during a section primarily about “vibe coding”. (May have been the paid-only segment)

If the LLM gets something wrong, you have to be more exact to get it to make the program do the thing you want. And when that isn’t perfect, you have to tell it exactly what you want it to do in THAT situation. And the next one. And the next one.

At that point you’re programming. It may not be the same as coding in a traditional language, but isn’t it effectively the same process? You’re having to lay out all the exact steps to take when different things happen.

So in the end have you replaced programmers or decreased the amount of programming needed? Or have you just changed the shape of the activity so it doesn’t look like what we’re used to calling programming today?

John Siracusa (one of the hosts) compared it to the idea of a fourth generation language.

From Wikipedia:

“The concept of 4GL was developed from the 1970s through the 1990s, overlapping most of the development of 3GL, with 4GLs identified as ‘non-procedural’ or ‘program-generating’ languages”.

A ‘program-generating’ language sounds an awful lot like what people are trying to use AI for. And these claims that we don’t need programmers anymore also sound a lot like the claims from when people were trying to make flowchart-based languages. Or COBOL.

“You don’t need programmers! The managers can write their own reports”.

In fact “the term 4GL was first used formally by James Martin in his 1981 book Application Development Without Programmers” (Wikipedia again).

They keep trying. But it all ends up still being programming.

replies(4): >>43661565 #>>43663011 #>>43665093 #>>43671540 #
14. wrs ◴[] No.43661007{4}[source]
The point of the PB&J thing is exactly to demonstrate that your native tongue isn’t precise enough to program a computer with. There’s a reason those interfaces are rigid, and it’s not “gatekeeping”. (The cryptic part is just to increase information density — see COBOL for an alternative.)
replies(1): >>43661126 #
15. r0b05 ◴[] No.43661083{5}[source]
Could you explain this please?
16. geor9e ◴[] No.43661126{5}[source]
I think https://docs.cursor.com/chat/agent has shown plain English is precise enough to program a computer with, and some well-respected programmers have become fans of it: https://x.com/karpathy/status/1886192184808149383

I only took exception to the original statement - that coding is trivial - and to the questioning of whether AI is even useful. So many people are finally able to create things they were never able to before. That's something to celebrate. Coding isn't trivial to most people; it's more of an insurmountable barrier to entry. English works - that's why a clear-minded project manager can delegate programming to someone fluent in it, without knowing how to code themselves. We don't end up with them dumping a jar of jam on the floor, because intelligent beings can communicate in the context of a lot of prior knowledge they were trained on. That's how AI is overcoming the peanut butter and jelly problem of English. It doesn't need solutions defined for it; a word to the wise is sufficient.

replies(2): >>43662171 #>>43663723 #
17. ◴[] No.43661426{4}[source]
18. daxfohl ◴[] No.43661565[source]
This is what I keep coming back to. I'm sure I'm not the only one here who frequently writes the code, or at least a PoC, and then writes the design doc based on it. Because the code is the most concise and precise way to specify what you really want. And writing the code gives you clarity on things you might not have thought about had you only described them in a document. Unrolling that into pseudocode/English almost always gets convoluted for anything but very linear pieces of logic, and you're generally not going to get it right if you haven't already done a little exploratory coding beforehand.

So to me, even in an ideal world the dream of AI coding is backwards. It's more verbose, it's harder to conceptualize, it's less precise, and it's going to be more of a pain to get right even if it worked perfectly.

That’s not to say it’ll never work. But the interface has to change a lot. Instead of a UX where you have to think about and specify all the details up front, a useful assistant would be more conversational: analyze the existing codebase, clarify the change you’re asking about, propose some options, ask which layer of the system the change belongs in, which design patterns to use, whether the level of coupling makes sense, what extensions of the functionality you’re thinking about for the future, the pros and cons of each approach, and also help point out conflicts or vague requirements, etc. But it seems like we’ve got quite a way to go before we get there.

replies(2): >>43662126 #>>43664605 #
19. derefr ◴[] No.43661579[source]
An LLM is a very effective human-solution-description / pseudocode to "the ten programming languages we use at work, where I'm only really fluent in three of them, and have to use language references for the others each time I code in them" transpiler.

It also remembers CLI tool args far better than I do. Before LLMs, I would often have to sit and just read a manpage in its entirety to see if a certain command-line tool could do a certain thing. (For example: do you know off-hand if you can get ls(1) to format file mtimes as ISO8601 or POSIX timestamps? Or — do you know how to make find(1) prune a specific subdirectory, so that it doesn't have to iterate-over-and-ignore the millions of tiny files inside it?) But now, I just ask the LLM for the flags that will make the tool do the thing; it spits them out (if they exist); and then I can go and look at the manpage and jump directly to that flag to learn about it — using the manpage as a reference, the way it was intended.
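
(A minimal sketch of the answers, assuming GNU coreutils and findutils; the node_modules path below is just a hypothetical stand-in for whichever huge subdirectory you want skipped:)

    # GNU ls: format mtimes as ISO 8601, or as POSIX epoch seconds
    ls -l --time-style=full-iso
    ls -l --time-style=+%s

    # GNU find: prune a subdirectory so it is never descended into,
    # rather than iterated over and ignored
    find . -path ./node_modules -prune -o -type f -print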

Actually, speaking of CLI tools, it also just knows about tools that I don't. You have to be very good with your google-fu to go from the mental question of "how do I get disk IO queue saturation metrics in Linux?" to learning about e.g. the sar(1) command. Or you can just ask an LLM that actual literal question.
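
(Again a hedged sketch, assuming the sysstat package is installed:)

    # sysstat's sar: per-device I/O statistics every second,
    # including the average request-queue size (aqu-sz / avgqu-sz)
    sar -d 1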

replies(2): >>43663157 #>>43666630 #
20. Kiro ◴[] No.43661689{3}[source]
Old trope that is no longer true.
replies(1): >>43663715 #
21. namaria ◴[] No.43662126{3}[source]
Another issue I see is the "Machine Stops" problem. When we come to depend on a system that fails to foster the skills and knowledge needed to reproduce it (i.e. if programming becomes so easy for so many people that they don't actually need to know how it works under the hood), we slowly lose the ability to maintain and extend the system as a society.
22. namaria ◴[] No.43662171{6}[source]
> intelligent beings can communicate in the context of a lot of prior knowledge

This is key. It works because of previous work. People have shared context because they develop it over time, as they are raised - shared context is passed on to the new generation and it grows.

LLMs consume the context recorded in the training data, but they don't give it back. They diminish it, because people don't need to learn the shared context when using these tools. That appears to work in some use cases, but there is a delayed effect: shared context is reproduced, and grows, only when it is learned by people, so tools that consume it while atrophying our ability to maintain and increase it will degrade our collective shared context over time. If a tool just takes it and precludes people from learning it, then when the performance of the tool degrades, the ability to maintain and extend the shared context will also have degraded. We might reach an irrecoverable state and spiral.

23. Arn_Thor ◴[] No.43662725{4}[source]
Yep! I’m digitally literate but can’t do anything more advanced than “hello world”. Never had the time, or really the interest, to learn programming.

In the last year I’ve built scripts and even full fledged apps with GUIs to solve a number of problems and automate a bunch of routine tasks at work and in my hobbies. It feels like I woke up with a superpower.

And I’ve learned a ton too, about how the plumbing works, but I still can’t write code on my own. That makes me useful but dependent on the AI.

24. tharant ◴[] No.43662888[source]
This is one reason I see to be optimistic about some of the hype around LLMs—folks will have to learn how to write high-quality specifications and documentation in order to get good results from a language model; society desperately needs better documentation!
25. LikesPwsh ◴[] No.43663011[source]
I realise this is meant to be a jab at high-level programming languages, but SQL really did succeed at that.

Its abstraction may leak sometimes, but most people using it are incredibly productive without needing to learn what a spool operator or bitmap does.

Even though the GUI and natural language aspects of 4GL failed, declarative programming was worth it.

replies(1): >>43664521 #
26. taurath ◴[] No.43663157[source]
I’ve found that the surfacing of tools and APIs really can help me dive into learning, though ironically it’s usually the AI finding a tool and then me reading its documentation, since I want to understand whether it has the capabilities or flexibility I have in mind. I could leave that to the LLM to tell me, but I find it’s too good an opportunity to build my own internal knowledge base to pass up. It’s the back-and-forth between having an LLM spit out familiar concepts and having it give new-to-me solutions. Overall I think it helps me get through learning quicker, because I can often work off of an example to start.
replies(1): >>43665050 #
27. otabdeveloper4 ◴[] No.43663715{4}[source]
Is this a jab at enterprise Java programmers?
28. otabdeveloper4 ◴[] No.43663723{6}[source]
> plain English is precise enough to program a computer with

Only if your problem has already been uploaded to GitHub.

29. MBCook ◴[] No.43664521{3}[source]
I really like SQL personally. You’re right it does work well, but I suspect that’s because it has a limited domain instead of being a general purpose language.
30. grahac ◴[] No.43664605{3}[source]
Agreed, although AIs today with simple project-based rules can do things like check and account for error cases, and write the appropriate unit tests for those error cases.

I personally have found I can often specify equivalent code in less English than it would take to type the code itself.

Also, it works very well where the scope is well defined, like implementing interfaces or porting a library from one language to another.

replies(1): >>43666709 #
31. derefr ◴[] No.43665050{3}[source]
Exactly — one thing LLMs are great at is basically acting as a coworker who happens to have a very wide breadth of knowledge (i.e. to know at least a little about a lot) — someone you can thus ask to “point you in a direction” any time you’re stuck or don’t know where to start.
32. aleph_minus_one ◴[] No.43665093[source]
> At that point you’re programming. It may not be the same as coding in a traditional language, but isn’t it effectively the same process? You’re having to lay out all the exact steps to take when different things happen.

No, it isn't.

Programming is thinking deeply about

- the invariants that your code obeys

- the huge implications a small, innocent change in one part of the program will have for other, seemingly unrelated parts of the program

- the sense in which the current architecture is (still) the best possible for what the program does, and if not, what the best route is to get there

- ...

33. Arcuru ◴[] No.43666630[source]
Before LLMs there were quite a few tools that tried to help with understanding CLI options; off the top of my head there are https://github.com/tldr-pages/tldr and explainshell.com

LLMs are both more general and more useful than those tools. They're more flexible and composable, and can replace those tools with a small wrapper script. Part of the reason LLMs can do that, though, is that they had those other tools as datasets to train on.

34. daxfohl ◴[] No.43666709{4}[source]
Yeah, I guess it depends how much you care about the details. Sometimes you just want a thing to get done, and there are billions of acceptable ways to do it, so whatever GPT spits out is within the realm of good enough. Sometimes you want finer control, and in those cases trying to use AI exclusively is going to take longer than writing code.

Not much different from image generation, really. Sometimes AI is fine, but there's always going to be a need to drop down into Photoshop when you really care about some detail. Even if you could do the same thing with very detailed AI prompts and some trial and error, doing it in Photoshop will be easier.

35. euroderf ◴[] No.43671540[source]
So, which is it? Do you want to end up writing extremely detailed requirements, in English? Or do you want to DIY by filling your head with software-related abstractions - in some internal mental "language" that might often be beyond words - and then translating those mental abstractions to source code?