
121 points by tylerg | 1 comment
zahlman No.43659511
Okay, but like.

If you do have that skill to communicate clearly and describe the requirements of a novel problem, why is the AI still useful? Actually writing the code should be relatively trivial from there. If it isn't, that points to a problem with your tools/architecture/etc. Programmers IMX are, on average, far too tolerant of boilerplate.

replies(5): >>43659634 >>43659667 >>43659773 >>43660939 >>43661579
derefr No.43661579
An LLM is a very effective transpiler from a human solution description / pseudocode into "the ten programming languages we use at work, where I'm only really fluent in three of them and have to pull up language references every time I code in the others."

It also remembers CLI tool args far better than I do. Before LLMs, I would often have to sit and just read a manpage in its entirety to see if a certain command-line tool could do a certain thing. (For example: do you know off-hand if you can get ls(1) to format file mtimes as ISO8601 or POSIX timestamps? Or — do you know how to make find(1) prune a specific subdirectory, so that it doesn't have to iterate-over-and-ignore the millions of tiny files inside it?) But now, I just ask the LLM for the flags that will make the tool do the thing; it spits them out (if they exist); and then I can go and look at the manpage and jump directly to that flag to learn about it — using the manpage as a reference, the way it was intended.
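(For reference, with GNU coreutils/findutils the answers to those two questions look roughly like the sketch below; BSD/macOS versions differ, and the pruned path is just an example:)

    # ISO 8601 mtimes with GNU ls; "+%s" gives epoch seconds instead
    ls -l --time-style=full-iso
    ls -l --time-style=+%s

    # Skip a huge subdirectory entirely instead of descending into it
    find . -path ./node_modules -prune -o -type f -print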

Actually, speaking of CLI tools, it also just knows about tools that I don't. You have to be very good with your google-fu to go from the mental question of "how do I get disk IO queue saturation metrics in Linux?" to learning about e.g. the sar(1) command. Or you can just ask an LLM that actual literal question.
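(sar comes with the sysstat package; as a rough sketch, per-device I/O stats, including the average queue size and %util columns, come from something like the following, though exact column names vary between sysstat versions:)

    # Block-device I/O every 1 second, 5 samples; -p prints
    # friendly names like "sda" instead of "dev8-0"
    sar -d -p 1 5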

replies(2): >>43663157 >>43666630
Arcuru No.43666630
Before LLMs there were already quite a few tools that tried to help with understanding CLI options; off the top of my head, https://github.com/tldr-pages/tldr and explainshell.com.
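(tldr in particular is a one-liner to try; with any of the standard clients installed it's just, e.g.:)

    # Community-curated usage examples for a command
    tldr tar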

LLMs are both more general and more useful than those tools. They're more flexible and composable, and can replace those tools with a small wrapper script. Part of the reason LLMs can do that, though, is that they had those other tools as datasets to train on.
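As a rough illustration of that "small wrapper script" idea, here's a minimal sketch in shell. It assumes an OpenAI-compatible chat-completions endpoint, an API key in OPENAI_API_KEY, and jq on the PATH; the script name and model are placeholders, not anything the comments above prescribe:

    #!/bin/sh
    # cliq: ask an LLM a CLI question, e.g.  cliq "make find prune node_modules"
    QUESTION="$*"
    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d "$(jq -n --arg q "$QUESTION" '{
            model: "gpt-4o-mini",
            messages: [
              {role: "system",
               content: "You are a terse Unix CLI reference. Give the exact flags first, then one sentence of explanation."},
              {role: "user", content: $q}
            ]
          }')" \
      | jq -r '.choices[0].message.content'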