
121 points by tylerg | 4 comments
zahlman:
Okay, but like.

If you do have the skill to communicate clearly and describe the requirements of a novel problem, why is the AI still useful? Actually writing the code should be relatively trivial from there. If it isn't, that points to a problem with your tools/architecture/etc. Programmers, in my experience, are on average far too tolerant of boilerplate.

derefr:
An LLM is a very effective transpiler from a human solution description, or pseudocode, into "the ten programming languages we use at work, where I'm only really fluent in three of them and have to reach for a language reference each time I code in the others."

It also remembers CLI tool args far better than I do. Before LLMs, I would often have to sit and just read a manpage in its entirety to see if a certain command-line tool could do a certain thing. (For example: do you know off-hand if you can get ls(1) to format file mtimes as ISO8601 or POSIX timestamps? Or — do you know how to make find(1) prune a specific subdirectory, so that it doesn't have to iterate-over-and-ignore the millions of tiny files inside it?) But now, I just ask the LLM for the flags that will make the tool do the thing; it spits them out (if they exist); and then I can go and look at the manpage and jump directly to that flag to learn about it — using the manpage as a reference, the way it was intended.
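
For the record, both of those are doable. A quick sketch, assuming GNU ls(1) and a POSIX-ish find(1) (BSD flag spellings differ), with ./huge_dir standing in for the directory full of tiny files:

    # GNU ls: format mtimes as ISO8601, or as epoch (POSIX) timestamps
    ls -l --time-style=long-iso
    ls -l --time-style=+%s

    # find: prune ./huge_dir entirely instead of descending into it
    # and ignoring its contents one file at a time
    find . -path ./huge_dir -prune -o -type f -print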

Actually, speaking of CLI tools, it also just knows about tools that I don't. You have to be very good with your google-fu to go from the mental question of "how do I get disk IO queue saturation metrics in Linux?" to learning about e.g. the sar(1) command. Or you can just ask an LLM that actual literal question.
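
(As a minimal illustration, assuming sysstat's sar is installed; the exact column names vary between sysstat versions:)

    # sample block-device activity every second, five times;
    # the aqu-sz column (avgqu-sz in older sysstat) is the average
    # IO queue length, and %util shows device saturation
    sar -d 1 5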

taurath:
I've found that this surfacing of tools and APIs really can help me dive into learning, though ironically it usually works by the AI finding a tool and then me reading its documentation, since I want to understand for myself whether it has the capabilities or flexibility I have in mind. I could leave that judgment to the LLM, but it's too good an opportunity to build my own internal knowledge base to pass up. The value is in the back and forth: the LLM spits out familiar concepts alongside solutions that are new to me. Overall I think it gets me through the learning quicker, because I can often start from a working example.
derefr:
Exactly. One thing LLMs are great at is acting as a coworker who happens to have a very wide breadth of knowledge (i.e. who knows at least a little about a lot), and who you can therefore ask to "point you in a direction" any time you're stuck or don't know where to start.
Arcuru:
Before LLMs, there were quite a few tools that tried to help with understanding CLI options; off the top of my head, there are https://github.com/tldr-pages/tldr and explainshell.com.

LLMs are both more general and more useful than those tools: they're more flexible and composable, and can replace them with a small wrapper script (see the sketch below). Part of the reason LLMs can do that, though, is that they had those very tools as training data.
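
As a sketch of what such a wrapper could look like (hypothetical: it assumes some LLM CLI client, here called llm, is installed and configured; substitute whichever one you use):

    #!/bin/sh
    # explaincmd: ask an LLM to explain a shell command, flag by flag
    # (the `llm` client name is an assumption, not a reference to a
    # specific tool; any prompt-taking CLI works the same way)
    llm "Explain this shell command, flag by flag: $*"

Usage: explaincmd find . -path ./build -prune -o -print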