
577 points simonw | 1 comment
righthand ◴[] No.44724896[source]
Did you understand the implementation or just that it produced a result?

I would hope an LLM could spit out a cobbled-together answer to a common interview question.

Today a colleague presented some data changes and used an LLM to build a throwaway app to display the JSON for the presentation. Why didn't they just pipe the JSON into our already-working app that displays this data?

People around me are, for the most part, using LLMs to enhance their presentations, not to actually implement anything useful. I have been watching my coworkers use them that way for months.

Another example? A different coworker wanted to build a document macro to perform bulk updates on courseware content, swapping old words for new words. To build the macro, they first wrote a rubric to prompt an LLM correctly inside a Word doc.

That filled-in rubric was then used to generate a program template for the macro. To define the requirements for the macro, the coworker then used a slideshow slide listing bullet points of functionality, in this case to Find+Replace words in courseware slides/documents using a list of words from another text document. Given the complexity of this system, I can't believe my colleague saved any time. The presentation was interesting, though, and that is what they got compliments on.
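For contrast, the task itself is small enough to sketch by hand. A minimal Python version, assuming plain-text exports and a tab-separated mapping file (the file names and formats here are my guesses, not their actual setup):

    # Bulk word swap across courseware exports, driven by a word list.
    # Assumes plain-text files and a tab-separated "old<TAB>new" mapping;
    # real .docx/.pptx content would need a library like python-docx.
    from pathlib import Path

    def load_mapping(path):
        mapping = {}
        for line in Path(path).read_text().splitlines():
            if line.strip():
                old, new = line.split("\t", 1)
                mapping[old] = new
        return mapping

    def replace_in_dir(directory, mapping):
        for file in Path(directory).rglob("*.txt"):
            text = file.read_text()
            for old, new in mapping.items():
                text = text.replace(old, new)
            file.write_text(text)

    if __name__ == "__main__":
        replace_in_dir("courseware", load_mapping("word_map.tsv"))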

However, the solutions are absolutely useless to anyone but the implementer.

replies(3): >>44724928 #>>44728396 #>>44728544 #
bsder ◴[] No.44728396[source]
> However the solutions are absolutely useless for anyone else but the implementer.

Disposable code is where AI shines.

AI generating the boilerplate code for an obtuse build system? Yes, please. AI generating an animation? Go for it. (Look at how much work 3Blue1Brown had to put into that--if AI can help with that kind of thing, it has my blessings.) AI enabling someone who doesn't program to generate a prototype that they can then hand to an actual programmer? Excellent.

This is fine because you don't need to understand the result. You have a concrete pass/fail gate and don't care what's underneath. This is real value. The problem is that it isn't gigabuck value.

The stuff that would be gigabuck value is, unfortunately, where AI falls down: fix this bug in a product, add this feature to an existing codebase, etc.

AI is also a problem because disposable code is what you would assign to junior programmers in order for them to learn.

replies(1): >>44734358 #
giantrobot ◴[] No.44734358[source]
> AI is also a problem because disposable code is what you would assign to junior programmers in order for them to learn.

It's also giving PHBs the ability to hand ill-conceived ideas to a magic robot, receive "code" they can't understand, and throw it into production. All the while firing what real developers they had on staff.

replies(1): >>44739322 #
1. yencabulator ◴[] No.44739322[source]
I expect many of those companies to fail in the 3mo-2y timeline, so in many ways I welcome PHBs to embrace their full stupidity. Same for the people who funded them.

I do feel semi-sorry for anyone who paid for the services of those companies, though. Maybe something good will arise from that too, in the end; for example, it'd be nice if US society taught its members more critical reading skills.

The interesting game for the non-PHBs among us is figuring out if and how we can use LLMs in less risky ways, and what is possible there. For example, I'd love to see work put into LLMs helping with formal correctness of software; there's a hard backstop there, where either the proof checks or it doesn't.

The code changes needed to enable less-painful proofs would hopefully be largely refactorings, where reviews should be easier; it might even work to fuzz test that the old and new implementations return matching output for the same input. Similarly, an LLM-powered test coverage improver that only writes new tests (old-school, branch-based, or mutation-based, there's plenty of room there).
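To make that differential check concrete, here's a property-based sketch in Python using the Hypothesis library; old_impl/new_impl are hypothetical stand-ins, not anything from a real codebase:

    # Differential check: the refactored function must match the original
    # on arbitrary inputs. old_impl/new_impl are made-up examples.
    from hypothesis import given, strategies as st

    def old_impl(xs):
        return sum(x * x for x in xs)

    def new_impl(xs):  # the LLM-produced refactoring under review
        total = 0
        for x in xs:
            total += x * x
        return total

    @given(st.lists(st.integers()))
    def test_old_and_new_agree(xs):
        assert old_impl(xs) == new_impl(xs)

Run under pytest, Hypothesis generates inputs and shrinks any counterexample to a minimal failing case; if the proof-enabling refactorings stay that mechanical, review really does get easier.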