456 points wg0 | 1 comment
forinti ◴[] No.45899440[source]
As people get comfortable with AI they'll get lazy and this will become common.

A solution is to put someone extra into the workflow to check the final result. This way AI will actually make more jobs. Ha!

1. stillworks ◴[] No.45900655[source]
I think it's better to put that someone extra further up in the pipeline: someone who knows how to prompt the LLM correctly so that it doesn't generate the fluff to begin with.

Or get software engineers to produce domain-specific tooling, rather than the domain relying on generic tooling that leads to such mistakes (although this is speculation; it does seem like the author of that article was using the vanilla ChatGPT client).

/s I am now thinking of setting up an "AI Consultancy" that can provide both of these resources to those seeking such services. I mean, why have only one when you can have both?