
559 points | Gricha | 2 comments
hazmazlaz No.46233416
Well of course it produced bad results... it was given a bad prompt. Imagine how things would have turned out if you had given the same instructions to a skilled but naive contractor who contractually couldn't say no and couldn't question you. Probably pretty similar.
replies(1): >>46233450 #
1. mainmailman No.46233450
Yeah, I don't see the utility in doing this hundreds of times back-to-back. A few iterations can tell us something about how Claude optimizes code, but an open-ended prompt to endlessly "improve" the code sounds like a bad boss making impossible demands. I don't blame the AI for adding BS down the line.
replies(1): >>46242983 #
2. Dilettante_ No.46242983
I don't think the question "will the AI add BS?" was what drove this experiment. The very first thing the author references is re-uploading and degrading the same image 100 times, which similarly was never about improving the image.

This was more about seeing in what interesting ways the LLM will "fail", to get a little glimpse into how the black-box "thinks".
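The experiment being discussed is a simple feedback loop: the model's output becomes the input for the next "improve this" prompt. A minimal sketch of that loop, with a hypothetical `call_model` stub standing in for a real LLM API call (the stub and all names here are illustrative, not from the thread):

```python
# Feedback loop from the experiment: each round feeds the previous
# output back in as the new input. `call_model` is a hypothetical
# stand-in for a real "improve this code" API request.
def call_model(code: str) -> str:
    # Placeholder: a real run would send `code` to an LLM and
    # return its rewrite. Here we just simulate accumulated cruft.
    return code + "\n# iteration artifact"

def iterate(code: str, rounds: int = 100) -> list[str]:
    history = [code]          # keep every version to inspect drift
    for _ in range(rounds):
        code = call_model(code)
        history.append(code)
    return history

history = iterate("def f(x):\n    return x * 2", rounds=3)
```

The point of keeping `history` is exactly what the commenter describes: the interesting data is not the final output but how and where the outputs drift across rounds.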