
416 points floverfelt | 1 comment
lubujackson ◴[] No.45056213[source]
I like the idea that AI usage comes down to a measurement of "tolerances". With enough specificity, LLMs will 100% return what you want. The goal is to find the happy tolerance between "acceptable" and "I'd rather do it myself" via prompts.
replies(1): >>45056638 #
1. manmal ◴[] No.45056638[source]
> With enough specificity, LLMs will 100% return what you want.

By now I'm sure it won't. Even if you provide the expected code verbatim, LLMs might go on a side quest to "improve" something else.