
559 points by Gricha | 2 comments
1. GuB-42 No.46234005
It's something I've noticed when talking to LLMs: if they don't get it right the first time, they probably never will, and if you really insist, the quality starts to degrade.

It is not unlike people, the difference being that if you ask someone the same thing 200 times, he is probably going to tell you to go fuck yourself, or, if unable to, turn to malicious compliance. These AIs will always be diligent. Or, a human may use the opportunity to educate himself, but again, LLMs don't learn by doing: they have a distinct training phase that involves ingesting pretty much everything humanity has produced, so your little conversation will not have a significant effect, if any at all.

replies(1): >>46234302 #
2. grvdrm No.46234302
I start a new chat every time that happens and try to improve my prompt to get a better result. It sometimes works, and the multiple-short-chats approach annoys me less than one laborious long chat.
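
Roughly, as a minimal sketch (assuming an OpenAI-style chat API; the model name, the example prompts, and the quality check are placeholders I made up):

    from openai import OpenAI

    client = OpenAI()

    def ask_fresh(prompt: str) -> str:
        # Every attempt is a brand-new conversation, so none of the
        # earlier failed answers are carried along in the context.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

    def is_good_enough(text: str) -> bool:
        # Placeholder check; in practice this step is me reading the answer.
        return "security" in text.lower()

    answer = ask_fresh("Summarize this RFC in five bullet points.")
    if not is_good_enough(answer):
        # Instead of replying "no, try again" in the same chat,
        # reword the prompt and start over with a clean context.
        answer = ask_fresh("Summarize this RFC in five bullet points, "
                           "focusing on the security implications.")

The point is that the retry carries an improved prompt, not a transcript of the failure.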