cuttothechase No.45532033
The fact that we now have to write cookbooks about cookbooks kind of masks the reality that something could be genuinely wrong with this entire paradigm.

Why are even experts unsure about what's the right way to do something, or whether it's possible to do at all, for anything non-trivial? Why so much hesitancy, if this is the panacea? If we are so sure, then why not use the AI itself to come up with a proven paradigm?

torginus No.45533296
LLMs are literal gambling: you get one to work right once and it's magical, and then you spend the rest of the time chasing that high by tweaking the model and the instructions.