
446 points walterbell | 1 comment | source
BariumBlue ◴[] No.43576242[source]
Good point in the post about confidence: most people equate confidence with accuracy, and since AIs always sound confident, they always sound correct.
replies(3): >>43576578 #>>43576627 #>>43576944 #
rglover ◴[] No.43576627[source]
Yep. Last night I was asking ChatGPT (4o) to help me generate a simple HTML canvas that users could draw on. Multiple times, it spoke confidently about a solution that didn't even come close to working (copying the text from the chat below):

- "Final FIXED & WORKING drawing.html" (it wasn't working at all)

- "Full, Clean, Working Version (save as drawing.html)" (not working at all)

- "Tested and works perfectly with: Chrome / Safari / Firefox" (not working at all)

- "Working Drawing Canvas (Vanilla HTML/JS — Save this as index.html)" (not working at all)

- "It Just Works™" (not working at all)

The last one was so obnoxious I moved over to Claude (3.5 Sonnet) and it knocked it out in 3-5 prompts.
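For what it's worth, the task really is small: a drawable canvas in vanilla HTML/JS only needs a few mouse listeners. A minimal sketch might look like this (the element ID and structure are my own illustration, not the code from either chat):

```html
<!DOCTYPE html>
<html>
<body>
<!-- save as drawing.html and open in a browser -->
<canvas id="pad" width="600" height="400" style="border:1px solid #ccc"></canvas>
<script>
const canvas = document.getElementById('pad');
const ctx = canvas.getContext('2d');
let drawing = false;

// translate a mouse event into canvas-local coordinates
function pos(e) {
  const r = canvas.getBoundingClientRect();
  return { x: e.clientX - r.left, y: e.clientY - r.top };
}

canvas.addEventListener('mousedown', e => {
  drawing = true;
  const p = pos(e);
  ctx.beginPath();
  ctx.moveTo(p.x, p.y);
});

canvas.addEventListener('mousemove', e => {
  if (!drawing) return;
  const p = pos(e);
  ctx.lineTo(p.x, p.y);
  ctx.stroke();
});

// end the stroke even if the mouse is released outside the canvas
window.addEventListener('mouseup', () => { drawing = false; });
</script>
</body>
</html>
```

A production version would also handle touch/pointer events and devicePixelRatio scaling, but the core loop is just beginPath/moveTo on mousedown and lineTo/stroke on mousemove.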

replies(3): >>43576690 #>>43577124 #>>43579586 #
numpad0 ◴[] No.43579586[source]
IME, it's better to just delete erroneous responses and fix prompts until it works.

They are much better at fractally subdividing and interpreting inputs, like a believer of a religion, than at deconstructing and iteratively improving things like an engineer. It's a waste of token count trying to have such discussions with an LLM.