
265 points ctoth | 1 comment
Y_Y:
Here's o4-mini-high solving riddles:

> What weighs more; a pound of courage or a pound coin?

> They weigh the same—each is a “pound.” A pound of courage and a pound coin both come out to one pound in weight!

https://chatgpt.com/share/68053ad0-aae4-800d-95ff-5b5b0a9a38...

I don't know what AGI is, but I know this isn't it.

bitshiftfaced:
It may not be AGI, but I don't think that's the reason. Many humans would make the exact same error by reading too quickly and seeing "pound [of] coin", and I would still consider them to be of "general intelligence."
achierius:
It's nevertheless interesting how LLMs seem to default to the "fast thinking" mode of human cognition -- even CoT approaches seem to mimic "slow thinking" only by forcing the LLM to iterate through different options. The failure modes I see are very often the sort of thing I would do if I were unfocused or uninterested in a problem.
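That "forcing the LLM to iterate" idea can be sketched as a prompt template: instead of asking for an immediate answer, the prompt makes the model enumerate interpretations before committing. This is a minimal illustration only -- the function name and the template wording are hypothetical, not any particular vendor's API:

```python
# Sketch of chain-of-thought-style prompting: rather than eliciting an
# immediate "fast thinking" answer, the template forces the model to
# enumerate and check readings of the question first ("slow thinking").
# build_cot_prompt and its template are hypothetical illustrations.

def build_cot_prompt(riddle: str) -> str:
    return (
        f"Question: {riddle}\n"
        "Before answering, do the following:\n"
        "1. Restate the question word by word.\n"
        "2. List every plausible interpretation of the question.\n"
        "3. Work out the answer under each interpretation.\n"
        "4. Choose the interpretation that matches the literal wording.\n"
        "Only then give your final answer."
    )

prompt = build_cot_prompt(
    "What weighs more; a pound of courage or a pound coin?"
)
print(prompt)
```

For the riddle above, step 2 is where a careful reader (or model) would notice that "a pound coin" names a coin, not a pound of weight -- exactly the distinction the quick "they weigh the same" answer glosses over.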