
139 points by obscurette | 1 comment
wyager No.44465546
> Large language models are impressive statistical text predictors — genuinely useful tools that excel at pattern matching and interpolation.

Slightly OT: It's interesting how many (smart!) people in tech, like the author of this article, still can't conceptualize the difference between a training objective and a learned capability. I wonder at this point if it's a sort of willful ignorance adopted as a psychological protection mechanism. I wonder whether they're going to experience a moment of severe shock, gradually forget that they held these opinions, or take on a delusional belief that AI can't do XYZ despite all mounting evidence to the contrary.

replies(2): >>44465622 #>>44465973 #
1. possiblyreese No.44465973
Couldn't agree more. I thought we were past the "stochastic parrots" phase, but it seems some people are still incapable of accepting that these models have emergent capabilities.