
grey-area (No.42144289):
To me, the biggest weakness of generative AI is knowledge: it gives the impression of knowing about the world without actually having a model of the world or any sense of what it does or does not know.

For example, I recently asked it to generate some phrases for a list of words, along with synonym and antonym lists.

The phrases were generally correct and appropriate (some mistakes, but that's fine). The synonyms/antonyms were misaligned with the word list (so strictly speaking all wrong) and were often incorrect anyway. I imagine it would be the same if you asked for definitions of a list of words.
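A minimal sketch of the kind of check this forces you to write (the JSON shape and names here are just illustrative, not any particular API): ask the model to return its answer keyed by the input words, then verify the keys actually line up with the list before trusting anything.

    import json

    def check_alignment(words, model_output_json):
        """Check that the model's reply covers exactly the requested words.

        words: the list of words sent in the prompt.
        model_output_json: the reply, hoped to be JSON like
            {"word": {"synonyms": [...], "antonyms": [...]}, ...}
        Returns (missing, extra): words the model skipped, and entries it invented.
        """
        try:
            entries = json.loads(model_output_json)
        except json.JSONDecodeError:
            raise ValueError("model did not return valid JSON")

        asked = set(words)
        answered = set(entries)
        missing = sorted(asked - answered)   # requested words with no entry
        extra = sorted(answered - asked)     # entries for words never requested
        return missing, extra

    # A deliberately misaligned reply, like the behaviour described above
    words = ["abate", "candid", "laconic"]
    reply = ('{"abate": {"synonyms": ["subside"], "antonyms": ["intensify"]}, '
             '"candor": {"synonyms": ["frankness"], "antonyms": ["deceit"]}}')
    print(check_alignment(words, reply))  # (['candid', 'laconic'], ['candor'])

Of course this only catches the structural misalignment; whether the synonyms and antonyms themselves are right still needs a human to check, which is the point.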

If you ask it to correct the output, it just generates something else, which is often also wrong. It's certainly superficially convincing in many domains, but once you try to get it to do real work it's wrong in subtle ways.