
423 points | sohkamyung | 2 comments
iainctduncan ◴[] No.45670881[source]
I'm curious how many people have actually taken the time to compare AI summaries with the sources they summarize. I did for a few and ... it was really bad. In my experience, they don't summarize at all; they do a random condensation, which is not the same thing at all. In one instance I looked at, a key takeaway in the result was the opposite of what it should have been. I don't trust them at all now.
replies(10): >>45671039 #>>45671541 #>>45671813 #>>45672108 #>>45672572 #>>45672678 #>>45673123 #>>45674739 #>>45674888 #>>45675283 #
1. walkabout ◴[] No.45673123[source]
They’re basically Markov chain text generators with a relevance-tracking-and-correction step. It turns out this is like 100x more useful than the same thing without the correction step, but they don’t really escape what they are “at heart”, if you will.
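
(For anyone who hasn't poked at one: a plain Markov chain text generator really is just a next-token lookup table built from counts. A toy bigram sketch in Python — the corpus and names here are made up purely for illustration:)

    import random
    from collections import defaultdict

    def build_bigram_table(tokens):
        # Map each token to the list of tokens that followed it in the corpus.
        table = defaultdict(list)
        for cur, nxt in zip(tokens, tokens[1:]):
            table[cur].append(nxt)
        return table

    def generate(table, start, length=10):
        # Walk the chain: each step only conditions on the single previous token.
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat the cat ate the fish".split()
    print(generate(build_bigram_table(corpus), "the"))

The point of the sketch is the contrast: an LLM conditions on the whole preceding context instead of one token, but the output step is still "pick the next token given what came before".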

The ways they fail are often surprising if your baseline is “these are thinking machines”. If your baseline is what I wrote above (say, because you read the “Attention Is All You Need” paper), none of it’s surprising.
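
(The “attention” in that paper's title boils down to a small amount of linear algebra. A rough numpy sketch of single-head scaled dot-product attention — the random matrices here are stand-ins for real learned projections of token embeddings, not a full transformer:)

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # softmax(Q K^T / sqrt(d_k)) V — each output row is a weighted mix of V's rows.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    rng = np.random.default_rng(0)
    # Toy shapes: 4 tokens, 8-dimensional embeddings.
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)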

replies(1): >>45674782 #
2. SrslyJosh ◴[] No.45674782[source]
See also: 3Blue1Brown's fantastic series on deep learning, particularly videos like "Transformers, the tech behind LLMs".

My own mental model (condensed to a single phrase) is that LLMs are extremely convincing (on the surface) autocomplete. So far, this model has not disappointed me.
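
(If it helps make the "autocomplete" framing concrete: the generation step really is "score every token in the vocabulary, then pick one, repeat". A toy sketch — the vocabulary and scores are made up, standing in for a real model's output logits:)

    import numpy as np

    def sample_next_token(logits, temperature=0.8):
        # Turn raw scores over the vocabulary into probabilities and sample one id.
        logits = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(probs), p=probs))

    vocab = ["the", "cat", "sat", "mat", "."]
    logits = [2.0, 0.5, 1.2, 0.3, -1.0]  # pretend model output for the current context
    print(vocab[sample_next_token(logits)])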