
423 points sohkamyung | 1 comment
iainctduncan ◴[] No.45670881[source]
I'm curious how many people have actually taken the time to compare AI summaries with the sources they summarize. I did for a few, and... it was really bad. In my experience they don't summarize at all; they do a random condensation, which is not the same thing. In one instance I looked at, a key takeaway in the result was the opposite of what it should have been. I don't trust them at all now.
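(Editor's note: a toy sketch of the "random condensation" failure mode described above — sampling sentences from the source at random preserves surface phrases but none of the argument structure, which is why a condensation can even invert a key takeaway. All names here are illustrative, not any real summarizer's API.)

```python
import random

def random_condensation(text, keep=2, seed=0):
    """Toy 'condenser': return `keep` randomly chosen sentences, in
    original order. No modeling of what the text actually claims."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng = random.Random(seed)
    picked = sorted(rng.sample(range(len(sentences)), min(keep, len(sentences))))
    return ". ".join(sentences[i] for i in picked) + "."

# Hypothetical release notes: dropping the middle sentences can flip the meaning.
notes = (
    "The parser was rewritten for speed. "
    "However, the rewrite introduced a regression in error messages. "
    "The regression was fixed in the next release. "
    "Documentation for the new parser is still incomplete."
)
print(random_condensation(notes))
```

Depending on which sentences the sampler happens to keep, the output can read as "the rewrite introduced a regression" with no mention of the fix — a fluent-looking condensation whose takeaway contradicts the source.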
replies(10): >>45671039 #>>45671541 #>>45671813 #>>45672108 #>>45672572 #>>45672678 #>>45673123 #>>45674739 #>>45674888 #>>45675283 #
1. ModernMech ◴[] No.45674888[source]
I have just tried doing this. I thought I could take all the release notes for my project from the past year and have AI produce a great summary of the work that had been done, categorized and organized. Seems like a good application for AI.

The result was just trash. It did exactly as you say: it condensed the information, but there was no semblance of a "summary". It just chose random phrases or keywords from the release notes and strung them together, with no meaning or clarity; it seemed garbled.

And it's not for lack of trying; I spent well past the amount of time it would have taken me to summarize it myself trying to get a suitable result out of the AI.

The more I use these tools, the more I feel their best use case is still advanced autocomplete.