421 points sohkamyung | 1 comment | source
alcide ◴[] No.45669582[source]
Kagi News has been pretty accurate. Source information is provided along with the summary and key details too.

AI summaries are good for getting a feel for whether you want to read an article or not. Even with Kagi News I verify key facts myself.

replies(6): >>45669784 #>>45670009 #>>45670554 #>>45670632 #>>45672369 #>>45672702 #
raffael_de ◴[] No.45672702[source]
Kagi News is basically a summary of news articles fed into the context. It's different from what the OP is about, which is just asking an LLM with web access to query the news.
replies(1): >>45673921 #
Spivak ◴[] No.45673921[source]
I hate saying people are holding it wrong, but given how LLMs work, how did anyone expect this to go right? Managing the LLM's context is the game. I feel like ChatGPT has done such a disservice in teaching users how to actually use these tools and what their failure modes are.
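The distinction raffael_de and Spivak are pointing at can be sketched in a few lines. This is a hypothetical illustration, not Kagi's actual pipeline: the function names and prompt shapes are invented, and the point is only that one approach pins the source text in the context while the other leaves retrieval up to the model's web tool.

```python
# Hypothetical sketch of two prompting strategies for news.
# (a) Kagi-News-style: source articles are explicitly placed in the
#     context, so the model can only summarize what it was given.
# (b) Open-ended style: the model (plus whatever its web tool fetches)
#     decides what ends up in context -- the failure mode the OP hit.

def prompt_with_context(articles: list[str], question: str) -> str:
    """Build a prompt with the source text pinned in the context."""
    sources = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(articles)
    )
    return f"{sources}\n\nUsing ONLY the sources above: {question}"

def bare_prompt(question: str) -> str:
    """Open-ended prompt: no sources, retrieval is left to the model."""
    return question

articles = ["Example wire report: the central bank held rates steady."]
question = "Summarize today's financial news."

grounded = prompt_with_context(articles, question)
open_ended = bare_prompt(question)
```

With the grounded prompt, every claim the model makes can be checked against the `[Source N]` blocks; with the bare prompt, there is nothing in the context to check against.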