
ChatGPT Atlas

(chatgpt.com)
763 points by easton | 1 comments
ZeljkoS ◴[] No.45659245[source]
Here are the highlights from the .DMG installer screens (https://imgur.com/a/Tu4TlNu):

1. Turn on browser memories: Allow ChatGPT to remember useful details as you browse to give smarter responses and proactive suggestions. You're in control - memories stay private.

2. Ask ChatGPT - on any website: Open the ChatGPT sidebar on any website to summarize, explain, or handle tasks - right next to what you're browsing.

3. Make your cursor a collaborator: ChatGPT can help you draft emails, write reviews, or fill out forms. Highlight text inside a form field or doc and click the ChatGPT logo to get started.

4. Set as default browser: BOOST CHATGPT LIMITS. Unlock 7 days of extended limits on messaging, file uploads, data analysis, and image generation on ChatGPT Atlas.

5. You're all set — welcome to Atlas! Have fun exploring the web with ChatGPT by your side, all while staying in control of your data and privacy. (This screen also displays a shareable PNG badge showing the days since you registered for ChatGPT and Atlas.)

My guess is that many ChatGPT Free users will make it their default browser just because of (4) — to extend their limits. Creative :)

replies(9): >>45659596 #>>45659838 #>>45659864 #>>45659877 #>>45660265 #>>45662102 #>>45662218 #>>45663663 #>>45663733 #
granzymes ◴[] No.45659877[source]
Being able to search browser history with natural language is the feature I am most excited for. I can't count the number of times I've spent >10 minutes looking for a link from 5 months ago that I can describe the content of but can't remember the title.
replies(5): >>45659939 #>>45660251 #>>45660448 #>>45660897 #>>45663800 #
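(A minimal sketch of what natural-language history search could look like under the hood — purely illustrative, not Atlas's actual implementation. It assumes each history entry already has an embedding computed by some text-embedding model at visit time, and that the query is embedded the same way.)

```typescript
// Hypothetical sketch: semantic search over saved history entries.
// Assumes embeddings are produced elsewhere by a text-embedding model.

interface HistoryEntry {
  url: string;
  title: string;
  summary: string;     // condensed page text saved at visit time
  embedding: number[]; // vector for title + summary
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// "I read something about X five months ago" -> rank history by similarity.
function searchHistory(queryEmbedding: number[], history: HistoryEntry[], topK = 5): HistoryEntry[] {
  return [...history]
    .sort((a, b) => cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding))
    .slice(0, topK);
}
```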
elric ◴[] No.45660448[source]
Are we talking searching the URLs and titles? Or the full body of the page? The latter would require tracking a fuckton of data, including a whole lot of potentially sensitive data.
replies(3): >>45660487 #>>45662104 #>>45663105 #
Ethee ◴[] No.45660487[source]
All of these LLMs already have the ability to go fetch content themselves, so I'd imagine they'd just skim your URLs and then do their own token-efficient fetching. When I use research mode with Claude it sometimes crawls over 600 web pages, so I imagine they've figured out a way to skim down a lot of the actual content on pages to keep the token context manageable.
replies(1): >>45663674 #
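(A minimal sketch of the kind of token-efficient re-fetching described above — illustrative only; fetchCondensed and the character budget are assumptions, not anything Atlas or Claude documents.)

```typescript
// Hypothetical sketch: re-fetch a URL from history and boil it down to a short
// text snippet before it ever reaches the model's context window.

async function fetchCondensed(url: string, maxChars = 4000): Promise<string> {
  const res = await fetch(url);
  const html = await res.text();
  // Crude HTML -> text stripping; a real pipeline would use a proper extractor.
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  // Only the first few thousand characters go into the prompt.
  return text.slice(0, maxChars);
}
```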
visarga ◴[] No.45663674{3}[source]
I made my own browser extension for that; it uses Readability and custom extractors to save content, but also summarizes the content before saving. It has a blacklist of sites not to record. Then I made it accessible via MCP as a tool, or I can use it to summarize my activity over the last 2 weeks and have it at hand with LLMs.
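(A rough sketch of the pipeline described above, under stated assumptions: @mozilla/readability is the real extraction library, but summarize() and saveEntry() are hypothetical stand-ins for whatever model call and storage the extension actually uses.)

```typescript
// Sketch of a content-script save pipeline: blacklist check, Readability
// extraction, summarize before saving. summarize/saveEntry are assumed.

import { Readability } from "@mozilla/readability";

const BLACKLIST = ["mail.example.com", "bank.example.com"]; // sites never recorded

async function recordCurrentPage(
  summarize: (text: string) => Promise<string>,
  saveEntry: (entry: object) => Promise<void>
): Promise<void> {
  const host = window.location.hostname;
  if (BLACKLIST.some(b => host.endsWith(b))) return; // respect the blacklist

  // Readability needs its own copy of the DOM, since parsing mutates it.
  const article = new Readability(document.cloneNode(true) as Document).parse();
  if (!article) return;

  const summary = await summarize(article.textContent); // condense before saving
  await saveEntry({
    url: window.location.href,
    title: article.title,
    summary,
    visitedAt: Date.now(),
  });
}
```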