←back to thread

Claude in Chrome

(claude.com)
280 points ianrahman | 1 comments | | HN request time: 0s | source
Show context
buremba ◴[] No.46341007[source]
After Claude Code couldn't find the relevant operation neither in CLI nor the public API, it went through its Chrome integration to open up the app in Chrome.

It grabbed my access tokens from cookies and curl into the app's private API for their UI. What an amazing time to be alive, can't wait for the future!

replies(2): >>46341393 #>>46341731 #
ethmarks ◴[] No.46341731[source]
Security risks aside, that's pretty remarkable problem solving on Claude's part. Rather than hallucinating an answer or just giving up, it found a solution by creatively exercising its tools. This kind of stuff was absolute sci-fi a few years ago.
replies(3): >>46341789 #>>46342227 #>>46343327 #
csomar ◴[] No.46343327[source]
Honestly, I think many hallucinations are the LLM way of "moving forward". For example, the LLM will try something, not ask me to test (and it can't test it, itself) and then carry on to say "Oh, this shouldn't work, blabla, I should try this instead.

Now that LLMs can run commands themselves, they are able to test and react on feedback. But lacking that, they'll hallucinate things (ie: hallucinate tokens/API keys)

replies(1): >>46343602 #
braebo ◴[] No.46343602[source]
Refusing to give up is a benchmark optimization technique with unfortunate consequences.
replies(1): >>46344147 #
1. csomar ◴[] No.46344147[source]
I think it's probably more complex than that. Humans have constant continuous feedback which we understand as "time". LLMs do not have an equivalent to that and thus do not have a frame of reference to how much time passed between each message.