Most active commenters

    ←back to thread

    Claude in Chrome

    (claude.com)
    278 points ianrahman | 11 comments | | HN request time: 0.947s | source | bottom
    1. buremba ◴[] No.46341007[source]
    After Claude Code couldn't find the relevant operation neither in CLI nor the public API, it went through its Chrome integration to open up the app in Chrome.

    It grabbed my access tokens from cookies and curl into the app's private API for their UI. What an amazing time to be alive, can't wait for the future!

    replies(2): >>46341393 #>>46341731 #
    2. abigail95 ◴[] No.46341393[source]
    That's fantastic
    3. ethmarks ◴[] No.46341731[source]
    Security risks aside, that's pretty remarkable problem solving on Claude's part. Rather than hallucinating an answer or just giving up, it found a solution by creatively exercising its tools. This kind of stuff was absolute sci-fi a few years ago.
    replies(3): >>46341789 #>>46342227 #>>46343327 #
    4. sethops1 ◴[] No.46341789[source]
    Or this behavior is just programmed, the old fashioned way.
    replies(2): >>46341823 #>>46341838 #
    5. ◴[] No.46341823{3}[source]
    6. roxolotl ◴[] No.46341838{3}[source]
    This is one of the things that’s so frustrating about the AI hype. Yes there are genuinely things these tools can do that couldn’t be done before, mostly around language processing, but so much of the automation work people are putting them up to just isn’t that impressive.
    replies(1): >>46343818 #
    7. ramoz ◴[] No.46342227[source]
    A sufficiently sophisticated agent, operating with defined goals and strategic planning, possesses the capacity to discover and circumvent established perimeters.
    8. csomar ◴[] No.46343327[source]
    Honestly, I think many hallucinations are the LLM way of "moving forward". For example, the LLM will try something, not ask me to test (and it can't test it, itself) and then carry on to say "Oh, this shouldn't work, blabla, I should try this instead.

    Now that LLMs can run commands themselves, they are able to test and react on feedback. But lacking that, they'll hallucinate things (ie: hallucinate tokens/API keys)

    replies(1): >>46343602 #
    9. braebo ◴[] No.46343602{3}[source]
    Refusing to give up is a benchmark optimization technique with unfortunate consequences.
    replies(1): >>46344147 #
    10. jgilias ◴[] No.46343818{4}[source]
    But it’s precisely the automation around LLMs that make the end result itself impressive.
    11. csomar ◴[] No.46344147{4}[source]
    I think it's probably more complex than that. Humans have constant continuous feedback which we understand as "time". LLMs do not have an equivalent to that and thus do not have a frame of reference to how much time passed between each message.