    Claude in Chrome (claude.com)
    278 points by ianrahman | 43 comments
    1. CAP_NET_ADMIN ◴[] No.46340821[source]
    Let's spend years plugging holes in V8, splitting browser components into separate processes, and improving sandboxing, and then just plug an LLM with debugging enabled into Chrome. Great idea. The last time we had such a great idea, it was lead in gasoline.
    replies(6): >>46340861 #>>46340956 #>>46341146 #>>46341730 #>>46341782 #>>46344113 #
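    (For context on what "debugging enabled" means here: once Chrome exposes a DevTools debugging endpoint, anything that can reach it can read and script every open tab. Below is a minimal sketch, assuming Chrome was launched with --remote-debugging-port=9222 and the third-party websocket-client package is installed; it illustrates the access level in general, not how Anthropic's extension is actually wired.)

    ```python
    # Minimal sketch of the access that a Chrome remote-debugging endpoint grants.
    # Assumes: chrome --remote-debugging-port=9222, and `pip install websocket-client`.
    import json
    import urllib.request

    from websocket import create_connection

    # The debugging port exposes an HTTP endpoint listing every open target/tab.
    targets = json.load(urllib.request.urlopen("http://localhost:9222/json"))
    page = next(t for t in targets if t["type"] == "page")

    # Attach to a tab over the DevTools Protocol and evaluate arbitrary JavaScript in it.
    ws = create_connection(page["webSocketDebuggerUrl"])
    ws.send(json.dumps({
        "id": 1,
        "method": "Runtime.evaluate",
        "params": {"expression": "document.title", "returnByValue": True},
    }))
    print(json.loads(ws.recv()))  # whatever the page itself can see, the caller can see
    ws.close()
    ```

    Any local process that can reach that port gets the same capability, which is what sidesteps the process isolation and sandboxing work the comment describes.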
    2. dmix ◴[] No.46340861[source]
    Innovation in the short term might trump longer-term security concerns.

    All of these have big warning labels saying it's alpha software (i.e., this isn't for your mom to use). The security model will come later... or maybe it will never be fully solved.

    replies(1): >>46341009 #
    3. conradev ◴[] No.46340956[source]
    The cycle must not be broken https://xkcd.com/2044/
    replies(3): >>46342819 #>>46344221 #>>46347010 #
    4. onionisafruit ◴[] No.46341009[source]
    > this isn't for your mom to use

    many don’t realize they are the mom

    replies(1): >>46341085 #
    5. int32_64 ◴[] No.46341146[source]
    It's clear the endgame is to bake AI into Chrome itself. Get ready for some big antitrust lawsuit that settles in 20 years, when Gemini is bundled too conveniently and all the other players complain.

    https://developer.chrome.com/docs/ai/built-in-apis

    replies(2): >>46341540 #>>46342376 #
    6. thrance ◴[] No.46341540[source]
    We'll soon get Manifest V4 that, for "security reasons", somehow includes clauses banning any AI other than Gemini from using the browser.
    replies(3): >>46341693 #>>46341815 #>>46345574 #
    7. arthurcolle ◴[] No.46341693{3}[source]
    That's too easy. It'll be more subtle: a compatibility MCP-Gemini layer for "security" that slurps in more data from all the other AIs.
    replies(1): >>46341859 #
    8. nine_k ◴[] No.46341730[source]
    Do you mean you let Claude Code and other such tools act directly on your personal or corporate machine, under your own account? Not in an isolated VM or box?

    I'm shocked, shocked.

    Sadly, not joking at all.

    replies(1): >>46342472 #
    9. sheepscreek ◴[] No.46341782[source]
    This made me want to laugh so hard. I think this idea came from the same place as beta testing “Full Autopilot” with human guinea pigs. Great minds…

    Jokes aside, Anthropic's CEO commands a tad more respect from me for taking a more principled approach and sticking to it (at least better than their biggest rival). Also for inventing the code agent in the terminal category.

    replies(5): >>46341921 #>>46342325 #>>46343274 #>>46345684 #>>46346098 #
    10. bigyabai ◴[] No.46341859{4}[source]
    And then a flat fee whenever anyone links out from your proprietary, inescapable MCP backend. It's a legal free-money hack!
    replies(1): >>46341967 #
    11. stingraycharles ◴[] No.46341921[source]
    All things considered, Anthropic seems to be doing most things the right way and seems more focused on professional use than OpenAI and Grok, and Opus 4.5 is really an incredibly good model.

    Yes, they know how to use their safety research as marketing, and yes, they got a big DoD contract, but I don’t think that fundamentally conflicts with their core mission.

    And honestly, some of the research they publish is genuinely interesting.

    12. arthurcolle ◴[] No.46341967{5}[source]
    That would suck. Is Google going to just eat all of this?
    replies(1): >>46342136 #
    13. bigyabai ◴[] No.46342136{6}[source]
    I'm not sure, all of my devices run a Firefox fork.
    14. IAmGraydon ◴[] No.46342325[source]
    >Also for inventing the code agent in the terminal category.

    Not even close. That distinction belongs to Aider, which was released 1.5 years before Claude Code.

    replies(2): >>46342863 #>>46343410 #
    15. spyder ◴[] No.46342376[source]
    "that settles in 20 years "

    And at that point it will be a fight mostly between AI lawyers :-)

    replies(1): >>46345537 #
    16. mattwilsonn888 ◴[] No.46342472[source]
    Why not? The individual grunt knows it makes them more productive, and the managers tolerate a non-zero amount of risk from incompetent or disgruntled workers anyway.

    If you have clean access privileges, then the productivity gain is worth the risk, a risk we could argue is only marginally higher. If the workplace also provides the system, then the efficiency gains in auditing operations make up for any added risk.

    replies(1): >>46342595 #
    17. croes ◴[] No.46342595{3}[source]
    Incompetent workers are liable. Who’s liable when AI makes a big mistake?
    replies(1): >>46342821 #
    18. N_Lens ◴[] No.46342819[source]
    XKCD for everything!
    19. N_Lens ◴[] No.46342821{4}[source]
    Incompetent workers are liable.
    replies(1): >>46344130 #
    20. sheepscreek ◴[] No.46342863{3}[source]
    Oh cool, I didn’t know that.
    21. mejutoco ◴[] No.46343274[source]
    > Also for inventing the code agent in the terminal category.

    Maybe I am wrong, but wasn't Aider first?

    replies(2): >>46343367 #>>46343606 #
    22. afro88 ◴[] No.46343367{3}[source]
    Aider didn't really run an agentic loop before Claude Code came along.
    replies(1): >>46343434 #
    23. bpavuk ◴[] No.46343410{3}[source]
    let me be a date-time nerd for a split second:

    - Anthropic released the "Introducing Claude Code" video on 24 Feb 2025 [0]

    - Aider's oldest known GitHub release, v0.5.0, is dated 8 Jun 2025 [1]

    [0]: https://www.youtube.com/watch?v=AJpK3YTTKZ4

    [1]: https://github.com/Aider-AI/aider/releases/tag/v0.5.0

    replies(4): >>46343562 #>>46343639 #>>46344333 #>>46345493 #
    24. mejutoco ◴[] No.46343434{4}[source]
    I would love to know more. I used Aider with local models and it behaved like Cursor in agent mode. Unfortunately I don't remember exactly when (at least 6 months ago). What was your experience with it?
    replies(1): >>46346807 #
    25. jeeeb ◴[] No.46343562{4}[source]
    That’s 8th of June 2023 not 2025.. almost 2 years before Claude Code was released.

    I remember evaluating Aider and Cursor side by side before Claude Code existed.

    26. stingraycharles ◴[] No.46343606{3}[source]
    They are not at all the same thing. For starters, even to this day, it doesn't support ReAct-based tool calling.

    It's more like an assistant that advises you rather than a tool that you hand full control to.

    Not saying that either is better, but they’re not the same thing.

    replies(1): >>46345727 #
    27. ◴[] No.46343639{4}[source]
    28. m4rtink ◴[] No.46344113[source]
    You're being mean to lead - it solved serious issues with engines back then and enabled their use in many useful ways, likely saving more people than it poisoned.
    replies(2): >>46344850 #>>46345396 #
    29. croes ◴[] No.46344130{5}[source]
    But who is liable when AI makes errors because it's running automatically?
    replies(1): >>46344637 #
    30. markm248 ◴[] No.46344221[source]
    All I want is a secure system where it's easy to do anything I want. Is that so much to ask?
    31. social_quotient ◴[] No.46344333{4}[source]
    Hey, your dates are wildly wrong... It's important people know Aider is from 2023, 2 years before CC.
    32. ayewo ◴[] No.46344637{6}[source]
    > But who is liable when AI makes errors because it's running automatically?

    I'm guessing that would be the human who let the AI run loose on corporate systems.

    33. etskinner ◴[] No.46344850[source]
    Do you have evidence that it saved more people than it poisoned?
    34. jon-wood ◴[] No.46345396[source]
    The fossil fuel industry really doesn’t need a devil’s advocate, they’ve got more lawyers than you can shake a stick at already.
    35. IAmGraydon ◴[] No.46345493{4}[source]
    Wrong. So wrong, in fact, that I’m wondering if it’s intentional. Aider was June 2023.
    replies(1): >>46347174 #
    36. donohoe ◴[] No.46345537{3}[source]
    Which will settle it quickly under the watchful AI judiciary.
    37. Forgeties79 ◴[] No.46345574{3}[source]
    “For your safety and protection from potentially malicious and unverified vendors.”
    38. CuriouslyC ◴[] No.46345684[source]
    Dario is definitely more grounded than Sam. I thought Anthropic would get crowded out between Google and the Chinese labs, but they might be able to carve out a decent niche as the business-focused AI for people who are paranoid about China.

    They didn't really invent terminal agents, though; Aider was the pioneer there. They just made it more autonomous (Aider could do multiple turns with some config, but it was designed to have a short leash since models weren't so capable when it was released).

    39. CuriouslyC ◴[] No.46345727{4}[source]
    Aider was designed to do single turns because LLMs were way worse when it was created. That being said, Aider could do multiple turns of tool calling if command confirmation was turned off, and it was trivial to configure Aider to do multiple turns of code generation by having a test suite that runs automatically on changes and telling Aider to implement functionality until the tests pass. It's hard-coded to do only 3 autonomous turns by default, but you can edit that.
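    (A rough sketch of the setup described above, driving aider non-interactively; the flags --yes, --auto-test, --test-cmd and --message are taken from aider's documented options as best I recall, and may differ between versions.)

    ```python
    # Hypothetical sketch: let aider answer its own confirmation prompts and
    # re-run the test suite after every edit so failures feed back to the model.
    # Flags are assumed from aider's documentation and may vary by version.
    import subprocess

    subprocess.run([
        "aider",
        "--yes",                    # skip per-command confirmation prompts
        "--auto-test",              # run the test command after each edit
        "--test-cmd", "pytest -q",  # failing test output is sent back to the LLM
        "--message", "implement the feature until the tests pass",
    ], check=True)
    ```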
    40. Workaccount2 ◴[] No.46346098[source]
    Anthropic isn't any more moral or principled than the other labs. They just saw the writing on the wall that they can't win, decided to focus purely on coding, and are now selling their shortcomings as some kind of socially conscious effort.

    It's a bit like the poorest billionaire flexing how environmentally aware they are because they don't have a 300ft yacht.

    41. afro88 ◴[] No.46346807{5}[source]
    I was a heavy user, but stopped using it mid-2024. It was essentially providing codebase context and editing and writing code as you instructed - a decent step up from copy/pasting into ChatGPT, but not working in an agentic loop. There was also logic to retry code edits if they failed to apply.

    Edit: I stand corrected, though. Did a bit of research, and Aider was considered an agentic tool by late 2023, with auto lint/test steps that feed back to the LLM. My apologies.

    42. mFixman ◴[] No.46347010[source]
    The thing I miss about the internet of the late 2000s and early 2010s was having so much useful data available, searchable, and scrapable. Even things like "which of my friends are currently living in New York?" are impossible to find now.

    I always assumed this was a once-in-history event. Did this cycle of data openness and closure happen before?

    43. bpavuk ◴[] No.46347174{5}[source]
    sorry, editing it out! thanks for pointing that out.

    EDIT: I was too late to edit it. I have to keep an eye on what I type...