
Claude in Chrome

(claude.com)
278 points by ianrahman | 2 comments
CAP_NET_ADMIN ◴[] No.46340821[source]
Let's spend years plugging holes in V8, splitting browser components into separate processes, and improving sandboxing, and then just plug an LLM with debugging enabled into Chrome. Great idea. The last time we had such a great idea, it was leaded gasoline.
replies(6): >>46340861 #>>46340956 #>>46341146 #>>46341730 #>>46341782 #>>46344113 #
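To make the "debugging enabled" concern concrete: Chrome's DevTools Protocol (CDP), exposed when the browser is started with a remote-debugging port, accepts JSON command frames over a WebSocket, and anything attached to that port can evaluate arbitrary JavaScript in any page, outside the protections the renderer sandbox provides to web content. A minimal sketch of the frame shape an attached agent would send (the helper name is illustrative; `Runtime.evaluate` is a real CDP method):

```python
import json

def cdp_command(msg_id: int, method: str, params: dict) -> str:
    """Serialize one Chrome DevTools Protocol command frame.

    An agent attached to Chrome's debugging WebSocket sends frames
    like this; the protocol applies no further permission checks."""
    return json.dumps({"id": msg_id, "method": method, "params": params})

# A single frame suffices to run arbitrary JS in the inspected page,
# e.g. reading its cookies:
frame = cdp_command(1, "Runtime.evaluate", {"expression": "document.cookie"})
print(frame)
```

Actually delivering the frame requires Chrome launched with something like `--remote-debugging-port=9222` and a WebSocket connection to the target's debugger URL; the point is that the debugging interface sits beside, not behind, the sandbox.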
nine_k ◴[] No.46341730[source]
Do you mean you let Claude Code and other such tools act directly on your personal or corporate machine, under your own account? Not in an isolated VM or box?

I'm shocked, shocked.

Sadly, not joking at all.

replies(1): >>46342472 #
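The isolation described above does not require a full VM; a throwaway container with no credentials or host directories mounted gets most of the way there. A hedged sketch (the image name and mount path are placeholders, and the flag set is illustrative, not a complete hardening recipe):

```shell
# Run an agent CLI inside a disposable container rather than under
# your own account on the host. Placeholders: some-agent-image, workdir.
#   --network none : no outbound network unless explicitly granted
#   --read-only    : immutable root filesystem; --tmpfs adds scratch space
#   -v ...         : expose a single project directory and nothing else
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  -v "$PWD/workdir:/work" \
  -w /work \
  some-agent-image
```

The design point is blast-radius limitation: a misbehaving agent can only touch the one mounted directory, and the container is discarded afterward.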
mattwilsonn888 ◴[] No.46342472[source]
Why not? The individual grunt knows it is more productive, and managers already tolerate a non-zero amount of risk from incompetent or disgruntled workers anyway.

If access privileges are scoped cleanly, the productivity gain is worth the risk, which is arguably only marginally higher. If the workplace also provides the system, the efficiency of auditing operations makes up for any added risk.

replies(1): >>46342595 #
croes ◴[] No.46342595[source]
Incompetent workers are liable. Who’s liable when AI makes a big mistake?
replies(1): >>46342821 #
N_Lens ◴[] No.46342821{3}[source]
> Incompetent workers are liable.
replies(1): >>46344130 #
croes ◴[] No.46344130{4}[source]
But who is liable when the AI makes errors because it's running automatically?
replies(1): >>46344637 #
ayewo ◴[] No.46344637[source]
> But who is liable when the AI makes errors because it's running automatically?

I'm guessing that would be the human who let the AI run loose on corporate systems.