
Claude Code now supports hooks

(docs.anthropic.com)
381 points by ramoz | 1 comment | source
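For context on the feature the headline refers to: Claude Code hooks are user-defined shell commands that run at lifecycle events such as PreToolUse and PostToolUse. A minimal sketch of a hook configuration, assuming the settings.json shape described in the linked docs (the matcher and the logged command here are illustrative, not from the thread):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'about to run a Bash tool call' >> ~/claude-hook.log"
          }
        ]
      }
    ]
  }
}
```

With a configuration like this, the matching command runs before each Bash tool invocation, which is the kind of deterministic control the feature adds on top of prompting.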
bionhoward No.44429497
Given that Anthropic's legal terms forbid competing with them, what are we actually allowed to do with this? It seems unclear what is allowed.

No machine learning work? That would compete.

No writing stuff I would train AI on. Except I own the stuff it writes, but I can't use it.

Can we build websites with it? What websites don’t compete with Anthropic?

Terminal games? No: Claude Code is itself a terminal game, so if you make a terminal game, does it compete with Claude?

Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?

Feels like the dirty secret of AI services is that every possible use case violates the terms, and we just have to accept that we're using something their legal team told us not to use. How is that logically consistent? Any safety concerns? This doesn't seem like a law Asimov would appreciate.

It would be cool if the set of allowed use cases weren't empty. That might make Anthropic seem more intelligent.

replies(5): >>44429509 #>>44429552 #>>44429556 #>>44429665 #>>44429697 #
nerdsniper No.44429509
Would you argue that Cursor (valued at $10B) is breaking Anthropic's terms by making an IDE that competes with their Canvas feature?
replies(4): >>44429526 #>>44429557 #>>44429582 #>>44429834 #
varenc No.44429834
Cursor isn't building models trained on the outputs of Anthropic's models (I think). That's what the ToS is forbidding.
replies(1): >>44436345 #
bionhoward No.44436345
That's what everyone acts like the ToS forbids, but the language of the ToS as written is much (infinitely) broader.