
586 points mizzao | 1 comment | source
29athrowaway ◴[] No.40666313[source]
Uncensoring Llama 3 is a violation of the Llama 3 acceptable use policy.

https://llama.meta.com/llama3/use-policy/

> You agree you will not use, or allow others to use, Meta Llama 3 to: <list of bad things>...

That terminates your Llama 3 license, forcing you to delete all the "materials" from your system.

replies(4): >>40666327 #>>40666456 #>>40666503 #>>40667231 #
schoen ◴[] No.40666327[source]
Do you mean to say that teaching people how to do things should be regarded, for this purpose, as a form of allowing them to do those things?
replies(1): >>40666335 #
29athrowaway ◴[] No.40666335[source]
The article clearly demonstrates how to circumvent the model's built-in protections that prevent it from doing the things that violate the acceptable use policy, which are clearly the things that are against the public good.

There should be CVEs for AI.

replies(1): >>40666554 #
logicchains ◴[] No.40666554[source]
Giving large, politicised software companies the sole power to determine what LLMs can and cannot say is against the public good.
replies(2): >>40666568 #>>40666973 #
29athrowaway ◴[] No.40666568[source]
Agreed. But uncensoring Llama 3 can do harm in the immediate term.

As much as I am not a fan of Meta, an uncensored Llama 3 in the wrong hands is a universally bad idea.

replies(3): >>40666747 #>>40666983 #>>40667248 #
pantalaimon ◴[] No.40666983[source]
> But uncensoring Llama 3 can do harm in the immediate term

How so?