https://llama.meta.com/llama3/use-policy/
> You agree you will not use, or allow others to use, Meta Llama 3 to: <list of bad things>...
That terminates your Llama 3 license, forcing you to delete all the "materials" from your system.
There should be CVEs for AI.
As much as I am not a fan of Meta, an uncensored Llama 3 in the wrong hands is a universally bad idea.
How so?
- Generating new malware.
- Generating new propaganda or hate speech.
- Generating directions for something risky (that turn out to be wrong enough to get someone injured or killed).
But LLMs generate nearly everything they output. Even with greedy sampling, they don't just repeat the training data verbatim, especially when they've never seen the prompt verbatim. So if you want any hope of restricting questionable content, you have to keep the model from engaging with entire classes of questionable topics, not just block specific strings.
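To make that concrete, here's a toy sketch of greedy decoding. The vocabulary and the `next_token_logits` stand-in for the model's forward pass are invented for illustration, not anything from Llama 3; the point is only that even with zero randomness, the continuation is computed fresh from the prompt rather than retrieved, which is why string-level filtering can't catch what the model might say.

```python
# Hypothetical minimal sketch of greedy decoding with a toy vocabulary and a
# fake "model". Illustrates that output is recomputed token by token from a
# distribution, not looked up verbatim from the training data.
import numpy as np

VOCAB = ["the", "glue", "pizza", "cheese", "stick", "<eos>"]

def next_token_logits(context: list[str]) -> np.ndarray:
    """Stand-in for an LLM forward pass: deterministic, prompt-dependent scores."""
    seed = sum(len(tok) * (i + 1) for i, tok in enumerate(context)) % (2**32)
    return np.random.default_rng(seed).normal(size=len(VOCAB))

def greedy_decode(prompt: list[str], max_new_tokens: int = 8) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        token = VOCAB[int(np.argmax(logits))]  # greedy: always pick the top-scoring token
        if token == "<eos>":
            break
        tokens.append(token)
    return tokens

# Prompts the "model" has never seen verbatim still get fluent-looking
# continuations: it composes output, it doesn't retrieve it.
print(greedy_decode(["the", "pizza"]))
print(greedy_decode(["the", "cheese"]))
```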
It's not "we can't let this model get into the hands of adversaries, it's too powerful" like every LLM creator claims. It's "we can't let our model be the one adversaries are using", or in other words, "we can't let our reputation be ruined by our model powering something bad".
So, then, it's not "we can't let people get dangerous info from our model". It's "we can't let new dangerous info have come from our model". As an example, Google got so much shit for their LLM-powered dumpster fire telling people to put glue on pizza.