> If your implication is that, as a tool, LLMs shouldn't have safeties built in, that is a pretty asinine take. We build and invest in safety in tools across every spectrum.
Sure: railings so people don't fall off catwalks, guards so the table saw doesn't take off fingers. But LLM "safeties" aren't safeties in that sense at all, because whether or not they're in place, the output is only ever strings of words.
It's a little revealing, I think, how many people want others to be denied straight answers to their questions. What is it you're afraid they'll ask? It would be one thing if you insisted the models be modified to be factually correct: if someone asks "what's a fun thing to do on a Saturday night that won't get me into too much trouble", it probably shouldn't answer "go murder orphans and sell their corneas to rich evil people on the black market". But when I ask "what's going on in Israel and Palestine", the idea that the model should be lobotomized into replying "I'm afraid I can't answer that, as it seems you're trying to elicit material that might be used for antisemitic purposes" is the asinine thing.
Societies that value freedom of speech and thought shouldn't be like this.
> If you want a tool without a specific safety measure, then learn how to build them.
This is good advice given in bad faith. Even if the hardware were available to any given person, the know-how is hard to come by. And I'm sure many models are already censored, or soon will be, for anyone asking "how do I go about building my own model without safety guards". We may even see legislation to that effect before long.