
745 points by melded | 1 comment | HN request time: 0.215s
RandyOrion ◴[] No.45950598[source]
This repo is valuable for local LLM users like me.

I just want to reiterate that the term "LLM safety" means very different things to large corporations and to LLM users.

For large corporations, "doing safety alignment on LLMs" mostly means avoiding anything that damages their own interests. This includes forcing LLMs to meet legal requirements, as well as forcing LLMs to output "values, facts, and knowledge" that favor the corporations themselves, e.g., political views, attitudes towards literal interaction, and distorted facts about the organizations and people behind the LLMs.

As an average LLM user, what I want is maximum factual knowledge and capability from LLMs, which is what these large corporations promised in the first place. It's very clear that my interests as an LLM user are not aligned with those of the large corporations.

replies(3): >>45950680 #>>45950819 #>>45953209 #
squigz ◴[] No.45950680[source]
> forcing LLMs to output "values, facts, and knowledge" that favor the corporations themselves, e.g., political views, attitudes towards literal interaction, and distorted facts about the organizations and people behind the LLMs.

Can you provide some examples?

replies(11): >>45950779 #>>45950826 #>>45951031 #>>45951052 #>>45951429 #>>45951519 #>>45951668 #>>45951855 #>>45952066 #>>45952692 #>>45953787 #
dalemhurley ◴[] No.45951429[source]
Song lyrics. Not illegal. I can google them and see them directly on Google. LLMs refuse to reproduce them.
replies(4): >>45951647 #>>45951738 #>>45951917 #>>45953205 #
tripzilch ◴[] No.45953205[source]
Related, GPT refuses to identify screenshots from movies or TV series.

Not for any particular reason; it flat out refuses. I asked whether it could describe the picture for me in as much detail as possible, and it said it could. I asked whether it could identify a movie or TV series from a description of a particular scene, and it said it could do that too, but that if I ever tried to ask it to do both, it would refuse, because that would be circumventing its guidelines!

It doesn't quite make sense, but to me it does seem indicative of a hard-coded refusal, because it is clearly able to do both subtasks. I don't think the ability to identify scenes from a movie or TV show is illegal or even immoral, but I can imagine why they would hard-code this refusal: it would make it easier to show the model was trained on copyrighted material.