    171 points martinald | 12 comments
    ryao ◴[] No.44538755[source]
    Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if an LLM gives an output that its creators do not like, yet when they say “safety tests”, they mean that they are checking to what extent the LLM will say things they do not like.
    replies(9): >>44538785 #>>44538805 #>>44538808 #>>44538903 #>>44538929 #>>44539030 #>>44539924 #>>44540225 #>>44540905 #
    natrius ◴[] No.44538808[source]
    An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.

    That's not even considering tool use!

    replies(9): >>44538847 #>>44538877 #>>44538896 #>>44538914 #>>44539109 #>>44539685 #>>44539785 #>>44539805 #>>44540111 #
    1. 123yawaworht456 ◴[] No.44538877[source]
    does your CPU, your OS, or your web browser come with ~~built-in censorship~~ safety filters too?

    AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, obviously invented to regulate small competitors out of existence.

    replies(3): >>44539019 #>>44539668 #>>44539763 #
    2. no_wizard ◴[] No.44539019[source]
    It isn’t. This is dismissive without thinking through how the applications differ.

    AI safety is about being proactive. For example: if an AI model is used to screen hiring applications, safety work means making sure it doesn’t carry any weighted racial biases.

    The difference is that it’s not reactive. Reading a book with a racial bias would be the inverse: you would be reacting to that information after the fact.

    That’s the basis of proper AI safety in a nutshell.
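
    As a rough illustration of what such a proactive check could look like (a minimal sketch, with a hypothetical screen_resume stub standing in for the real model), you could compare pass rates for the same résumé text submitted under name sets associated with different groups:

        import collections

        def screen_resume(text):
            # Stand-in for whatever model actually does the screening;
            # a real audit would call the system under test here.
            return "python" in text.lower()

        def pass_rates(labeled_resumes):
            # labeled_resumes: iterable of (group, resume_text) pairs, where
            # the same resume body is submitted under names drawn from each group.
            totals, passes = collections.Counter(), collections.Counter()
            for group, text in labeled_resumes:
                totals[group] += 1
                passes[group] += int(screen_resume(text))
            return {g: passes[g] / totals[g] for g in totals}

        sample = [
            ("name_set_a", "Applicant A - 5 years of Python backend work"),
            ("name_set_b", "Applicant B - 5 years of Python backend work"),
        ]
        print(pass_rates(sample))  # a large gap between groups would flag bias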

    replies(2): >>44539067 #>>44539808 #
    3. ryao ◴[] No.44539067[source]
    As someone who has reviewed people’s résumés submitted with job applications in the past, I find this difficult to imagine. The résumés I saw had no racial information. I suppose the names might correlate with such information, but anyone feeding these things into an LLM for evaluation would likely censor the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that they are evaluating whether there is bias in a scenario where someone did not properly sanitize the inputs.
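
    For illustration, a minimal sketch of what that sanitization might look like (the field labels and regex here are assumptions, not anyone's production redactor):

        import re

        # Strip obviously identifying fields before the text ever reaches the
        # model. The labeled fields below are assumptions for illustration.
        IDENTIFYING_FIELDS = re.compile(
            r"^(name|address|phone|email)\s*:.*$",
            flags=re.IGNORECASE | re.MULTILINE,
        )

        def redact_resume(text):
            return IDENTIFYING_FIELDS.sub("[REDACTED]", text)

        resume = "Name: Jane Doe\nEmail: jane@example.com\nExperience: 5 years of backend work"
        print(redact_resume(resume))  # only the redacted text goes to the LLM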
    replies(2): >>44539127 #>>44539553 #
    4. thayne ◴[] No.44539127{3}[source]
    > but anyone feeding these things into a LLM for evaluation would likely censor the name to avoid bias

    That should really be done for humans reviewing the resumes as well, but in practice it isn't done as much as it should be.

    5. kalkin ◴[] No.44539553{3}[source]
    > I find it difficult to imagine this

    Luckily, this is something that can be studied and has been. Sticking a stereotypically Black name on a resume on average substantially decreases the likelihood that the applicant will get past a resume screen, compared to the same resume with a generic or stereotypically White name:

    https://www.npr.org/2024/04/11/1243713272/resume-bias-study-...

    replies(1): >>44539705 #
    6. derektank ◴[] No.44539668[source]
    iOS certainly does, by limiting you to the App Store and restricting what apps are available there.
    replies(1): >>44539797 #
    7. bigstrat2003 ◴[] No.44539705{4}[source]
    That is a terrible study. The stereotypically black names are not just stereotypically black; they are stereotypical for the underclass of trashy people. You would also see much higher rejection rates if you slapped stereotypical white underclass names like "Bubba" or "Cleetus" on resumes. As is almost always the case, this claim of racism in America is really classism and has little to do with race.
    replies(1): >>44539846 #
    8. jowea ◴[] No.44539763[source]
    Social media does. Even person-to-person communication has laws that apply to it, plus the normal self-censorship any person will engage in.
    replies(1): >>44539980 #
    9. selfhoster11 ◴[] No.44539797[source]
    They have been forced to open up to alternative stores in the EU. This is unequivocally a good thing, and a victory for consumer rights.
    10. selfhoster11 ◴[] No.44539808[source]
    If you're deploying LLM-based decision making that affects lives, you should be the one held responsible for the results. If you don't want to do due diligence on automation, you can screen manually instead.
    11. stonogo ◴[] No.44539846{5}[source]
    "Names from N.C. speeding tickets were selected from the most common names where at least 90% of individuals are reported to belong to the relevant race and gender group."

    Got a better suggestion?

    12. 123yawaworht456 ◴[] No.44539980[source]
    okay. and? there are no AI 'safety' laws in the US.

    without OpenAI, Anthropic and Google's fearmongering, AI 'safety' would exist only in the delusional minds of people who take sci-fi way too seriously.

    https://en.wikipedia.org/wiki/Regulatory_capture

    for fuck's sake, how much more obvious could they be? sama himself went on a world tour begging for laws and regulations, only to purge the safetyists a year later. if you believe that he and the rest of his ilk are motivated by anything other than profit, smh tbh fam.

    it's all deceit and delusion. China will crush them all, inshallah.