
586 points mizzao | 2 comments
k__
I played around with Amazon Q and while setting it up, I needed to create an IAM identity center.

Never did this before, so I was asking Q in the AWS docs how to do it.

It refused to help, as it doesn't answer security-related questions.

Thanks.

menacingly
it's similar to asking the gemini-1.5 models about coding questions that involve auth

one of my questions about a login form also tripped a harassment flag

michaelt
I suspect the refusal to answer questions about auth isn't a matter of hacking or offensive material.

I suspect instead that the people training these models have identified areas of questioning where their model is 99% right, but where the 1% it gets wrong is so costly that they dodge the entire topic.

Would you want your LLM to give out legal advice, or medical advice, or can-I-eat-this-mushroom advice, if you knew that, due to imperfections in your training process, it sometimes recommended people put glue in their pizza sauce?
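For what it's worth, the incentive is easy to sketch as a toy expected-value calculation. A minimal sketch; every number below is invented for illustration, not from any real provider:

    # Toy sketch of the refusal incentive; all numbers are made up.
    p_correct = 0.99     # model is right 99% of the time on the risky topic
    gain_correct = 1.0   # small benefit per helpful, correct answer
    cost_wrong = 500.0   # large liability/PR cost per harmful wrong answer

    ev_answer = p_correct * gain_correct - (1 - p_correct) * cost_wrong
    ev_refuse = 0.0      # a blanket refusal earns nothing and risks nothing

    print(ev_answer)     # 0.99 - 5.0 = -4.01: answering loses on average
    print(ev_refuse)     # 0.0: refusing "wins" despite helping no one

Under assumptions like those, dodging the whole topic is the rational move even at 99% accuracy.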

TeMPOraL
"If you can't take a little bloody nose, maybe you ought to go back home and crawl under your bed. It's not safe out here. It's wondrous, with treasures to satiate desires both subtle and gross... but it's not for the timid."

So sure, the LLM occasionally pranks someone, much as random Internet posts do. It is confidently wrong, much as most text on the Internet is confidently wrong, because content marketers don't give a damn about correctness; that's not what the text is there for. As much as this state of things pains me, the general population has mostly adapted.

Meanwhile, people who would appreciate a model that's 99% right on things where the 1% is costly rightfully continue to ignore Gemini and other models from companies too afraid to play in the field for real.

rockskon
AI is not like some random person posting on the Internet.

A random person on the Internet often comes with surrounding context that helps you discern trustworthiness. A researcher can also query multiple sources to determine how much consensus there is.

You can't do that with LLMs.

I cannot stress strongly enough that direct comparisons between LLMs and experts on the Internet are inappropriate.

Y_Y
Why can't you estimate the trustworthiness of an LLM? I happen to think that you can, and that the above analogy was fine. You don't need to read someone's forum history to know you shouldn't trust them on something high-stakes. Maybe instead of strongly stressing you should present a convincing argument.
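One crude way to do it, as a sketch: ask the same question several times and measure agreement, the LLM analogue of querying multiple sources. Everything here is hypothetical scaffolding; ask_llm is a stand-in for whatever client you actually use, sampling with temperature above zero:

    from collections import Counter

    def ask_llm(question: str) -> str:
        # Hypothetical stand-in for a real API call.
        raise NotImplementedError

    def consensus(question: str, n: int = 5) -> tuple[str, float]:
        # Ask n times and return the most common answer
        # together with its agreement rate.
        answers = [ask_llm(question) for _ in range(n)]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n

Low agreement is a cheap hint the model is guessing; high agreement proves consistency, not truth, since all the samples share one set of weights. But it is a trust signal beyond a single answer, which is the point.
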
rockskon
Because if I already knew the answer then I wouldn't be asking the LLM?