Never did this before, so I was asking Q in the AWS docs how to do it.
It refused to help, as it didn't answer security-related questions.
Thanks.
One of my questions about a login form also tripped a harassment flag.
I suspect instead that the people training these models have identified areas of questioning where their model is 99% right, but where the 1% wrong is so incredibly costly that they dodge the entire question.
Would you want your LLM to give out legal advice, medical advice, or can-I-eat-this-mushroom advice if you knew that, due to imperfections in your training process, it sometimes recommended putting glue in pizza sauce?
So sure, the LLM occasionally pranks someone, much like random Internet posts do; it is confidently wrong, much like most text on the Internet is confidently wrong, because content marketers don't give a damn about correctness; that's not what the text is there for. As much as this state of things pains me, the general population has mostly adapted.
Meanwhile, people who would appreciate a model that's 99% right on questions where the 1% is costly rightfully continue to ignore Gemini and other models from companies too afraid to play in the field for real.