
693 points by hienyimba | 4 comments
jsiepkes No.28522995
I guess Stripe wasn't kidding when they said they would disrupt online payments.

On a more serious note: how much longer is society going to allow this kind of thing? Hiding behind templated e-mails without any explanation, disrupting the lives of people who become collateral damage with no way out.

replies(5): >>28523050 >>28523128 >>28523145 >>28523324 >>28524929
afarrell No.28523050
For as long as it permits companies to hire fallible humans and to deploy machine learning models with nonzero false-positive rates.
replies(1): >>28523139
nicoburns No.28523139
The machine learning models with false positives aren't the problem. The lack of a timely appeals process that involves a human is.
replies(2): >>28523237 >>28525092
1. naasking No.28525092
I'm curious how many human reviews are triggered after ML flags a problem. If it's nearly 100%, why have the ML step at all?
replies(1): >>28525658
2. colinmhayes No.28525658
Because the algorithm only flags less than 1% of users?
replies(2): >>28525685 >>28525950
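
A quick back-of-the-envelope illustration of that base rate, in Python; the user count and flag rate below are made-up assumptions for the sketch, not Stripe's actual figures:

    # Illustrative base-rate arithmetic; both numbers are assumptions.
    total_users = 10_000_000   # hypothetical user base
    flag_rate = 0.01           # hypothetical: model flags ~1% of accounts

    flagged = int(total_users * flag_rate)
    print(f"Accounts routed to human review: {flagged:,}")        # 100,000
    print(f"Accounts left untouched: {total_users - flagged:,}")  # 9,900,000

Reviewing 100,000 flagged accounts is feasible for a review team; reviewing all ten million is not, which is why the ML filter can be worth having even if every flag ultimately gets a human look.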
3. [deleted] No.28525685
4. naasking No.28525950
Maybe I wasn't clear. I meant: why have the ML algorithm disable the account automatically at all? If human review happens nearly 100% of the time anyway, the ML could simply flag the account for human review and let a human decide whether to disable it.
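
In code, the distinction being drawn might look something like this minimal sketch (the Account and ReviewQueue types, the stub model, and the function names are all invented for illustration; nothing here reflects Stripe's actual system):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Account:
        id: str
        disabled: bool = False

    @dataclass
    class ReviewQueue:
        pending: List[Account] = field(default_factory=list)

    def model_flags(account: Account) -> bool:
        # Stand-in for the real ML fraud model; trivial stub for the sketch.
        return account.id.endswith("-risky")

    # The policy the thread criticizes: the model disables the account directly.
    def handle_auto_disable(account: Account) -> None:
        if model_flags(account):
            account.disabled = True  # user is cut off before any human looks

    # The suggested alternative: the model only routes the account to a human.
    def handle_flag_for_review(account: Account, queue: ReviewQueue) -> None:
        if model_flags(account):
            queue.pending.append(account)  # stays live until a reviewer decides

    # Usage: a flagged account lands in the queue but is not disabled.
    queue = ReviewQueue()
    acct = Account(id="acct-123-risky")
    handle_flag_for_review(acct, queue)
    assert not acct.disabled and len(queue.pending) == 1

The trade-off is latency and staffing: auto-disable stops suspected fraud instantly, while flag-for-review leaves a possibly fraudulent account live until a reviewer gets to it. Which risk is worse is exactly what the thread is arguing about.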