
693 points hienyimba | 2 comments
pc ◴[] No.28523805[source]
(Stripe cofounder.)

Ugh, apologies. Something very clearly went wrong here and we’re already investigating.

Zooming out, a few broader comments:

* Unlike most services, Stripe can easily lose very large amounts of money on individual accounts, and thousands of people try to do so every day. We are de facto running a big bug bounty/incentive program for evading our fraudulent user detection systems.

* Errors like these happen, which we hate, and we take every single false rejection that we discover seriously, knowing that there’s another founder at the other end of the line. We try to make it easy to get in touch with the humans at Stripe, me included, to maximize the number that we discover and the speed with which we get to remedy them.

* When these mistaken rejections happen, it’s usually because the business (inadvertently) clusters strongly with behavior that fraudulent users tend to engage in. Seeking to cloak spending and using virtual cards to mask activity is a common fraudulent pattern. Of course, there are very legitimate reasons to want to do this too (as this case demonstrates).

* We actually have an ongoing project to reduce the occurrence of these mistaken rejections by 90% by the end of this year. I think we’ll succeed at it. (They’re already down 50% since earlier this year.)

replies(25): >>28524033 #>>28524044 #>>28524048 #>>28524050 #>>28524154 #>>28524171 #>>28524182 #>>28524398 #>>28524413 #>>28524431 #>>28524441 #>>28524749 #>>28525580 #>>28525617 #>>28525758 #>>28526933 #>>28527035 #>>28527043 #>>28527233 #>>28527269 #>>28527682 #>>28528656 #>>28529788 #>>28530370 #>>28537774 #
blantonl ◴[] No.28527043[source]
We actually have an ongoing project to reduce the occurrence of these mistaken rejections by 90% by the end of this year. I think we’ll succeed at it. (They’re already down 50% since earlier this year.)

It seems to me that when a company provides such an important service to other companies (i.e. functioning as that company's direct revenue source - payments), and it is determined somewhere that Stripe no longer intends to provide that service, someone at Stripe should be reaching out proactively, by telephone or some other method, to the leadership at the customer and explaining to them in detail why the decision was made to terminate the relationship and what recourse they have.

I shudder to think of the impact an algorithm-based decision like this would have on my business in this scenario. It would be an absolute disaster, and could have far-reaching implications for the viability of someone's business.

Every single decision where Stripe is terminating a relationship should have a clear path to a human being for resolution, and should be reviewed by a human before the decision is even made. Like, set up a conference call with leadership and work through the issue. Most fraudsters wouldn't go through that process anyway, and it provides a proactive approach to working with customers who would obviously be in a complete disaster-recovery scenario if this occurred, so it would be all hands on deck on the customer's side. Nothing is worse than having all hands on deck to address a critical issue and feeling helpless because the other side of the equation is an auto-responder email box.

No business should be writing blog posts for help on something like this.

replies(2): >>28528869 #>>28533082 #
ryan29 ◴[] No.28528869[source]
This should be at the top of the comments IMO. I'm honestly stunned by this blog post because I always assumed a relationship with a payment processor like Stripe was akin to a banking relationship where you'd have an account manager that would reach out to you to resolve problems. If the banks can do it, why can't Stripe? Is it simply a difference in regulation and what they can get away with legally?

All of the big tech companies think they can use machine learning and algorithms to do everything and they have an "acceptable" rate of failure as a target.

The main problem with that is that even if the failure rate is .01%, the failure is typically catastrophic for that .01%. When the error is going to ruin someone's life, is there really an "acceptable" rate of failure?

A secondary problem is that machine learning and algorithms are going to have a tough time accounting for virality. I.e., if I have a small product that goes viral, my error/fraud/dispute rates, measured as percentages, are going to jump drastically. So at the exact moment where reliable, scalable payment processing is the most important in my life, the automated systems are going to have the highest risk of banning me and automatically denying my appeal.
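As a toy illustration (this has nothing to do with Stripe's real systems; every number below is made up), a naive trailing-window dispute-rate rule flags the merchant exactly when the viral spike hits:

    # Toy illustration, not Stripe's actual logic: a fixed dispute-rate
    # threshold over a trailing window fires precisely when a viral spike
    # brings in lots of first-time buyers who dispute unfamiliar charges.
    WINDOW_DAYS = 7
    DISPUTE_RATE_LIMIT = 0.01  # hypothetical 1% threshold

    # (charges, disputes) per day: steady baseline, then a viral spike
    daily = [(100, 0)] * 14 + [(5000, 80), (8000, 150), (12000, 260)]

    for day in range(len(daily)):
        window = daily[max(0, day - WINDOW_DAYS + 1): day + 1]
        charges = sum(c for c, _ in window)
        disputes = sum(d for _, d in window)
        rate = disputes / charges if charges else 0.0
        if rate > DISPUTE_RATE_LIMIT:
            print(f"day {day}: rate {rate:.2%} over limit -> account flagged")

A human reviewer would see that the absolute numbers are still plausible for a product that just went viral; the rule only sees the ratio.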

The fact that 24-48 hours is considered an acceptable timeframe for an appeal is worthy of its own paragraph. That's unacceptably slow if they're locking the account and doing irreparable harm to your business. That wouldn't be tolerated in a market with proper competition, and my instinct is to ask for regulation that would involve a third party in dispute resolution whenever a payment processor terminates a relationship in a non-amicable manner.

At least give me some options that can make things suck less. I'd prepay $500 (non-refundable) without even thinking about it to be guaranteed a phone call prior to account termination. I'd let them hold back a percentage of revenue up to an absolute value so it can be held as a (refundable) bond to protect against fraud. I'd let them hold back a higher percentage if their automated systems detect an increased chance of fraud / issues.
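To make the holdback idea concrete, here's a minimal sketch; it's not an existing Stripe feature, and the cap, percentages, and risk flag are all numbers I made up:

    # Sketch of the "refundable bond" idea: withhold a fraction of each
    # payout, capped at an absolute amount, with a larger fraction when the
    # account looks riskier. All parameters and the risk flag are invented.
    def apply_holdback(payout_cents: int,
                       reserve_cents: int,
                       elevated_risk: bool = False,
                       reserve_cap_cents: int = 500_000,  # hypothetical $5,000 cap
                       base_rate: float = 0.05,           # hypothetical 5% holdback
                       elevated_rate: float = 0.20):      # hypothetical 20% when flagged
        """Return (amount_paid_out_cents, new_reserve_balance_cents)."""
        rate = elevated_rate if elevated_risk else base_rate
        room_left = max(0, reserve_cap_cents - reserve_cents)
        withheld = min(int(payout_cents * rate), room_left)
        return payout_cents - withheld, reserve_cents + withheld

    # Example: a $2,000 payout on a flagged account withholds $400 into the
    # reserve instead of the account being terminated outright.
    paid, reserve = apply_holdback(200_000, reserve_cents=0, elevated_risk=True)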

I think stuff like this is a stunning failure and I can't understand how tech entrepreneurs (of all people) can't understand why it's unacceptable. The dream for most of us is literally to build something that has overnight, viral success and makes us rich, but we've got companies like Stripe using ML algorithms that'll auto-ban you as soon as you deviate from the norm. How is that reasonable?

The absolute worst case scenario for a Stripe customer should be for the customer to opt to have all payments withheld (by Stripe) and to undergo some kind of dispute resolution or problem solving. Would you rather wake up to a banned account or an email saying they're holding your money until you call them? I know PayPal gets a lot of flak for the latter, but maybe it's not that bad compared to the alternative. The problem with PayPal AFAIK is that they hold the money for a long time no matter what.

I get so frustrated when I see PR / damage control and the solution they're providing is "we're going to improve the algorithms." You can't. By the time those systems fail you need one-on-one human support where both sides can adapt, compromise, negotiate, etc. in real-time.

YOU NEED PEOPLE, NOT MACHINES!

replies(1): >>28533166 #
1. noahtallen ◴[] No.28533166[source]
This is a fair critique. In cases like Stripe, I’m sure there are viable ways to have humans involved.

More generally, the big problem is that most internet companies are chasing growth and user numbers that simply aren't compatible with having humans moderate everything. For example, everyone likes to hate on social media companies for doing a terrible job moderating. But the reality is that you cannot hire enough humans to manually moderate billions of things daily. So algorithms are a necessity, unless we are willing to part with platforms that cater to extremely large audiences.

replies(1): >>28535999 #
2. saksham_agrawal ◴[] No.28535999[source]
Just came to say that the "billions of things daily" is a red herring. The companies are simply too big to handle moderation even with an algo+human solution. So maybe any network should have only millions of things, or thousands.