
693 points | hienyimba | 1 comment

travoc (No.28523174)
These types of problems could be fixed by doing the three things big tech hates:

1. Hire human beings
2. Empower them to fix problems
3. Let your users talk to them

tiborsaas (No.28523333)
Hiring humans doesn't scale the way technology does. You can't just hire 150 more customer support agents _each month_ the way you can fire up another Kubernetes cluster. They need training, middle managers, leads, good tooling, office space, adjusted KPIs, etc.

Ideally, good companies will find a balance with AI and human operators that's also sustainable as a business.

kiklion (No.28523526)
> Ideally, good companies will find a balance with AI and human operators that's also sustainable as a business.

This is the crux of it. What do you define as a balance? In this example, Stripe shouldn’t be using ML to actually ban accounts but instead to flag accounts for manual review.

My company distributes advertisements. We need to review every ad we distribute to ensure both its quality and legality. We have been, and still are, investigating ML to improve this process, but because regulations put the cost of false negatives on us, we would use ML only to act when it is confident an ad fails our checks. It would then pull the ad from the QC queue before any tech manually reviews it and email the client explaining that the ad was blocked, why it was blocked, and linking to a form where they can request a manual review if they think it was a false positive.

Our contracts allow for a fee to be imposed on the client if they challenge a block which is upheld after manual review.

Done this way, we reduce the techs' workload by removing clearly violating ads from the QC queue, and we give the client a clear, quick way to challenge the ML's results.
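The routing logic described above can be sketched roughly as follows. This is a minimal illustration, not the company's actual system: the `Ad` type, the threshold value, and the function names are all hypothetical, and the real pipeline would also send the notification email and record the appeal link.

```python
from dataclasses import dataclass

# Hypothetical threshold: auto-block only when the model is near-certain,
# so uncertain ads still reach human QC and false negatives aren't added.
AUTO_BLOCK_THRESHOLD = 0.99

@dataclass
class Ad:
    ad_id: str
    client_email: str

def triage(ad: Ad, violation_prob: float) -> str:
    """Route an ad based on the ML model's violation probability.

    Returns "blocked" for confident violations (pulled from the QC
    queue; the client would be emailed the reason plus an appeal-form
    link), otherwise "manual_review" (stays in the human QC queue).
    """
    if violation_prob >= AUTO_BLOCK_THRESHOLD:
        return "blocked"
    return "manual_review"
```

The key design choice is that the ML is only trusted in one direction: it can add blocks (which the client can appeal), but it never clears an ad on its own.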

At least, that is the plan here. It’s still in R&D.