
706 points ortusdux | 4 comments
1. seabass-labrax ◴[] No.42138813[source]
I'm imagining this is just a publicity stunt, and I'll say it's a very good one. However, I can't see it being very practical. There are a lot of scam calls to keep up with, and LLMs and text-to-speech models are expensive to run. If they do run this in production, the cost of running hundreds of 'Daisies' will inevitably be passed on to the consumer, and worse still, if the scammers are calling in over PSTN or cellular lines, this will eat into our already scarce bandwidth. I've frequently had difficulty connecting through trunk lines from Belgium and Germany to numbers in Britain, and that's without a legion of AI grannies sitting on the phone!
replies(2): >>42138965 #>>42139213 #
2. huac ◴[] No.42138965[source]
Real-time full duplex like OpenAI's GPT-4o is pretty expensive. Cascaded approaches (usually around 800 ms to 1 second of delay) are slower and worse, but very, very cheap. When I built this a year ago, I estimated the LLM + TTS + other serving costs to be less than the Twilio costs.
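For readers unfamiliar with the term, a minimal sketch of the "cascaded" pipeline described above: speech-to-text, then an LLM, then text-to-speech, with each stage adding latency. The three stage functions here are hypothetical stand-ins (not any particular vendor's API) with sleeps approximating typical per-stage delays, just to show where the ~0.8-1 s turn latency comes from.

```python
import time


def transcribe(audio_chunk: bytes) -> str:
    """Hypothetical STT stage; a real call often takes 100-300 ms."""
    time.sleep(0.2)
    return "hello, is this the account holder?"


def generate_reply(transcript: str) -> str:
    """Hypothetical LLM stage; time to a usable reply is often 300-500 ms."""
    time.sleep(0.4)
    return "Oh dear, let me just find my glasses..."


def synthesize(text: str) -> bytes:
    """Hypothetical TTS stage; another 200-300 ms before audio starts."""
    time.sleep(0.25)
    return b"\x00" * 16000  # fake PCM audio


def handle_turn(audio_chunk: bytes) -> bytes:
    """One conversational turn through the cascade: STT -> LLM -> TTS."""
    start = time.monotonic()
    transcript = transcribe(audio_chunk)
    reply_text = generate_reply(transcript)
    reply_audio = synthesize(reply_text)
    print(f"turn latency: {time.monotonic() - start:.2f}s")  # ~0.85s with these stand-ins
    return reply_audio


if __name__ == "__main__":
    handle_turn(b"\x00" * 16000)
```

A full-duplex model like GPT-4o collapses these stages into one streaming loop, which is why it is faster but costs more to serve.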
replies(1): >>42140563 #
3. wepple ◴[] No.42139213[source]
Every type of defensive tech ultimately amounts to driving up the cost of attack.

Doubling the dwell time for a scammer roughly halves their call throughput, and with it their profits, since the hit rate per call stays the same. That could have interesting second-order effects. Perhaps it makes the scam not worth running for some subset of scammers?
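A rough back-of-the-envelope model of that point, with made-up numbers for illustration: if the hit rate and payout per successful call are fixed, hourly take scales with call throughput, so doubling the minutes sunk per call halves the expected profit.

```python
def hourly_profit(minutes_per_call: float, hit_rate: float = 0.01,
                  payout_per_hit: float = 500.0) -> float:
    """Expected profit per hour = calls/hour * hit rate * payout per hit."""
    calls_per_hour = 60.0 / minutes_per_call
    return calls_per_hour * hit_rate * payout_per_hit


baseline = hourly_profit(minutes_per_call=3)    # no AI granny on the line
with_daisy = hourly_profit(minutes_per_call=6)  # dwell time doubled

print(f"baseline: ${baseline:.2f}/hr, with Daisy: ${with_daisy:.2f}/hr")
# baseline: $100.00/hr, with Daisy: $50.00/hr
```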

4. jackphilson ◴[] No.42140563[source]
Which is why we need to adopt nuclear power, so we can run thousands of these and the odds of a scammer reaching a bot instead of a person become overwhelmingly high.