
706 points ortusdux | 1 comments
seabass-labrax ◴[] No.42138813[source]
I'm imagining this is just a publicity stunt, and I'll say it's a very good one. However, I can't see it being very practical. There are lots of scam calls to keep up with, and LLMs and text-to-speech models are expensive to run. If they do run this in production, the costs of running hundreds of 'Daisies' will inevitably be passed on to the consumer. Worse still, if the scammers are calling in over PSTN or cellular lines, this will use up our already scarce bandwidth. I've frequently had difficulty connecting through trunk lines from Belgium and Germany to numbers in Britain, and that's without a legion of AI grannies sitting on the phone!
replies(2): >>42138965 #>>42139213 #
huac ◴[] No.42138965[source]
real-time full duplex like OpenAI's GPT-4o is pretty expensive. cascaded approaches (speech-to-text -> LLM -> text-to-speech, usually with an 800 ms to 1 s round-trip delay) are slower and worse, but very, very cheap. when I built this a year ago, I estimated the LLM + TTS + other serving costs to be less than the Twilio costs.
replies(1): >>42140563 #
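huac's back-of-envelope claim (AI serving cost at or below the telephony cost) can be sketched as a quick per-minute calculation. Every price constant below is an illustrative assumption, not a quote from any provider:

```python
# Back-of-envelope cost split for a cascaded voice-bot call
# (speech-to-text -> LLM -> text-to-speech) versus the telephony leg.
# All $/minute figures are assumed for illustration only.

ASR_PER_MIN = 0.006        # assumed speech-to-text price, $/audio minute
LLM_PER_MIN = 0.002        # assumed LLM cost for a chatty dialogue, $/minute
TTS_PER_MIN = 0.015        # assumed text-to-speech price, $/audio minute
TELEPHONY_PER_MIN = 0.022  # assumed inbound PSTN price (Twilio-like), $/minute

def call_cost(minutes: float) -> dict:
    """Split an AI-answered call's cost into AI serving vs telephony."""
    ai = minutes * (ASR_PER_MIN + LLM_PER_MIN + TTS_PER_MIN)
    phone = minutes * TELEPHONY_PER_MIN
    return {"ai_serving": round(ai, 4), "telephony": round(phone, 4)}

# A 10-minute scam-baiting call under these assumed prices:
print(call_cost(10))
```

Under these (hypothetical) prices the telephony leg roughly matches the AI serving cost, which is the shape of the estimate in the comment above; the real numbers depend entirely on the providers and models chosen.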
jackphilson ◴[] No.42140563[source]
which is why we need to adopt nuclear power, so we can run thousands of these and the odds of a scammer reaching a bot instead of a person are overwhelmingly high