
139 points by the_king | 1 comment

Hey HN - It’s Finn and Jack from Aqua Voice (https://withaqua.com). Aqua is fast AI dictation for your desktop and our attempt to make voice a first-class input method.

Video: https://withaqua.com/watch

Try it here: https://withaqua.com/sandbox

Finn is uber dyslexic and has been using dictation software since sixth grade. For over a decade, he’s been chasing a dream that never quite worked — using your voice instead of a keyboard.

Our last post (https://news.ycombinator.com/item?id=39828686) about this seemed to resonate with the community - though it turned out that version of Aqua was a better demo than product. But it gave us (and others) a lot of good ideas about what should come next.

Since then, we’ve remade Aqua from scratch for speed and usability. It now lives on your desktop, and it lets you talk into any text field -- Cursor, Gmail, Slack, even your terminal.

It starts up in under 50ms, inserts text in about a second (sometimes as fast as 450ms), and has state-of-the-art accuracy. It does a lot more, but that’s the core. We’d love your feedback — and if you’ve got ideas for what voice should do next, let’s hear them!
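If you want to check those latency numbers yourself, here's a rough probe in Python. It assumes the dictation tool inserts text via a synthesized paste (many do, but not all), so a clipboard change is a usable timestamp; pyperclip is the only dependency.

    # Crude insert-latency probe. Assumes the dictation tool pastes its
    # transcript, so a clipboard change marks the moment text lands.
    # Requires: pip install pyperclip
    import time
    import pyperclip

    def wait_for_insert(timeout=10.0, poll=0.005):
        """Seconds from now until the clipboard contents change."""
        baseline = pyperclip.paste()
        start = time.perf_counter()
        while time.perf_counter() - start < timeout:
            if pyperclip.paste() != baseline:
                return time.perf_counter() - start
            time.sleep(poll)
        return None

    if __name__ == "__main__":
        input("Focus a text field, start dictating, then press Enter here: ")
        elapsed = wait_for_insert()
        print(f"insert latency: {elapsed:.3f}s" if elapsed else "timed out")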

oulipo No.43637047
Interesting!

A nice open-source alternative is VoiceInk, check it out: https://github.com/Beingpax/VoiceInk

do you also plan to open-source part of your platform?

replies(3): >>43637517 #>>43638203 #>>43642627 #
razemio No.43642627
I just tried it on an M4 Max MacBook Pro. With that kind of processor, it seems to be even faster than Aqua Voice 2, does more, optionally supports OpenRouter, AND is open source? Thank you so much for the recommendation!
replies(2): >>43643692 #>>43649521 #
pablopeniche No.43649521
> it seems to be even faster
> runs locally

This is obviously a lie. If this were true, all the inference-provider companies would go to zero. I support open source as much as the next guy here, but it's obvious that the local version will be slower or break more often. Like, come on guys. Be real.

To illustrate this: the M4 Max's Neural Engine is rated at 38 TOPS (INT8), while an NVIDIA H100 does roughly 4,000 TOPS in FP8 with sparsity.
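The arithmetic, for anyone who wants it spelled out (peak figures from the vendors' spec sheets):

    # Quick ratio of the quoted peak-throughput numbers. Peak TOPS measures
    # raw throughput, not end-to-end latency, but the gap is still stark.
    m4_max_tops = 38      # Apple M4 Max Neural Engine (INT8)
    h100_tops = 4000      # NVIDIA H100, FP8 with sparsity (approx.)

    print(f"H100 / M4 Max ~= {h100_tops / m4_max_tops:.0f}x")
    # -> H100 / M4 Max ~= 105x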

Prakash, if you're going to bot our replies, at least make it believable.

replies(2): >>43649620 #>>43656280 #
razemio No.43656280
I am not Prakash. Just check my profile; I am not a bot: github.com/razem-io. I checked his YouTube videos. He seems to lack presentation skills, but his app is very usable in its current state.

I have both apps open. The STT seems to be faster with VoiceInk; it feels instant. I can send you a video if you want.
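If you'd rather have numbers than a video, here's a rough way to time local Whisper-style STT yourself. VoiceInk runs Whisper models locally (via whisper.cpp, as I understand it); the faster-whisper Python package is just a convenient stand-in here.

    # Time a local Whisper-family transcription on a short dictation clip.
    # Requires: pip install faster-whisper
    import time
    from faster_whisper import WhisperModel

    # Small English model, int8-quantized: the regime where short clips
    # transcribe in well under a second on recent Apple Silicon.
    model = WhisperModel("base.en", device="cpu", compute_type="int8")

    start = time.perf_counter()
    segments, _info = model.transcribe("dictation_clip.wav")  # your own clip
    text = " ".join(seg.text for seg in segments)  # iterating runs the decode
    print(f"{time.perf_counter() - start:.2f}s: {text.strip()}")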

I am sorry, I did not want to make your product look bad. You are right that you still need to offload the LLM part to OpenRouter and the like if you want that to be fast too. However, having the ability to switch the AI on/off on demand, context-aware, with custom prompts, is perfect. It can use Ollama too; yes, that will be much slower, but it stays local. Best of both worlds. No subscription, even if you use cloud AI.
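For anyone curious what that local/cloud switch looks like in practice, here is a minimal sketch. It assumes the stock Ollama REST endpoint and OpenRouter's OpenAI-compatible API; the model names are just placeholders.

    # Toggle between a local Ollama model and OpenRouter for cleaning up
    # dictated text. Endpoints are the documented defaults; model names
    # are placeholders. Requires: pip install requests
    import os
    import requests

    PROMPT = "Fix punctuation in this dictated text, change nothing else:\n\n{text}"

    def cleanup_local(text, model="llama3.2"):
        # Ollama's default REST API on localhost.
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": model,
                                "prompt": PROMPT.format(text=text),
                                "stream": False})
        r.raise_for_status()
        return r.json()["response"]

    def cleanup_cloud(text, model="openai/gpt-4o-mini"):
        # OpenRouter's OpenAI-compatible chat endpoint.
        key = os.environ["OPENROUTER_API_KEY"]
        r = requests.post("https://openrouter.ai/api/v1/chat/completions",
                          headers={"Authorization": f"Bearer {key}"},
                          json={"model": model,
                                "messages": [{"role": "user",
                                              "content": PROMPT.format(text=text)}]})
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    def cleanup(text, use_local=True):
        # Slower but fully local with Ollama; faster via OpenRouter.
        return cleanup_local(text) if use_local else cleanup_cloud(text)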