
439 points david927 | 1 comments

What are you working on? Any new ideas which you're thinking about?
absoluteunit1 ◴[] No.44418988[source]
Building https://www.typequicker.com

Long-term passion project of mine - I'm hoping to make this the best typing platform. Just launched the MVP last month.

The core idea of the app is focusing on using natural text. I don't think typing random words (like what some other apps do) is the most effective way to improve typing.

We offer many text topics (trivia, literature, etc.) where you type short snippets, as well as drills to help you nail down specific key sequences. We also offer:

- Real-time visual hand/keyboard guides (helps you avoid looking down at the keyboard)

- Extremely detailed stats on bigrams, trigrams, per-finger performance, etc.

- SmartPractice mode that uses LLMs to create personalized exercises

- Topic-based practice (coding, literature, etc.)

I started this out of passion for typing. I went from 40wpm to ~120wpm (wrote about it here if you're interested: https://www.typequicker.com/blog/learn-touch-typing) and it completely changed my perspective and career trajectory. I became a better programmer and writer because I no longer had to think about the keyboard, nor look down at it.

Currently, we're doing a lot of analysis work on character frequencies and using that to constantly improve the SmartPractice feature. Also, exploring various LLM output testing/observability tools to improve the text generation features.
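
To give a rough idea of what that frequency analysis involves, here's an illustrative Python sketch (not our production code - the positional comparison is deliberately naive, and a real implementation would work from keystroke logs):

    from collections import Counter

    def bigram_error_rates(target: str, typed: str) -> dict[str, float]:
        # Compare what the user typed against the target text and estimate
        # which bigrams they struggle with.
        attempts: Counter = Counter()
        errors: Counter = Counter()
        for i in range(len(target) - 1):
            bigram = target[i:i + 2]
            attempts[bigram] += 1
            mistyped = (
                i + 1 >= len(typed)
                or typed[i] != target[i]
                or typed[i + 1] != target[i + 1]
            )
            if mistyped:
                errors[bigram] += 1
        return {bg: errors[bg] / attempts[bg] for bg in attempts}

    # Highest-error bigrams become candidates for targeted drills.
    rates = bigram_error_rates("the quick brown fox", "teh quick brpwn fox")
    for bg, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:5]:
        print(repr(bg), round(rate, 2))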

I'm approaching this project with a freemium model (the AI-powered features are paid - we use AI to generate text that targets user weak points) while everything else in the app is completely free. No ads, no trackers, etc. (Hoping to have enough paid users that we can run the site and never have to even think about running ads.)

I've received a lot of feedback and am always looking for ways to improve the site.

replies(6): >>44419061 #>>44419392 #>>44420907 #>>44426017 #>>44427084 #>>44427663 #
pseufaux ◴[] No.44419061[source]
What an incredibly interesting use of LLMs (generating text to practice typing). It leans in on what LLMs are good at. That said, I would love to see a middle-tier pricing option which had some features but avoided the AI use.
replies(2): >>44419332 #>>44419594 #
llbbdd ◴[] No.44419332[source]
Why avoid AI use? Genuine question, I see this around and it seems usually based on a mental model of the environmental cost of AI that does not match impact in the real world.
replies(1): >>44421942 #
pseufaux ◴[] No.44421942[source]
Environmental cost is a concern, though for me not the main one. In this case it's two things.

1. AI interactions cost the service money, which is inevitably passed on to the consumer. If it's a feature I do not wish to use, I like to have the option to avoid paying for it. So in this case, avoiding AI use is a purely economic decision.

2. I am concerned about the content LLMs are trained on. Every major AI has (in my opinion) stolen content as training material. I prefer not to support products which I believe are unethically built. In the future, if models can be trained solely on ethically sourced material where the authors have been properly compensated, I would rethink this position.

replies(1): >>44422320 #
azeirah ◴[] No.44422320[source]
I'm active in the /r/localllama community and on the llama.cpp GitHub. For this use case you absolutely do not need a big LLM. Even an 8B model will suffice; smaller models perform extremely well when the task is very clear and you provide a few-shot prompt.

I've experimented in the past with running an LLM like this on a CPU-only VPS, and that actually just works.
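
To make that concrete, here's roughly what such a setup could look like: llama.cpp's llama-server (which runs fine CPU-only) serving a small instruct model, with a few-shot prompt asking for practice text over its OpenAI-compatible endpoint. The model, port, and prompt wording below are just placeholders:

    import json
    import urllib.request

    # Few-shot prompt sent to a local llama.cpp server, e.g. started with:
    #   llama-server -m small-instruct-model.gguf --port 8080
    # The server exposes an OpenAI-compatible /v1/chat/completions endpoint.
    messages = [
        {"role": "system", "content": "You write short typing-practice sentences."},
        {"role": "user", "content": "Weak bigrams: 'th', 'qu'. One sentence, plain ASCII."},
        {"role": "assistant", "content": "The quiet queen thought the quilt was thick."},
        {"role": "user", "content": "Weak bigrams: 'br', 'ow'. One sentence, plain ASCII."},
    ]
    payload = {"messages": messages, "temperature": 0.7, "max_tokens": 60}

    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    print(reply["choices"][0]["message"]["content"])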

If you host it on a server with a single GPU, you'll likely be able to easily fulfil all generation tasks for all customers. What many people don't know about inference is that it's _heavily_ memory-bandwidth bottlenecked, meaning there is a lot of spare compute left over. In practice, that means even a single GPU can serve many parallel chats at once. Think 10 "threads" of inference at 20 tok/s each.
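
To illustrate the parallelism point, you can fire several requests at that same local server at once (started with something like llama-server -m model.gguf --parallel 10) and the total wall time grows far slower than linearly. Again just a sketch, with the endpoint and flags assumed:

    import asyncio
    import json
    import urllib.request

    URL = "http://localhost:8080/v1/chat/completions"

    def generate(topic: str) -> str:
        # One blocking completion request for a practice sentence about `topic`.
        payload = {
            "messages": [{"role": "user",
                          "content": f"One typing-practice sentence about {topic}."}],
            "max_tokens": 60,
        }
        req = urllib.request.Request(URL, data=json.dumps(payload).encode("utf-8"),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]

    async def main() -> None:
        topics = ["astronomy", "coffee", "sailing", "typography", "gardens"]
        # Each request runs in its own thread; the server batches decode steps
        # across slots, so wall time grows far slower than 5x a single request.
        texts = await asyncio.gather(*(asyncio.to_thread(generate, t) for t in topics))
        for topic, text in zip(topics, texts):
            print(topic, "->", text)

    asyncio.run(main())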

Not only that, but there are also LLMs trained only on commons data.