Our goal was to build a tool that lets us test a range of "personal contexts" on a very focused everyday use case for us: reading HN!

We are exploring the use of personal context with LLMs, specifically what kind of data, how much of it, and how much additional effort on the user's part is needed to get decent results. The test tool turned out to be a bit of fun on its own, so we re-skinned it and decided to post it here.
This is my first time posting anything on HN, but folks at work encouraged me to drop a link. Keen for feedback, or to hear about other interesting projects thinking about bootstrapping personal context for LLM workflows!
The tension we keep running into is that we don't want to require people to "know how to prompt" to get value out of having a profile, hence our ongoing thinking about how to bootstrap good personal profiles from various data sources.
As Koomen notes, a good profile feels like it could be the best weapon against "AI slop" in cases where I want something sharp and specific. But getting to that point usually requires knowing how to prompt.