Our goal was to build a tool that allowed us to test a range of "personal contexts" on a very focused everyday use case for us, reading HN!
We are exploring the use of personal context with LLMs: specifically, what kind of data, how much of it, and how much additional effort on the user's part is needed to get decent results. The test tool was a bit of fun on its own, so we re-skinned it and decided to post it here.
First time posting anything on HN but folks at work encouraged me to drop a link. Keen on feedback or other interesting projects thinking about bootstrapping personal context for LLM workflows!
I think the bit that needs the most work is classifying each post on the home page; quite a lot of posts that I would mark as "Dive", given its own classification of me, ended up as "Skim".
We aren't really sure yet how best to surface _why_ the model predicts what it does. For now, you can hover over the "Skim" label to see a bit of reasoning text, which might shed some light on the why. We'll think more about how to make these relationships clearer as we tighten them up and generally improve them.
Once those relationships are clearer, there's probably an 80/20 chunk of work left to tighten up the predictions themselves.
## Analysis of user's tech interest

The user shows a strong interest in foundational computing concepts, historical perspectives on technology, and cutting-edge advancements in AI/ML, particularly those related to model architecture and efficiency. They are also drawn to low-level programming, system design, and hardware. Conversely, they seem less interested in business/startup narratives, general data manipulation tools, and consumer-oriented tech news unless it has a deep technical underpinning.
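For anyone curious what "using the profile" could look like in practice, here is a minimal sketch of feeding a profile like the one above into a Dive/Skim call that also returns the short reasoning text shown on hover. This assumes an OpenAI-style chat API; the model name, prompt wording, `classify_post` function, and JSON shape are all illustrative assumptions, not the actual implementation.

```python
# Sketch: classify one HN post as "Dive" or "Skim" for a specific reader,
# using their personal-context profile. Assumes the openai Python SDK and
# an API key in the environment; the model and prompt are placeholders.
import json
from openai import OpenAI

client = OpenAI()

def classify_post(profile: str, title: str, url: str) -> dict:
    """Return {"label": "Dive" | "Skim", "reasoning": "<one sentence>"}."""
    prompt = (
        "You are ranking Hacker News posts for one specific reader.\n"
        f"Reader profile:\n{profile}\n\n"
        f"Post title: {title}\nPost URL: {url}\n\n"
        'Reply as JSON: {"label": "Dive" or "Skim", '
        '"reasoning": "<one short sentence explaining the choice>"}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever you use
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)
```

The `reasoning` string in the response is the kind of text that could back the hover tooltip mentioned above, and tightening the prompt around the profile is roughly where the 80/20 work would live.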