
467 points by bookofjoe | 2 comments

I would very much like to enjoy HN the way I did years ago, as a place where I'd discover things that I never otherwise would have come across.

The increasing AI/LLM domination of the site has made it much less appealing to me.

gojomo:
~simonw's demo of a quickie customized HN front-end is great.

But ultimately, your browser should have a local, open-source, user-loyal LLM that accepts human-language descriptions of how you'd like your view of some or all sites to change and then, just like the old Greasemonkey scripts or special-purpose extensions, simply does it in the DOM.

Then instead of needing to raise this issue via an "Ask HN", you'd just tell your browser: "when I visit HN, hide all the AI/LLM posts".
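
For concreteness, here is roughly the kind of throwaway script such an agent might emit for that request. This is a hypothetical sketch: the keyword list is arbitrary, and the tr.athing / .titleline selectors are assumptions about HN's current markup.

    // Hypothetical userscript body an agent might generate for
    // "when I visit HN, hide all the AI/LLM posts".
    const KEYWORDS = /\b(ai|llm|gpt|claude|gemini|openai)\b/i;

    for (const row of document.querySelectorAll<HTMLTableRowElement>("tr.athing")) {
      const title = row.querySelector(".titleline a")?.textContent ?? "";
      if (KEYWORDS.test(title)) {
        row.style.display = "none"; // the story row itself
        const subtext = row.nextElementSibling as HTMLElement | null;
        if (subtext) subtext.style.display = "none"; // its points/comments row
      }
    }

The agent's real advantage over a static Greasemonkey script would be regenerating this whenever the site's markup, or your request, changes.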

azath92:
It's pretty easy to do the user-loyal bit, with a bit of prompting to give an LLM your preferences/profile. Not ideologically loyal, but I mean acting in accordance with your interests.

The tricky part is having that act across all sites in a light and seamless way. I've been working on an HN reskin, and it's only fast/transparent/cheap enough because HN has an API (no scraping needed) and the titles are descriptive enough that you can filter based on them, as simonw's demo does. But it's still HN-specific.

I don't know if LLMs are fast enough at the moment to do this on the fly for arbitrary sites, but steps in that direction are interesting!
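
A minimal sketch of that title-based flow in TypeScript, using the official HN Firebase API; keepTitle here is a trivial keyword stand-in for whatever LLM call does the real filtering:

    const API = "https://hacker-news.firebaseio.com/v0";

    // Fetch the front page's titles via the public API -- no scraping needed.
    async function topTitles(n = 30): Promise<{ id: number; title: string }[]> {
      const ids: number[] = await (await fetch(`${API}/topstories.json`)).json();
      return Promise.all(
        ids.slice(0, n).map(async (id) => {
          const item = await (await fetch(`${API}/item/${id}.json`)).json();
          return { id, title: item.title as string };
        })
      );
    }

    // Stand-in for the LLM step: a real version would batch the titles into
    // one prompt against the user's preferences and parse the model's verdicts.
    function keepTitle(title: string): boolean {
      return !/\b(ai|llm)\b/i.test(title);
    }

    const filtered = (await topTitles()).filter((s) => keepTitle(s.title));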

gojomo:
I'd expect a noticeable delay with current local LLMs, especially when visiting a site for the first time. But then they could potentially memoize their heuristics for certain designs, including recognizing when a server-side redesign newly requires some "deeper thought".
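
One hypothetical shape for that memoization (every name here is made up): cache the derived DOM recipe keyed by a cheap structural fingerprint of the page, and only go back to the model when a redesign changes the fingerprint.

    type Recipe = { hideSelector: string }; // whatever DOM edits were derived

    // Crude structural fingerprint: enough to notice a server-side redesign.
    function fingerprint(doc: Document): string {
      return Array.from(doc.querySelectorAll("body *"))
        .slice(0, 200)
        .map((el) => el.tagName)
        .join(",");
    }

    async function recipeFor(
      doc: Document,
      derive: (d: Document) => Promise<Recipe> // the slow "deeper thought" call
    ): Promise<Recipe> {
      const key = `recipe:${location.host}`;
      const fp = fingerprint(doc);
      const cached = localStorage.getItem(key);
      if (cached) {
        const saved = JSON.parse(cached);
        if (saved.fp === fp) return saved.recipe; // design unchanged: skip the model
      }
      const recipe = await derive(doc); // first visit or redesign: re-derive
      localStorage.setItem(key, JSON.stringify({ fp, recipe }));
      return recipe;
    }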

But of course local GPU processing power, & optimizations for LLM-like tools, are all advancing rapidly. And these local agents could potentially even outsource tough decisions to heavier-weight remote services. Essentially, they'd maintain/reauthor your "custom extension" themselves, using other models as necessary.

And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.

azath92:
Yeah, we found even the delay of non-local LLMs to be prohibitive. We started using Claude for the "smartest" recs and for profile generation from preferences, and it was slow: on the order of a minute for a first visit, and still 20-30s on repeat visits even after storing a "profile" (essentially your notion of memoized heuristics) in local storage to come back to.

We ended up finding that a middle ground between that and simonw's no-AI-but-fast approach was to use Flash for fast semantic understanding of preferences and recs, at degraded quality compared with a frontier model.
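
A sketch of that split, with callLLM as a hypothetical wrapper and the model names just the ones from this thread: pay the frontier-model cost once to build the profile, then let the fast model rank titles against it on every visit.

    declare function callLLM(model: string, prompt: string): Promise<string>;

    async function recommend(titles: string[], prefs: string): Promise<string[]> {
      let profile = localStorage.getItem("hn-profile");
      if (!profile) {
        // Slow path, first visit only: distill freeform preferences.
        profile = await callLLM(
          "claude",
          `Summarize these interests as a reading profile: ${prefs}`
        );
        localStorage.setItem("hn-profile", profile);
      }
      // Fast path, every visit: the cheap model ranks titles against the profile.
      const reply = await callLLM(
        "gemini-flash",
        `Profile: ${profile}\nList only the titles that fit, one per line:\n${titles.join("\n")}`
      );
      return reply.split("\n").filter((line) => line.trim().length > 0);
    }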

> And forward-thinking sites might try to make that process easier, with special APIs/docs/recipe-interchanges for all users' agents to share their progress on popular needs.

HN is that! Our exploration was made 1000% easier because they have an API which is "good enough" for most information.