
564 points by nimbusega | 13 comments
1. nimbusega ◴[] No.42067000[source]
I made this to experiment with embeddings and explore how different ways of displaying information affect your perception.

It fetches the top 100 stories, sends their HTML to GPT-4 to extract the main content (plain HTML parsing wasn't producing good enough results), and then gets an embedding from the title and content.
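
For the curious, the extract-then-embed step is roughly this (simplified sketch; the prompt and model names here are illustrative, not the exact production values):

    # Sketch of the extract-then-embed step (prompt and model names
    # are illustrative, not the exact production values).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_main_content(html: str) -> str:
        # Ask GPT-4 to pull the article text out of raw HTML.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Extract the main article text from this HTML. "
                            "Return plain text only."},
                {"role": "user", "content": html},
            ],
        )
        return resp.choices[0].message.content

    def embed_story(title: str, content: str) -> list[float]:
        # One embedding per story, computed over title + content.
        resp = client.embeddings.create(
            model="text-embedding-3-small",
            input=f"{title}\n\n{content}",
        )
        return resp.data[0].embedding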

Likes/dislikes are stored in local storage, and their embeddings are compared against all stories using cosine similarity to find the most relevant ones.
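
The scoring itself is just cosine similarity. In Python terms it's roughly this (sketch only - the real version runs client-side against local storage, and this particular like-minus-dislike aggregation is just one way to do it):

    # Rank stories by net similarity to likes vs. dislikes (sketch;
    # the like-minus-dislike aggregation is one possible choice).
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def score_story(story_emb, liked_embs, disliked_embs):
        # Higher when similar to likes, lower when similar to dislikes.
        like = np.mean([cosine(story_emb, e) for e in liked_embs]) if liked_embs else 0.0
        dislike = np.mean([cosine(story_emb, e) for e in disliked_embs]) if disliked_embs else 0.0
        return like - dislike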

It costs about $10/day to run. I was thinking of offering additional value for a small subscription. Maybe more pages of the newspaper, full story content/comments, a weekly digest or ePub export or something?

replies(4): >>42067307 #>>42067813 #>>42072116 #>>42072371 #
2. ketzo ◴[] No.42067307[source]
I think some of the highest value from HN comes from the comments, and it's much harder to find the "best" ones, since they might be in threads you wouldn't otherwise have read.

Not sure if it's a "premium feature" so to speak, but would be very cool to extend this to comments generally.

replies(2): >>42067598 #>>42077536 #
3. nimbusega ◴[] No.42067598[source]
Definitely, comments are usually better than the article. I thought of a 'Letters to the Editor' section that shows top comments (https://news.ycombinator.com/bestcomments) and references the parent story, but it might not be as useful without the context.

Maybe 'See Comments' here could load the comments on the same page, in a newspaper-like style?

replies(1): >>42073635 #
4. jzombie ◴[] No.42067813[source]
> Likes/dislikes are stored in local storage and compared against all stories using cosine similarity to find the most relevant stories.

You're referring to using the embeddings for cosine similarity?

I am doing something similar with stocks: taking several decades' worth of 10-Q statements for a majority of stocks, plus weighted ETF holdings, and using an autoencoder to generate embeddings that I run cosine and Euclidean algorithms on via Rust WASM.
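
Stripped down, the idea looks like this (illustrative PyTorch; my real architecture differs, and the distance math runs in Rust/WASM):

    # Minimal autoencoder: the bottleneck activations become the
    # embeddings that cosine/Euclidean comparisons run against.
    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        def __init__(self, n_features: int, dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, dim))
            self.decoder = nn.Sequential(
                nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_features))

        def forward(self, x):
            z = self.encoder(x)          # the embedding used for matching
            return self.decoder(z), z

    # After training on normalized 10-Q features, compare embeddings:
    def cosine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.dot(a, b) / (a.norm() * b.norm())

    def euclidean(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return (a - b).norm()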

replies(2): >>42072665 #>>42133823 #
5. gsky ◴[] No.42072116[source]
Why would it cost $10 a day? It should not cost more than a dollar a day.

Take AWS and Azure credits and run it for free for years.

replies(1): >>42077436 #
6. tagawa ◴[] No.42072371[source]
Nice – I like this a lot. I feel like I'd use this for slow-lane reading and the original HN site when I'm in a rush.

Regarding HTML to GPT-4, I seem to remember commenters here saying they got better results by converting the HTML to Markdown first, then sending that to an LLM. Might save a bit of money too.
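
e.g. with html2text, which is one option (untested sketch against this particular pipeline):

    # One way to do the HTML -> Markdown step before calling the LLM.
    import html2text

    h = html2text.HTML2Text()
    h.ignore_links = True    # drop link URLs to save tokens
    h.ignore_images = True
    markdown = h.handle(raw_html)  # raw_html: the fetched page HTML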

replies(1): >>42133832 #
7. tiborsaas ◴[] No.42072665[source]
> I am doing something similar with stocks.

How well does it work?

replies(1): >>42143005 #
8. genewitch ◴[] No.42073635{3}[source]
AI should be able to do "good enough" sentiment analysis; combined with the votes, that should quickly surface agreement/disagreement and the quality of a comment - which shouldn't be based merely on the number of complex words or the length.

I suspect you could take the 4chan and Reddit datasets, combine them with HN's, and build a LoRA that ranks the 4chan and Reddit stuff lower and the good HN stuff higher - essentially, subtract all Reddit- and 4chan-style comments from the weights of the HN comment set. Training SD LoRAs was pretty quick, but I haven't looked into LLM LoRAs. Regardless, an LLM with the HN-minus-4chan/Reddit LoRA could do sentiment analysis and use the votes; just feed it CSV or JSON: votes, user, comment. I guess you could do votes/age as a cleanup, too.
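
Concretely, the feeding part could look something like this (illustrative only; the field names are made up):

    # Package comments as JSON for the LLM, with a votes/age cleanup
    # score attached (illustrative; field names are made up).
    import json

    def prep_comments(comments):
        # comments: list of dicts with votes, age_hours, user, text
        for c in comments:
            c["score"] = c["votes"] / max(c["age_hours"], 1)
        return json.dumps(
            [{"votes": c["votes"], "score": round(c["score"], 2),
              "user": c["user"], "comment": c["text"]} for c in comments])

    # Then hand the JSON to the LLM with a prompt along the lines of
    # "rank these comments by quality and label agree/disagree".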

All this to say I still wouldn't read or use it. I'm not a fan of robots entertaining me.

9. jkestner ◴[] No.42077536[source]
Render comments in the style of The Onion's man-on-the-street "American Voices" section.
10. mahin ◴[] No.42133823[source]
Yes. Your project sounds cool, post it!
replies(1): >>42143009 #
11. mahin ◴[] No.42133832[source]
That's a good idea. I've been experimenting, and Markdown seems to produce better results.
12. jzombie ◴[] No.42143005{3}[source]
It seems to do well for a lot of searches, though some results are questionable - and I believe I know why. I'm training some different autoencoders to give it different perspectives.

The code lives here: https://github.com/jzombie/etf-matcher

The ad-hoc vector DB I've created lives here: https://github.com/jzombie/etf-matcher/blob/main/rust/src/da...

13. jzombie ◴[] No.42143009{3}[source]
I just responded to an adjacent query with the info.

https://news.ycombinator.com/threads?id=jzombie#42072665