
3337 points keepamovin | 46 comments
1. lagniappe ◴[] No.46206905[source]
This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with slight "future" variation; the result ends up being very formulaic.
replies(11): >>46207006 #>>46207026 #>>46207056 #>>46207057 #>>46207199 #>>46207372 #>>46207564 #>>46207766 #>>46208807 #>>46209243 #>>46211882 #
2. keepamovin ◴[] No.46207006[source]
I agree. What is a good update prompt I can give it to create a better variant?
replies(2): >>46207477 #>>46207481 #
3. teekert ◴[] No.46207026[source]
Yeah that’s very true, but I still think it’s pretty funny and original.
replies(2): >>46207079 #>>46207691 #
4. jonas21 ◴[] No.46207056[source]
That's what makes it fun. Apparently, Gemini has a better sense of humor than HN.
replies(5): >>46207401 #>>46207535 #>>46207596 #>>46207644 #>>46208318 #
5. dgritsko ◴[] No.46207057[source]
Surely there's gotta be a better term for this. Recency bias?
replies(2): >>46207244 #>>46208163 #
6. latexr ◴[] No.46207079[source]
> > the result ends up being very formulaic.

> Yeah that’s very true, but I still think it’s pretty funny and original.

Either it’s formulaic or it’s original; it can’t be both.

replies(1): >>46207376 #
7. IncreasePosts ◴[] No.46207199[source]
Isn't that a common pitfall of humans too?

In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.

replies(2): >>46207676 #>>46207715 #
8. da_grift_shift ◴[] No.46207244[source]
You'll love taint checking then.

https://en.wikipedia.org/wiki/Taint_checking

https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...
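
A toy sketch, purely illustrative and not taken from either link, of the idea behind taint checking: values from untrusted sources are marked as tainted, the mark propagates through operations, and sensitive sinks reject anything still tainted. The Tainted class and the source/sink functions below are made up for the example:

  # Minimal taint-tracking illustration (hypothetical helpers).
  class Tainted(str):
      """A string flagged as coming from an untrusted source."""

  def user_input(raw: str) -> Tainted:
      # Source: anything from the outside world is tainted.
      return Tainted(raw)

  def concat(a: str, b: str) -> str:
      # Propagation: a result built from tainted data stays tainted.
      result = a + b
      return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

  def run_query(sql: str) -> None:
      # Sink: refuse tainted input instead of executing it.
      if isinstance(sql, Tainted):
          raise ValueError("tainted data reached a sensitive sink")
      print("executing:", sql)

  run_query(concat("SELECT * FROM users WHERE id = ", "42"))  # fine
  try:
      run_query(concat("SELECT * FROM users WHERE id = ", user_input("1 OR 1=1")))
  except ValueError as err:
      print("blocked:", err)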

9. saintfire ◴[] No.46207372[source]
Algodrill is copied verbatim, as far as I can tell.
replies(2): >>46207580 #>>46208735 #
10. teekert ◴[] No.46207376{3}[source]
According to an original formula hehe
11. allisdust ◴[] No.46207401[source]
This seems to woosh right over everyone's heads :)
replies(2): >>46207534 #>>46207637 #
12. ehsankia ◴[] No.46207477[source]
You could try passing it 10-20 front pages across a much wider time range.

You can use https://news.ycombinator.com/front?day=2025-12-04 to get the front page for a given date.

replies(1): >>46208167 #
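
A minimal sketch of how one might gather those dated front pages, assuming the requests and BeautifulSoup libraries and guessing that story titles still sit in span.titleline elements (an assumption about HN's current markup):

  import datetime
  import time

  import requests
  from bs4 import BeautifulSoup

  def front_page_titles(day: datetime.date) -> list[str]:
      """Fetch the HN front page for a given date and return its story titles."""
      url = f"https://news.ycombinator.com/front?day={day.isoformat()}"
      html = requests.get(url, timeout=30).text
      soup = BeautifulSoup(html, "html.parser")
      return [span.get_text(" ", strip=True) for span in soup.select("span.titleline")]

  if __name__ == "__main__":
      today = datetime.date.today()
      # Sample ~15 front pages spread over the past few years,
      # pausing between requests to be polite to the server.
      for weeks_back in range(10, 160, 10):
          day = today - datetime.timedelta(weeks=weeks_back)
          print(f"--- {day} ---")
          for title in front_page_titles(day):
              print(title)
          time.sleep(1)

The collected pages could then be pasted into the prompt together, which should dilute the single-day context the parent comment is complaining about.
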
13. wasabi991011 ◴[] No.46207481[source]
If you do an update prompt, I hope you still keep this one around!

It's formulaic, yeah, but that's what puts it into the realm of hilarious parody.

replies(1): >>46241822 #
14. ◴[] No.46207534{3}[source]
15. whimsicalism ◴[] No.46207535[source]
I would find it even more fun if it were more speculative, misapplied uses of 'woosh' aside.
16. adastra22 ◴[] No.46207564[source]
That’s the joke…
replies(1): >>46208171 #
17. tanseydavid ◴[] No.46207580[source]
I found the repetition (10 years later) to be quite humorous.
replies(1): >>46207653 #
18. hyperbovine ◴[] No.46207596[source]
The bar is low.
19. ◴[] No.46207637{3}[source]
20. ◴[] No.46207644[source]
21. sallveburrpi ◴[] No.46207653{3}[source]
Time is a flat circle
replies(1): >>46207956 #
22. whimsicalism ◴[] No.46207676[source]
In numerous TV shows before AI, crypto was the big bad thing?
replies(1): >>46221189 #
23. glenstein ◴[] No.46207691[source]
The problem is not that it fails to be cheeky, but that "it's funny" is depressing in a context where there was a live question of whether it's a sincere attempt at prediction.

When I see "yeah, but it's funny" it feels like a retrofitted repair job: patching up a first-pass impression that accepted it at face value in order to preserve a sense of psychological endorsement of the creative product.

replies(1): >>46207980 #
24. farazbabar ◴[] No.46207715[source]
Us Middle Eastern/brown guys have been making a comeback?
25. thomastjeffery ◴[] No.46207766[source]
I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".

It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.

These are not pitfalls. They are core features! An LLM is not sometimes biased; it is bias. An LLM does not sometimes hallucinate; it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.

26. tsunamifury ◴[] No.46207956{4}[source]
FYI, this quote was meant to be the ramblings of a drunk who says something that sounds deep but is actually meaningless.
replies(1): >>46217939 #
27. jacobr1 ◴[] No.46207980{3}[source]
Honestly, it feels like what I, or many of my colleagues, would do if given the assignment: take the current front page, or a summary of the top tropes or recurring topics, revise them for one or two steps of technical progress, and call it a day. It isn't an assignment to predict the future; it is an assignment to predict HN, which is a narrower thing.
replies(1): >>46211760 #
28. lagniappe ◴[] No.46208163[source]
It's called context taint.
29. lagniappe ◴[] No.46208167{3}[source]
This won't change anything; it will just make it less evident to those who missed a day of checking HN.
30. lagniappe ◴[] No.46208171[source]
Really? What's the punchline? I like jokes.
31. lucianbr ◴[] No.46208318[source]
But there's no mention of fun or humor in the prompt.
replies(3): >>46208439 #>>46208499 #>>46209010 #
32. auxiliarymoose ◴[] No.46208439{3}[source]
Fun will be prohibited until morale improves.
replies(1): >>46208461 #
33. lucianbr ◴[] No.46208461{4}[source]
I mean, it's very funny. It's just that I'm laughing at the AI, not with it.
34. jama211 ◴[] No.46208499{3}[source]
I don’t ask it to be sycophantic in my prompts either, but it does that anyway too.
35. niam ◴[] No.46208735[source]
It fits in nicely imo. It's plausible (services reappear on HN often enough), and hilarious because it implies the protracted importance of Leetcode.

Though I agree that the LLM perhaps didn't "intend" that.

36. tempestn ◴[] No.46208807[source]
That's what the OP asked for, essentially. They copied today's homepage into the prompt and asked it for a version 10 years in the future.
37. monerozcash ◴[] No.46209010{3}[source]
Judging by the reply posted by the OP, the OP probably maintains a pretty humorous tone while chatting with the AI. It's not just about the prompt, but the context too.
38. HarHarVeryFunny ◴[] No.46209243[source]
I think that's what makes it funny - the future turns out to be just as dismal and predictable as we expect it to be. Google kills Gemini, etc.

Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!

39. glenstein ◴[] No.46211760{4}[source]
Right, because you would read the teacher and realize they don't want you to actually complete the assignment to the letter. So you would do jokes in response to a request for prediction.
40. kccqzy ◴[] No.46211882[source]
But otherwise it would not be fun at all. Anthropic didn’t exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement made by a future startup that hasn’t been founded yet? Of course not.
replies(1): >>46234986 #
41. sallveburrpi ◴[] No.46217939{5}[source]
It’s actually referencing Nietzsche referencing Empedocles, but your point works as well I guess
replies(1): >>46226306 #
42. IncreasePosts ◴[] No.46221189{3}[source]
Not a bad thing necessarily, but some part of the plot, and usually with things going awry or emphasizing the scammy nature of blockchain.

Examples: Shameless season 11, The Simpsons S31E13, Superstore season 5, The Good Wife S3E13, Grey's Anatomy S14E8, The Big Bang Theory S11E9, Billions season 5, some later seasons of Mr. Robot, etc.

43. tsunamifury ◴[] No.46226306{6}[source]
Haha, that's both not true and still works as drunk nonsense.

But good job googling this and getting fooled by an LLM.

replies(1): >>46239801 #
44. tempestn ◴[] No.46234986[source]
I don't know, I would have enjoyed a "Floopzy launches with $10B seed round" or something.
45. sallveburrpi ◴[] No.46239801{7}[source]
I guess you got fooled by an LLM, my friend: https://en.wikipedia.org/wiki/Eternal_return
46. keepamovin ◴[] No.46241822{3}[source]
Thank you. It was a great one-shot and I didn't end up doing any updates. Thrilled to see how it inspired work from Thomas Morford (CSE @ UC Merced, thomasm6m6), who did the amazing article/thread generation (in < 100 lines of PY!): https://sw.vtom.net/hn35/news.html ; and also from Andrej Karpathy (ex-OpenAI, now Eureka Labs, karpathy), who did an interesting analysis of the prescience of threads/commenters, inspired by a reply linking the front page from 10 years ago for comparison: https://karpathy.bearblog.dev/auto-grade-hn/

This was wonderful. 3000 points? I mean, fuck. Among the biggest posts of all time, and definitely of Show HN. What's funny for me is that of all the work I've done in the last 10 years, probably 100 Show HNs, all different, this was by far the biggest. Projects that took months of work drew no interest. And this thing, which dropped into my mind and took maybe 30 minutes, demolished them all. It's hilarious that it even beat out legitimate AI posts, and contaminated search results with future stories.

One of the funniest things for me was hearing how people tabbed away from the page, only to come back and momentarily feel it was the actual HN page. Hahahahaha! :)

All I can say is, I love you all. Watching it stay at the top for 24 hours... at first, it felt like it wasn't something I made. But it was. Cool.