324 points dvh | 1 comment
jahsome ◴[] No.43298548[source]
I absolutely love how fired up the average YouTube commenter was about Honey... for about 72 hours. People completely unaffected in any way were demanding class-action lawsuits and the like, with seemingly no clue why they were even upset. Then the subject completely left their minds.

This observation is of course entirely anecdotal, but manufactured outrage is fascinating, even as it erodes the very foundations of society.

replies(18): >>43298579 #>>43298600 #>>43298610 #>>43298640 #>>43298733 #>>43298933 #>>43298942 #>>43298977 #>>43299229 #>>43299390 #>>43299411 #>>43299451 #>>43299754 #>>43299776 #>>43300000 #>>43300017 #>>43300261 #>>43300604 #
mrtksn ◴[] No.43298942[source]
I feel like the internet is turning into TV. There are not that many things going on, instead, there's a firehose that directs all the rage or all the love to something for some period of time. Almost like the legacy media picking topics and directing the narrative.

I'm particularly annoyed by Twitter lately because I can no longer share anything with my GF: she has already seen it. Our timelines are largely similar; it doesn't matter much whom you follow. The algorithmic discovery feed being the default is also very effective at creating these channels (Technology Connections recently made a video about it).

On Twitter it appears that a few talking points, or "channels", are being pushed based on location and maybe a few other signals, and apparently to get exposure you have to say something that fits the narrative.

Maybe it's not intentional; maybe it's the result of the algorithm dividing people into cohorts or something. But I'm very annoyed by the potentially destructive effect of the firehose. Everyone being very outraged about something for a short period of time, or very excited for a short period of time, can't be healthy, because it lacks depth and continuation.

replies(3): >>43299014 #>>43299188 #>>43299629 #
xg15 ◴[] No.43299014[source]
> and apparently to get exposure you have to say something that fits the narrative.

I think we should really be aware that, even if tech companies weren't able to build something like this before, with LLMs they definitely are now.

There is lots of talk about the generative powers of LLMs, but they also have unprecedented analysis powers: you can now easily build something that automatically checks whether a tweet expresses a certain opinion or narrative, and upranks or downranks it based on the result.

So if you're the owner of a platform, you can now fully control the appearance of what "people are saying" on the platform, without even having to use bots or fake messages.

(Of course, you could use those as well, in addition, if the opinion you want to push is so unpopular that there aren't enough real users to uprank in the first place.)
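The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual ranking code: `classify_stance()` is a stand-in for an LLM call (e.g. a prompt like "Does this post support opinion X? Answer yes or no"), replaced here by a trivial keyword match so the sketch runs on its own. The `boost` and `penalty` multipliers are invented parameters.

```python
def classify_stance(post: str, narrative_keywords: set[str]) -> bool:
    """Stand-in for an LLM stance classifier.

    In a real system this would be a single LLM call per post; here a
    keyword intersection keeps the sketch self-contained.
    """
    return bool(set(post.lower().split()) & narrative_keywords)

def rerank(posts: list[str], narrative_keywords: set[str],
           boost: float = 2.0, penalty: float = 0.5) -> list[str]:
    """Uprank posts matching the narrative, downrank the rest.

    Each post gets a score: `boost` if it matches, `penalty` if not.
    Original feed order is kept as a tiebreaker, so no fake content is
    ever injected -- only real posts are reordered.
    """
    scored = []
    for pos, post in enumerate(posts):
        score = boost if classify_stance(post, narrative_keywords) else penalty
        scored.append((-score, pos, post))  # negate so higher scores sort first
    scored.sort()
    return [post for _, _, post in scored]

posts = [
    "the weather is nice today",
    "product X is amazing, everyone should buy it",
    "I had pasta for lunch",
]
# The post matching the pushed narrative floats to the top of the feed.
ranked = rerank(posts, narrative_keywords={"product", "amazing"})
```

The point of the sketch is that nothing here requires bots or fabricated messages: the platform only decides which genuine posts get visibility, which is exactly why the manipulation would be hard to detect from the outside.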

replies(2): >>43299102 #>>43300292 #
mrtksn ◴[] No.43299102[source]
Definitely. AFAIK they previously used to do sentiment analysis, and Facebook faced some backlash for experimenting on its users' moods by manipulating their timelines. But today it must be possible to do 100% editorial moderation using LLMs and pretend that whatever you want is the general public sentiment.

I also notice that "influencers" are themselves influenced by this. They pick up the talking points from real-time media like Twitter, then make coherent videos about this stuff, and it gets legitimized. People rarely revisit their past work once the firehose is spraying in some other direction, and the fake public sentiment becomes the real public sentiment.