Reddit, Instagram, X, Facebook, TikTok, LinkedIn, Threads, etc. are all the equivalent of digital junk food, and I’d argue that we’re all a lot more negatively affected by it than we think. There’s a reason ‘brain rot’ was word of the year.
I'm going to offer my two accounts as examples:
https://bsky.app/profile/up-8.bsky.social
Both of these are 'cyborg' accounts in that they're fed by my RSS reader, classifier, and autoposter. I'm looking to build a lot more automation.
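To give a sense of the plumbing, here is a minimal sketch of the RSS-to-Bluesky half, using feedparser and the atproto Python SDK; the feed URL, handle, and the classify() stub are placeholders for my real setup:

    import feedparser
    from atproto import Client

    FEED_URL = "https://example.com/feed.xml"  # placeholder feed

    def classify(text: str) -> bool:
        """Stand-in for the real classifier: True means worth posting."""
        return "outrage" not in text.lower()

    client = Client()
    client.login("handle.bsky.social", "app-password")  # use an app password

    for entry in feedparser.parse(FEED_URL).entries:
        text = f"{entry.title} {entry.link}"
        if classify(text) and len(text) <= 300:  # Bluesky's post length limit
            client.send_post(text=text)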
My Mastodon feed took a large set of filter rules to block out #uspol and certain communities of miserable people, but it has stayed outrage-free since last month.
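Client-side, those rules boil down to something like this (a sketch with Mastodon.py; the instance URL, token, and the tiny blocklists are placeholders, since the real rule set is much larger):

    from mastodon import Mastodon

    masto = Mastodon(access_token="token", api_base_url="https://example.social")

    BLOCKED_TAGS = {"uspol"}                    # placeholder tag rules
    BLOCKED_ACCTS = {"doom@miserable.example"}  # placeholder account rules

    def keep(status) -> bool:
        tags = {t["name"].lower() for t in status["tags"]}
        return not (tags & BLOCKED_TAGS) and \
               status["account"]["acct"] not in BLOCKED_ACCTS

    clean_timeline = [s for s in masto.timeline_home() if keep(s)]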
My measurements showed that Bluesky's 'Discover' feed blocked about 75% of emotionally negative material before Jan 20. Since then people are inflamed, but looking closely at my feed it seems Bluesky is deliberately trying to help certain people who felt stuck on X to migrate: it is giving huge amounts of visibility to journalists, journalism professors, activists, and such so that they can run up 200k+ follower counts.
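The measurement itself is simple to reproduce (a sketch, with an off-the-shelf sentiment pipeline standing in for my own classifier; FEED_URI is a placeholder for the Discover generator's AT-URI):

    from atproto import Client
    from transformers import pipeline

    FEED_URI = "at://did:plc:example/app.bsky.feed.generator/discover"  # placeholder
    sentiment = pipeline("sentiment-analysis")  # stand-in for my classifier

    client = Client()
    client.login("handle.bsky.social", "app-password")

    resp = client.app.bsky.feed.get_feed({"feed": FEED_URI, "limit": 100})
    texts = [item.post.record.text for item in resp.feed if item.post.record.text]
    negative = sum(1 for r in sentiment(texts) if r["label"] == "NEGATIVE")
    print(f"{negative / len(texts):.0%} of sampled posts read as negative")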
I understand. (I've been brainstorming ideas about "how to get people off X" with a friend, and tonight I'm going to tell him that Bluesky has it.) I've used "less like this", "unfollow" [1], "mute", "block" and such, and my Discover feed is getting good again.
I have two classifiers in the development pipeline: one to detect "screenshots of text" and "image memes", and a text classifier that is better at sentiment than my current one (I think ModernBERT feeding an LSTM head should be trainable reliably, unlike fine-tuned BERTs). I'm not so much interested in classifying posts as I am in classifying people, and some of them are easy: there are 40,000 people who have a certain image meme pinned that I know I never want to follow. Just recently I figured out how to make training sets for these things without having to look too closely at a lot of toxic content.
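For concreteness, the ModernBERT + LSTM idea looks roughly like this (a sketch; the checkpoint is the public answerdotai/ModernBERT-base, while the hidden size, frozen encoder, and binary head are illustrative choices, not settled decisions):

    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    class SentimentLSTM(nn.Module):
        """ModernBERT token embeddings -> BiLSTM -> binary sentiment head."""

        def __init__(self, name="answerdotai/ModernBERT-base", hidden=256):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(name)
            for p in self.encoder.parameters():  # freeze; train only the head
                p.requires_grad = False
            self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                                batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 2)  # negative / not-negative

        def forward(self, input_ids, attention_mask):
            tokens = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            _, (h, _) = self.lstm(tokens)  # final hidden state, both directions
            return self.head(torch.cat([h[0], h[1]], dim=-1))

    tok = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    model = SentimentLSTM()
    batch = tok(["example post text"], return_tensors="pt", padding=True)
    logits = model(batch["input_ids"], batch["attention_mask"])

The appeal of the LSTM head is that the encoder stays frozen, so training is cheap and stable compared to fine-tuning the whole transformer.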
I'm also eliminating the dependencies that are keeping this from being open sourced or commercialized, so I may have something to share this summer.
[1] one strike for an outrage post