In general, anything that has "algorithmic content ordering" that pushes content triggering strong emotional reactions should be banned and burned to the ground.
People stating their perspectives and arguing against others' with complete disregard for civility (or being 'mean', as you said) make it far more difficult for people to respect opposing viewpoints.
Of course, my solution was to stop using those services. But I wouldn't be surprised if certain personality types are unable to do that (just as they can't quit smoking or porn or whatever else).
Or do we only ban websites that design their algorithms to trigger strong emotional reactions? How do you define that? Even Musk doesn't go around saying that the algorithm is modified to promote the alt-right; instead he pretends it is all about "bringing balance back". Furthermore, I would argue that systems based on votes, such as Reddit or HN, are much more likely than other systems to push such content. We could issue a regulation banning specific platforms or websites (TikTok, X...) by naming them individually, but that would probably run afoul of many free-competition rules, and would be quite easily circumvented.
Not that I disagree on the effect of social media on society, but regulating this is not as easy as "let's ban the algorithm".
FB/X's modus operandi is to keep as many people glued to the screen for as long as possible. The most triggering content will awaken all those "keyboard warriors" to fight.
So instead of seeing your friends and the people you follow, you mostly see whatever will affect you one way or another (hence the proliferation of more and more extreme stuff).
Google is going downhill too, but for different reasons: it also cares only about the investors' bottom line, but being the biggest ad provider, it doesn't care all that much whether people spend time on the google.com page or not.
It's basically a dark entity that cranks up football hooligans and then pushes them onto a collision course.
There is no civility there.
For such people, everything must be framed in a good-versus-evil, us-versus-them, or generally sensationalist manner to sustain any kind of attention.
It's insane that the same community that rails against attempts to police encryption, that believes in the ethos of free software, that insists "piracy isn't theft" and "you can't make math illegal", and that champions crypto/blockchain to prevent censorship is so sympathetic to banning "content ordering algorithms."
The problem is not the algorithms; the problem is the content, and the way people curate that content. Platforms choosing to push harmful content and not police it is a policy issue.
Is the content also free speech? Yes. But like most people I don't subscribe to an absolutist definition of free speech nor do I believe free speech means speech without consequences (absent government censorship) or that it compels a platform.
So I think it's perfectly legitimate for platforms to ban or moderate content even beyond what's strictly legal, and far less dangerous than having governments use their monopoly on violence to control what sorting algorithms you're allowed to use, or to forcibly nationalize and regulate any platform that has over some arbitrary number of users (which is something else a lot of people seem to want).
We should be very careful about the degree of regulation we want governments to apply to what is in essence the only free mass communications medium in existence. Yes, the narrative is that the internet is entirely centralized and controlled by Google/Facebook/Twitter now but that isn't really true. It would absolutely become true if the government regulated the internet like the FCC regulates over the air broadcasts. Just look at the chaos that age verification laws are creating. Do we really want more of that?
There's mean content ("I think you are an asshole") and there's content that's going to cause actual harm because it either goads others to violence or because it creates a constant cortisol increase from fear and dehumanization.
Indeed. You are free to praise the president or face the consequences. Some freedom.
And if you accept my premise, it's probably not the websites, but rather the humans themselves.
As someone who spent an embarrassingly long time on what lots of people claim to be the most toxic forum in the world (not sure about that, it's the biggest in the Nordics though, that's for sure), and even moderated some categories on that forum that many people wouldn't touch with a ten-foot pole, it really isn't that hard to moderate even when the topics are sensitive and most users are assholes.
I'd argue that moderation is difficult today on lots of platforms because it happens too much "on the fly", so you end up with moderators interpreting the rules differently and applying them inconsistently, depending on mood/topic/whatever.
If you instead make a hard list of explicit rules, with examples, and also establish internal precedents that moderators can follow, a lot of the hard work around moderation basically disappears, regardless of how divisive the topic is. But it's hard and time-consuming work, and requires careful deliberation and transparent rulings.
This administration is taking a newly-formed censorship regime that was largely operated by the nepo babies of politicians running do-nothing tax-supported nonprofits, but implemented and operated by Mossad agents, and removing the nepo babies from the loop.
You can say "retard" now, but if you call somebody who executes Palestinian children a retard, you're going on a government blacklist.
edit: This post has been classified and filed, and associated with me for the rest of my life.
But let's grant that FB did publicly say they manipulate their users' emotions for engagement [0], and that a law is passed preventing that. How do you assess that the new FB algorithm is not manipulating emotions for engagement? How do you enforce your law? If you are not allowed to create outrage, are you allowed to promote posts that expose politicians' corruption? Where is the limit?
Once again, I hate these algorithms. But we cannot regulate by saying "stop being evil", we need specific metrics, targets, objectives. A law too broad will ban Google as much as Facebook, and a law too narrow can be circumvented in many ways.
[0] https://www.wsj.com/tech/facebook-algorithm-change-zuckerber...
Therein lies the issue: a news site generally has a limited number of contributors, whereas a social media site has an effectively infinite number of contributors.
In either case, it seems like the same laws (defamation laws, fraud laws, etc.) apply to the authors of the posts, which might be easier to enforce when it's a news site, as the site itself takes responsibility for the content.
This is basically a fight against human nature. If I could get one wish, it would be legislation that forces social media sites to explain in detail how their algorithms work. I have to believe that a company could make a profitable social media site that doesn't try all the tricks in the book to hook users to the site and rile them up. They may not be Meta-sized, but I would think there would be a living in it.
Recent social media (& maybe "recent" no longer applies) doesn't have this kind of community-run tooling.
Aren't you describing your own comment? Aren't upvotes pushing that to the top? So isn't HN the thing that needs to be banned according to your comment?
They are qualitatively distinct. Facebook's algorithm is demonstrably harmful; HN's, not so much.
In general, the mere fact that there is a limited number of contributors, who are known and whose authorship is indicated, goes a long way. Also, all publishers have to register, indicating who is behind a particular "medium".
By contrast, with social "media" there is no accountability. Anyone can publish anything, and there is basically no information about who published it. You can sue, but then again the publishing platform has no information about the author, so the process is long and convoluted.
Making social media what it started as (a network of close friends), where you only see the content your friends publish, plus a requirement for actual details about who is behind a particular profile (perhaps only for pages/profiles with more than something like 10k followers, at which point, let's be honest, it's not a "friend" anymore), would go a long way.
Ban any kind of provider-defined feed that is not chronological, or that includes content from users the user does not follow, with an exception for advertising clearly marked as such. Easy to write as a law, even easier to verify compliance.
I am sure it's going to be swell.
Let's also require tech companies to only allow content that has been approved by the central committee for peace and tolerance (TM) while we're at it!
No risk of censorship there.
No, none of the moderators were paid, but I do think the ~2-3 admins were paid. But yeah, I did it purely out of a desire for the forum to remain high-quality, as did most of the other moderators AFAIK.
> Recent social media (& maybe "recent" no longer applies) doesn't have this kind of community-run tooling.
Agree, although reddit, with its "every subreddit is basically its own forum but not really" model (admins still delete stuff you wouldn't, and vice versa), kind of did an extreme version of community-run tooling, with the obvious end result that moderation is super unequal across reddit, and opaque.
Bluesky is worth mentioning as well, with their self-proclaimed "stackable" moderation, which is kind of some fresh air in the space. https://bsky.social/about/blog/03-12-2024-stackable-moderati...
I think this is a pretty perfect use case for banning. The harms are mostly derived from the business model. If the social media companies were banned from operating these algorithms, and the bans were evaded by DIYers, Mastodon and the like, most of the problems would disappear.
When there's still money in the black market alternative, banning doesn't work well (see: narcotics).
[0] https://imgur.com/we-should-improve-society-somewhat-T6abwxn
It's imperfect, but afaik most social media does the opposite (all "engagement" is good engagement), and I imagine, say, Twitter would be much nicer if it tuned its algo to not propagate posts with an unusually high view/retweet count relative to likes.
In the USA there are similar forces, who have introduced bills with similar ideas multiple times in the last decade. One of those bills is currently in Congress.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
My point, overall, is that all the criticism of social media that excludes HN is based on vibes. And if we're about to ban social media in the EU, then hopefully we have more than vibes to go on.
Maybe the problem is the websites that amplify the most controversial and problematic content because it gets the most clicks, so these companies can report better DAUs and MAUs.
1) We can build open-source clients with user-configurable client-side recommendation algorithms (a rough sketch follows after this list).
2) We can shame the people actively working to make this problem worse, especially if they make 1) or 3) harder.
3) We can build decentralized protocols like Nostr to pry social media from the hands of tech giants altogether.
These solutions are not mutually exclusive, so we should pursue all of them.
The evils of social media are not consequences of people using the internet to connect with other people, they're consequences of people using platforms where you can buy a following instead of having to earn it.