Toxicity on Social Media
Comments
doginasuit
swed420
> For most of its history, Reddit didn't have an algorithm that promoted stories beyond upvotes and time since posting, that might even still be the case.
Reddit is heavily botted, including by capital interests, and has been for a long time. This includes basic up/down vote activity.
> I think there is a significant percentage of users that do not initiate extreme content but participate in amplifying it.
Yes, it's probably initiated by bots, and then real users are easily persuaded to follow the manufactured herd.
These issues are not exclusive to reddit, either.
doginasuit
The problem with the bots explanation is that it is unfalsifiable. On top of that, this fits a very recognizable pattern of human behavior. I'm very skeptical that blocking all the bots would even move the needle.
swed420
It's unfalsifiable, therefore it isn't happening? Not very convincing for anyone who's engaged with these platforms where reactions happen seconds after your own clicks.
Alternatively, ask yourself: would monied interests be inclined to exploit these platforms? Of course they would. You'd need a solid explanation as to why they wouldn't.
doginasuit
I never said I think it isn't happening, I just don't think it is driving the phenomenon. Platforms like Lemmy do not have enough traffic to justify a bot presence by monied interests and their general discussion communities are just as extreme as Reddit, if not more.
swed420
Likely because distributed platforms like lemmy/mastodon suffer from the "fiefdom" problem, plus siloed communities forming due to ideologically defined rules.
gruez
>Reddit is heavily botted, including by capital interests.
Is that why /r/all is consistently anti-capitalism and anti-business?
dmbche
Read The Society of the Spectacle by Guy Debord.
nozzlegear
Can you give us a summary, so we can discuss what you're alluding to without a several-day waiting period while we read the book?
dmbche
Not really, or not better than Wikipedia would. Sometimes it's more pertinent to read than to chat about a thing you might read.
You can also not read it, you do you, but it's enlightening.
swed420
It's manufactured controlled opposition which is designed to look crazier than unchecked capitalism, and it's succeeding wildly at its mission.
See: wsws.org as a longstanding external example
gruez
So if reddit is more left than you are, it's "manufactured controlled opposition which is designed to look crazier than unchecked capitalism", and I guess if it's less left than you are, then they're a bunch of DINOs (LINOs?) like Manchin or Sinema?
swed420
It's a bunch of siloed group think offered in a variety of flavors. Definitely not a monolith, though basic liberal types (who virtue signal as being more "left" than they really are) might be the most common user.
Siloed groups are the easiest to monetize to advertisers, and they're also not a threat to a faux democracy.
tekla
My god, Bluesky is probably actually run as manufactured controlled opposition from all the crazy stuff you see on there. It makes perfect sense. HOW DEEP DOES THIS GO.
swed420
Yes, and it provides something for the X/twitter reactionaries to "react" to.
rightward_ratchet++;
throwawayffffas
It's pretty clear: the platforms feed on and profit from promoting the loudest, most toxic views out there. It's time to just treat these platforms as publishers instead of platforms and make them liable for their speech, because that is what it is; the automation around the method by which the speech is delivered is irrelevant. As is the viability of the platforms in an environment where they are responsible for what happens on them. We were fine before them, and some of them are already dying.
dwa3592
I personally feel Reddit is extremely polarized and toxic. It really depends on the community.
okr
Yeah, I had the same sentiment towards Twitter before it was bought by Mr. Musk. You were always close to being banned, and the American government was colluding. I don't like the tone on X now, but hey, no one silences me, and that is awesome.
avidruntime
How was the US Gov colluding? My recollection of the Twitter Files was that it was largely blown out of proportion (for non-tech-aligned audiences): Twitter received tips from CISA regarding misinformation/disinformation and Twitter decided whether to take action on accounts; sometimes they did, sometimes they didn't.
gosub100
A group of whistleblowers tried to come forward about twitter before Elon bought it. There were entire departments dedicated to suppressing certain ideas and trends, while amplifying others.
array_key_first
Realistically this is something you need to do to some degree. I mean, you probably want to silence the "kill yourself" part of twitter and want to amplify the "please don't kill yourself" part.
Regardless, I think it's fairly clear that twitter is as manufactured as it ever was.
tipsytoad
As someone who basically only hangs out on HN, what are the trends on Reddit? I thought it would vary by channel
AMerrit
There are still good niche and well-moderated subreddits, but the big ones are pretty annoying. Lots of the most extreme version of a headline/summary to get the clicks. Tonnes of bot comments, especially with the rise of LLMs. Repetitive joke responses.
Ntrails
Honestly take me back to time ordered forums. Threads, voting, algorithms were all a mistake.
antisthenes
They still exist. You just need to be content with smaller communities.
micromacrofoot
they're probably more numerous than ever, just harder to find
nullc
> that might even still be the case.
No way, hasn't been true for many years. Try viewing the site from a few different people's systems.
Arkhaine_kupo
> Despite this, we have still seen a steady trend toward extreme views on the platform.
There were 3 conditions that were working and were removed very, very quickly:
1) it was a web application only, which enforces a more deliberate, conscientious style of interaction
2) it skewed older. Compared to other sites like iFunny or Instagram, the age profile was closer to 30 than 12.
3) the upvote/downvote mechanic was used to upvote relevant content, not something you agreed with, and to downvote to drown out overused jokes, lack-of-nuance posts, etc.
But in 2023 reddit destroyed 3rd-party APIs and went all in on the app.
Age plummeted, app usage is more casual than laptop usage, length of posts went full brainrot, and lastly there was no enforcement to teach people what upvotes meant. So it became thumbs up or down, and the jokes went from heavily downvoted to always the top comment.
150 million users in 6 months is the death of any conversation, and reddit did it on purpose to try and IPO.
CM30
It also had a design that was offputting to a lot of casual users, which probably kept out folks that didn't really have anything meaningful to say/didn't want to contribute much. Same with Hacker News: the average Joe doesn't find this site all that appealing compared to Twitter/Facebook/Instagram/whatever, so it mostly appeals to more techy, intellectual users than those platforms do.
genghisjahn
Slashdot's rating system was the best I've ever seen. But I'm sure it doesn't help improve engagement, in fact the system had so many rules (for an online ratings system) that I'd guess it reduced engagement. Which might have been a feature in its glory days.
CM30
You mean how you had to give a reason for your rating rather than just choosing 'like' or 'dislike'?
Honestly, I think that more nuanced setup may help limit toxicity quite a lot. If there's no general upvote/downvote option, people might have to actually think about why they like/dislike something rather than treating the system as an "I dislike this because I disagree and everyone should think the same way I do" setup.
It's why I quite like the reaction systems some forum scripts have. Yeah they're not perfect (many still have like/dislike options by default), but giving users reasons for why they upvote/downvote a post makes things a lot more meaningful. I also quite like how for some of them, agree and disagree don't actually change how the post appears or count as a rating. They just exist so people can see how many people agree or disagree with something and that's it.
ishouldstayaway
(3) here was always just a fantasy. It never actually worked that way.
Arkhaine_kupo
Any platform with millions of users and dedicated communities will be hard to generalise, but there were countless examples of it happening.
Length of posts has plummeted, and "meme" content and "twitter"-like language used to be repudiated, while now it's basically the main mode of communication.
There used to be "famous" usernames, not everyone agreed with them but most people considered their input valuable, ending perhaps with the famous Unidan incident.
I would admit that, having been on the site for 15 years, the degradation has been continuous, and small communities were much better than default subs from the get-go. But the post-app-release Eternal September has been irreversible and made the site culture absolute trash.
nozzlegear
Agreed. Moderators would even try to use custom CSS to remove the downvote buttons on their subs to prevent people from downvoting comments they disagreed with, as that was against "reddiquette." But oftentimes you'd want to indulge your monkey brain and punish people who have differing opinions by making the number go down and their comments go gray.
thin_carapace
If there were a term less crass than 'reddit circle jerk', I would use it. People love it for a reason: bipolar thinking in a herd is natural; we are networked XOR cells by design.
I'd separate social media by era for any related analysis, as two major milestones demarcate distinct usage patterns (first algorithms, then LLMs). IMO those factors influence the discussion about as much as a platform's inherent construction.
Toxicity has evolved over time: we have progressed from mere keyboard warriors, to nation states delivering propaganda campaigns with a click, to now, when half the internet is bot activity.
robot-wrangler
This is amazing analysis and presentation, and it has a call to action at the end. Some of this guy's other stuff: https://tobias.cc/reading
The only point I'd add is that it's not handling time evolution in wicked problems quite right. Agree that the noisy room is distorting the world in exactly the ways described. But what if we've been in there so long, and the world has become so distorted.. that reality itself slides towards the once-extreme positions? Easiest to see this with climate-change controversy since that is the way that sort of thing happens, regardless of whether you think it's happened yet. Cascade, phase change, and collapse don't just call a truce.
So you have to anticipate that, acknowledging the pessimist is actually right and that systems are a real bitch. Then you point out that if we're already doomed, we have nothing to lose by trying. Systems are complex after all, that's the whole problem.. so if we miscalculated on the doom, then bothering to try actually saves us. Checkmate pessimists.
dwa3592
I liked the article and I also sympathize with a lot of the concerns around this topic because I am from the group that became silent after a while. Popular social media does not have any appetite for nuance. By design it cannot have that. Community checks are alright but I would rather spend an hour on hacker news trying to understand the nuanced perspective of 40 people than looking at very diluted opinion (4 choices) of 40000 people on twitter.
Also someone once told me - "To get your voice heard in a loud room, you either need to be very tall or extremely loud." Tall in this context are rich and influential people. Basically money and influence buys you height in a loud room
plewd
I re-realized this about a week ago when the "red button vs blue button" debate started appearing a lot on Reddit and Instagram. It's frustrating when every comment is just a shallow knee-jerk reaction from one side re-iterating their perspective or clowning on the other.
The whole debate could be summarized in a paragraph or two, but the social media environment is unfortunately curated towards diluted opinions (as you said) instead of nuanced ones.
All that to say I'm happy HN is still holding strong in terms of quality as compared to other platforms.
mayhemducks
Hacker News is not a representative subset of humanity, so you are restricting your range of understanding. Maybe that's necessary for mental health (it is for me), but the tradeoff isn't great either.
lesostep
No reliably reachable subset is representative of humanity.
But I also want to argue against the range-of-understanding argument. Attention has a limit. Anyone who wants to develop a deep understanding of any topic would do themselves a disfavor by trying to expand their range aimlessly.
We can't all know everything all at once, so we should just develop some common sense for the most important topics instead. Like "people are generally good and against violence". We used to have that once; we can rebuild it now.
moolcool
> Tall in this context are rich and influential people
This is a double edged sword. Lots of "thought leaders" on twitter have outed themselves as lacking in both thought, and leadership.
plewd
Most of them only got to that position from being loud in the first place, so I'd think you could still put them in the latter category.
seltzerboys
This article is awesome, but it doesn't acknowledge that the problem has been maliciously manufactured by social media companies. They have no incentive to curb the distortion of extremism, and therefore any attempt to do so in a grassroots way will likely not be effective. Then there's the bot problem, but that is probably easier to address if we actually committed to doing so.
__MatrixMan__
Do we need their permission to implement it? Could be a browser extension.
robot-wrangler
Or post it as a reply with the URL and hope for the silent majority to support it enough to send it to the top. If the platform only supports out-links it's tough.. maybe use QR codes as avatars?
__MatrixMan__
Trouble with replies is that:
1. They're not as visible as the post they're attempting to moderate, so people just won't notice that there's more info.
2. Many people practice https://indieweb.org/POSSE so you now need to duplicate your reply across many separate social networks if you're going to match the reach of the syndicated content.
The nice thing about a plugin is you can associate the annotation with the underlying content by CTPH hash (i.e. the underlying tech for virus signatures), so it shows up wherever the annotated content shows up, regardless of URL and with identical visibility, since you're going by what appears on the screen, not by whatever internal logic the underlying site uses.
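The chunk-and-hash idea behind that can be sketched in a few lines. This is a hypothetical, simplified stand-in for real CTPH (real tools use ssdeep): cut the text into chunks wherever a rolling checksum hits a trigger value, hash each chunk, and compare chunk-hash sets, so a small edit only perturbs a few chunks instead of changing the whole hash.

```python
import hashlib

def piecewise_hash(text, window=4, trigger=8):
    """Crude CTPH-style fuzzy hash: cut `text` into chunks wherever a
    rolling checksum over the last `window` chars hits a trigger value,
    then hash each chunk. (Illustrative only; window/trigger are
    arbitrary, and real implementations use ssdeep's algorithm.)"""
    chunks, start = [], 0
    for i in range(window, len(text)):
        # cheap rolling checksum over the trailing window of characters
        if sum(map(ord, text[i - window:i])) % trigger == 0:
            chunks.append(text[start:i])
            start = i
    chunks.append(text[start:])
    return {hashlib.sha1(c.encode()).hexdigest()[:8] for c in chunks if c}

def similarity(a, b):
    """Jaccard similarity of the two chunk-hash sets (0.0 to 1.0)."""
    ha, hb = piecewise_hash(a), piecewise_hash(b)
    return len(ha & hb) / len(ha | hb) if ha | hb else 1.0
```

An annotation plugin could store notes keyed by such hashes and surface them on any page whose visible text scores above a threshold, regardless of URL.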
wrxd
It could. But most people use social media from mobile apps so it doesn’t really help in the grand scheme of things
mrmarket
true. but that's on the individual vs. the platform so it won't be perfect adoption. better than nothing tho.
__MatrixMan__
Sure, it's opt-in, but it has the potential to span platforms. It works just as well on somebody's blog as it does on Twitter.
swed420
> Do we need their permission to implement it? Could be a browser extension.
Instead of bandaid-hack solutions leading to perpetual cat-and-mouse, why not build a citizen-owned platform from the ground up, as detailed here:
https://www.noemamag.com/the-last-days-of-social-media/
You would barely even need to advertise for it if it was obviously better than any of the existing corporate slop. It would sell itself, and the "profit" would be the end result that everybody can enjoy.
__MatrixMan__
Because I don't think the problem has to do with who owns the platform, but rather its that the platform's design relies on infrastructure that can be owned in the first place.
The people who run existing social media didn't start out evil, being in a powerful position made them that way.
I'll be rooting for this user owned thing to stay true to its goals, but if it's shaped like the other ones in all ways but its ownership structure, then I won't be expecting it to do so.
swed420
I believe that threat could be prevented with the suggestions in the article.
> The people who run existing social media didn't start out evil
Um, not all of them:
> On July 6, instant messages by a 19-year-old Zuck appeared on Twitter, along with a link to a 2010 Business Insider story about an exchange that took place shortly after the Facebook founder launched the social-media phenomenon in his dorm room.
> “Yeah so if you ever need info about anyone at Harvard just ask. I have over 4,000 emails, pictures, addresses, SNS,” Zuckerberg’s message says.
> “What? How’d you manage that one?” a friend asks.
> “People just submitted it,” Zuckerberg responds. “I don’t know why they 'trust me.' Dumb fucks.”
__MatrixMan__
Touche re: Zuck, but I still think that even if he had started out a saint, he'd be a sinner by now.
swed420
True, which is why you'd need well defined safeguards in place from the very beginning, with high visibility into the organization that you normally wouldn't find in a closed, for-profit business.
__MatrixMan__
You mean like OpenAI back when it was a nonprofit?
swed420
So figure out what they did wrong. It's going to have to be more original than the easily subverted traditional "nonprofit" model.
__MatrixMan__
Maybe the right lawyer is out there for that challenge. But legal code is running on compromised infrastructure these days so I think we should plan to operate as if the law is against us.
card_zero
OK, so to start with you're saying that there's a small noisy pro- side, and a small noisy anti- side, and a moderate majority. But then suddenly:
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions. (They could also be in the minority, and this fear of speaking up would still be a bad thing.)
Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
Mordisquitos
I have a similar intuition to yours. On many topics it isn't so much self-censorship, it is simply that for many people there is rightly no motivation to go around announcing their nuanced, moderate, or no-opinion-but-unimpressed-by-extremes positions.
Even when people do have strong opinions on a topic (and a moderate opinion can also be strong), most people have better things to do with their lives than to go around blasting their opinions to the world as a hobby. And the few in this camp that do are not very likely to be amplified by the engagement algorithms.
lelanthran
> OK, so to start with you're saying that there's a small noisy pro- side, and a small noisy anti- side, and a moderate majority. But then suddenly:
> The Majority Goes Silent - When the majority of people looks at the feed and assumes they're outnumbered, people will often self-censor.
> That's not the same thing, is it? Here the majority is, say, anti-, but they are being frightened by a noisy pro- minority. They're moderates in the sense that anti- is the conventional position to take. But they have opinions.
I don't follow your argument (which is different to the one in the article):
There's a small noisy pro-side, a small noisy anti-side and a majority, but not necessarily a moderate majority!
The article doesn't say anything about the majority being moderates, does it?
> Otherwise, if they're truly moderate, but are frightened into silence supposedly, what would they be saying if they dared? "Everybody listen to me, I have no strong opinion on this matter"?
Not necessarily true; there's a noisy pro- minority, a noisy anti- minority, and a silent majority. Who knows whether they are pro, anti, or equally split?
And even if they were actually moderate, they could see opinions like "everyone should have guns" and "no one should have guns", and keep their majority moderate opinion of "people should be allowed guns depending on whether they cross some objective line into dangerous or neglectful behaviour".
That's both a moderate and a majority position, and yet you won't see it expressed in a forum because all the noise is being made by the two extremes.
The argument you're making is that the silent majority must necessarily be moderates, but that's not a requirement.
Cthulhu_
But that assumes a black-and-white viewpoint - one or the other. But there's a big nuanced gray area that is underrepresented everywhere.
Take immigration or refugees - the obvious thing is that you're either for or against it. But there's so many things in between, so much nuance, etc. And that takes reasonable adults to think and talk about.
JimmyBuckets
This seems like a great idea. Even without the linked surveys. Two questions I have:
- how does this handle the fact that a lot of accounts on social media platforms are bots that may be controlled by a small number of people?
- how do we actually get this implemented?
63stack
This is my question as well, especially about the "community check". How will it be ensured that the "community check" is not going to be dominated by bots pushing an agenda? How is that different from "just another comment section hidden behind a green button"?
vintermann
I guess they think fighting bots is a separate problem. Fair enough, since bots would still be a problem even if they pushed "reasonable" takes.
energy123
Regulating the algorithm is my favorite answer. Ban the recommendation engine on large social media sites. Make it a chronological feed of who you follow. Make it boring. I don't know all the details, but something like this.
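A chronological follow-feed is almost trivially simple, which is rather the point. A minimal sketch (the `author` and `ts` field names are hypothetical):

```python
def chronological_feed(posts, following):
    """No recommendation engine: only posts from accounts you follow,
    sorted newest-first by timestamp. `posts` is a list of dicts with
    'author' and 'ts' keys; `following` is a set of account names."""
    return sorted(
        (p for p in posts if p["author"] in following),
        key=lambda p: p["ts"],
        reverse=True,
    )
```

Everything a recommendation engine adds on top of this (engagement scoring, virality boosts, personalisation) is exactly what this proposal would strip away.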
rcxdude
I agree, recommendation algorithms are a huge part of the problem. Consciously choosing what you interact with is a very important part of media consumption IMO, and most social media sites give you very few tools to do that (no, having likes/dislikes affect your personalised feed is not enough, especially when that also becomes 'engagement' and boosts the content in everyone's feed in general). These algorithms should be dumber in all areas except spam prevention (and even then, if there's less stuff in your feed you didn't specifically choose to see, spam should be a much smaller problem anyway).
tskj
I think this undersells the problem of discovery a little bit though. For example, youtube has been great at serving me longform content I want to engage with and wouldn't have discovered any other way.
(then they started having shorts, so I cancelled youtube premium)
pibaker
How many people do you think use HN's "new" and "comments" pages? These are exactly what you asked for, a feed of the website you are currently on without the recommendation system's influence.
I personally find them nigh unusable because of the lack of any kind of filtering. I am on HN precisely because it has a somewhat working post sorting system — a recommendation engine, as an activist who wants to get HN in trouble might say.
Anyway, I doubt a regulation as such would fly under the First Amendment. Recommendation is expressing an opinion, and expressing an opinion is speech. If I think one post is better and deserves to be on the top spot, I believe I should have a right to say it without some guy in DC telling me to shut up.
tardedmeme
I want some kind of algorithm though. If some of my friends post a lot and some post a little, I want to see a more even split. And I want to see some posts from friends of friends, and from strangers who are posting similarly to my friends.
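Even this gentler kind of ranking is easy to state mechanically. A sketch of the "even split" part (hypothetical, pure Python): round-robin across authors, so a prolific friend can't flood out a quiet one.

```python
from collections import defaultdict
from itertools import zip_longest

def balanced_feed(posts):
    """Interleave posts round-robin by author, so each friend gets one
    slot per 'round'. `posts` is a list of (author, post) pairs,
    newest first within each author."""
    by_author = defaultdict(list)
    for author, post in posts:
        by_author[author].append((author, post))
    # One item per author per round; None pads exhausted authors.
    rounds = zip_longest(*by_author.values())
    return [p for rnd in rounds for p in rnd if p is not None]
```

Friends-of-friends and similar-stranger discovery would need a second source of candidate posts, but the same interleaving step keeps any one source from dominating the feed.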
plewd
I honestly don't think it's possible for platforms to have "nice" algorithms like this without slowly slipping into the "maximum-engagement" algorithms we're plagued with now. I remember seeing this happen with Instagram, slowly going from a chronological feed to a confusing one where you can never be certain you've caught up with your network.
In a perfect world it would be great to have a platform that allows open-sourced algorithms for people to choose from, although that's a crazy pipe dream.
johnpaulkiser
I think tweaking section 230 in the US would have a similar effect. Make corporations liable for posts that they algorithmically amplify. The "discovery" algorithms would become banal overnight without an outright ban.
bell-cot
But as with pretty much every cardinal sin of late-stage capitalism - there are a whole lot of very entitled people, who are both very accustomed to and skilled at getting their own way, who are heavily invested in opposing any real solution to the problem.
robot-wrangler
> how do we actually get this implemented?
Hackers might be interested to know that there's an "open questions" section at the end of TFA. Some of it probably wants simulation, some wants theorems.
Camel-ai pubs/frameworks might be related and useful, for example: https://github.com/camel-ai/agent-trust
Several model checkers also have primitives for working with common-knowledge. TFA puts it like this:
> Learning a fact changes what you know. Seeing it displayed publicly — where you know everyone else can see it too — changes what everyone knows, and subsequently how they act.
An important piece of technical vocabulary, it really seems we need this to talk about a lot of problems lately. Here is Terence Tao talking about some related math for disinformation and politics ( https://mathstodon.xyz/@tao/114866548969775485 ) and summing it up this way:
> we barely even have the vocabulary to discuss, let alone analyze, games in which control of information is a major battleground.
He kinda means in general though I think.. probably we can find heuristics and crunch a case or two
dependsontheq
I have been working on a monitoring and prebunking system for digital manipulation and disinformation. We are focusing not on the content or narrative but on the psychological patterns and manipulation techniques that are used.
It's the most disturbing thing I have ever worked on; there is much more out there than most people realize, and a lot of it uses deceptive dark patterns.
If somebody is interested in talking more about this or is working on similar things, always welcome!
tardedmeme
How do you convince people that your system is not itself disinformation?
dependsontheq
I get that question a lot and I can understand it. There are two things at work here; one is freedom of speech. Depending on your local rules (I am from Germany, so our rules, e.g. concerning the Nazi past, are different than the US rules), freedom of speech should be guaranteed, and opinions shouldn't be labeled as disinformation or manipulation based on their content.
What we are monitoring are deceptive patterns on a text or transcript level. Deceptive patterns can be things like information inconsistency in one post, context shifts in one post that are used to reframe something, or video patterns like fake statistics or fake headlines that are not consistent with the main content.
All of these patterns are actual science-backed psychological manipulation patterns, and they are consistently used in the most viral posts we detect. My perspective after one year of working on this is that average media literacy is even lower than we think, and that with the social media platforms we have built an evolutionary system that is optimized to increase the performance of digital manipulation actors.
Mezzie
This is the area I wanted to work in before I got sick and ended up being pretty worthless.
I have the following questions, in no particular order - I'm writing this comment off the top of my head in stream of consciousness while procrastinating:
- Why did you decide on a new technical system/platform as the way to go? This might sound like a silly question, but one thing I've noticed when talking to techies who are interested in this problem is that it can veer into 'I have a hammer, look at all these nails!' The reason I ask this is one thing I've noticed in working with the average person (or even very educated non-techy sorts) is that they view themselves as having little to no agency when interacting with technical systems, and I worry that adding another one just further encourages them to outsource that agency (just believing you instead of randos on Instagram/TT). This is versus things like outreach through different channels, formalized educational programs, producing of children's educational material (teach the parents while they read to their kids, for example, which lets them set aside the ego of being lectured to as an adult), traditional/alternative media campaigns, etc. If it's just a case of that's your skill set and resources, fair enough!
- You mentioned in a sister comment the low rates of media literacy. I 100% agree. Do you have/have you found any good ways to handle the combination of high education/socio-economic class and low media literacy? I've noticed very similar patterns across education levels, but my peers with graduate degrees or some manner of social 'success' fully believe themselves to be media literate and in fact some of them could recite most of the deceptive tricks and point them out if asked. They still knee-jerk believe things that confirm their priors.
- Is there any educational focus on heuristics and ways that the average person can satisfice their way to something better than the status quo? A lot of effort in this area seems to assume some platonic ideal of an informed, rational citizen with plenty of time to dedicate to educating themselves/learning better habits. Because of this, they tend to be information dumps. In addition to the low media literacy, there are a lot of people (at least here in America - I can't say for your education system) who lack the requisite knowledge to understand what you're telling them. I know that could sound a little insane, but a lot of people can't manage hypotheticals or understand second-order effects. We've also got the studies about people's attention spans. Going through what amounts to paragraphs of psychological text (or video) presented in a factual way will make people scroll or their eyes glaze over, but actions they can take in their life (e.g. stopping social media use for a month and noticing how their thoughts change, specifically following a small group/topic that you don't belong to/have much interest in to see how conversations change over time without personal investment, etc.) might be more approachable. Right now, the two approaches seem to be 'let experts educate you so you can learn a byzantine system in your 45 minutes a day of free time' and 'just go touch grass/let's go back to 1990'. I don't think either of those are realistic for the average person.
neogodless
There's money in politics and money in social media.
And the money decides how to run the circus. Not for the benefit of all.
So it is a really hard problem.
hermitcrab
New social trends and technologies frequently cause some level of moral panic. Moral panics of the past have been caused by all sorts of things, that now seem rather quaint: novels, bicycles, comics, television, videos, heavy metal, dungeons and dragons etc. But social media feels very different. It really does seem to be causing major societal disruption.
tardedmeme
So did all those other things. And many of them did cause major societal disruption. And most of those were for the worse.
hermitcrab
>And many of them did cause major societal disruption.
Which ones?
tardedmeme
Printing press ("novels"), bicycles, television, videos. Half the video games today trace back to dungeons and dragons so that was disruptive too.
hermitcrab
I'm not convinced that bicycles, novels and Dungeons and Dragons were major societal disruptions. The printing press certainly was.
wffurr
Social media is more like the printing press than any of those things. It radically changed the economics of distributing information. The printing press brought down the Catholic church and the Kings of Europe. The disruption caused by social media and disintermediated free distribution is just getting started.
krige
You may notice that both the European kings and the Church coexisted with the printing press for centuries. No, what did the kings in was the Great War, and what broke the Catholic Church's stranglehold on Europe (because it's still around and hardly powerless) was either the French Revolution or the Great War again.
TFNA
Notice much Catholic Church power in Northern Germany and the Nordic countries? The printing press was a huge part of how Northern Europe went so hard for Lutheranism that Catholicism became a vague folk memory. Around the same time, Great Britain and the Netherlands became mostly Protestant, and even France had a substantial Protestant community before the Counter-Reformation was set on it. This is what the OP was talking about.
krapp
The current social panic always feels very different. But people literally believe social media is the sole cause of all of modern society's problems, that it's a mind-control platform and a cancer on society. I've seen people say they would welcome a fascist dictatorship if only it meant destroying social media. I've seen people say they want "algorithms" made illegal.
It's obvious from the hyperbole around the discourse alone that this moral panic has reached levels of derangement that far outclass any rational basis for judgement.
Does social media have negative consequences? Sure. Are people assholes on the internet? Always have been. Is social media the greatest and most existentially perilous evil ever conceived by humankind? No.
I think in ten years people will look back at this (on whatever strictly censored and regulated internet replaces this one) with the same bemused confusion as we do the Satanic Panic. And honestly in forty years, if technological civilization still exists, we'll find out how much of that was stoked by the CIA or other interests.
speak_plainly
It comes down to the kind of society we want to create, not some existential threat. Social media has an outsized effect on everything from the food people eat to the medical care they receive. The incentives of social media create a great number of distortions within the social media sphere but also in the real world.
Is traveling to Tokyo just to sprint across the Shibuya Scramble for a slightly less-crowded Instagram selfie really a model of the good life? Should someone like Zuckerberg have this level of control over the activities and minds of the human race? Is Mr. Beast, who has industrialized the exploitation of human virtue, a role model for children?
Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
tolerance
I couldn't have said any of that better myself...
> Human social pressure and follower mindsets are part of the human experience but systematically gaming those instincts in real-time so money flows to a social media company at all costs in some strange digital sharecropping scheme is what’s new and the hierarchy of others trying to capture a small piece of that pie creates these distortions.
To what I think @krapp's point is: these dynamics are not exclusive to social media. At their core they're led by something far more primal than what social media only exacerbates. Governments are not as naive as the general public. Regulations effected in 2026 to "regulate social media" could have consequences on how information is spread among people in 2040.
seltzerboys
if you strip social media down to its essential parts it's clear that it can easily cause huge problems for a society. it's basically a never-ending 24/7 stream of information amplified out to anywhere in the world that is:
1. insanely low-effort to post
2. requires NO discernment, proof, credibility, or peer review to post
3. 'viral' in that opinions circulate because other people have interacted with them, not because they are right or meaningful. so bad news, good news, real news and fake news all travel at the same speed, lowering discernment even further
4. echo chambers are baked into the form. people are more likely to interact with content they agree with vs. content that is true or impactful. this creates circles of people agreeing with each other on increasingly niched-down topics.
it is extremely different from newspapers and television.
krapp
If you strip social media down to its essential parts it's simply a multimedia communications and networking paradigm. Nothing ontologically good or evil about it.
You aren't listing problems intrinsic to social media per se, so much as how people choose to use it and how specific platforms choose to operate. The latter of which is a problem when Twitter, Facebook and the like optimize for engagement through controversy, but I think when we focus on social media as a whole we risk throwing the baby out with the bathwater in restricting human rights and the ability of people to network and communicate freely without interference by state interlocutors.
funimpoded
> If you strip social media down to its essential parts it's simply a multimedia communications and networking paradigm. Nothing ontologically good or evil about it.
“The medium is the message”.
This stuff’s been around long enough we’ve got a pretty good idea of what its “message” is.
tolerance
> “The medium is the message”.
What's this mean?
funimpoded
It’s a famous phrase coined by McLuhan. He means that the form of a medium determines the kind of overall message it delivers. A case of scrolls carries a different message from a bound codex collecting those same scrolls, and so on. Whatever the hypothetical ability to deliver the same messages over books, TV, silent films, talkies, YouTube shorts, tweets, radio, handwritten letters, emails, etc., in practice the media themselves shape the messages they deliver, so the broader "messages" they effect, in the form of shaping society and public life, are very different.
tolerance
I'm not sure how that affects the arguments at play here. Social media is not a single thing. It has various forms each bearing its own kind of "message" revealing and influencing parts of society in a variety of ways depending on the platform.
It's a lot easier to skip delineating the myriad effects that each platform has on its users and instead issue sweeping legislation that will have consequences for how information is distributed and how people can interact with each other.
We're dealing with technology capable of containing various kinds of media at a scale far beyond what McLuhan observed 60 years ago. That's not to say that he's become obsolete.
Read this and let me know if I'm getting it wrong...
https://web.archive.org/web/20060605204535/http://individual...
mrmarket
well yeah. like hacker news is a social platform with checks and balances in place to prevent mass hysteria and ragebaiting. but if we're honest about the biggest social media platforms of the day, each of the things listed are features of them. and because these tools are actually incentivized against fixing each of the problems listed, they will not fix them. so they're functionally essential parts of the social media platforms that are actually shaping public opinion.
tardedmeme
Hacker News does not prevent mass hysteria and ragebaiting. It seems like for any social media site, the appearance of preventing negative behaviors is worth far more than actually preventing them, which can actually subtract stakeholder value in many cases.
pibaker
HN is better than a lot of places, such as most of Reddit, but saying HN prevents mass hysteria is laughable. When xz was backdoored, I saw people asking open source projects to require government IDs to prevent future attacks. When LK-99 came out, I was told on HN that it must be real because the prediction markets said it was real. HN also loves to hate certain libraries and software the industry has converged on using, which is related to the point in TFA about a minority of people speaking over a majority.
BoredPositron
You: Other people are unhinged hyperbolists. Here, let me characterize them hyperbolically to prove it, therefore I am the calm rational one and by the way, civilization may collapse and the CIA might be behind this.
I mean wtf. Is this your parody account?
vintermann
If thinking CIA manipulates social media and the world is currently in deep trouble is extremism, count me in the 3.1% already, buddy.
krapp
You see, you're doing the thing.
Every bit of hyperbole I mentioned is practically quoted verbatim from some thread or another here, it is what people believe, and you can't even bring yourself to approach me in good faith because I've committed wrongthink by defending the existence of social media even implicitly.
The CIA and other governments are running influence campaigns across social media. The links between the major social media platforms and intelligence agencies are well known and well documented. And civilization is threatened by numerous factors, such as our over-investment in AI and the mass deskilling and destabilization that will create, creeping fascism and increasing political violence in a multipolar world, climate change leading to mass famine, pandemics in a post-scientific age, etc.
But people want to destroy social media (and by extension, want to destroy the freedom of communication it allows) rather than bother to consider that the real problem is the same problem we've always had - government and corporate interests trying to control our lives and manufacture consent through fear and panic.
They ran the same playbook prior to social media but the process was so normalized because they controlled so much of the media and culture that no one really even noticed it. Now people notice but they can't distinguish between the symptom and the disease.
seltzerboys
i disagree that people would prefer a fascist dictatorship if it meant social media was done away with. i haven't ever seen that opinion anywhere on the entire internet.
however i agree that the CIA and other governments are running influence campaigns on social media. i think that's been proven actually.
the answer, as always, isn't 'destroy decentralized communication' or public discourse online. it's to have tighter regulations on how algorithms are configured. what's pushed vs. what's suppressed because it's obviously intentionally inflammatory/trolling.
this is an issue requiring extreme nuance. but to say that being worried about how social media today affects society is like 'the satanic panic' is kind of absurd.
BoredPositron
Case closed.
krapp
This is what I get for trying to have a serious conversation here.
Congratulations on the endorphin hit. You really zinged me. I need to find where the grownups hang out.
BoredPositron
The thing is, you are not; you are only pretending to. Your predispositions are so ingrained that you have even adopted the phrasing and speech patterns of deranged /pol/ threads. There is nothing to be gained except you reaching for the next talking point on your list.
vintermann
The "random sample" part of the solution is good. The "trusted polls" part of the solution is not good, because who decides if a poll is trusted? There are certainly a lot of polls I don't trust, because I suspect them of
1. cheating or being lazy with the sampling,
2. being a weasel with the phrasing to get the desired result, or
3. being a push poll.
Still, a "trusted" poll is slightly better than a freeform "community note", especially if it sticks solely to how prevalent an opinion is.
Slashdot used random sampling in moderation 30-ish years ago. It worked OK, except that scores were used for very little (crucially they didn't even sort by them), and they had a more gameable non-randomized system to moderate the random system. And of course it was probably vulnerable to Sybil attacks.
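To illustrate why the random-sample part is attractive, here's a toy sketch (the function name and the normal-approximation margin of error are my own assumptions, not anything from the article) of estimating how prevalent an opinion is by polling a uniform random sample rather than counting self-selected votes:

```python
import math
import random

def estimate_prevalence(population, holds_opinion, n, seed=None):
    """Estimate the fraction of a population holding an opinion by polling
    a uniform random sample of n users.

    population: list of user ids
    holds_opinion: callable(user_id) -> bool, a stand-in for a poll response
    Returns (point_estimate, ~95% margin of error via normal approximation).
    """
    rng = random.Random(seed)
    sample = rng.sample(population, n)
    p = sum(1 for user in sample if holds_opinion(user)) / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, moe
```

Because respondents are drawn uniformly, a loud minority can't volunteer its way into the sample, which is exactly what self-selected voting and freeform community notes can't guarantee.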
(By the way, I guessed 4% for the number of toxic users)
finghin
It can’t be overstated that it takes great effort to make a poll trustworthy in any meaningful context.
po1nt
I've been on social media since sharing Zynga game invites made up the majority of posts. I've seen countless magic bullets attempting to fix the polarization issues: algorithm adjustments, fact checkers, community notes.
I feel like the real problem is the people. Many of us just want to be told what to think to blend in with society, some of us demonstrate Dunning-Kruger publicly and a few of us really want to drive the polarization for clout and attention.
Every day I see people promote increasingly stupid ideas on both sides, further pushing my belief that the only solution is to severely limit what government can do, therefore making all this discussion pointless.
tolerance
This proposal assumes that the social media of the next five years will be anything like the social media of the last ten.
It's an interesting initiative though. One that I also think could have unintended consequences that would additionally seed greater distrust in the media—which isn't necessarily a bad thing. But I imagine that the people who already sense this distrust and distaste toward the impression of polarization that the media gives are becoming less and less likely to subject themselves to the nude opinions of anonymous strangers online.
Jeanbu
Really cool visualization and nice article. However, I noticed that for the question "What percentage of Democratic supporters do you think are LGBTQ?" the given answer is 6%.
That surprised me quite a bit, since the national average is over 9% according to Gallup, and considering demographics (younger people tend to lean Dem and have a higher LGBTQ rate than the average population) that figure is certainly wrong.
__MatrixMan__
I think another angle to this is that some people just can't ignore a number that might be interpreted as a score. They want followers not because they have a message that they want to get out there, but rather because more is better.
These people are unwittingly working for the platforms to drive engagement, often to the exclusion of any goal they might've had before the addictive aspects of social media kicked in.
I think we get less of this kind of behavior here on HN because each username is not bedazzled with metrics. You can see up vote counts for your own comments, but you can only infer those counts for others. The scoreboard is hidden, so it isn't triggering as much bad behavior from people who can't handle such things.
I think we could get even better behavior out of people if we never showed them raw counts of updoots, but instead only showed them metrics relating to their explicitly stated social graph, plus maybe one hop out:
> Alice and two of her friends like this
> Charlie likes this
It gives a sort of directionality to the feedback. Instead of seeking the high score as granted, likely, by a bot army, you learn something about Alice's corner of the social network. Maybe you should get to know Alice's friends better.
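A minimal sketch of how such graph-scoped feedback could be rendered (the data structures and names here are hypothetical, assuming the platform exposes the viewer's follow list and a one-hop friend graph):

```python
def describe_likes(viewer_follows, friend_graph, likers):
    """Render social-graph-scoped feedback instead of a raw like count.

    viewer_follows: set of usernames the viewer explicitly follows
    friend_graph: dict mapping username -> set of that user's friends (one hop out)
    likers: set of usernames who liked the post
    """
    direct = viewer_follows & likers  # people the viewer actually knows
    lines = []
    for person in sorted(direct):
        # likes from that person's friends, excluding anyone already shown
        friend_likes = (friend_graph.get(person, set()) & likers) - direct - {person}
        if friend_likes:
            lines.append(f"{person} and {len(friend_likes)} of their friends like this")
        else:
            lines.append(f"{person} likes this")
    # raw totals from strangers (or bot armies) are deliberately never surfaced
    return lines
```

The design choice is that strangers' likes never appear as a number at all, so there is no global scoreboard to chase; the only feedback is directional, relative to the viewer's own corner of the graph.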
Because that loud 3% that are being harnessed by the platforms to drive engagement via content we all hate... Their primary sin is just that they fell for it. They're like alcoholics, if we want to help them into a mode where they're less problematic, we should hang out someplace besides the bar.
66yatman
Why is the reach of the content excluded?
wrxd
The claim that this isn't a hard problem to solve seems very optimistic to me.
The tiny minority dominates the feeds because that's how the incentives for algorithm-driven social media are structured. Do we really expect Meta, X, or TikTok to do anything that could reduce engagement?
Good luck having any of the mainstream social media apps add the banner they propose.
rapnie
Great article format with all the dynamic widgets in it. Will have to give this a good read. It is a very interesting topic given how much of (global) public opinion is formed through "social" media.
boxed
Huh. I guessed 13% of Democratic voters as LGBTQ, and the correct answer is 6%. But if you look at Wikipedia, the numbers globally for gay, lesbian, and bi people should be above 6%. That's weird. I would expect Democrats to be slightly above the general numbers...
Mezzie
There are a couple of reasons this might be the case.
- A fair number of LGBTQ people don't feel represented by either party and so wouldn't answer yes to aligning with either the Dems or the Republicans. More communists, anarchists, third-party voters, etc.
- Voters tend to be older than the general population, and a lot of LGBTQ elders are either dead or not attached to the community. The biggest demographic in the LGBTQ community by numbers is bisexuals, and a lot of 65-year-old bisexuals went through their lives acting and living as straight. Likewise, a lot of older people with gender dysphoria never knew being trans was an option, so they may never have identified with it.
camillomiller
Fantastic presentation. Unfortunately the conclusion is painfully naive and forgives the platforms too much.
>We Could Do This Now - Platforms already have a lot of these capabilities. They already survey users. They even know how to run sophisticated polls. There are a few technical details to work out (spec here), but this is not a hard problem to solve.
Why do you think something like this is not already implemented? Platforms literally profit from this division, so why would they be incentivised to do anything? What's needed is not a goodwill gesture from the overly powerful platforms; it's fast, hard, and deep regulation.
api
“The nuts are always the loudest” has been an observation forever.
This just shows how those dynamics play out in the social media system.
ketzu
> toxic tweets receive ~86% more retweets
The part that annoys me about the toxicity, or the repetitive and annoying topics on Reddit, HN, etc., is not that I am unaware that the content is produced by a small fraction. (I underestimated the count! I guessed 2%)
It's that people espouse it: They upvote and retweet it.
> Both sides develop wildly inaccurate beliefs about who the other side actually is.
That was a guess I had for a while. People have a strawman version of their out-groups in mind and quickly map people to that if an unknown person says something that indicates they might be part of the out-group.
> What percentage of the other side supports political violence?
It would be interesting to see the in-group statistic as well: "What percentage of your own side supports political violence?" In my experience people also justify very shitty behavior as long as it's from their in-group. (This plays heavily into the first point about espousing all kinds of shit.)
---
It would be interesting to see if the community check actually changes anything. But the necessary data seems to be available only for very generic topics, those we already have data on. It would not be available for fresh daily topics.
For my personal sanity I simply left reddit and stopped opening comments on certain HN posts - of course that does not help with the societal problems. Unfortunately.
Arkhaine_kupo
> People have a strawman version of their out-groups in mind and quickly map people to that if an unknown person says something that indicates they might be part of the out-group.
I think something that is not calibrated in the post, and also missing in this reply, is that beliefs and actions do not need to be aligned.
Both groups say around 10% of members support political violence, yet no Democratic president is pardoning domestic terrorists wholesale. And the 90% of Republicans who condemn political violence are not repudiating, removing themselves from, or condemning the fact that far-right groups are the most dangerous demographic according to the FBI, or that most political violence occurs in Republican states, or the direct correlation between NRA infiltration of Republican campaigning and mass shootings...
Like if you say you dislike violence but defend the system that creates the violence and pardon the people who commit the violence and share the table and take the money from the violent people... your "beliefs" are not worth much.
The whole conversation about out-groups is less relevant when discussing left-wing policy, because it is not organized AROUND in-groups and out-groups. Right-wing ideology is de facto an in-group political theory where some people must be excluded. When you add morality being justified by group membership, you end up with some very concerning politics where actions are judged by belonging to the group and not by the morality of the action or its consequences.
See the blue-collar, protect-the-children, anti-abortion crowd voting for a New York millionaire owner of a beauty pageant who was best friends with the world's best-known human child trafficker...
The belief system collapses the second you put the right tee shirt on, and that is what makes polling those people irrelevant. They will simply support whatever is in front of them as long as they belong to the in-group. War bad in Ukraine, war good in Iran. Taxes bad in 2018, tariff taxes good now. Silicon Valley tech people were all left-wing Indian soy boys in 2016; now they're all alpha podcast AI cool guys who fund our president.
nothing matters as long as you wear the tee shirt
intended
Great presentation and based on evidence!
There is a fatal flaw with the solution though: inauthentic users.
The solution aims to reduce the distortionary effects of social networks in a market of ideas and conversations.
Inauthentic users engage in deceptive participation. There are two types: trolls, and organized/motivated inauthentic users.
These users have a strong incentive to protect the distorted perceptions in the exchange, and will adapt to reduce the effectiveness of Community Check.
(I have a theoretical solution for it, however I am not quite sure how to test it out)
breppp
"What percentage of the other side supports political violence"
Both Democrats and Republicans estimated 30%, but in reality only 10% of both sides supported political violence.
That number is crazy in so many ways and the post is overly nonchalant about it. The "distortion" isn't what's worrying here
kibwen
The magnitude of that number is a consequence of the effects being discussed in the post. And unless you find a way to solve the tyranny of the loudest, it's only going to continue to increase.
breppp
I agree with the notion in the post, though I suspect users will feel the format is being pushed top-down by the man.
I just had an issue with the way that number was completely overlooked
amazingamazing
It’s the same everywhere, including here. Why are downvoted posts greyed out and made harder to read? Why not just display the net vote? Why must your reputation be constantly displayed?
Lobste.rs is better in these regards.
paganel
How the hell does he define “toxic” discourse? I suppose, and I might be wrong on that, that it involves the millennial-liberal definition of it, which is why I stopped reading this about two paragraphs in.
Anyway, social media is dead, has been dead for quite a few years now; the majority of us are out there touching grass, and it's only the fringes (on both the political left and right) who're still obsessed with it.
juliusceasar
Some governments, like Israel, actively empower this and call it Hasbara: https://x.com/FurkanGozukara/status/2053965645427966313?s=20
Platform algorithms are a big part of the problem but I think this misses another important part. I don't think it is simple as a passive silent majority being drowned out by a few loud voices.
For most of its history, Reddit didn't have an algorithm that promoted stories beyond upvotes and time since posting, that might even still be the case. Despite this, we have still seen a steady trend toward extreme views on the platform. To be fair, it has the reputation (at least in some circles) of being the most redeemable of the major social media platforms, probably thanks to the simplicity of its algorithm. Unfortunately, that's not saying much; it's a low bar to clear. What explains the polarization of Reddit in the absence of a bouncer amplifying extremism?
I think there is a significant percentage of users that do not initiate extreme content but participate in amplifying it. They may even find it problematic, but they really don't like the extreme views they hear on the other side. Or maybe it is the content they came to see out of morbid curiosity, something I am guilty of sometimes. The bar is so crowded because people find it preferable to the empty one down the street that has the expectation that people behave respectfully.
Incredible presentation, but I think the awareness we need to spread is a movement away from social media in general. As a social outlet it is generally incompatible with healthy social functioning and individual wellbeing. Face-to-face interaction has inherent guardrails for avoiding these problems and supporting the kind of social experience that we are really looking for.