If policing online hatred and misinformation is already impossible, we need to start doing more to support its targets

Daniel Rosehill
7 min read · Apr 9, 2021
Cyberbullying: if we can’t prevent it from happening, we need to work on strengthening mechanisms to support targets. Image: Pixabay

This week will go down in my life as the week when I was reminded about how ugly the internet can be.

I received a slew of targeted harassment, stalking, cyberbullying, gaslighting, and anti-Semitic abuse on Reddit.

And, at the time of writing, the platform's only response has been to act on a retaliatory "report abuse" report by suspending me from the network (my suspension lasts three days). (By contrast, the posters who just days ago lashed out at my "big beak", taunted me by calling me a "Sperg", and told me that I was a "shell of a man" are free to continue spewing bile across the platform…)

Having moved past the futile stage of attempting to counter the lies of anonymous internet trolls (note: don't even try), I have spent the next stage of my reaction thinking about what we, as societies and as a world, can do about the growing menace of online hatred and cyberbullying.

Because while my case may provide a tidy and well-documented illustration of the fact that many major social platforms are proving unwilling to properly confront bullying and harassment, it is only one of likely tens of thousands of cases that take place on the internet every week.

The thinking I have been doing about online hate speech this week, and about anti-Semitism, also ties into something I have been mulling over a lot recently: disinformation, anonymity, and pseudonymity, and the roles, if any, they should play in a society where the lines dividing reality from fiction look increasingly gray.

More to the point:

How can we create a world that facilitates the few (arguably) constructive use-cases for these things, like publishing books under pen names or creating online personas to infiltrate and monitor hate communities, without opening the door to enormous harm against the targets of cyberbullying and online harassment?

With the advent of deep-fakes and all manner of AI trickery, the world is slowly waking up to the reality that nothing written online can be assumed to be true until it is proven to be true.

This debate is only likely to prove more relevant and salient as the world confronts a reality in which the lines between reality and fiction become increasingly blurry.

(Fascinating case in point: prosecutors are grappling with the question of whether a video at the center of a harassment case is a deep fake or whether it might in fact be genuine. H/T: Peter Duffy.)

A few days ago, before the cyberbullies of Reddit followed me into Reddit's own forum for discussing cyberbullying and began targeting me even there (yes, really), I tried to set out some thoughts about how we (the internet-using community at large) could build online communities such as Reddit that support some of anonymity's "white hat" use-cases (whistleblowing, letting people discuss sensitive issues like health) without turning them into hotbeds for harassment and abuse.

Or more simply put: Can we preserve the good things that anonymity enables without inviting the baddies along for the ride?

Perhaps, I wondered, online communities could implement something akin to Know Your Customer (call it Know Your User, or KYU), requiring new signups to verify some form of identity before they can use the platform, rather than remaining totally anonymous.

After all, virtually all of the internet's most notorious sources of hate speech, including Reddit and 4Chan, owe their notoriety largely to how easy they make it for users to sign up without disclosing a single piece of personally identifiable information. Users who face no social or legal consequences for their actions have virtually no reason to hold back from being vicious to strangers they encounter online.
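To make the idea concrete, here is a minimal sketch, in Python, of what a KYU-style signup gate might look like. Everything in it is hypothetical: the names (SignupRequest, verify_identity, and so on) are mine, no platform exposes such an API, and a real implementation would hand identity checks to a vetted third-party verification provider.

```python
# A minimal, hypothetical sketch of a "Know Your User" (KYU) signup gate.
# All names here are illustrative assumptions, not a real platform API.

from dataclasses import dataclass


@dataclass
class SignupRequest:
    username: str           # public, pseudonymous handle shown to other users
    email: str
    id_document_token: str  # opaque token from an identity-verification provider


def verify_identity(token: str) -> bool:
    """Placeholder for a call to an external identity-verification service."""
    # In a real system this would validate the token with a KYC-style vendor;
    # here we simply reject empty tokens.
    return bool(token)


def process_signup(request: SignupRequest) -> str:
    # The platform never publishes the verified identity; it only records that
    # someone accountable stands behind the pseudonymous handle.
    if not verify_identity(request.id_document_token):
        return "rejected: identity could not be verified"
    return f"account created for pseudonym '{request.username}'"


if __name__ == "__main__":
    print(process_signup(SignupRequest("throwaway_account", "user@example.com", "token-123")))
    print(process_signup(SignupRequest("anon_troll", "troll@example.com", "")))
```

The point is not the specific mechanics but the separation of concerns: the public-facing handle stays pseudonymous (preserving the "white hat" uses), while the platform retains a verified identity it can act on when abuse is reported.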

And now my thoughts, after they have evolved a little, are these:

While I believe that such a system could work in principle, I doubt it is realistic to expect it to work in practice.

Why?

Because in a world in which more than 4 billion individuals have internet connectivity, policing a signup process at scale — perhaps even with the help of AI — is likely to prove impossible.

And when niche social platforms fail to attract a critical mass of users, they tend to quickly vanish from the internet. Nobody likes the thought of participating in an online community … without community.

Add to that the concerns of privacy and free-speech advocates, and it becomes easy to see how any online community that attempts to make hate speech impossible is likely to slide into obsolescence, proving an inconvenient impediment to the mainstream.

This morning the world awoke to learn of the passing of the Duke of Edinburgh, Prince Philip.

I decided to see what the Redditors of this world had to say in response.

There, I found a thread predictably filled with exactly the same type of vitriol and hatred that I had been subjected to just a few hours previously and for much of the past week.

By the exact same breed of nasty internet user who will never have to face the social consequences of wishing that a man they didn't know had passed away sooner.

While the differences between the late royal consort and me are obvious (he was a public figure; I'm an obscure freelance writer), the kind of hatred we both received online looked, at times, disconcertingly similar.

In an alternate universe in which the Queen of England used Reddit, I could only imagine, for a moment, the heart-wrenching pain she would surely feel seeing thousands of cybertrolls speaking lies and ill of a man who had just passed on, none of whom is likely to face any consequences, legal or social, for their actions and their hatred.

When 20 or 30 trolls spew lies and abuse, one could imagine a determined moderation team tackling the issue (even then, given the ease with which users can re-create accounts, such efforts would likely prove futile).

But what could possibly be done — I wondered — when there are 20,000 haters and ill-wishers to tackle? How could we ensure the integrity of a social network then?

When people get nasty at scale, and there are platforms willing to host it but unwilling to change their policies, we perhaps need to shift from asking "what can be done about this?" to asking "what can we do to support and help the targets of anonymous online abuse?"

Because on a free internet, we may simply have to conclude that we lack the power to coerce the social networks of tomorrow into properly stamping out the hate speech they increasingly host at volume. Yes, even with the advent of AI and algorithms.

Instead, we may be forced to conclude that scaling online conversation and adequately protecting the targets of the misinformation and abuse perpetrated there are mutually incompatible objectives.

Even when AI steps in to help humans, the sheer size of online conversation, as it plays out in YouTube comments, Reddit threads, and Facebook groups, may already be too large for the moderation systems we currently have in place. And even if the clearweb can be policed, what about the internet's underworld, the dark web, where such fora can proliferate entirely anonymously?
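To give a rough sense of why human review alone cannot keep pace, here is a back-of-envelope calculation. Every number in it is an illustrative assumption of mine, not a measured figure from any platform.

```python
# Back-of-envelope illustration of the moderation-at-scale problem.
# Every figure below is an illustrative assumption, not real platform data.

comments_per_day = 50_000_000      # assumed daily comment volume on a large platform
flag_rate = 0.01                   # assume 1% of comments get reported or auto-flagged
seconds_per_human_review = 30      # assumed time for a moderator to review one report
moderator_workday_seconds = 8 * 3600

flagged = comments_per_day * flag_rate
moderator_days_needed = flagged * seconds_per_human_review / moderator_workday_seconds

print(f"{flagged:,.0f} flagged comments/day -> ~{moderator_days_needed:,.0f} full-time moderator-days, every day")
# Under these assumptions: 500,000 flags per day requires roughly 521 full-time
# moderators working flat out, before appeals, context-checking, or anything
# the automated filter missed entirely.
```

Change the assumed numbers and the exact figure moves, but the shape of the problem does not: review capacity grows linearly with headcount while conversation volume does not wait for it.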

If that is indeed the conclusion we reach, then I suggest that we, as a society, need to split our focus between enforcement mechanisms (spotting fake news, reporting online bullies) and supporting targets of online hatred as they deal with the emotional toll that this kind of activity can inflict.

More concretely, that might mean:

  • Greater awareness of cyberbullying and the detrimental effect it can have upon its targets
  • Greater awareness of which networks are being negligent in their responsibility to avoid hosting hurtful content and to protect their users
  • An increased supply of mental health professionals and others equipped to support those who have suffered egregious cyberbullying through the recovery process

Cyberbullying is a growing menace in our societies, and as more of it goes unchecked and unchallenged, it is reasonable to expect that the pool of people affected by it will, sadly, only grow.

If traditional enforcement mechanisms can't cope with the scaling of online conversation, we may already have reached a point at which stamping out online misinformation and hatred on social networks is downright impossible.

If it’s not time for the conversation to move fully into exploring ways through which we can protect targets, then it’s at least time to split our attention equally.

(Note: anything I have written to date about cyberbullying on Medium has been quickly targeted by cyberbullies who have left abusive comments shortly after publication. Sadly, for that reason, I have to close the discussion before anybody has a chance to respond.)

To receive posts like this in your inbox, please consider signing up for my personal email newsletter:
