Facebook’s bullshit has to be stopped — here’s how

Misinformation could be minimised, but as a society we’re still enjoying the temporary relief of self-harming

Mike Scialom
4 min read · Dec 27, 2020


Automation can be used to achieve any number of goals, but it all depends on groupthink

It’s outrageous that Facebook is allowed to tell the world that policing its site for harmful content is impossible without an army of fact-checkers, when they know full well that the goal could be achieved with a little strategic thinking: thinking which could stop democracy, and the economy they depend on, from toppling.

Clearly Mark Zuckerberg doesn’t want his people to go down that road, so let’s do the thinking on his behalf, shall we? Now, I’m no Nick Clegg, but to most people out there, that’s an advantage…

Firstly, Facebook wants you to think that it couldn’t possibly track every single one of the average 350 million posts made every day. Tracking all of them would indeed be prohibitively expensive and time-consuming, but that isn’t the point: only 36 per cent of those posts are related to politics. That’s still more than 100 million posts a day, but algorithms could break the number down further, leaving a relatively small pool of posts for human decision-makers while still saving the citizens of the world from being bombarded by fake news, lies and hate-stirrers. For instance, you don’t need to monitor users who rarely post, or those who only post neutral content about animals, jokes or family events. It would be simple to have an algorithm grade every Facebook user for the likelihood of their posting disinformation. The actual number of misinformation super-spreaders is relatively low; it’s their impact that is huge. So find out who they are and target them!
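To make the triage idea concrete, here is a toy sketch of grading users by how likely they are to post disinformation. This is not any real Facebook system: the scoring formula, topic labels and threshold are all invented for illustration, standing in for whatever signals a real platform would use.

```python
# Toy "disinformation risk" triage: frequent posters whose content skews
# political rank higher; rare or neutral posters are never queued for review.
# All names and thresholds here are invented for the example.

POLITICAL_TOPICS = {"politics", "election", "pandemic"}

def risk_score(posts_per_day: float, topic_counts: dict) -> float:
    """Crude score: posting frequency weighted by the share of political topics."""
    total = sum(topic_counts.values()) or 1
    political_share = sum(
        n for topic, n in topic_counts.items() if topic in POLITICAL_TOPICS
    ) / total
    return posts_per_day * political_share

def needs_review(posts_per_day: float, topic_counts: dict,
                 threshold: float = 2.0) -> bool:
    """Only users above the threshold are handed to the monitoring pipeline."""
    return risk_score(posts_per_day, topic_counts) >= threshold

# A user posting mostly pet photos and family events is skipped:
assert not needs_review(1.0, {"animals": 40, "family": 10})
# A prolific political poster is flagged for scrutiny:
assert needs_review(8.0, {"politics": 30, "jokes": 5})
```

The point of the sketch is the shape of the filter, not the formula: most accounts score near zero and drop out immediately, leaving a small, trackable pool.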

After a number of strikes — say, more than three racist posts — your account should be automatically deleted

You could start this quest by putting anyone who has used words such as ‘Trump’, ‘Brexit’, ‘jihad’, ‘5G’ or ‘vaccine’ in a post on the Facebook security radar. If you have previously posted anti-Semitic, anti-Muslim, racist or misogynistic content, your posts should be at the head of the queue for scrutiny by the detection team; mostly, this content would be read and blocked (if necessary) by AI. After a number of strikes (say, more than three racist posts) your account should be automatically deleted.
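The watchlist-plus-strikes idea above can be sketched in a few lines. The keyword list and the more-than-three-strikes rule come straight from the text; the class, method names and the naive word matching are invented for illustration (a real system would need far more robust detection than keyword lookup).

```python
# Hypothetical sketch of the keyword radar and strike system described above.
# Matching on bare keywords is deliberately naive; it only illustrates the flow.

WATCHLIST = {"trump", "brexit", "jihad", "5g", "vaccine"}
STRIKE_LIMIT = 3  # more than three confirmed violations => account deleted

def on_radar(post: str) -> bool:
    """Flag a post for AI scrutiny if it mentions a watchlist term."""
    words = post.lower().split()
    return any(term in words for term in WATCHLIST)

class Account:
    def __init__(self):
        self.strikes = 0
        self.deleted = False

    def record_violation(self):
        """Called when the detection team confirms a violating post."""
        self.strikes += 1
        if self.strikes > STRIKE_LIMIT:
            self.deleted = True

acct = Account()
for _ in range(4):   # four confirmed violations
    acct.record_violation()
assert acct.deleted  # deletion triggers on the fourth strike

assert on_radar("my thoughts on the vaccine rollout")
assert not on_radar("look at my cute dog")
```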

The idea, which Facebook promotes assiduously, that thousands of human checkers would have to sit there reading through every post for the site to be clean of any infringement is nonsense, and they know it: the notion is peddled simply to make you feel sorry for them. These people are, after all, the global masters of emotions, and they know that an appeal to pity, one of the most basic emotions of all, is the winner winner chicken dinner option.

The fact is, not only could they be doing more, they could stamp all the hate- and fear-mongering out quite easily. But they don’t want to, which brings us to the second issue concerning this platform, which is simply a gigantic scam to defraud us of genuine feelings: content which is divisive, outrageous and fraudulent appeals to our sense of confusion and despair about what is happening to us. To be a fascist or a misogynist is a safety valve for the fear and anxiety about existence that we dare not confess to in real life. It’s comforting to many (apparently) to think that some people are somehow ‘below us’ on the wheel of life, rather than to consider that every one of us, including animals and all of nature, is actually part of a unity.

Facebook is simply a gigantic scam to defraud us of genuine feelings

For too many people, externalising distress at the futility of a life not being fully lived creates a feeling of being worthwhile and belonging that is spurious, but still real enough for a mini-moment. The knowledge that our dog whistles are being heard by others makes us feel super-human, if only for a brief second — but as Zuckerberg knows, that short flame of reward and self-regard is enough to make grown men (let’s face it, it is usually men) feel relevant, important even, and nourished definitely.

Facebook is flying high but, unlike Icarus, it has no wings

So actually Facebook is a phase: we need to grow out of it, and we do that by identifying the messiness in our own lives, and accepting that it’s okay, because the alternative is to live lives that are simply black and white, lacking the nuance of adult thoughts and feelings.

Right now, the reason that Facebook continues to allow ‘alternative facts’ and other misinformation to proliferate is precisely because posts that are divisive and controversial generate more engagement, and that means more advertising revenue. Put simply, we all need to grow out of the cheap thrills that social media gives us, and concentrate on something meaningful, rather than validate the chaos theorists that run Silicon Valley’s most profitable platforms. That doesn’t necessarily mean that you should delete your account, but it does mean that you should become a digital agitator.

One day soon Facebook will learn how to adapt its business model so that we all become agitators: for instance, it could introduce a button, alongside the ‘like’ button, which flags a post as harmful, and if a post generates enough ‘harmful content’ warnings from the user base, a human fact-checker would analyse it. But we’re not there yet, because we’re still too busy self-harming.
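A minimal sketch of that proposed ‘harmful’ button, assuming an invented report threshold and review queue (neither exists on the real platform; both are placeholders for whatever escalation rule Facebook might choose):

```python
# Hypothetical 'harmful' button: once reports on a post reach a threshold,
# it is queued exactly once for a human fact-checker. All details invented.

HARMFUL_THRESHOLD = 100  # invented: reports needed before human review

class Post:
    def __init__(self, text: str):
        self.text = text
        self.harmful_reports = 0

review_queue: list = []

def report_harmful(post: Post):
    """A user presses the hypothetical 'harmful' button on a post."""
    post.harmful_reports += 1
    if post.harmful_reports == HARMFUL_THRESHOLD:
        review_queue.append(post)  # hand off to a human fact-checker once

post = Post("dubious claim")
for _ in range(HARMFUL_THRESHOLD):
    report_harmful(post)
assert post in review_queue
assert len(review_queue) == 1  # queued once, not on every further report
```

The design choice worth noting is the hand-off: the crowd only does triage, and the final call on whether content is misinformation stays with a human.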



Mike Scialom

Journalist, writer; facilitator at Cambridge Open Media