Internet hate.
The troubling rise and seeming mass acceptance of online abuse directed at those living with disabilities.
Let me tell you about Tom and his wife Sarah (not their real names, but everything else is true). I’ll start by saying I don’t know how they do it. I don’t know how they continue to create and share content on the internet when, every day, they are bombarded with a deluge of abuse from internet trolls.
Tom is in a wheelchair, and Sarah, who’s able-bodied, is Tom’s carer, as well as his spouse.
Every time I see a new social media post or YouTube video of theirs, I catch myself bracing for what I might find when I hit that little icon and descend into their comment section.
Often, it’s immediate. “She’s only with him for the welfare checks.” “I think she must be getting railed on the side!” “What does she even see in him? He looks like a child.”
On and on, ad nauseam.
Sometimes, mercifully, it takes a bit of digging: past the friendlier, more encouraging comments, until you eventually come across the hatred. It’s almost always there.
Then there’s the risk of going viral: the ‘aim of the game’ for just about every influencer, but not without its issues for those with disabilities. When the algorithm picks you up and pushes you out there, way beyond the safety bubble created by your regular followers - that’s where the real messed-up stuff starts to rear its ugly head.
In August 2024, Tom wrote an eloquent yet painful update on his Facebook page, in which he referenced a video that had gone viral a few days prior. The video was meant to be funny and lighthearted, and showed Tom sinking in a swimming pool whenever Sarah let go of him. It was part of a bigger online trend at the time where people were sharing their ‘athletic mishaps’ and jokingly explaining why they failed to make it to the Olympics.
100,000 people ‘liked’ a comment that referred to Tom as a vegetable. 80,000 people liked a comment from someone who said they “struggle to see Tom and Sarah in a relationship”. Another person said, “One day she’s not going to pull him back up,” and another, “Dating him is like doing charity work.”
Most galling of all? “You should just drown him.”
Consider this carefully. To some, we, the disabled community, are sub-human.
Tom and Sarah are far from being alone. I’ve seen comments like this on countless other profiles of influencers. From Jennie Berry (Wheelie Good Life) to Cory Lee (Curb Free with Cory Lee), and on important campaigns, like Assume That I Can.
Twitter was bad, X is a whole lot worse
When Elon Musk purchased Twitter and promptly renamed it X, a sea change seemed to follow quickly. Alongside Musk’s hardline mantra of freedom of speech sits an equally firm belief in freedom from consequence, with the darkest and most disturbing comments and opinions seemingly protected under the distorted guise of America’s First Amendment.
Since then, other social media platforms such as the entire suite offered by Meta appear to be following this trajectory of allowing freedom of speech to reign supreme (as long as it aligns with their agenda, of course).
Sure, those with a certain public standing may find themselves ‘cancelled’ when caught sharing something particularly controversial or offensive, but what about the average Joe or Jane? The ones who slip under the radar, free to spout hate and share their vilest thoughts?
It’s why you’ll hardly catch me posting pictures online. I rarely give interviews, my appearances on podcasts are even rarer, and as for featuring in videos… never. Call me nervous, call me shy, call me whatever you like, but as someone who has battled with my appearance all my life, and who knows just how vindictive the internet can be, I’d rather preserve my mental health. Even if it does slow my progress as editor, author, publisher, community builder, and wearer of many hats, for my brand, The World is Accessible.
What can be done?
Suggestion One:
In the UK, as an example, prominent figures have long called for social media platforms to take more accountability, with recommendations that ID should be required when creating an account. This, however, raises concerns around privacy and identity theft. Scandals such as Cambridge Analytica demonstrate not only that data breaches occur, but that our data is sometimes sold to the highest bidder.
I do believe, strongly, that social media platforms should make much greater efforts to crack down on hate speech, discrimination, and bullying.
Suggestion Two:
Although we live in an era where large multinational corporations are scaling back their human workforce in favour of AI tech, we desperately need a human touch when sifting through reported comments. This would enable a swifter and more robust response, including the shutting down of accounts tied to the worst offenders. To that end, social media companies could hire dedicated cyberbullying taskforces, staffed in part by people with disabilities: not only for their lived experience and greater understanding of the often nuanced forms of online hate directed at the disabled community, but also as an additional hiring opportunity for disabled folks. A win all round.
Suggestion Three:
As part of the curriculum, I think kids should be taught digital literacy and empathy in schools. As with any ‘ism, be it racism, sexism, or ableism, tackling the problem requires a bottom-up approach. We need to educate children on these matters so that they stand the best chance of growing up to be compassionate and empathetic adults who understand that their words matter and that what some may perceive as being a joke could be very damaging to someone else. When I was at school, people would sing “Sticks and stones may break my bones, but words will never hurt me”. Well, it’s bullshit. Children (and adults) need to know that whatever they say online carries as much impact as if they were to say it directly to someone’s face.
Suggestion Four:
Finally, in conjunction with community teams overseeing post and comment regulation, creators themselves should be given better moderation tools; they can be the first line of defence against online abuse. I don’t have all the answers, and can’t elaborate too much on what may or may not be possible from a moderation standpoint - but surely keyword blockers and the auto-deletion of harmful comments could return some of the power to creators. Some platforms, such as YouTube, already offer this.
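To make the idea concrete, a keyword blocker of the kind described above can be sketched in a few lines. This is purely illustrative: the blocklist, function name, and auto-delete behaviour are my own assumptions, not any platform’s actual moderation API.

```python
import re

# Illustrative blocklist a creator might maintain. Real tools would let
# creators edit this list themselves and would likely combine it with
# machine-learned classifiers rather than exact terms alone.
BLOCKED_TERMS = ["vegetable", "charity work", "drown"]

# One case-insensitive pattern with word boundaries, so "drown" matches
# "Drown him" but not an unrelated longer word containing those letters.
_pattern = re.compile(
    r"\b(" + "|".join(re.escape(term) for term in BLOCKED_TERMS) + r")\b",
    re.IGNORECASE,
)

def should_auto_delete(comment: str) -> bool:
    """Return True if the comment contains any blocked term."""
    return _pattern.search(comment) is not None

# Filtering a batch of incoming comments before they are shown publicly.
comments = [
    "Loved this video, keep them coming!",
    "Dating him is like doing charity work.",
]
kept = [c for c in comments if not should_auto_delete(c)]
```

Even a simple filter like this shifts the burden: the creator sets the terms once, and the cruellest comments never reach their notifications.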
. . .
I know that if I were ‘out there’, not caring, following the repetitive advice of old friends and peers to just not give a fuck, the growth of The World is Accessible could be supercharged by now. I am under no illusion that, whilst their written content is good, the success of the likes of Cory Lee is also largely down to their willingness to create video content for their social media channels. In this fast-paced, content-thirsty world, people have to become their own reality TV stars. Their lives are consumed by the eyes, ears, and increasingly rotted brains of the masses. And with that comes an open invitation for hate: comments from those who, on the whole, refuse to admit that they’re hurting and instead project that hurt onto others through spiteful, careless, and unempathetic remarks.
The internet isn’t going anywhere, and sadly, neither are the trolls nor the cruelty they spew. But that doesn’t mean we just have to sit back and accept it. I believe that with proper education, moderation, and communities demanding better, we can finally start to win this uphill battle. We have to try, because we owe it to everyone who’s ever felt like throwing their phone or laptop at the wall, or cried themselves to sleep over something a stranger on the internet said in a comment.
If we as a community don’t try, who the hell will?


