Parent Concerned Kid’s Middle School Assignment Promotes Misogyny And Inceldom
There are no two ways about it: America is in a crisis when it comes to online disinformation, propaganda, and the radicalization of young people, especially young men. And generative AI tools are making it all worse.
One of the key reasons is a dangerous lack of media literacy skills in people of all ages, but especially young people. But some parents’ responses to these terrifying trends, while understandable, have the potential to do more harm than good.
A parent is worried her son will be taught ‘misogyny and inceldom’ by being asked to fact-check ChatGPT.
ChatGPT is causing major problems in schools. Teachers regularly lament that their jobs have basically devolved into little more than AI plagiarism detection. For one parent who wrote into Slate’s “Care and Feeding” parental advice column, her son’s teacher has come up with a solution: He has his students ask ChatGPT to write an essay, then has them fact-check it to see just how badly AI tools can get things wrong.
It’s a brilliant idea, but one that has this parent terrified. “I do not want Patrick doing this assignment,” she wrote to “Care and Feeding” columnist Michelle Herman. “ChatGPT is an irreducibly sexist device that reads and then regurgitates garbage that pushes impressionable young men into misogyny and inceldom.”
She’s right. Research has repeatedly shown that young men are particularly susceptible to radicalization by hate groups and other extremist content online, especially regarding women and gender. This is in part because many social media platforms’ algorithms have been shown to prioritize such content, in a terrifying process known as algorithmic radicalization.
“He is absolutely banned from using AI of any sort in our household,” the parent went on to say of her son, adding that she is adamant that the teacher needs to cancel this assignment in order to “protect the kids from the robots,” as she pithily put it.
Generative AI tools like ChatGPT have been shown to be highly susceptible to extremism and disinformation.
This parent’s concerns are very real, and she’s right to worry: Generative AI tools like ChatGPT have repeatedly been shown to be unreliable when it comes to accuracy.
By now, we’ve likely all encountered some of the downright insane search results Google’s AI returns, like telling people they should “eat at least one rock per day” or that running with scissors “has health benefits.”
But the problem goes far deeper than those ridiculous examples. A Purdue University study found that ChatGPT gave wrong answers to programming questions 52% of the time. A South African professor of psychiatry received outright “fabrications and falsifications” from ChatGPT about schizophrenia. And Georgetown law professor Sheryll Cashin found ChatGPT gave totally inaccurate information about the history of slavery. These are just a few examples.
Even worse, AI tools have been shown to be just as susceptible to propaganda and disinformation, including extremist bigotry like incel content, as we humans have proven to be.
A 2020 study by the Center on Terrorism, Extremism, and Counterterrorism at the Middlebury Institute of International Studies found that GPT-3, a precursor to ChatGPT, had “impressively deep knowledge of extremist communities,” including QAnon and Nazi-related groups, some of the key online spaces where the incel content this parent is so worried about thrives. The Center also found the model was able to replicate things like the mass shooter manifestos inspired by this extremist content.
And more recent events show AI tools haven’t improved enough since then. A 2024 Canadian study found AI tools regurgitate misinformation up to 25% of the time. A prime example came from an experiment NBC News conducted the day of the presidential debate between Donald Trump and Joe Biden earlier this year.
NBC found that tools including ChatGPT and Microsoft’s Copilot were regurgitating a false online conspiracy theory that the debate would be on a two-minute delay so Biden’s supposed cognitive decline could be covered up with real-time live editing. The theory had only been online for a matter of hours.
Learning how to fact-check the information given by AI tools is a critical media literacy skill kids need.
There’s no doubt that the risk of this parent’s son encountering disinformation and propaganda when using ChatGPT is very real. But avoiding it entirely is counterproductive — and dangerously so.
“I grant you, AI is spooky. But it is here to stay,” Herman said in her spot-on response to the mom, adding that his teacher’s assignment is “smart” because exposing its limitations will inspire “skepticism” towards the tool and make her son “smarter” about it.
She’s absolutely right. These tools are not going anywhere, and avoiding them will not teach today’s young people the skills they need to avoid being duped. It’s vital to remember that those of us who came of age before these tools have many built-in skills for research, and especially for sussing out BS, that today’s kids simply do not have and never will develop unless forced to, by assignments like this one, for example.
They must learn these skills if we are to have any hope of them being able to separate the truthful wheat from all the propaganda chaff choking our online media ecosystem. Media literacy is already at an all-time low, and shielding kids from AI will not teach them anything, let alone how to navigate the dystopian cesspool of disinformation that is now the default.
Unless you want your kids to become like many of their boomer grandparents, who reflexively believe every bit of bizarre, ridiculous AI slop they encounter on Facebook, you need to make sure they know how to detect BS from tools like ChatGPT.
These tools aren’t going anywhere, and ignorance won’t protect kids from radicalization. In fact, it will do the opposite.
John Sundholm is a writer, editor, and video personality with 20 years of experience in media and entertainment. He covers culture, mental health, and human interest topics.