Australia’s Social Media Ban for Kids: Why Research Shows It’s Failing
Australia moved first. It passed a nationwide ban on social media use for children under 16 in December. Four months later, the early results look weak.
A new survey from the Molly Rose Foundation paints a clear picture. Many children still use the same platforms the law aimed to block. The ban has not cut access in a meaningful way.
The study focused on Australian users aged 12 to 15. It found that 61% of those who had accounts before the ban still have access today. That is three in five users. The law has not pushed most of them off the platforms.
The biggest apps have kept much of their young audience. Around 53% of former TikTok users still log in. The same share applies to YouTube. For Instagram, the figure is 52%. These are not small gaps. They show a broad failure to enforce the rule.
Why Are Social Media Age Bans Failing?
Early reports pointed to simple tricks. Some teens fooled facial age checks with makeup or odd expressions. Others used VPNs to mask their location. Some even borrowed the faces of older friends or family members to pass checks. These methods drew attention, but they are not the main story.
The survey shows a deeper issue. In most cases, children did not need workarounds at all. Platforms failed to spot and remove underage accounts. Many accounts stayed active without any challenge.
The data backs this up. About 64% of YouTube users said the platform took no action on their account. For Snapchat, the figure is 61%. For both Instagram and TikTok, it stands at 60%. Most young users saw no attempt to remove or block them.
This points to a gap between law and practice. The rule exists, but enforcement sits with private platforms. If those systems fail, the law has little bite.
The policy goal was simple. Lawmakers wanted to make children safer online. The results so far do not show a clear gain. About 51% of surveyed users said the ban made no difference to their safety. Another 14% said they feel less safe. That is a warning sign. A policy that aims to protect should not leave some users feeling worse.
There are a few ways to read this. One is technical. Age checks remain easy to bypass or avoid. Facial scans and ID systems can help, but they are not foolproof. False negatives and privacy concerns also limit how far platforms push these tools.
Why Legislative Age Barriers Face a Reality Check
Another is behavioral. Teens are quick to adapt. When rules change, they find new paths. Social media is also tied to social life. That makes strict bans hard to enforce without strong buy-in from users and parents.
A third factor is incentives. Platforms have little reason to remove large numbers of users unless rules force them to act. Even then, detection at scale is hard. Mistakes can block the wrong users, which brings its own risks.
The Molly Rose Foundation argues that bans alone will not deliver fast gains in safety. It calls for stronger duties on platforms, not just age limits. That includes better moderation, safer design, and clear accountability.
Other countries are watching. Greece, France, Indonesia, Austria, Spain, and the United Kingdom have explored or moved toward similar rules. In the U.S., several states have passed their own limits, and some expect a federal push in time.
Australia’s case offers an early test. It shows that passing a law is the easy part. Making it work is harder. Systems must detect age with care and accuracy. Platforms must act on what they find. Users must see value in the change.
Right now, those pieces do not line up. Most under-16 users still reach the same apps. Many see no change in safety. Some feel less safe than before.
That does not mean the goal is wrong. It means the method needs work. Stronger enforcement, better tools, and clearer duties for platforms may help. So might education for parents and teens.
For now, the result is plain. The ban exists on paper. In practice, it has not moved behavior enough to meet its aim.