Prof. Syed Munir Khasru
South China Morning Post
December 11, 2025
https://www.scmp.com/opinion/world-opinion/article/3335735/australias-social-media-ban-offers-path-between-excess-and-inaction
____________________________________________
For policymakers elsewhere, Canberra’s approach to children’s social media use isn’t a model to copy but one to learn from
On December 10, Australia began enforcing the world’s strictest social media age rule: platforms must now verify that users are aged 16 or above, or face fines of up to A$49.5 million (US$33 million). It is a bold experiment in returning oversight of childhood from Silicon Valley to democratically elected institutions.
The relevant law, passed by the Australian parliament last year, represents more than protectionism or moral panic. It is a deliberate experiment in rebalancing the roles of state, family and corporations in safeguarding child welfare in the digital age. However, Australia is not alone in grappling with these questions; its approach is one of several distinct regulatory models emerging globally.
Australia’s hard ban sits alongside two contrasting approaches: mainland China’s state-led “minor mode” and Hong Kong’s advisory guidelines. Each reveals different assumptions about state power, parental authority and corporate responsibility.
Since January 2024, China’s Regulations on the Protection of Minors in Cyberspace have required platforms, device makers and app stores to implement protections for minors. For online game service providers, these include identity authentication, age-appropriate content warnings and usage time caps.
This April, Beijing rolled out a mobile internet minor mode that parents can activate to filter out harmful content and restrict screen time for verified minors. This state-led architecture embeds child protection within broader internet governance, effective and comprehensive but inseparable from Beijing’s control over digital speech.
Hong Kong offers a softer alternative. Despite concerns over youth screen time and smartphone ownership among primary schoolchildren, the city has explicitly ruled out legislative bans, at least for now. Instead, Hong Kong is reviewing guidelines and relying on parental discipline. A parent-led group is urging parents to delay smartphone use until age 14, but without legal enforcement, compliance remains voluntary.
Australia’s approach splits the difference: accountability without mainland China’s surveillance infrastructure, legal teeth without Hong Kong’s hesitation. The legislation builds on Canberra’s track record of confronting Silicon Valley.
For example, Australia’s news media bargaining code forced platforms to pay publishers. Meanwhile, its anti-scam rules have imposed substantial penalties for user protection failures. Polling found that 77 per cent of the public supported restricting social media access for under-16s. Such widespread agreement is rare in polarised societies.
Crucially, Australia backed legislative ambition with technical rigour, including by commissioning an Age Assurance Technology Trial. The country’s evidence-based approach distinguishes its ban from a symbolic gesture, creating an empirical foundation that other nations can examine.
The model differs fundamentally from that of other Western countries. The US Children’s Online Privacy Protection Act focuses on parental consent for children under 13, while other legislative efforts emphasise content controls rather than a blanket age-based ban. France requires social media platforms to obtain parental consent for children under 15 to create accounts.
Yet boldness invites scrutiny. The social media ban is facing a legal challenge at the Australian High Court from teenagers arguing that it burdens the implied freedom of political communication. Blanket bans risk excluding young people from civic discourse, a concern that resonates differently depending on constitutional speech protections.
Age assurance carries significant privacy risks, from the collection of biometric data and identity documents to vulnerabilities in data retention. While regulatory guidance favours layered approaches over invasive mandates, errors are inevitable. A serious risk is displacement, or adolescents simply migrating to less regulated platforms where they may face greater exposure to scams.
Critics also warn bans could disproportionately harm marginalised youth, including LGBTQ teens, rural populations and people with disabilities who rely on online communities for support.
For policymakers around the world, this isn’t a model to copy but one to learn from. First, fund technical work before legislating. Australia’s age assurance trial was about evidence, not wishful thinking. Commission independent trials, publish results and let advocates scrutinise them. Technical limitations will surface. It’s better that they do early on, during a period of legislative flexibility.
Second, build privacy protections into architecture. Age verification can become surveillance infrastructure without strict data minimisation and oversight. Require layered approaches, behavioural signals and voluntary verification, not invasive biometric defaults. When systems err, young people need accessible appeals.
Third, wrestle with civic participation. The legal challenge in Australia isn’t frivolous; democracies must engage their youth. Constitutional frameworks protecting speech rights more explicitly may not sustain blanket bans. Consider carve-outs for civic platforms or age-appropriate moderated spaces.
Fourth, assume displacement. Teenagers will evade restrictions through virtual private networks or unregulated services. Fund digital literacy, create moderated spaces that aren’t surveillance engines, and strengthen enforcement against platforms outside regulatory perimeters.
Finally, recognise the limits of unilateralism. Platforms are global, age assurance standards are new and enforcement capacity varies. Use multilateral forums, such as the UN, to build common standards and reciprocal mechanisms. Otherwise, fragmented regulation will lead to exploitation.
Whether Australia’s law survives the legal challenge or succeeds in protecting children remains uncertain. But comparing three models reveals fundamental choices: Beijing shows state-led regulation can institutionalise child safety controls but bundles them with broader internet governance. Hong Kong highlights the limitations of voluntary guidelines without legal enforcement. Australia offers democracies a clear, enforceable framework while raising questions about feasibility and rights.
Should digital childhoods be shaped by platforms optimised for engagement, market mechanisms filtered through parental choice, or democratically accountable states acting on public health? Australia has chosen state responsibility backed by a democratic mandate. Other nations now have an example to study, adapt or reject, but can no longer claim the regulatory path is untested.