Instagram’s latest parental controls, announced on 17 September, introduce mandatory teen accounts with privacy features aimed at protecting younger users.
The new accounts, called “Teen Accounts,” will be automatic for all Instagram users under the age of 18, both for teens already using the app and for those signing up.
These accounts restrict messaging, limit sensitive content, and give parents greater oversight of their teens’ activities on the platform.
While it should be recognised that Meta will lose users as a result of these new privacy features, it is critical that governments and regulators do not see Meta’s approach as a universal model for safety. There are fundamental issues that it does not address.
Children access a large number of sites and platforms, not just Instagram. It is estimated that the average child uses nearly 50 apps per week, and the apps used change over time. It is not practical or realistic to expect parents to configure safety settings on every one of these apps.
Minors have the time, skills and motivation to hack controls, and the age assurance measures that Meta is implementing are understood to have gaps that underage users will exploit. Children, families and schools have unique circumstances, and child development experts have serious concerns about blunt age-based access rules. Ultimately, universal rules should be resisted, as they may harm children.
Obscured in Meta’s announcement is that the company provides safety features to businesses and personalities that it does not offer parents. Automated tools allow so-called influencers to plug into all of their social media accounts and automatically scan and moderate messages and delete hate speech. This is a high-priority need for parents, yet it is not provided to them, and no reasons are given.
The bottom line is this. The only truly reliable and effective approach to controlling all the online activity of minors is by controlling the device they are using via on-device safety technology. Given modern encryption and the scale and dynamics of the internet, it is the only effective way to privately identify users, inspect all activity and apply rules to all activity.
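The principle described above can be sketched in a few lines: because the rules run on the device itself, they can apply before traffic is encrypted and regardless of which app generated it. The class, rule names and thresholds below are hypothetical illustrations of the technique, not any vendor’s actual implementation.

```python
# Toy sketch of an on-device rule engine (hypothetical names and rules).
# Real on-device safety tech hooks into the operating system; this only
# illustrates the "apply rules to all activity locally" idea.
from dataclasses import dataclass, field


@dataclass
class DevicePolicy:
    # Content categories the parent has chosen to block (illustrative).
    blocked_categories: set = field(default_factory=lambda: {"adult", "gambling"})
    # Daily screen-time allowance in minutes (illustrative).
    daily_limit_minutes: int = 120

    def allow_request(self, category: str, minutes_used_today: int) -> bool:
        """Decide locally, on the device, whether an activity is permitted."""
        if category in self.blocked_categories:
            return False  # category-based blocking (e.g. porn blocking)
        if minutes_used_today >= self.daily_limit_minutes:
            return False  # screen-time management
        return True


policy = DevicePolicy()
print(policy.allow_request("social", 30))   # permitted
print(policy.allow_request("adult", 30))    # blocked by category
print(policy.allow_request("social", 150))  # blocked by screen-time limit
```

Because the check happens before any network request leaves the device, it works uniformly across every app, which is the advantage the article attributes to the on-device approach.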
Technology to address this problem is trustworthy, proven, cannot be removed by kids, and is available now. So why isn’t it everywhere? Because Google, Apple and Microsoft limit these capabilities to business app developers.
Such technology is being used reliably on tens of millions of devices and has been particularly successful in the US education sector on school-issued devices. When installed by enterprises, on-device safety tech can deliver all of the core needs of online safety for minors: porn blocking, social media age restrictions, screen time management, and visibility and alerting are easy to use and extremely difficult to bypass.
This anti-competitive behaviour has been evidenced in ACCC, EU and US antitrust inquiries. While Instagram’s changes should be welcomed, they leave concerning gaps behind, and we should be aware that they create a false sense of security for parents.
What we urgently need is not unilateral safety enhancements that address some but not all of the issues, but government regulation that ensures parents have the same access to the right safety technology that big enterprises already enjoy. It is frankly unacceptable that they do not have that today.