In 2021, a Wall Street Journal investigation detailed internal research showing that Meta, and its Instagram platform in particular, was exacerbating mental health issues in teenagers. The company's internal documents later became the subject of a Senate Judiciary Committee hearing, “Big Tech and the Online Child Exploitation Crisis,” as Instagram came under intense scrutiny over the link between heavy use of the app and increased rates of eating disorders, depression, and self-harm among teens.
Also: Is social media safe for kids? Surgeon general calls for a warning label
Since then, Meta CEO Mark Zuckerberg has faced mounting pressure to strengthen child and teen safety across his social media platforms.
On Tuesday, Meta began rolling out its Teen Accounts feature for users under the age of 18. The new account type is private by default and applies a set of restrictions for minors; users under 16 can only loosen some of these settings with a parent's permission. The goal: transform the way young people navigate and use social media.
The new built-in protections and limitations, Instagram head Adam Mosseri told the New York Times, aim to address “parents’ top concerns about their children online, including limiting inappropriate contact, inappropriate content, and too much screen time.”
According to the Meta Newsroom, Teen Accounts are designed “to better support parents” and “give parents control” by granting them a supervisor role over the accounts of teens under the age of 16. For older teens, Meta added, “If parents want more oversight over their older teen’s (16+) experiences, they simply have to turn on parental supervision. Then, they can approve any changes to these settings, irrespective of their teen’s age.”
Moreover, the new protections include “Messaging restrictions” that place the strictest available messaging settings on young users’ accounts, “so they can only be messaged by people they follow or are already connected to.”
“Sensitive content restrictions” will automatically limit the types of content teens see in Explore and Reels, such as violent material or posts promoting cosmetic procedures.
Accounts will also show teens time-limit reminders that “tell them to leave the app after 60 minutes each day,” while a new “Sleep mode” will silence notifications and send auto-replies from 10 p.m. to 7 a.m.
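Meta hasn't published implementation details for Sleep mode, but one small subtlety in any schedule like this is that the 10 p.m.-to-7 a.m. window wraps past midnight. Here is a minimal, hypothetical sketch of that check in Python; only the hours come from Meta's announcement, while the function name and structure are illustrative:

```python
from datetime import time

SLEEP_START = time(22, 0)  # 10 p.m., per Meta's stated schedule
SLEEP_END = time(7, 0)     # 7 a.m.

def in_sleep_mode(now: time) -> bool:
    """Return True if notifications should be silenced right now.

    The window crosses midnight, so a simple
    SLEEP_START <= now <= SLEEP_END comparison would never match.
    """
    return now >= SLEEP_START or now < SLEEP_END

print(in_sleep_mode(time(23, 30)))  # True: 11:30 p.m. is inside the window
print(in_sleep_mode(time(8, 0)))    # False: 8 a.m. is outside it
```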
Also: How to get ChatGPT to roast your Instagram feed
Meta stated that the most restrictive version of its anti-bullying feature, Hidden Words, will be turned on automatically for teen users, hiding offensive words and phrases from their comment sections and DM requests.
Meta also announced that it will deploy artificial intelligence (AI) to weed out users who lie about their age. These age prediction tools, currently being tested ahead of a planned US deployment early next year, “will scrutinize behavioral signals such as when an account was created, what kind of content and accounts it interacts with, and how the user writes. Those who Meta deems could be teens will then be asked to verify their ages.”
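Meta hasn't said how these models work internally, but the signals it describes are the kind that typically feed a standard binary classifier. The toy sketch below is purely illustrative, assuming invented feature names and synthetic training data; nothing about it reflects Meta's actual system:

```python
# Illustrative only: a toy age-prediction classifier built on the kinds of
# behavioral signals Meta describes (account age, interaction patterns,
# writing style). All feature names and data here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features per account:
#   account_age_days   - when the account was created
#   teen_content_ratio - share of interactions with teen-oriented content
#   slang_score        - writing-style signal (0 = formal, 1 = heavy slang)
X = np.column_stack([
    rng.integers(1, 3_000, n),  # account_age_days
    rng.random(n),              # teen_content_ratio
    rng.random(n),              # slang_score
])

# Synthetic labels (1 = likely a teen): newer accounts with more
# teen-oriented interactions and heavier slang are labeled teen more often.
logits = -0.001 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2] - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Accounts scoring above a threshold would be routed to age verification.
probs = model.predict_proba(X_test)[:, 1]
flagged = (probs > 0.8).sum()
print(f"test accuracy: {model.score(X_test, y_test):.2f}, flagged: {flagged}")
```

A real system would need careful threshold calibration, since every adult the model misclassifies gets routed into an age-verification flow.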
The rollout won’t be immediate for everyone. New users under 18 will be placed into Teen Accounts when they sign up, but existing teen users may not see changes right away. Users outside the US won’t see changes to their accounts until next year, according to a Meta fact sheet.