Social Media Bans

Access to social media allows young people to stay in touch with their friends and share their thoughts on topics that impact their daily lives. However, in various parts of the United States and around the world, numerous restrictions limit young people’s access to these platforms. These restrictions are often justified as a way to protect minors from harmful content, yet social media provides countless benefits to the lives of young people.

In today’s world, where social media is the most accessible way for young people to connect with others and exercise their free speech, banning them from it is an act of fascism. Major restrictions or outright bans limit young people’s opportunities for communication and civic participation. Social media is necessary for youth education, activism, free speech, and connection with marginalized communities.

NYRA advocates against all types of social media bans and restrictions for young people. Young people deserve the right to have free access to the internet, online communities, and social media platforms just as adults do. 


The National Youth Rights Association

If you’re interested in Youth Rights, consider volunteering with us. We are always looking for new members and would love to have you on board. If you have a personal story to share, whether about social media restrictions negatively impacting your life or about another youth rights violation, consider sending us an email at nyra@youthrights.org. We’d love to help get your story out to the world.



Federal Laws on Social Media Restrictions for Youth

There is no single federal law that completely bans minors from using social media platforms in the United States. Federal law focuses mainly on protecting the privacy of youth and regulating how technology companies collect and use data from minors. The most prominent example is the Children’s Online Privacy Protection Act (COPPA), which prohibits certain websites and online services from collecting personal information from children under 13 years of age without consent from a parent or guardian.

Because of this law, a majority of social media platforms set their minimum account age at 13 years. Instagram, TikTok, Snapchat, Facebook, and YouTube all use this minimum age. However, these rules come from company policy; federal law does not directly ban youth from using social media. Instead, the law focuses on how companies should handle youth data and privacy.

Courts have generally supported this approach, leaving states and companies free to set age requirements and policies for social media use.



State Laws on Social Media Restrictions for Minors

In most states, there are no direct policies that ban youth from using social media as a whole. However, some states have begun introducing rules that prevent younger individuals from opening social media accounts without the consent of a parent or guardian. Other states have introduced the idea of requiring social media companies or app stores to verify the exact age of users before they are able to download or use certain platforms.

These laws show that states are currently trying to balance online safety against access rather than completely banning social media use by youth. While protecting minors from harmful online content is important, policies that completely block access to social media can cut young people off from today’s primary form of communication.

Certain states have placed stricter regulations for social media usage among young people. These regulations often require parental consent before a minor is able to create an account, or they allow parents to oversee and manage specific features on the platform. 

States with Stricter Social Media Laws for Youth include: 

Utah passed the Social Media Regulation Act (S.B. 152 and H.B. 311). This law requires social media companies to verify the age of users and obtain parental consent before minors can create accounts.

Arkansas passed the Social Media Safety Act (S.B. 396). This law requires social media companies to verify the age of users and obtain parental consent before minors can create accounts.

Texas passed the Securing Children Online Through Parental Empowerment Act (H.B. 18), also known as the SCOPE Act. This law requires platforms to provide stronger protections for minors and limits how companies collect personal data from youth users.

Florida passed Online Protections for Minors (H.B. 3). This law requires age verification and attempts to limit certain social media features for minors.

Georgia passed the Protecting Georgia’s Children on Social Media Act (S.B. 351). This law requires social media companies to verify the age of users and obtain parental consent for minors to create accounts.

Tennessee passed the Protecting Children from Social Media Act (Public Chapter 899). This law requires parental consent for minors to create accounts on social media platforms.

Virginia passed the Consumer Data Protection Act (S.B. 854). This law includes age verification requirements and protections for minors using online platforms.

Nebraska passed the Parental Rights in Social Media Act (L.B. 383). This law requires social media companies to verify user ages and obtain parental consent for minors.

Mississippi passed the Walker Montgomery Protecting Children Online Act (H.B. 1126). This law protects minors from harmful online content and improves safety requirements for social media platforms.

Louisiana passed the Online Protections for Minors Act (H.B. 570), which requires age verification and stronger safety protections for minors using social media. 

Minnesota introduced H.F. 3488, which proposes age verification requirements and additional “protections” for minors on social media.

Ohio passed the Parental Notification by Social Media Operators Act (Ohio Revised Code §1349.09). This law requires social media companies to obtain parental consent before minors can create accounts.

California passed the Age-Appropriate Design Code Act (A.B. 2273). This law requires online platforms to include stronger privacy and safety protections for minors.

New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, which limits addictive social media features and strengthens protections for minors.



Other Countries With Age-Based Social Media Bans

Apart from the United States, a few other countries have passed laws limiting youth use of social media. These include:

The United Kingdom Online Safety Act

One of the most significant laws aimed at keeping young people safe online is the Online Safety Act, passed in the United Kingdom in 2023. The law was created to strengthen protections for children and teenagers who use social media and other online platforms. Under this policy, technology companies are required to closely monitor harmful content posted online and to improve safety systems for younger users.

Under this law, social media companies and online services must put stronger systems in place to prevent minors from seeing harmful or age-inappropriate content. Platforms are required to build age verification systems that go further than simply asking users to enter a birthdate, because self-reported birthdates are easily falsified by users who want to bypass age restrictions.

Companies may be required to use more secure verification systems, such as identity verification, facial age-estimation technology, or other digital methods that estimate whether a user is above or below a certain age. These requirements are meant to reduce the chances of children viewing harmful content online.

However, according to the Information Technology & Innovation Foundation (ITIF), many users in the United Kingdom have already found ways to bypass the law by using virtual private networks (VPNs) to hide their location from verification systems. Proton VPN, one such service, reported that sign-ups increased more than 1,400 percent within minutes of the law taking effect.

The law is enforced by Ofcom, the United Kingdom’s communications regulator, which has expanded powers to monitor companies and ensure that they follow the rules. Companies that ignore the rules face extremely large financial penalties of up to £18 million or 10% of their global annual revenue.

Australia’s Social Media Ban for Under 16 Years of Age

Australia has one of the strictest policies in the world on youth access to social media. In 2024, the Australian government passed a law banning children under 16 years of age from creating accounts on social media platforms.

Under this law, the companies behind platforms such as Instagram, TikTok, and Snapchat are required to prevent users under 16 years of age from accessing them. Technology companies are responsible for enforcing these age limits through verification processes and account monitoring.

If companies do not properly enforce these restrictions, they face large financial penalties and other legal consequences. The law places responsibility on the technology companies, not on minors themselves or their families.

This policy is incredibly unreasonable, oppressive, and a violation of free speech. Banning all youth from social media prevents young people from communicating with peers and engaging in discussions that take place on modern digital platforms. Social media has become the most common form of communication today; educating youth on responsible usage and putting narrower protections in place would be far more effective than banning youth entirely.



Social Media Platforms With Youth Restrictions

Today, a majority of the most popular social media platforms have already put stricter age verification systems in place to reduce youth usage. These include:

Instagram: Requires users to be at least 13 years of age and may use AI-based age estimation tools. When necessary, users must upload an official ID or record a facial recognition video. These videos are analyzed by Yoti, a tool that estimates a user’s age based on facial features. 

TikTok: Uses the AI tool Yoti to analyze a user’s behavior, profile details, and video content to estimate whether a user is above 13 years of age. Users with flagged accounts can upload an official ID, submit a facial recognition video, or file an appeal to stop their account from being banned.

Snapchat: Requires users to be at least 13 years of age and uses systems to analyze account activity, behavior, and user reports to estimate if someone is underage. Flagged accounts may face removal or restrictions on certain types of content. Depending on the location of the user, certain features may be blocked. 

YouTube: Requires users to be at least 13 years of age to create an account and may require the use of age verification tools for age-restricted content. This may include uploading an official ID or submitting a facial recognition video to estimate if a user is over 18 years of age. If a system identifies teen users, restrictions are automatically placed to block age-inappropriate content. The platform also offers supervised accounts on YouTube Kids for pre-teens and teens.

Discord: Requires users to be at least 13 years of age and may require age verification to access certain servers or features labeled as 18 years of age or older. This may include uploading a form of identification (such as ID) or submitting a facial recognition video to estimate if a user is over 18 years of age. The platform also uses moderation systems to identify underage users and remove accounts that violate age requirements. 

Facebook: Requires users to be at least 13 years of age and may use AI-based age estimation tools. When necessary, users must upload an official ID or record a facial recognition video. These videos are analyzed by Yoti, a tool that estimates a user’s age based on facial features. These tools are often used only in certain regions, or when a user tries to change their listed age from under 18 to over 18.

X: Limits access to certain age-restricted content. In some cases, users have the opportunity to verify their identity by uploading an official ID and submitting a selfie to access blocked features. Uploaded personal information and data are verified by the tools Persona and Stripe. X Premium subscribers also have the option to confirm their identity to unlock additional benefits associated with other features provided by X. 

These restrictions are often meant to protect youth from certain content found online; however, they can put the privacy and personal information of minors at serious risk. Not only are facial recognition and ID verification systems frequently wrong, but they can also lead to even greater dangers such as the exposure of personal data. Even when this exposure happens by accident, it can have serious long-term effects, especially on minors.



Default “Teen Accounts” For Youth on Social Media Platforms

Some social media platforms use “teen accounts” as a way to add safety features without completely blocking minors from their platforms. These accounts are applied automatically to users whose birthdates indicate they are under 18. They come with stronger restrictions on content, privacy settings, and time management, and in some cases require parental controls.

Platforms such as Instagram, Facebook, and Messenger heavily encourage teen accounts, which place unreasonable restrictions on the accounts minors are allowed to have. These accounts often give access control and monitoring power to parents, which can become a major problem for young people whose parents are heavily controlling.

Platforms such as YouTube and Discord use default teen accounts: when a user creates an account, it is automatically set to teen settings. The only way to remove these restrictions is to go through age verification, such as uploading an official ID or submitting a facial recognition video. As past breaches have shown, however, sharing this kind of data with these systems can be extremely dangerous for minors.



Age Verification Data Breaches 

There have been several incidents in which major leaks of personal information occurred because of these age verification tools.

Discord’s Data Breach Incident: In 2025, Discord, an online communication platform, announced a data breach in which over 70,000 government-issued IDs and other personal records were leaked through a breach of a third-party vendor, Persona, that handled age verification data. The exposed data included names, Discord usernames, contact details, billing information, IP addresses, corporate data, and government-issued ID images.

Tea App’s Data Breach Incident: In 2025, the Tea app, a dating app that allows women to run background checks on men, announced a data breach in which over 72,000 images were stolen and made public online. Some of these images contained personal information such as government-issued IDs.

AU10TIX Data Exposure: In 2024, AU10TIX, a company that provides identity and age verification tools for a variety of apps, had a security lapse that left sensitive personal information exposed. Although no direct evidence emerged that the data was accessed after the company addressed the issue, security lapses like these are incredibly dangerous even when no stolen information is ever found.

Persona Identities Exposure Incident: In 2026, Persona, a company that provides age verification services, had a security issue in which sensitive personal information was left exposed. Although there was no direct evidence that the data was accessed, the incident shows how easily these systems can develop major flaws that endanger millions of users.

These events show that although these systems were created to improve the safety of minors, they instead create additional risks for users. With verification required, users must simply trust that the platform will protect their data. If sensitive personal information is exposed, it can lead to identity theft, privacy violations, and a variety of other long-term problems.



Benefits For Marginalized Youth Through Social Media

Social media platforms are a necessity in the modern world for a variety of reasons, which makes it all the more unreasonable for companies to restrict young people from accessing them. For starters, social media allows teens to communicate with others and participate in communities that share their interests, which can be especially helpful for marginalized individuals. According to research published through the National Library of Medicine, social media offers marginalized adolescents an easier way to connect with others who share similar identities and interests, which helps lower rates of mental health issues among youth who may feel out of place offline.

For example, according to a study by The Trevor Project, a majority of LGBTQ+ youth use online platforms to connect with others because it is difficult to do so in their daily lives. In the survey, LGBTQ+ youth reported feeling very safe and supported on the following platforms: TikTok (59%), Discord (45%), YouTube (41%), and Instagram (41%). The researchers note that lower rates on some platforms may reflect less LGBTQ+ representation there.

Not only is social media a great way for marginalized youth to find supportive connections, it can also serve as a tool for coping with stress and negative emotions. Research by the National Center for Health Research found that social media helps teens feel accepted (58%), supported (67%), creative (71%), and connected with friends (80%). In addition, 70% of adolescent girls of color find race-affirming content on social media platforms.

This goes to show that these platforms offer a much easier way for individuals to find accepting communities, especially those who may feel unsupported in their everyday lives.

Stories of Marginalized Youth Benefitting from Social Media

#BlackLivesMatter: In 2020, many people shared videos and stories under the hashtag #BlackLivesMatter after George Floyd, a Black American father, was killed by a Minneapolis police officer. The story spread quickly worldwide, raising awareness of racial injustice and the importance of racial equality.

Asian American and Pacific Islander (AAPI) Campaigns: In 2020, after a rise in racism and violence aimed at Asian Americans, many individuals shared their stories under the hashtag #StopAsianHate. The posts spread across social media, raising awareness about discrimination and connecting youth to supportive communities.

#TransRightsAreHumanRights: In 2021, many transgender and non-binary individuals used the hashtag #TransRightsAreHumanRights to share their stories of discrimination and pride. These posts raised awareness about the challenges trans individuals face and helped them connect with supportive allies. The movement helped the world better understand gender identity while underscoring the importance of acceptance.

#DisabilityTooWhite: In 2020, disabled youth of color shared their experiences online about how racial issues intersect with disability spaces. The hashtag #DisabilityTooWhite spread across social media, allowing disabled people of color to build community with others who share similar experiences.



Educational Benefits of Social Media for Young People

Social media can also be an extremely helpful tool for learning. Platforms including YouTube, Instagram, and TikTok let teens find resources quickly to help them succeed academically and personally.

For example, many teenagers use social media to learn about current events and discuss important issues with others. A survey by Common Sense Media found that 54% of teenagers learn about news and current events through social media, and 65% stated that it helps them better understand what is going on. This shows that the platforms can provide access to information that may not always be available through other sources.

Social media also helps young people find educational resources to further understand school subjects. Teens who are especially interested in attending university or following certain career paths can find extracurricular opportunities, online workshops, and projects related to their interests. These platforms give students a chance to gain experiences that can help with college applications or future career paths. 

In addition, social media gives teens free access to study guides and video explainers that can help them understand topics they find difficult at first. Students who struggle with lessons in class can use online resources for extra help and develop a stronger understanding of the subject, making learning easier and helping youth gain confidence in their academics.



Students Benefiting from Social Media Activism

Social media benefits the way young people participate in activism and advocacy today in a variety of ways. Platforms including Instagram, X, and TikTok allow youth to raise awareness about issues they care about and connect with others who share their concerns. According to a survey by the San Francisco Foghorn, 79% of students said social media is extremely important for activism today, 100% reported participating in some form of digital activism, and only 5% had taken part in in-person activism. This shows that online platforms serve as an accessible and effective way for youth to speak out about the situations that affect their everyday lives.

Stories of Youth Activism Through Social Media

Greta Thunberg: At 15 years of age, Greta Thunberg started posting about her school strikes for the climate. Her posts spread worldwide and helped launch the “Fridays for Future” movement, inspiring students everywhere to protest and raise awareness about issues they care about.

#MarchForOurLives: After a school shooting in Parkland, Florida, students used social media to organize rallies and promote gun control. Hashtags and posts spread worldwide and encouraged millions of people to join protests across the United States.

ICE Out Student Protests: In early 2026, thousands of students across the United States coordinated “ICE out” walkouts, using social media to spread the message and encourage participation. These youth-led protests showed how deeply young people care about speaking out for human rights.



Conclusion

Social media access laws in the United States are mostly decided by state rules and technology company policies. While young people often use online platforms for communication, learning, and participating in their communities, they still face limits when trying to access social media.

These restrictions can slow digital learning and prevent young people from gaining real experience with online communication and technology. Social media is necessary for youth activism, free speech, education, connection with communities, and much more. Teaching responsible use of social media would be a far more effective approach to supporting digital responsibility and youth participation.