by Sarah von Hoyningen-Huene, Jutta Oberlin
1. Introduction
Instagram, one of the world’s most popular social media platforms and part of the social media giant Meta, is constantly evolving to meet the changing needs of its users and to adapt to the growing digital landscape. Underage users are an important customer group for Instagram, but their protection is partly subject to specific legislation. The platform based its own terms of use on the framework of the US Children’s Online Privacy Protection Act (COPPA)1. But COPPA only protects children up to the age of thirteen, which means that the particularly vulnerable group of teenagers has so far remained without any special protection. The growing concerns of child protection associations and parents, tragic events related to social media use and an awakening understanding of the dangers that platforms pose for young people have increased the pressure on legislators and platform managers in recent years. Recently, Instagram introduced a series of new measures aimed at enhancing user experience, promoting safety and ensuring responsible content sharing. This article explores these new measures, examining their potential impact on teenage users in particular and on the protection of children’s digital safety in general.
2. Instagram’s New Measures
2.1 The Teenager Account
For minors over the age of 13, i.e. teenagers, Instagram had long been promising more security and delivered in September 2024 by introducing new measures that both restrict use and allow parental control of all teenagers’ accounts. Accounts identified by Instagram as belonging to teenagers are automatically assigned to the “Teen Accounts” category and are subject to the new protection mechanisms. Meta’s President of Global Affairs, Nick Clegg, assumes that these mechanisms will reduce the amount of time young people spend on Instagram, but also hopes that the new measures will restore long-lost parental trust. Regaining parental trust seems to have been the primary goal: there are new protection mechanisms that children under the age of 16 cannot change without their parents’ consent. The introduction of the new teen accounts will initially take place in the USA, the UK, Canada and Australia. For the European Union, the launch is planned by the end of the year, with global implementation to follow the year after.2
2.2 The New Measures
2.2.1 Privacy by default
Private accounts allow young people to individually accept (or decline) new follower requests before those followers can see the content the teenager publishes. For this reason, teen accounts are set to private by default. Messaging settings for teenagers are also restricted: they can now only receive messages from people they follow or are already in contact with, and they can only be tagged or mentioned by people they follow.3
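Purely as an illustration of what “private by default” means in configuration terms, the following minimal sketch models the defaults described above. All field names and values are the authors’ own assumptions; Meta has not published a settings schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TeenAccountDefaults:
    # The values mirror the measures described in the text;
    # the field names themselves are hypothetical, not Meta's schema.
    private_account: bool = True                        # new followers must be approved first
    messages_from: str = "followed_or_existing_contacts"
    tags_and_mentions_from: str = "followed_only"

def apply_teen_defaults(account_settings: dict) -> dict:
    """Overwrite an account's settings with the protective defaults."""
    defaults = TeenAccountDefaults()
    account_settings.update(
        private_account=defaults.private_account,
        messages_from=defaults.messages_from,
        tags_and_mentions_from=defaults.tags_and_mentions_from,
    )
    return account_settings

print(apply_teen_defaults({"private_account": False, "messages_from": "anyone"}))
```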
2.2.2 Night mode
The night mode ensures that notifications are automatically silenced between 10 p.m. and 7 a.m. and that direct messages receive automated replies, so that young users’ sleep is not disturbed.4
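To make the mechanics concrete, here is a minimal sketch of how such a quiet-hours rule could work. The function names and the auto-reply text are hypothetical; Instagram’s actual implementation has not been published.

```python
from datetime import datetime, time

# Quiet window as described above: 10 p.m. to 7 a.m. local time.
QUIET_START = time(22, 0)
QUIET_END = time(7, 0)

def in_quiet_hours(now: datetime) -> bool:
    """Return True if the timestamp falls in the overnight quiet window."""
    t = now.time()
    # The window crosses midnight, so it is the union of two ranges.
    return t >= QUIET_START or t < QUIET_END

def handle_direct_message(sender: str, now: datetime) -> str:
    """Silence the notification and send an automated reply at night."""
    if in_quiet_hours(now):
        return f"auto-reply to {sender}: user is unavailable until morning"
    return f"notify user: new message from {sender}"

print(handle_direct_message("friend_42", datetime(2024, 9, 17, 23, 30)))  # auto-reply
print(handle_direct_message("friend_42", datetime(2024, 9, 17, 12, 0)))   # notification
```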
2.2.3 From the parental sphere of protection to the AI-generated sphere of protection
2.2.3.1 Periods of use
In addition, a daily usage time limit can be set so that after 60 minutes a notification prompts teenagers to leave the application. Parents can set a fixed daily usage limit, after which the app is no longer accessible to the child, or restrict access during certain periods of time.
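The sketch below illustrates how these two layers – the fixed 60-minute reminder plus a parent-set hard cap and blocked periods – could interact. The class, its fields and the return messages are the authors’ own illustrative assumptions, not Meta’s API.

```python
from dataclasses import dataclass, field

REMINDER_MINUTES = 60  # after one hour, a notification nudges the teen to leave

@dataclass
class TeenUsagePolicy:
    daily_cap_minutes: int | None = None              # hard cap, set by a parent
    blocked_hours: set[int] = field(default_factory=set)  # hours with no access at all
    used_minutes: int = 0

    def record_use(self, minutes: int, hour_of_day: int) -> str:
        if hour_of_day in self.blocked_hours:
            return "blocked: access restricted during this period"
        self.used_minutes += minutes
        if self.daily_cap_minutes is not None and self.used_minutes >= self.daily_cap_minutes:
            return "locked: daily limit reached, app no longer accessible"
        if self.used_minutes >= REMINDER_MINUTES:
            return "reminder: notification prompts the teen to leave the app"
        return "ok"

policy = TeenUsagePolicy(daily_cap_minutes=90, blocked_hours={8, 9, 10, 11})
print(policy.record_use(45, hour_of_day=15))  # "ok"
print(policy.record_use(30, hour_of_day=16))  # reminder (75 minutes total)
print(policy.record_use(30, hour_of_day=17))  # locked (105 minutes >= 90-minute cap)
```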
2.2.3.2 Content review
Sensitive content, such as depictions of cosmetic procedures or violence, is restricted. Offensive words and expressions are also filtered out of comments to promote a safer communication environment.5
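As a simple illustration of comment filtering, the sketch below hides a comment when it contains a blocked term. The term list and function are purely hypothetical; a production system would rely on curated multilingual lists and machine-learning classifiers rather than exact word matching.

```python
import re

# Purely illustrative placeholder terms; real blocklists are curated and multilingual.
BLOCKED_TERMS = {"insult", "slur"}

def filter_comment(comment: str) -> str | None:
    """Return the comment unchanged, or None if it should be hidden from teens."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    if any(token in BLOCKED_TERMS for token in tokens):
        return None  # hidden from the teen's comment section
    return comment

print(filter_comment("Great picture!"))      # passes through
print(filter_comment("What an insult ..."))  # None: hidden
```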
2.2.3.3 Chat partner controls
Furthermore, parents are given an overview of their underage children’s chat partners over the past seven days, without having access to the content of the messages.6 This measure has so far caused the most backlash, since it is questionable whether it restricts minors’ fundamental right to privacy too drastically and is therefore disproportionate.
2.2.3.4 Exposing underage “adults” with AI
The mechanisms implemented in the AI systems to detect false age claims analyze, among other things, profile information and user interactions with posts and other accounts. Based on this data, assumptions can be made as to whether underage users may be impersonating adults. User accounts that are identified by these systems are automatically converted to teen accounts. Meta’s aim with this approach is to promote further discussion about user-friendly control mechanisms for parents that can be applied across all platforms.7
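Meta has not disclosed how these detection systems work. Purely to make the idea concrete, the following sketch combines a few weak profile signals into a score; every signal, weight and threshold here is the authors’ own assumption, not Meta’s model.

```python
from dataclasses import dataclass

@dataclass
class ProfileSignals:
    stated_age: int                     # age claimed at sign-up
    teen_content_ratio: float           # share of interactions with teen-oriented posts
    follows_mostly_teen_accounts: bool
    account_age_days: int

def flag_as_possible_teen(s: ProfileSignals, threshold: float = 0.6) -> bool:
    """Toy scoring heuristic: only accounts claiming to be adults are checked."""
    if s.stated_age < 18:
        return False  # already a teen account, nothing to expose
    score = 0.5 * s.teen_content_ratio
    if s.follows_mostly_teen_accounts:
        score += 0.3
    if s.account_age_days < 90:
        score += 0.2  # a fresh account is weaker evidence for the claimed age
    return score >= threshold

suspect = ProfileSignals(stated_age=21, teen_content_ratio=0.9,
                         follows_mostly_teen_accounts=True, account_age_days=30)
print(flag_as_possible_teen(suspect))  # True: would be converted to a teen account
```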
2.2.4 Promotion of Data Mindfulness
It is particularly important that the platform now offers assistance when minors are asked by others to send images. The phenomenon of adults or other teenagers requesting nude images from minors has not yet been brought under control – an assessment which law enforcement agencies can only confirm. Instagram gives minors instructions on what to do if someone asks for nude images: “Sharing nude photos or videos or those with sexual content is against Instagram’s Community Guidelines, so you can simply reply: ‘No. It’s not allowed on Instagram.’”8 Instagram not only provides exemplary instructions for specific cases, but also teaches media literacy by pointing out the general consequences of, for example, sharing images with third parties.9 Instagram worked with ConnectSafely10 to develop these new guidance tools, which is great progress.
3. The WWWW of the new measures
3.1 What is at stake? Children’s Right to Privacy
The constant monitoring of children’s communication partners by parents is a drastic measure that undoubtedly affects a person’s private sphere and therefore constitutes an interference with personal rights. On the other hand, the dangers that lurk for children on the internet, and especially on social platforms, must also be considered. Given those dangers, the authors consider it the mildest available means that parents of children under the age of 16 know their children’s communication partners and can, in the event of suspicion, start a conversation with them.
The authors also agree that monitoring all communication would no longer be proportionate and could also be a violation of privacy. Bischof even considers systematic surveillance for non-educational reasons a form of violence.11
3.2 Why? The omnipresent Dangers on the Platforms
Predators have easy access to social media platforms and use them to get in contact with minors. Cybergrooming is the targeted contacting of a minor by an adult for the purpose of sexual contact in the digital space or in real life. The aim of cybergrooming can be to approach a minor digitally, establish a relationship and, for example, obtain nude photos, which can then be used to blackmail the minor into real-life sexual contact.12 Explicit content obtained in this way can also be used for sextortion: victims are blackmailed with image and video material showing them nude or performing sexual acts. The material can be authentic or AI-generated. The term sextortion is made up of “sex” and “extortion”.13
Cyberbullying, targeted bullying via digital platforms, should also be considered. It is not uncommon for digitally available material to be used for this purpose. It is well known that this can have serious consequences for the victim, up to and including suicide.14 Young people aged 13 to 17 are the most frequently affected by cyberbullying, accounting for around 28% of all victims.15 Hate speech means verbally and non-verbally communicated contempt and devaluation directed against a specific person or group of people. Studies have shown that young people are the group most exposed to hate speech on the internet.16 The supposed distance and anonymity tempt people to say things that they would probably never say to anyone’s face.
Exposure to problematic content such as depictions of suicide and self-harm can also have a negative impact on young people and encourage their own hopelessness, suicidal thoughts and self-harming behavior. The results of recent studies investigating the effects of such content on social media are alarming: they indicate that exposure is indeed detrimental.17
3.3 Why now? Prevention or Preemption?
The company and other providers of online services are regularly accused of failing to take adequate measures to protect young users on their platforms.18 The newest changes are among the most far-reaching measures yet taken by an American platform provider to influence young people’s use of social media. In recent years, parents’ and children’s groups have warned that Instagram, TikTok, Snapchat and other apps regularly expose children and young people to bullying, pedophiles, sexual blackmail and content that promotes self-harm and eating disorders.19
The negative experiences that young people have online have increasingly become the focus of parents and child protection advocates in recent years, especially in the United States: as recently as June 2024, Vivek Murthy, the US Surgeon General, called for cigarette-style warning labels on social media to flag potential mental health risks. Only a month later, the US Senate passed a bipartisan bill called the “Kids Online Safety Act”20, which would impose safety and privacy requirements for children and teens on social media. Some states have already enacted restrictions on social media. The CEO of Meta, Mark Zuckerberg, has also come under increasing political pressure over the risks social media poses to young people: Meta is facing numerous lawsuits accusing the company of knowingly luring children to its apps and systematically downplaying the associated risks.21 A ban on the Chinese platform TikTok (a platform comparable to Instagram) was widely discussed in the USA, and in the course of this discussion it was repeatedly pointed out that TikTok had introduced far-reaching usage restrictions for children in anticipation of feared regulation by the Chinese state.22 It must therefore be doubted that the new measures were taken purely out of an intrinsic recognition of the need for prevention. Rather, it can be assumed that the platform operators fear increasing political pressure and are trying to mitigate it by pre-empting certain regulations and demonstrating goodwill.
3.4 What more? A Wishlist
Without any doubt, the new measures are a good step towards reducing the risks of social media platforms for teenagers. The authors suggest expanding the definition of inappropriate content to include images that could potentially expose or negatively influence the person depicted. One possible approach to detecting such images could be advanced artificial intelligence capable of recognizing new forms of embarrassment or inappropriateness. Minors should also be given the opportunity to easily send supposedly inappropriate or embarrassing images to a platform advisory team that can provide advice and support. Another suggestion is the introduction of a “help button” that directs users who come across inappropriate or criminally relevant content to support and advice services, providing immediate help.23 As some crime has migrated to the digital world, the need for an increased police presence online is being discussed. Stronger cooperation between platform operators and law enforcement would be sensible. Private security services or moderators, who maintain order much like administrators, could also contribute to greater security on social networks.24
It would also be advisable for platform operators to ensure that the rights of the persons depicted are not violated and that the requirements for the consent of a child are met each time an image is uploaded. A pop-up window reminding users that all persons depicted must have consented to the publication would be a possible solution. Platforms could also include these rules in their terms of use.25
Even the latest measures cannot satisfactorily solve the problem of missing age verification. It remains to be hoped that AI will prove an effective tool for getting to grips with this problem.
What becomes obvious is that, despite all measures, it is essential to actively teach young people media skills. Teaching media literacy is seen as central to educating young people about the risks of social media. One idea would be to introduce a document, like an instruction manual, that explains the risks of social media in simple language, particularly in an age-appropriate section for children. Minors could be required to watch an educational video that addresses specific risks for their age group before using the platforms. Such measures could be supported by statutory youth protection regulations.26
4. Conclusion
The new protective measures on Instagram are a small but important step towards improving child protection on social media platforms. Only time will tell how effective they really are. The measures increase transparency and enable parents to better accompany and keep track of their children’s everyday digital lives – an undertaking that until now seemed all but impossible, as if that everyday life belonged to another world. So far, this has been achieved without completely eliminating young people’s privacy – a pragmatic compromise that should be applauded.
Despite this progress, some problems remain. In the long term, there is a need for stronger cross-platform control mechanisms that enable parents to monitor their children’s online activities efficiently, but not completely, regardless of the platform used. This requires internationally coordinated standards and legal regulations, which must be implemented to a greater extent to harmonize the various protective measures and thus ensure the protection of minors in the digital space.
In the future, not only age verification will become more important, but also the discussion of measures that potentially restrict fundamental rights, such as increased surveillance, as currently envisaged by platforms such as Instagram. Finding the right balance between data protection and surveillance while at the same time ensuring effective protection against abuse and cyberbullying is of key relevance. The increasing implementation of such measures raises the question of their proportionality and actual benefit. The measures taken also clearly show that the most important measure cannot be provided either by platforms or by legislation: Relentless media education. Only parents who are educated on the risks of social media can offer their children support and guidance. And only educated children can become responsible media users. Let’s get to work.
Sarah von Hoyningen-Huene, Criminal Law, Protection of Children, Digitalisation – Europa Institut an der Universität Zürich (EIZ)
Jutta Oberlin, LL.M., EMBA (HSG), Data Privacy Lawyer, IAPP Advisory Board Member
1Children’s Online Privacy Protection Act of 1998, 15 U.S.C. §§ 6501-6505.
2Frankfurter Allgemeine Zeitung, Instagram kündigt weitreichenden Jugendschutz an, 17. September 2024: https://www.faz.net/aktuell/wirtschaft/unternehmen/instagram-kuendigt-jugendschutz-massnahmen-an-mehr-kontrolle-fuer-eltern-19989912.html [last visited on 17.09.2024].
3Die Tagesschau, Instagram gibt Eltern von Teenagern mehr Macht, 17. September 2024: https://www.tagesschau.de/wirtschaft/digitales/instagram-soziales-netzwerk-eltern-kontrolle-100.html [last visited on 24.09.2024].
4Mike Isaac/Natasha Singer, Instagram, facing pressure over child safety online, unveils sweeping changes, The New York Times, 17. September 2024.
5Frankfurter Allgemeine Zeitung, Instagram kündigt weitreichenden Jugendschutz an, 17. September 2024: https://www.faz.net/aktuell/wirtschaft/unternehmen/instagram-kuendigt-jugendschutz-massnahmen-an-mehr-kontrolle-fuer-eltern-19989912.html [last visited on 17.09.2024].
6Frankfurter Allgemeine Zeitung (Fn. 5).
7Frankfurter Allgemeine Zeitung (Fn. 5).
8https://www.facebook.com/help/instagram/646840095358740/?locale=en_US&helpref=hc_fnav&cms_id=646840095358740 [last visited on 24.09.2024].
9https://www.facebook.com/help/instagram/646840095358740/?helpref=hc_fnav [last visited on 17.09.2024].
10www.connectsafely.org [last visited on 24.09.2024].
11Severin Bischof, Stärkung der Kinderrechte als Präventivschutz vor häuslicher Gewalt, Zürich/St. Gallen 2016, 188 ff.
12For further information: Swiss Crime Prevention: https://www.skppsc.ch/de/download/cybergrooming/ [last visited on 17.09.2024] and Jutta Oberlin/Sarah von Hoyningen-Huene/Patrick Fassbind, Kindeswohlgefährdungen im Metaverse – Ein zivilrechtlicher Überblick, Zeitschrift für Kindes- und Erwachsenenschutz 02/2024, 78-91, 83.
13For further information: Swiss Crime Prevention: https://www.skppsc.ch/de/themen/internet/sextortion-erpressung/ [last visited on 18.09.2024].
14CNN: How a cell phone picture led to girl’s suicide, found at: https://edition.cnn.com/2010/LIVING/10/07/hope.witsells.story/index.html [last visited on 18.09.2024].
15Christoph Schneider/Catarina Katzer/Uwe Leest, Cyberlife – Spannungsfeld zwischen Faszination und Gefahr, found at: https://www.buendnis-gegen-cybermobbing.de/wp-content/uploads/2022/03/cybermobbingstudie_2013.pdf [last visited on 24.09.2024].
16UK Safer Internet Centre: Creating a better internet for all: Young people’s experiences of online empowerment + online hate, found at: https://childnetsic.s3.amazonaws.com/ufiles/SID2016/Executive%20Summary%20-%20Creating%20a%20Better%20Internet%20for%20All.pdf [last visited on 18.09.2024].
17Florian Arendt/Sebastian Scherr/Daniel Romer, Effects of exposure to self-harm on social media: Evidence from a two-wave panel study among young adults, New Media & Society 2019, 21(11-12), 2422-2442.
18Frankfurter Allgemeine Zeitung (Fn. 5).
19Isaac/Singer (Fn. 4).
20https://www.congress.gov/bill/118th-congress/senate-bill/1409/text [last visited on 28.09.2024].
21Isaac/Singer (Fn. 4).
22Watson: Nur noch 40 Minuten TikTok am Tag für chinesische Kinder – und dazu Nachtsperre, 21.09.2021, https://www.watson.ch/digital/china/447173417-nur-noch-40-minuten-tiktok-pro-tag-und-nachtsperre-fuer-kinder-in-china [last visited on 24.09.2024].
23Jutta Oberlin/Sarah von Hoyningen-Huene/Patrick Fassbind, Kindeswohlgefährdungen im Metaverse – Ein zivilrechtlicher Überblick, Zeitschrift für Kindes- und Erwachsenenschutz 02/2024, 78-91, 87.
24Oberlin/von Hoyningen-Huene/Fassbind (Fn. 23), 89.
25Jutta Oberlin/Sarah von Hoyningen-Huene, Innocence in Danger – Wenn Likes auf Kosten der Kinder gehen, Schweizerische Juristen Zeitschrift 23/2022, 1123-1140, 1129.
26Jutta Oberlin/Sarah von Hoyningen-Huene, Strafrecht im Metaverse – Den Verbrechen der Zukunft auf der Spur, Forum Poenale 2/2024, 88 ff.