GDPR and protecting children’s privacy: the way forward

Martin Schmalzried

Last year, I wrote an article about the General Data Protection Regulation (GDPR) and its implications for children’s privacy. With the ongoing discussions around the guidelines and implementation of sensitive issues like the age below which parental consent is required for online services to process personal data, I thought now was a good time to revisit the topic and provide some recommendations on the way forward.


In this essay, I will specifically address the issue of the right “cut-off” age for requiring parental consent. I greatly encourage you to read my previous article, since I will build on its ideas and since it covers a wider range of considerations about the GDPR and its likely impact on children’s privacy.


First, it is important to clarify a few things. The GDPR does not require parental consent for children to access online services, including social networks; it only requires parental consent for the processing of personal data. This distinction is absolutely key, as I will show below.


Many actors have claimed that a limit set at 16 amounts to forbidding teenagers from accessing social networks or the Internet without parental consent, or that teenagers and children would be more vulnerable online since they would use services anonymously. These are overly simplistic interpretations of the Regulation. Teenagers below the age of 16 would only require parental consent for services which process their data; the lawmakers’ intention in this instance was to protect teenagers and children from the commercial exploitation of their data and from overexposure to commercial messages and marketing, since online advertising now relies on processing large amounts of data to personalize advertising.

Anonymity, moreover, is an issue entirely separate from the debate around data processing. A user can very well use his/her genuine name and post genuine pictures of him/herself on a service that does no “data processing”. Conversely, it is easy to pretend to be someone else and use a nickname on services which rely on heavy data processing (including Facebook); the proof is simply the number of children under the age of 13 currently on such services! Finally, the claim that users are “safer” in environments where data can be processed is also overly simplistic: it depends on how “data processing” is defined. Does it include monitoring, reporting and online moderation?

The end result of the Regulation for teenagers’ access to online services can be illustrated by three scenarios:


– The first, where online services refrain from processing teenagers’ data and thereby allow them access without parental consent (meaning that any algorithms processing personal data would be disabled and content would be sorted automatically, for instance according to the date posted). This, unfortunately, is highly unlikely, since targeted advertising is the business model on which most of these services rely, and interpreting and applying the Regulation in such a manner would effectively deprive them of around 30% of their revenue.


– The second, where online services set the “cut-off” age for using their services at 16, pretend that no one under 16 uses them, and engage in a selective “witch hunt” of underage accounts, closing them at random. This is possibly the worst outcome for both teenagers and online services: teenagers would need to lie about their age, would therefore not benefit from any “protection” from certain types of advertising or content, and would also risk having their accounts closed if they are identified as being under 16.


– The third, where online services set up a “parental consent” mechanism and teenagers would need to pester their parents to get access to such online services. This, too, is a rather negative outcome for teenagers and their right to privacy, freedom…

In the end, the “blame game” has mostly pinned the failure on the Council, which should have known that online services would never refrain from processing the data of under-16-year-olds and thereby renounce 30% or more of their revenue from targeted ads…


From COFACE-Families Europe’s side, we underline the necessity of reflecting on a key question: how can we strike a better balance between the prevailing business model centered on advertising, data processing and profiling, and the necessity to protect children and teenagers from the commercial use of their data and from advertising and marketing?

Perhaps another wish of the regulators is to ensure that teenagers below the age of 16 experience an unfiltered Internet instead of the Internet “bubble” which displays only content that users already like. Services like Instagram used to sort the pictures they displayed according to the time they were posted, not based on an algorithm, and users were perfectly happy with it (this was changed abruptly by Facebook last year). By the same token, teenagers below the age of 16 and children should have the right to access information with as little “algorithmic bias” as possible. Many users have expressed their discontent at Facebook’s decision to apply algorithmic sorting to Instagram feeds, so the question of displaying information in a neutral way goes far beyond the debate about teenagers. But more on this later…
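To make the distinction between a chronological feed and an algorithmic one concrete, here is a minimal, purely illustrative sketch in Python. The Post structure and the interest scores are hypothetical and not any platform’s actual implementation; the point is only to show which of the two approaches requires processing personal data.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    posted_at: datetime
    topic: str

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Sorting by timestamp alone requires no personal data about the viewer.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def personalized_feed(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    # Ranking against a profile of inferred interests is precisely the kind of
    # personal data processing that triggers the GDPR's consent requirements.
    return sorted(posts, key=lambda p: interests.get(p.topic, 0.0), reverse=True)

The first function can serve content to a user about whom the service knows nothing; the second cannot.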


Now, let us examine in more detail the implications of the considerations above.

Deciding on the “right” cut-off age for requiring parental consent for the processing of children’s personal data is a tricky endeavour; it depends less on hard evidence than on a person’s adherence to particular philosophical and moral principles and perceptions of justice.


Let’s get one thing out of the way: to my knowledge, there is so far no compelling scientific evidence, for instance stemming from research on child development, to justify a cut-off age of 13, 14, 15 or 16. The decision to opt for one age or another thus has to be based on other considerations.


The “pragmatic” case based on the (most likely) end effects, or the argument against a higher threshold.


The most likely scenario, as discussed above, is that a cut-off age of 16 would force children to pester their parent(s) every time they install an app or subscribe to an online service. It may further prevent children from accessing a service altogether if parental consent requirements cannot be fulfilled because of technical conditions imposed by the service (verification via e-ID, etc.). It could also simply push services to adopt a “post-COPPA” strategy (COPPA being the US Children’s Online Privacy Protection Act): set a threshold of 16 for the use of their services, in full awareness that children below the age of 16 will lie about their age to subscribe, and finally resort to a very uncomfortable “witch hunt” in which the accounts of “minors” are randomly deleted in order to pretend to comply with the rules. Children would end up suffering the double blow of lower protection (since they would provide consent as if they were adults using a service designed for adults) and the potential loss of all their data should their trickery be discovered by the service.


In such a scenario, settling on an age of 13 would effectively change nothing, as it would align with COPPA standards, and children over 13 would not require parental consent for the processing of their personal data. A few additional GDPR rules would still apply (see the previous article, where I develop these), which might have a positive influence on children’s exposure to advertising.


The case of the moral imperative and legislators’ intent, or the argument for a higher threshold.


This section will be much longer, as it requires delving deep into moral and legal philosophy.


Two justifications for supporting a higher threshold derive from two philosophical ways of thinking about justice. The first is Kant’s categorical imperative: “Act according to the maxim that you would wish all other rational people to follow, as if it were a universal law.” Immanuel Kant makes a clear distinction between a “universal law” and its end effects. For instance, if we all agree that it is wrong to lie, then lying is a breach of that law regardless of the consequences, even in the case of lying to a murderer to protect a life.


According to such a principle, if we all agree that children below the age of 16 should be protected from the exploitation of their data (save in cases where their legal guardian provides consent), then we should not look at the consequences, only at whether we believe such a principle should be universally applicable and cement it in law.


One example of such thinking is the case of tax avoidance. The tax codes of many countries have grown thicker and more complex in an attempt to crack down on tax avoidance (which is legal but violates the spirit of the law) and, through better enforcement, on tax evasion (which is illegal). From this example we can see the Kantian principle in action: as a society we agree that each person should contribute a fair share to government and society in exchange for the benefits of having a State (rule of law, education, infrastructure…), and so even if we could argue that revising the tax code is a game of whack-a-mole and that the super-rich will always find ways around it, it still makes sense to legislate.

We can thus draw a parallel between “tax avoidance” and “privacy avoidance”: companies which, while respecting the letter of the law, do not respect the spirit or intentions behind it (for instance, setting a cut-off age of 16 for their services while making no effort to keep under-16-year-olds from using them).


This brings us to the other justification for agreeing on a higher threshold, based on Aristotelian moral principles: putting the emphasis on the spirit of the law and the intentions of the legislator (Aristotle called this the “telos”, or purpose, the basis for defining what is “right” or “just”). Should the intentions of the legislator not be respected, this might trigger further legal action in the future to ensure that those intentions are explicitly defined via a more specific law.


One example of this is the Payment Accounts Directive, which included a right for every European citizen to open a basic bank account. It was estimated in 2013 that 58 million European citizens above the age of 15 did not have a basic bank account. “In today’s world, European citizens cannot fully participate in society without a basic bank account. Bank accounts have become an essential part of our everyday life, allowing us to make and receive payments, shop online, and pay utility bills.” Commissioner Barnier gave the banking industry one year to come up with a proposal or solution to this issue, and when it became clear that self-regulation would not make a substantial difference, the Commission decided to regulate. Its legal basis was the following:


“Article 114 of the Treaty on the Functioning of the European Union. As explained above, by setting up an EU level framework in the fields covered by the proposal, it aims to remove the remaining barriers to the free movement of payment services and, more broadly, to the free movement of goods, persons, services and capital for which a fully integrated and developed single market for payment services is vital. The proposal also prevents any further fragmentation of the single market which could occur if Member States were to take diverging and inconsistent regulatory actions in this field.”


In short, the Commission justified specific action on the basis of a more general principle in the European Treaties, but its motives were much broader, as we can see from the statement quoted above: basic bank accounts were considered part of Services of General Interest (SGI), which is why private actors (banks) were not completely free to decide how to run their business.


How is this relevant to the GDPR? Well, the Internet has also slowly fallen into the realm of SGIs. For instance, while copyright holders were keen on pushing through laws that cut Internet access for individuals who violated copyright, Internet access was deemed so important for daily life that penalties could only go as far as reducing bandwidth. Perhaps one day, Facebook, Google or other major online platforms/search engines/social networks will be deemed essential to full participation online, such that forbidding users access would amount to social exclusion.


In that light, the next legal battle may be over the right to a Basic Online Account on most major online platforms, free from any data processing. This is exactly what the legislators did for the basic bank account: determining which services such an account must include, without any fancy features like overdraft facilities or access to online investment platforms… Legislators could wait for a certain period, just as Commissioner Barnier waited on the banks, to see whether online service providers “act in good faith” and respect the letter, spirit and intention of the GDPR; if the problems identified above materialize, they could legislate via a more specific law.

