Brussels Bulletin – report back on GDPR matters

Earlier this week in Brussels the Article 29 Working Party and the Commission each independently organized their own set of meetings on the GDPR. They were arranged sequentially, which meant I was able to go to both. Article 29 came first.

At the Article 29 meeting there were workshops on consent, on profiling and on data breach notifications. I attended the workshop on consent, but there was a plenary with a report back from the other two.

Both sets of meetings were dominated by a wide range of industries and interests far removed from the usual child protection and child welfare circuits we normally inhabit.

Gillick Principles?

Having said that, a chap from the British National Health Service (NHS) had some pressing points about children being given access to their own medical records.

Where those records are accessible online – seemingly a great many already are and more will be in future – would a child’s doctor need to obtain parental consent before allowing the child access? His view seemed to be (a) the NHS is not an Information Society Service provider within the meaning of the GDPR (because it is not commercial) and therefore is not bound by its terms, but (b) even if it was, the Gillick Principles should apply in any event. That is to say, irrespective of a child’s age and the statutory minimum age defined for wider purposes, a child’s doctor could grant access exclusively to the child without having to obtain parental consent, and without even having to inform the parents, provided the doctor judged the child to have sufficient understanding of what the records were and the sensitivities surrounding them. Definitive answers or guidance came there none, but in that respect the NHS was not alone. Not by a long chalk.

Everybody has lots of questions

Which brings me to point number one. There is still a great deal of uncertainty hovering over many parts of the GDPR. We are not alone. Not much comfort, but maybe a little.

Several of these areas of uncertainty have potentially enormous economic, legal, technical and operational implications for businesses and consumers alike. Everyone was impatient to hear when draft guidance documents would start to appear for comments. The answer was generally the same – “soon”.

Quite strong feelings were expressed that there was no way everything could be done satisfactorily in time for the May 2018 deadline. There was talk of possible “transitional arrangements”, but I thought we were already in transition.

Need for a specific focus on children

The very wide range of interests clamouring to make their points or ask their questions meant there was simply no way I could air all mine, so I hammered home one simple point in the consent workshop, in the plenary and then at the Commission’s meeting. This was about the need for a specific discussion around the position of children under the GDPR. That seemed to be accepted by everyone, and both PEGI and the toy industry’s trade association spoke up to agree with me.

I mentioned that the UK’s data protection authority – the ICO – had indicated they would organize a consultation on children and the GDPR but nobody from Article 29 or the Commission appeared to be aware of any other authority in any other Member State that was planning to do the same. That was a surprise.

A quick summary of a number of the other points I thought were important:

No update was available about the age limits that different countries were going to adopt.

It was acknowledged that having children on the same site or app at the same time but in different jurisdictions with different age limits was throwing up “interesting challenges”.

Article 35 impact assessments seemingly will not influence or alter a company’s obligation to carry out, or not carry out, age verification; nor will they determine the lengths to which a company might need to go to ensure any parental consents it obtains are authentic. I am not sure a court would ultimately see things that way, but that was the view of most people in the room.

A single consent cannot be used for multiple purposes. But when you join Facebook or YouTube you are potentially engaging with a number of different types of activity, each of which can generate data that is processed. That being the case…?

What happens when the GDPR kicks in? Are all previously obtained consents vitiated?

It seems it would be unwise for any company to bury consent solely within its Ts&Cs. Consent requests should be pulled out and presented as standalone items. This stems from the need for the giving of consent to be unambiguous and clear.

Officials were repeatedly at pains to point out that consent is only one basis on which data can be lawfully collected and processed, although how or whether any of the other grounds will apply in the case of children must be open to doubt.


Last but by no means least, it is apparent that the issue of “profiling” is going to be huge. By coincidence, the UK’s ICO issued a “request for feedback” on profiling the day after the workshop and there is a specific section on children. For ease of reference I will quote from bits of it.

We are reminded that Recital 38 of the GDPR says

“…as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data. Such specific protection should, in particular, apply to the use of personal data of children for the purposes of marketing or creating personality or user profiles…”

The ICO interprets this as meaning

Controllers must not carry out solely automated processing, including profiling, that produces legal or similar significant effects (as defined in Article 22(1)) in respect of a child.

A child for these purposes is someone who is under 18. The age limit for giving consent to data transactions without the need to obtain parental consent seemingly is irrelevant under this heading. Where is this going to leave social media sites that rely on automated processing to serve advertisements to persons under the age of 18? Will they be required to determine whether or not you are 18 or above before your profile can be assigned any advertisements?

Before we get too carried away, there is an issue about whether or not advertising counts as being the product of a decision based solely on an automated process. I think it must be, especially given the way the modern system of bidding works. Is being exposed to an advertisement a “legal or similar significant effect” anyway? Some obviously thought it wasn’t. Others disagreed.

I also had an interesting side discussion about whether or not making it easy for children to lie about their age with impunity, or with very little possibility of discovery, and to gain benefits from so doing, e.g. by becoming and remaining a member of a social media service, could in and of itself, and without more, constitute a harm. At least one person other than me thought it could. Watch this space.

You can see why many people from industry are having sleepless nights. So am I.


About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International and is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John is now or has formerly been an Adviser to the Council of Europe, the UN (ITU), the EU and UNICEF. John has advised many of the world's largest technology companies on online child safety. John's skill as a writer has also been widely recognised.