In the UK, and among EU institutions, the idea of self-regulation has reigned supreme more or less since the arrival of the World Wide Web in the early-to-mid 1990s started the internet on its trajectory towards becoming the mass medium we know today. The policy appeared to be widely and strongly supported by the internet industry as well as by the politicians and senior officials who espoused it.
At EU level the most recent and prominent examples of the doctrine at work were the creation of the CEO Coalition, followed swiftly by the ICT Coalition. A “Community of Practice” was even created to institutionalize the notion.
It all started with child abuse images
In the late 20th century, across a number of EU Member States, hotlines began springing up to address the growing challenge of child abuse images appearing on the internet. In the UK’s case we established the Internet Watch Foundation (IWF) in 1996. In Britain and several other countries industry took the lead in getting the hotlines going, but the EU stepped in to encourage this trend by helping with money.
Filtering and blocking child abuse images arrives
Around 2004 the IWF and BT pioneered URL blocking as a means of restricting access to URLs known to contain child abuse images prior to their eventual (and hopefully speedy) deletion at source. Historically, apart from Italy, no country required internet service providers to carry out this type of URL blocking, but a large and growing number of online businesses did so on a voluntary basis.
I cannot speak for every hotline or to how each of them handles these matters, but the IWF compiles a list of qualifying URLs and updates and distributes it twice daily to the many companies that use it to help keep their networks free of child abuse images. The practice has never been challenged in the courts; moreover, the wider legal basis on which the IWF operates was set out in a memorandum issued jointly by the Crown Prosecution Service and the Association of Chief Police Officers. Later a leading human rights lawyer gave the IWF a clean bill of health in respect of compliance with human rights standards.
Parents are very concerned about a range of adult content
In the UK we also took things a step further. We responded to parents’ concerns about shielding their children from age-inappropriate but otherwise legal content by introducing default-on parental control software that operated at network level.
The mobile phone companies started doing this around 2005. Sky Broadband joined the club more recently. In the case of Sky, parents could decide to modify or completely remove the filters. In relation to mobiles, it was necessary only to complete an age verification process to get the filters lifted. In addition, WiFi providers decided to introduce default-on filters, but these would operate only in public spaces where it was reasonable to expect children to be found on a regular basis.
Default-on filters met a huge parental demand for simplicity of implementation. They are accompanied by a great deal of education and awareness activity, both before and after the fact.
This is not censorship. No content on the internet is removed or changed because of parental controls software. In the UK we have simply been seeking a way to replicate in the online space measures or rules which have been taken for granted in the physical world for a long time. The internet is not exempt from these standards but it has been difficult to find a way to implement them.
The UK’s approach was, and strictly speaking still is, experimental, because we have yet to see a report on its effectiveness. As we shall see in a moment, however, the lack of evidence one way or the other clearly did not deter EU legislators. They took a decision about the UK approach that was (obviously?) based on first principles of some kind – but first principles of what sort exactly?
All blown away
It looks like all of the above has to go. Henceforth each Member State that wants to engage with online child safety in the ways outlined will have to pass a law either to allow parental control software to be deployed at network level or, in the case of blocking access to child abuse images, to make it mandatory. You won’t be permitted to block access to known child abuse URLs on a voluntary basis. The state has to require you to do it.
How did this come about?
In a legislative instrument that addressed net neutrality.
In the past, when the EU has debated issues concerned with online child protection, it has been clear from the title of the document or the draft instrument that this was the focus of the measure, or at any rate a principal focus. The online child protection community, parents and children’s organizations were put on notice and were able to engage, mobilise, lobby and express their views. Officials in the Commission, and doubtless within Member States, concerned with children’s policy were drawn into the debate.
None of this happened here. It really is a disgraceful way to make new laws and to end a 20-year-old policy. I leave to one side for now the potential political impact in countries such as the UK, where Eurosceptics will doubtless make hay with it.
There will be a transition period
The final text of the Directive has yet to be published, but I have seen a (leaked) copy of the words that emerged from the trilogue and I have spoken to several people close to the process, including lawyers. I am reasonably sure my reading of the situation is correct, although “check against delivery” is always sound practice.
It appears there will be a transition period, so nothing will end abruptly. I don’t doubt the UK will be able to pass the necessary laws, in effect to preserve the status quo, in roughly five minutes if needed. But can we be confident that legislative congestion or other political complications will not intervene in every Member State?
The future of the IWF in the balance?
Internet businesses were willing to establish and fund the IWF when it was a shining example of self-regulation. But now that seemingly everything of importance associated with what the IWF does is becoming the subject of legislation, indeed is becoming a legal requirement, some will ask why they should pay for it at all, or at any rate why they should pay for it as an additional item on top of, say, the police service.
The EU has not covered itself in glory with this sad little episode. I fear the consequences will be far-reaching. I am sure many of our leading online companies will continue to work collaboratively and voluntarily through bodies such as the ICT Coalition. The need for them to be seen to promote good online child safety practices has not gone away, but the voices of the sceptics within their businesses will have been greatly strengthened. Start-ups will feel even less inclined to become involved if they see that well-intentioned self-regulatory efforts can be reduced to nought so casually.
The last time the EU seriously engaged with the question of online child abuse images was in the context of the Directive on combating the sexual abuse and sexual exploitation of children and child pornography. It was adopted in 2011.
Unlike this time around, the argument then was very public and protracted. There were public hearings in the Parliament and elsewhere. Arguments raged in newspapers and all parts of the media. Practically every children’s organization argued that blocking access to child abuse images prior to their deletion should be made compulsory. We lost. Article 25(2) of the Directive made it optional. Now the EU has moved on again. It seems such blocking is not even allowed to be optional. Each State has to make it compulsory or it cannot happen at all.