Facebook urged to rethink its plans for encryption

Today children’s organizations from over 100 countries around the world publish a letter urging Facebook to rethink its plans on encryption. So far the story has been picked up and reported by the BBC, the Financial Times and the New York Times. UK broadcast media and other newspapers are also picking it up, as are broadcast and print media outlets elsewhere. Bravo to the NSPCC for helping put the letter together and bravo to ECPAT International for putting their weight behind gathering support from children’s groups across the globe. The letter can be viewed here.

Posted in Child abuse images, Default settings, Internet governance, Privacy, Regulation, Self-regulation

A big step with bigger implications

Earlier this week the British media were full of reports of the “Age Appropriate Design Code”. It had just been published by the Information Commissioner’s Office (ICO), the UK’s data protection authority. The Code’s provisions are likely to become fully operative in little over a year.

The Code’s title has a slightly nerdy feel, suggesting it might provide advice about the best sorts of layout and colours to use on web sites aimed at young people, but the only other title I could come up with was “The Code That Tells Online Businesses How To Handle Children’s Data And Respect The Privacy Rights Of Under-18s”. The ICO’s version is better.

The Code owes its existence to the Data Protection Act 2018. This was the Act which adopted the EU’s GDPR into UK law. However, the redoubtable Baroness Kidron, supported by a wide range of children’s organizations, other Peers and the 5 Rights Foundation, spotted weaknesses in the European instrument and proposed an amendment which was then accepted by both Houses of Parliament.

The resulting Code does not conflict with the GDPR in any way. Rather it makes things more explicit and in so doing strengthens them and therefore makes it more likely they will be honoured by businesses and other organizations.

The Code further nudges businesses towards making sure they know who is visiting their sites or using their services. Sites cannot continue to say “this site or service is meant only for adults” and then take no meaningful steps to keep out “non-adults”.

There is a risk of getting overly theological about whether or to what extent the nature of the content of a site can be wholly disregarded when considering the data processing dimensions of its activities. My hunch is the nature of the site itself will be hugely relevant, though I am certain many lawyers will be greatly enriched by arguing the exact opposite.

A child is anyone under 18

The GDPR, the Code and UK law adopt the UNCRC standard of 18 to define who is a child. Recital 38 of the GDPR says the following:

“Children  merit specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data.”

Within the UK jurisdiction the Code puts flesh on the bones of that statement which, incidentally, because it is a Recital and not an Article, is not in fact law.

There is no point in me rehashing the 15 provisions of the Code. The 5 Rights Foundation has published its own handy summary which you can see here.

My two top picks

All 15 points of the Code are important but for me two stand out. These are:

5. “Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions, or Government advice.”

To clarify things the ICO adds this:

“We mean any use of data that is obviously detrimental to children’s physical or mental health and wellbeing or that goes against industry codes of practice, other regulatory provisions or Government advice on the welfare of children.”

and then there’s

6. “Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).”

In the latter case, against the possibility that you did not fully understand what the ICO intended, they again helpfully spell it out:

“We mean that you need to adhere to your own published terms and conditions and policies.

We also mean that, when you set community rules and conditions of use for users of your service, you need to actively uphold or enforce those rules and conditions.”

Once more, I couldn’t have put it better myself.

Posted in Age verification, Consent, Privacy, Regulation, Self-regulation

Movement on age verification?

R v Secretary of State for the Home Department, ex parte Fire Brigades Union is a famous case from 1995. It was decided by the highest court in the land, then the Appellate Committee of the House of Lords, the predecessor of today’s Supreme Court. The Government lost.

Under an Act of Parliament of 1988 the Government was meant to bring forward a new statutory criminal injuries compensation scheme, a matter of obvious importance to fire fighters. The Act provided that the scheme was to come into force on “such day as the Secretary of State may by statutory instrument appoint”.

Long story short, the Government subsequently announced it was not going to name a date. In effect, by an administrative decision, they had frustrated the clearly expressed will of Parliament.

Key excerpts from the court’s decision (pardon the archaic English legalese) are as follows:

“It might come as a surprise to the man on the Clapham omnibus that legislative provisions in an Act of Parliament, which have passed both Houses of Parliament and received Royal Assent, can be set aside in this way by a member of the executive.”

and even more tellingly

“True, [the Sections] do not have statutory force. But that does not mean they are writ in water. They contain a statement of Parliamentary intention, even though they create no enforceable rights. Approaching the matter in that way, I would read section 171 as providing that sections 108 to 117 shall come into force when the Home Secretary chooses, and not that they may come into force if he chooses. In other words, section 171 confers a power to say when, but not whether.”

Is this ringing any bells? It should. It is very close to what happened with the statutory provisions relating to age verification for commercial pornography sites, as enshrined in Part 3 of the Digital Economy Act 2017.

Several companies spent a lot of money getting ready for the commencement of the new policy. Everything was in place when, out of the blue, so to speak, on 16th October 2019, the Government called a halt. Not an abandonment as such but, in effect, an adjournment sine die.

Frying different fish

At the time the Government had one overriding political objective: to secure a General Election.

Because of the then Parliamentary arithmetic the timing of such an election was not in their gift. In some quarters the suspicion therefore is that someone in No.10 started to fret. Suppose they suddenly managed to get Parliament’s agreement to hold an election (which they did)? Suppose the new regime of age verification for commercial porn sites kicked in just before or even during the election campaign (which it might have done)?

“Boris the Porn Killer”

Could “Boris the Porn Killer” supplant “Get Brexit Done” as a key theme of the election? Very unlikely. Even so might millions of men be angry because their porn supply had been cut off or interrupted while they completed the age verification process? What if unseen glitches emerged? Who would be blamed? Could it adversely affect votes in marginals?

Did a timid soul in the Conservative Party leadership decide it was best to take no chances? Just pull it. Utter a few warm words about wrapping up the policy with a wider initiative on online harms (which is what they did).

This is just a theory doing the rounds, but we may be about to find out if there is any substance to it. For their own different but entirely understandable reasons, the trade association which represents some of the companies that spent millions getting ready for the new scheme is seeking a judicial review of the decision, and a number of individual companies are also suing for compensation. It is understood their claims run to around £3 million. Add to that the amounts thought to have been spent by the nominated Regulator (the BBFC) and the Government itself and you get to around £5 million.

That’s a lot of money to spend on a pusillanimous and unprincipled panic.

Posted in Age verification, Pornography, Regulation, Self-regulation

Yesterday in Parliament – news and no news about porn

Yesterday was the final day of debate on the “Gracious Address” in the House of Lords. The Address had been delivered by Her Majesty on 19th December to mark the opening of a new Parliamentary year and a new Parliament following the General Election. The next day a more detailed announcement followed setting out the Government’s legislative programme for 2019-20.

Online Harms

There is to be an Online Harms Bill. This is good. Probably. Originally the Government said such a Bill would be put through pre-legislative scrutiny, which could also be good, but details of how and when remain scarce. This may presage substantial delay. We might be talking about two to three years. Which is terrible.

This question of delay and timescales is particularly significant when set in the context of the ease with which children can currently access millions of hard core pornography web sites. The crazy thing is we already have a law that could help shield kids from such material but the Government has refused to implement it. The law of which I speak is contained in Part 3 of the Digital Economy Act 2017.

Were Part 3 to be brought into effect the UK would become the first democratic country in the world to require commercial publishers of pornography on the internet to introduce age verification mechanisms as a way of restricting children’s access to their wares.

Protection delayed is protection denied

Following a sustained campaign led by children’s organizations and a group of mainly women MPs and Peers, the idea of having such a law appeared in the Conservative Party Manifesto of 2015. In 2017 it completed its passage through Parliament with the support of all the major political parties.

Ministers brought forward a set of statutory instruments to establish the regulatory framework within which the policy would operate. A Regulator was nominated by the Government and agreed by Parliament (the BBFC). Millions of pounds were spent getting us to that point. A range of new and existing businesses also spent millions innovating highly efficient ways of carrying out age verification online. Something similar happened before when age verification for online gambling sites was introduced following the implementation of the Gambling Act 2005.

While initially hostile, the commercial pornography publishers accepted this was now law so they too prepared themselves for the new regime. The Information Commissioner was satisfied with the privacy aspects of the policy.

The fateful day

Absolutely everything was in place when, on 16th October, the Government called a halt. Out of the blue, so to speak. No prior warning.

Several media outlets reported the Government had had a change of heart and was dropping the policy altogether. I have seen nothing from Ministers speaking on the record which would justify that conclusion so unless there was lobby briefing to the contrary, I am at a loss to explain why journalists picked up the story in that way.

In search of “coherence”, apparently

The principal justification offered by the Government was that they wanted the measures to protect children from pornography to be folded into or made “coherent” with their evolving thinking on the wider Online Harms Bill which they were preparing. The Secretary of State’s exact words were:

“It is important that our policy aims and our overall policy on protecting children from online harms are developed coherently… with the aim of bringing forward the most comprehensive approach possible to protecting children.

The Government have concluded that this objective of coherence will be best achieved through our wider online harms proposals…”

Certainly it is true Part 3 was enacted before the Government embarked on its larger odyssey but the question of the role of porn publishers is quite discrete and particular. Part 3 simply insists commercial publishers of pornography take responsibility for ensuring kids cannot access their sites so easily. Whatever the Government might decide to do with social media sites or other online businesses they are going to have to come back to it. Everybody working in the field knows that.

Is it even remotely possible the Government will say, in effect, “Following a rethink we now believe commercial publishers of pornography can carry on as before. They will have no legal obligation to do anything to keep children off their sites”? I don’t think so.

The very next day

Matters did not rest as they were left on 16th October. The very next day in the House of Commons over a dozen MPs questioned the Minister for Digital, Matt Warman MP, about the shock announcement.

In his replies Mr Warman acknowledged that restricting children’s access to commercial pornography sites was “critically urgent” before going on to say “I am not seeking to make age verification [for pornography sites] line up with [the Online Harms Bill] timescale”.

If protecting children from commercial pornography was so “critical”, one has to wonder why it was stopped on the eve of implementation. By their actions the Government ensured that children who could have been protected from seeing some truly shocking and harmful images will not be. It did not have to be that way.

Nevertheless, as we have seen, Warman did indicate that moving forward on age verification for commercial pornography sites need not be bound to the same timetable as the promised Online Harms Bill. That does give some grounds for optimism. Might the new age verification regime yet be brought forward sooner rather than later? It could be. It should be. It would be very easy to do. “All” it requires is for the Government to bring one more statutory instrument to Parliament and name a commencement date.

Yesterday in the Lords, winding up the debate, the Government gave assurances that they would be bringing forward “interim codes on online content and activity relating to terrorism and child sexual exploitation”. These are welcome but, at the risk of being repetitive, they do not address the responsibility of commercial publishers of pornography to keep kids off their properties. Part 3 of the Digital Economy Act 2017 does precisely and only that. But on this the Government was silent (although they have promised a letter answering a number of questions that were raised in the debate which Ministers did not cover in the summing up).

Alternatively, if the Government believes there is a specific problem with Part 3 as originally envisaged, they should say what it is. There are various rumours but nothing definitive has emerged from Whitehall.

Perhaps there is a legal method or Parliamentary procedure which could be deployed to amend or add to what we already have in a way which would meet the Government’s concerns? Surely the Opposition Parties would happily facilitate such a course of action?

Posted in Age verification, Child abuse images, Default settings, Internet governance, Pornography, Regulation, Self-regulation

Is the cure worse than the disease?

In a blog which focuses on the meaning of “privacy” in the modern world, Privacy International published an excellent summary of key international instruments which address the subject. At the end they announce their conclusion:

Privacy is a qualified, fundamental human right. 

Note, they do not say privacy is an absolute right. That is borne out in all of the treaties and conventions to which Privacy International refers.

Yet look where we are headed with strong encryption.

We are creating what are, for practical purposes, impregnable or unreachable spaces. These confer impunity on any and all manner of wrongdoing. Paedophiles and persons who wish to exchange child sex abuse material are permanently shielded, as are terrorists and an infinite variety of scam artists.

The rule of law is being undermined

We are looking at a world where warrants and court verdicts lie mute, incapable of fulfillment. The rule of law is thereby being undermined.

Whereas previously a familiar cry one heard, for example in respect of apparently illegal content, was that it should not be taken down without the say-so of a judge, the same voices now seem content to contemplate a situation where all judges are rendered impotent.

Thus, on top of the long-established challenges associated with the internet (scale, speed, jurisdiction and complexity), we are adding a whole new layer.

Attacking the problem from the wrong end

Obviously, I get that there has been an erosion of public confidence and trust both in political institutions and in online businesses. Moreover, I am not against encryption (see my previous blog) but the way it is being rolled out in some areas is disproportionate. The cure is turning out to be worse than the disease.

Limiting the ability of companies themselves to detect and prevent behaviour which contravenes their own terms of service is wrong and makes a mockery of the very idea of having terms of service in the first place.

Making it impossible for law enforcement agencies with proper authority to see the content of a message likewise is simply wrong.

Sending cannabis through the post

If I decide to open up a sideline selling cannabis, could I legitimately enlist the Royal Mail to help my business prosper by delivering weed to my customers? Of course not.

There is no reasonable expectation of absolute privacy vis-a-vis the otherwise sacred and untouchable postal service. Postal services all over the world take reasonable and proportionate steps to ensure their systems are not being used to aid and abet crimes. They sniff, they scan, they x-ray and goodness knows what else.

Have people stopped using the post?

When it became known that this could happen did the mass of people abandon the postal system, outraged by this actual or potential encroachment on their right to privacy of communications? No. Neither would they desert Facebook Messenger if they knew that, only with proper authority and just cause, a message could be examined by the police or court officials.

But is it unreasonable to expect Facebook Messenger not to use strong encryption if all of its competitors are using it? That is a completely different question.

We really do need to call a halt and take a breath. Just because technologists have invented something it does not mean its use must become compulsory. Certain genies can be put back in the bottle if there is sufficient political will.

In respect of forms of encryption which preclude the possibility of scrutiny by anyone, the political will is growing. It needs an urgent push.

Posted in Apple, Child abuse images, Default settings, E-commerce, Internet governance, Privacy, Regulation, Self-regulation

A big “thank you” to Facebook

The essence of Facebook’s argument for changing Facebook Messenger and Instagram Direct Messaging from unencrypted to encrypted environments is that “all the other major messaging Apps are already encrypted or are going to be, so if we don’t do this it will hurt our business”.

In the various presentations I have heard from Facebook personnel a more convincing or elevated motive has not yet surfaced. There was a discussion about changing patterns of messaging, a trend towards smaller groups and so on, but really it was pretty clear that while usually Facebook has been ahead of the curve, this time they were behind it. Encryption is now the fashion. They are going with it. Money beckons. It always does.

A much relished additional benefit of the company announcing its intention to encrypt is that, if Governments and various interest groups fight them over it, Facebook gets a unique opportunity to present itself as a champion of privacy. The chutzpah, the irony, takes your breath away. But let that pass. What counts is now, not then.

Actually, I think we kind of owe Facebook a big “thank you”. They revealed the scale of bad or illegal behaviour in messaging Apps. In 2018 Facebook Messenger made 12 million reports to NCMEC, the USA’s official body for receiving details of online child sex abuse material (csam).

In the same period how many reports were received from iMessage, the principal messaging App used on the Apple platform? 8. That is not 8 million. That is 8 as in the single digit representing two fewer than 10. What is the difference between Facebook Messenger and iMessage? The latter is already encrypted.

Isn’t the real and obvious question therefore “If Facebook offers us a glimpse of the potential scale of offending in an unencrypted messaging environment what might be happening in the encrypted ones?”

No one knows.

I am going to write another blog on this (soon). It will be slightly more discursive (that’s code for “longer”) but in the meantime I think we need to shift the focus away from what one company (Facebook) is doing, to what encryption as a whole is doing or threatens to do to the modern world.

I mean we now know with complete certainty that techno wizards have an endless capacity for two things: making gigantic sums of money and getting things wrong.

Isn’t it time for citizens and our elected representatives to step up and say “Hold on guys. You are about to take another misstep. This time we can see it before you. We don’t want to wait for the apology or the promise to ‘try harder to get it right next time’. Let’s slow things down a little. Take a breath.”

My instinct is to say every service that deploys strong encryption must be required also to maintain a means by which, with proper authority, e.g. a court order, the contents and metadata associated with any particular message can be made available in plain text to the court or another appropriate agency. And enough of the talk of “back doors”. No one I know wants them. Proper and transparent authority is what matters.

Moreover, encryption comes in many forms and has many uses, most of them wholly benign. No way should anyone express blanket opposition to all forms of encryption everywhere and always. But in the realm of mass messaging services open to the public we need to insist companies explore, for example, the possibility of deploying tools which can scrutinise a message or content before it is encrypted. If alarm bells ring, appropriate action can be taken. Here I am thinking in particular about csam but there could be other material of equal concern.
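
To make the idea concrete, here is a minimal sketch, in Python, of what “scrutinise before it is encrypted” might look like. It is illustrative only, not a description of any company’s actual system: the function names are invented, real deployments match perceptual hashes (such as PhotoDNA) supplied by bodies like NCMEC or the IWF rather than the plain SHA-256 hashes used here, and the “encryption” is a stand-in.

```python
import hashlib

# Illustrative list of hashes of known illegal material. Real deployments use
# perceptual hashes (e.g. PhotoDNA) supplied by bodies such as NCMEC or the
# IWF; a plain SHA-256 set is used here purely to keep the sketch short.
KNOWN_MATERIAL_HASHES = {
    hashlib.sha256(b"stand-in for a known illegal image").hexdigest(),
}


def matches_known_material(payload: bytes) -> bool:
    """True if the payload's hash appears on the known-material list."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MATERIAL_HASHES


def encrypt(payload: bytes) -> bytes:
    """Stand-in for the messenger's real end-to-end encryption."""
    return payload[::-1]  # placeholder transformation, NOT real cryptography


def send_message(payload: bytes) -> None:
    # The crucial point: the check happens on the sender's device, in plain
    # text, BEFORE the message is encrypted or transmitted.
    if matches_known_material(payload):
        print("Match: message blocked; a report could be filed.")
        return
    ciphertext = encrypt(payload)
    print(f"No match: {len(ciphertext)} encrypted bytes sent as normal.")


send_message(b"an ordinary holiday photo")
send_message(b"stand-in for a known illegal image")
```

Note that nothing in a scheme of this kind touches the encryption itself, which is why it is usually discussed as an alternative to a “back door” rather than a species of one.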

I understand there are flavours of strong encryption and ways of managing strong encryption which lend themselves more easily to the possibility of “peering in” to the encrypted tunnel to detect criminal behaviour. If that is true, why would anyone want to use a flavour or a method that makes that impossible or appreciably harder?

Industry and Governments have created the climate or conditions that are fuelling the demand for encryption. We must not allow that climate to threaten the rule of law and neither should we allow it to put children in danger.

Posted in Child abuse images, Privacy, Regulation, Self-regulation

More shocking insights

Here’s another great piece from the New York Times. It’s the third in the series. The headline tells you what the focus is this time around.

Video Games and Online Chats are ‘Hunting Grounds’ for Sexual Predators

Children are the prey.

Once more the quality and depth of the research shines through, as does the amount of time and other resources the reporters devoted to smoking out the truth, talking to victims, their parents, law enforcement agencies and the companies themselves.

Internet usage among children in lots of countries is marching towards 100% for four-year-olds upwards. One in three of all internet users in the world is a child. This rises to over one in two in some places. Whatever else we might imagine, want or believe the internet to be, it is unquestionably also a medium for children and families.

Paedophiles go where children go. Every internet company should have that fact always at the front of mind. Very obviously, right now it isn’t.

Too many companies are hiding behind the laws which give them immunity from civil and criminal liability. In fact the immunity creates a legal incentive for them to sit back. If the platforms did not have that immunity the services causing the problems for children would be very different and almost certainly a lot safer.

You need to know who your customers are

On top of the immunity, and part of the larger problem, is the fact that the same platforms are under no obligation to know who their customers are or verify any of the information they provide about themselves. It’s a lethal cocktail.

The platforms collect enough data to serve ads but not enough to keep children safe. They need to take greater responsibility for knowing who their customers are so that, if something bad happens, the suspected wrong-doers can, with proper authority, be swiftly and inexpensively identified. “Swiftly” and “inexpensively” are the key words there. If we can establish a new culture of accountability, online crimes against children will reduce.

People who object to this idea cite the existence of totalitarian states as the reason why we need to defend the status quo.

Political problems in some parts of the world are therefore being used as a pretext for doing nothing, or too little, in all parts of the world. Children are a price they are willing to pay. Not me.

And of course, the supreme irony is that the people who benefit most from this are the shareholders of the very companies that created the problem in the first place. They created “surveillance capitalism” and stood by, or actively aided and abetted, as it predictably morphed into a weapon of the “surveillance state”, vastly increasing the powers of oppressive regimes.

But not all regimes are oppressive. A great many do adhere to the Rule of Law. They do honour all the important human rights laws.

It is impossible to engage with someone who believes you cannot distinguish between the Governments of, say, Norway and North Korea.

Good news from the south

I have never met Annie McAdams but obviously we are soulmates. McAdams is a personal injury lawyer from the Lone Star State. She is trying to use product liability as a way of subverting the immunities the platforms have enjoyed hitherto. She has cases going in California, Georgia, Missouri and dear old Texas itself.

Facebook is fighting them. But then they would, wouldn’t they?

Posted in Default settings, Internet governance, Privacy, Regulation, Self-regulation