Yesterday in Parliament – news and no news about porn

Yesterday was the final day of debate on the “Gracious Address” in the House of Lords. The Address had been delivered by Her Majesty on 19th December to mark the opening of a new Parliamentary year and a new Parliament following the General Election. The next day a more detailed announcement followed setting out the Government’s legislative programme for 2019-20.

Online Harms

There is to be an Online Harms Bill. This is good. Probably. Originally the Government said such a Bill would be put through pre-legislative scrutiny, which could also be good, but details of how and when remain scarce. This may presage substantial delay. We might be talking about two to three years. Which is terrible.

This question of delay and timescales is particularly significant when set against the ease with which children can currently access millions of hardcore pornography websites. The crazy thing is that we already have a law that could help shield kids from such material, but the Government has refused to implement it. The law of which I speak is contained in Part 3 of the Digital Economy Act 2017.

Were Part 3 to be brought into effect, the UK would become the first democratic country in the world to require commercial publishers of pornography on the internet to introduce age verification mechanisms as a way of restricting children’s access to their wares.

Protection delayed is protection denied

Following a sustained campaign led by children’s organizations and a group of mainly women MPs and Peers, the idea of having such a law appeared in the Conservative Party Manifesto of 2015. In 2017 it completed its passage through Parliament with the support of all the major political parties.

Ministers brought forward a set of statutory instruments to establish the regulatory framework within which the policy would operate. A Regulator was nominated by the Government and agreed by Parliament (the BBFC). Millions of pounds were spent getting us to that point. A range of new and existing businesses also spent millions innovating highly efficient ways of carrying out age verification online. Something similar happened before when age verification for online gambling sites was introduced following the implementation of the Gambling Act 2005.

While initially hostile, the commercial pornography publishers accepted this was now law so they too prepared themselves for the new regime. The Information Commissioner was satisfied with the privacy aspects of the policy.

The fateful day

Absolutely everything was in place when, on 16th October, the Government called a halt. Out of the blue, so to speak. No prior warning.

Several media outlets reported the Government had had a change of heart and was dropping the policy altogether. I have seen nothing from Ministers speaking on the record which would justify that conclusion so unless there was lobby briefing to the contrary, I am at a loss to explain why journalists picked up the story in that way.

In search of “coherence”, apparently

The principal justification offered by the Government was that they wanted the measures to protect children from pornography to be folded into or made “coherent” with their evolving thinking on the wider Online Harms Bill which they were preparing. The Secretary of State’s exact words were:

“It is important that our policy aims and our overall policy on protecting children from online harms are developed coherently… with the aim of bringing forward the most comprehensive approach possible to protecting children.

The Government have concluded that this objective of coherence will be best achieved through our wider online harms proposals…”

Certainly it is true Part 3 was enacted before the Government embarked on its larger odyssey, but the question of the role of porn publishers is quite discrete and particular. Part 3 simply insists commercial publishers of pornography take responsibility for ensuring kids cannot access their sites so easily. Whatever the Government might decide to do about social media sites or other online businesses, they are going to have to come back to this question. Everybody working in the field knows that.

Is it even remotely possible the Government will say, in effect, “Following a rethink we now believe commercial publishers of pornography can carry on as before. They will have no legal obligation to do anything to keep children off their sites”? I don’t think so.

The very next day

Matters did not rest as they were left on 16th October. The very next day in the House of Commons over a dozen MPs questioned the Minister for Digital, Matt Warman MP, about the shock announcement.

In his replies Mr Warman acknowledged that restricting children’s access to commercial pornography sites was “critically urgent” before going on to say “I am not seeking to make age verification (for pornography sites) line up with (the Online Harms Bill) timescale”.

If protecting children from commercial pornography was so “critical”, one has to wonder why the scheme was stopped on the eve of implementation. By their actions the Government ensured that children who could have been protected from seeing some truly shocking and harmful images will not be. It did not have to be that way.

Nevertheless, as we have seen, Warman did indicate that moving forward on age verification for commercial pornography sites need not be bound to the same timetable as the promised Online Harms Bill. That does give some grounds for optimism. Might the new age verification regime yet be brought forward sooner rather than later? It could be. It should be. It would be very easy to do. “All” it requires is for the Government to bring one more statutory instrument to Parliament and name a commencement date.

Yesterday in the Lords, winding up the debate, the Government gave assurances that they would be bringing forward “interim codes on online content and activity relating to terrorism and child sexual exploitation”. These are welcome but, at the risk of being repetitive, they do not address the responsibility of commercial publishers of pornography to keep kids off their properties. Part 3 of the Digital Economy Act 2017 does precisely and only that. But on this the Government was silent (although they have promised a letter answering a number of questions raised in the debate which Ministers did not cover in the summing up).

Alternatively, if the Government believes there is a specific problem with Part 3 as originally envisaged, they should say what it is. There are various rumours but nothing definitive has emerged from Whitehall.

Perhaps there is a legal method or Parliamentary procedure which could be deployed to amend or add to what we already have in a way which would meet the Government’s concerns? Surely the Opposition Parties would happily facilitate such a course of action?

Posted in Age verification, Child abuse images, Default settings, Internet governance, Pornography, Regulation, Self-regulation

Is the cure worse than the disease?

In a blog focusing on the meaning of “privacy” in the modern world, Privacy International published an excellent summary of the key international instruments which address the subject. At the end they announce their conclusion:

Privacy is a qualified, fundamental human right. 

Note, they do not say privacy is an absolute right. That is borne out in all of the treaties and conventions to which Privacy International refers.

Yet look where we are headed with strong encryption.

We are creating what are, for practical purposes, impregnable or unreachable spaces. These confer impunity on any and all manner of wrongdoing. Paedophiles and persons who wish to exchange child sex abuse material are permanently shielded, as are terrorists and an infinite variety of scam artists.

The rule of law is being undermined

We are looking at a world where warrants and court verdicts lie mute, incapable of fulfilment. The rule of law is thereby being undermined.

Whereas previously a familiar cry, for example in respect of apparently illegal content, was that it should not be taken down without the say-so of a judge, the same voices now seem content to contemplate a situation where all judges are rendered impotent.

Thus, on top of the long-established challenges associated with the internet (scale, speed, jurisdiction and complexity), we are adding a whole new layer.

Attacking the problem from the wrong end

Obviously, I get that there has been an erosion of public confidence and trust both in political institutions and in online businesses. Moreover I am not against encryption (see my previous blog) but the way it is being rolled out in some areas is disproportionate. The cure is turning out to be worse than the disease.

Limiting the ability of companies themselves to detect and prevent behaviour which contravenes their own terms of service is wrong, and makes a mockery of the very idea of having terms of service in the first place.

Making it impossible for law enforcement agencies with proper authority to see the content of a message is likewise simply wrong.

Sending cannabis through the post

If I decide to open up a sideline selling cannabis could I legitimately enlist the Royal Mail to help my business prosper by delivering weed to my customers? Of course not.

There is no reasonable expectation of absolute privacy vis-à-vis the otherwise sacred and untouchable postal service. Postal services all over the world take reasonable and proportionate steps to ensure their systems are not being used to aid and abet crimes. They sniff, they scan, they x-ray and goodness knows what else.

Have people stopped using the post?

When it became known that this could happen did the mass of people abandon the postal system, outraged by this actual or potential encroachment of their right to privacy of communications? No. Neither would they desert Facebook Messenger if they knew that, only with proper authority and just cause, a message could be examined by the police or court officials.

But is it unreasonable to expect Facebook Messenger not to use strong encryption if all of its competitors do? That is a completely different question.

We really do need to call a halt and take a breath. Just because technologists have invented something it does not mean its use must become compulsory. Certain genies can be put back in the bottle if there is sufficient political will.

In respect of forms of encryption which preclude the possibility of scrutiny by anyone, the political will is growing. It needs an urgent push.

Posted in Apple, Child abuse images, Default settings, E-commerce, Internet governance, Privacy, Regulation, Self-regulation

A big “thank you” to Facebook

The essence of Facebook’s argument for changing Facebook Messenger and Instagram Direct Messaging from unencrypted to encrypted environments is that “all the other major messaging Apps are already encrypted or are going to be, so if we don’t do this it will hurt our business”.

In the various presentations I have heard from Facebook personnel, a more convincing or elevated motive has not yet surfaced. There was a discussion about changing patterns of messaging, a trend towards smaller groups and so on, but really it was pretty clear that while Facebook has usually been ahead of the curve, this time they were behind it. Encryption is now the fashion. They are going with it. Money beckons. It always does.

A much relished additional benefit of the company announcing its intention to encrypt is that, if Governments and various interest groups fight them over it, Facebook gets a unique opportunity to present itself as a champion of privacy. The chutzpah, the irony, takes your breath away. But let that pass. What counts is now, not then.

Actually, I think we kind of owe Facebook a big “thank you”. They revealed the scale of bad or illegal behaviour in messaging Apps. In 2018 Facebook Messenger made 12 million reports to NCMEC, the USA’s official body for receiving details of online child sex abuse material (csam).

In the same period how many reports were received from iMessage, the principal messaging App used on the Apple platform? 8. That is not 8 million. That is 8, as in the single digit representing two fewer than 10. What is the difference between Facebook Messenger and iMessage? The latter is already encrypted.

Isn’t the real and obvious question therefore “If Facebook offers us a glimpse of the potential scale of offending in an unencrypted messaging environment what might be happening in the encrypted ones?”

No one knows.

I am going to write another blog on this (soon). It will be slightly more discursive (that’s code for “longer”) but in the meantime I think we need to shift the focus away from what one company (Facebook) is doing, to what encryption as a whole is doing or threatens to do to the modern world.

I mean we now know with complete certainty that techno wizards have an endless capacity for two things: making gigantic sums of money and getting things wrong.

Isn’t it time for citizens and our elected representatives to step up and say “Hold on guys. You are about to take another misstep. This time we can see it before you. We don’t want to wait for the apology or the promise to ‘try harder to get it right next time’. Let’s slow things down a little. Take a breath.”

My instinct is to say every service that deploys strong encryption must be required also to maintain a means by which, with proper authority, e.g. a court order, the contents and metadata associated with any particular message can be made available in plain text to the court or another appropriate agency. And enough of the talk of “back doors”. No one I know wants them. Proper and transparent authority is what matters.

Moreover, encryption comes in many forms and has many uses, most of them wholly benign. No way should anyone express blanket opposition to all forms of encryption everywhere and always. But in the realm of mass messaging services open to the public we need to insist companies explore, for example, the possibility of deploying tools which can scrutinise a message or its content before it is encrypted. If alarm bells ring, appropriate action can be taken. Here I am thinking in particular about csam, but there could be other material of equal concern.
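
To make that concrete, here is a minimal sketch of what such pre-encryption screening might look like. Everything in it is illustrative: KNOWN_HASHES and fingerprint are hypothetical stand-ins for a real perceptual-hash tool such as PhotoDNA, whose inner workings are not public.

    import hashlib

    # Hypothetical set of fingerprints of known csam, of the kind a body
    # such as NCMEC or the IWF could distribute to participating services
    # (left empty here).
    KNOWN_HASHES = set()

    def fingerprint(attachment):
        # A real deployment would use a perceptual hash, which survives
        # resizing and re-encoding; SHA-256 is a stand-in that only
        # matches exact copies of a known file.
        return hashlib.sha256(attachment).hexdigest()

    def clear_to_encrypt(attachment):
        # Runs on the sender's device BEFORE encryption, so the message
        # never leaves the device in plain text.
        if fingerprint(attachment) in KNOWN_HASHES:
            # Alarm bells: block the send and generate a report instead.
            return False
        return True

The point of the design is where the check happens: on the device, before anything is encrypted, so screening for known illegal material and strong encryption of the message in transit are not mutually exclusive.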

I understand there are flavours of strong encryption, and ways of managing strong encryption, which lend themselves more easily to the possibility of “peering in” to the encrypted tunnel to detect criminal behaviour. If that is true, why would anyone want to use a flavour or a method that makes that impossible or appreciably harder?

Industry and Governments have created the climate or conditions that are fuelling the demand for encryption. We must not allow that climate to threaten the rule of law and neither should we allow it to put children in danger.

Posted in Child abuse images, Privacy, Regulation, Self-regulation

More shocking insights

Here’s another great piece from the New York Times. It’s the third in the series. The headline tells you what the focus is this time around.

Video Games and Online Chats are ‘Hunting Grounds’ for Sexual Predators

Children are the prey.

Once more the quality and depth of the research shines through, as does the amount of time and other resources the reporters devoted to smoking out the truth, talking to victims, their parents, law enforcement agencies and the companies themselves.

Internet usage among children in lots of countries is marching towards 100% for four-year-olds upwards. One in three of all internet users in the world is a child. This rises to more than one in two in some places. Whatever else we might imagine, want or believe the internet to be, it is unquestionably also a medium for children and families.

Paedophiles go where children go. Every internet company should have that fact always at the front of mind. Very obviously, right now it isn’t.

Too many companies are hiding behind the laws which give them immunity from civil and criminal liability. In fact the immunity creates a legal incentive for them to sit back. If the platforms did not have that immunity, the services causing the problems for children would be very different and almost certainly a lot safer.

You need to know who your customers are

On top of the immunity, and part of the larger problem, is the fact that the same platforms are under no obligation to know who their customers are or to verify any of the information those customers provide about themselves. It’s a lethal cocktail.

The platforms collect enough data to serve ads but not enough to keep children safe. They need to take greater responsibility for knowing who their customers are so that, if something bad happens, the suspected wrongdoers can, with proper authority, be swiftly and inexpensively identified. “Swiftly” and “inexpensively” are the key words there. If we can establish a new culture of accountability online, crimes against children will fall.

People who object to this idea cite the existence of totalitarian states as the reason why we need to defend the status quo.

Political problems in some parts of the world are therefore being used as a pretext for doing nothing, or too little, in all parts of the world. Children are a sacrifice the objectors are willing to make. Not me.

And of course, the supreme irony is that the people who benefit most from this are the shareholders of the very companies that created the problem in the first place. They created “surveillance capitalism” and stood by, or actively aided and abetted, as it predictably morphed into a weapon of the “surveillance state”, vastly increasing the powers of oppressive regimes.

But not all regimes are oppressive. A great many do adhere to the Rule of Law. They do honour all the important human rights laws.

It is impossible to engage with someone who believes you cannot distinguish between the Governments of, say, Norway and North Korea.

Good news from the south

I have never met Annie McAdams but obviously we are soulmates. McAdams is a personal injury lawyer from the Lone Star State. She is trying to use product liability law as a way of subverting the immunities the platforms have enjoyed hitherto. She has cases going in California, Georgia, Missouri and dear old Texas itself.

Facebook is fighting them. But then they would, wouldn’t they?

Posted in Default settings, Internet governance, Privacy, Regulation, Self-regulation

More evidence about the dangers to children posed by encryption

Earlier this week the NSPCC published the results of a series of Freedom of Information requests it made to the police in England and Wales (so not the whole of the UK).

They asked the police how many cases they had dealt with in the past year involving online grooming behaviour directed at a child, or the distribution of child sex abuse material, on Facebook, Instagram or WhatsApp. These services are all owned by one company: Facebook.

Only 32 of the 43 forces replied, and some of those said they did not know which platform or service had been used. Let that pass for now. These types of crime are under-reported anyway but, among the reported cases where the police did identify the platform or service, out of a total of 9,259 instances, 22% were on Instagram, 19% on Facebook or Facebook Messenger and 3% on WhatsApp. Together that is 44% of 9,259, roughly 4,070 cases, which works out at about 11 cases per day.
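
For anyone checking my arithmetic, the sums behind the eleven-a-day figure run as follows (using only the numbers above):

    # Instances where the police identified the platform or service
    total = 9_259

    # Combined share attributed to the three Facebook-owned services:
    # Instagram (22%) + Facebook/Messenger (19%) + WhatsApp (3%)
    facebook_owned_share = 0.22 + 0.19 + 0.03

    cases_per_year = total * facebook_owned_share
    print(round(cases_per_year))        # about 4,074 cases a year
    print(round(cases_per_year / 365))  # about 11 cases per day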

WhatsApp is already encrypted. That probably accounts for the relatively low percentage but Facebook have announced they intend to encrypt everything on all three services.

What did the NSPCC have to say about this?

“Instead of working to protect children and make the online world they live in safer, Facebook is actively choosing to give offenders a place to hide in the shadows and risks making itself a one stop grooming shop.

“For far too long Facebook’s mantra has been to move fast and break things but these figures provide a clear snapshot of the thousands of child sex crimes that could go undetected if they push ahead with their plans unchecked.

“If Facebook fails to guarantee encryption won’t be detrimental to children’s safety, the next Government must make clear they will face tough consequences from day one for breaching their Duty of Care.”

I could hardly have put it better myself.

Posted in Child abuse images, Privacy, Regulation, Self-regulation

Let there be light

In September the New York Times produced the first in a series of articles in which they focused on the internet industry’s response to the explosive growth in the detection of online child sex abuse material (csam).

They started with statistics supplied by the National Center for Missing and Exploited Children (NCMEC). In 1998 NCMEC received 3,000 reports of csam. In 2018 the number was 18.4 million, referencing 45 million csam still pictures and videos.

We were informed in a later article that in 2013 fewer than 50,000 csam videos had been reported, whereas in 2018 the figure was 22 million. Video has been the major area of growth. The Canadian Centre for Child Protection and the UK’s Internet Watch Foundation have witnessed similar increases.

Shocking though these numbers are, probably what they demonstrate is simply the increased proactive deployment and effectiveness of tools used to detect csam by a comparatively small number of internet companies.

However, what the New York Times articles principally showed was the inadequacy of the wider internet industry’s response and indeed the inadequacy of the response of some of the industry’s leading actors. We have been led up the garden path.

If child safety and security were really embedded in a company’s culture, stories of the kind published by the New York Times would simply not be possible. Yet they have been appearing for years, if never before with such forensic detail.

The Technology Coalition

In 2006 the Technology Coalition was established. Here is its stated mission:

Our vision is to eradicate online child sexual exploitation. We have invested in collaborating and sharing expertise with one another, because we recognize that we have the same goals and face many of the same challenges.

This is the standard rubric. You hear it all the time. From everybody. It isn’t true.

Child Abusers Run Rampant as Tech Companies Look the Other Way

That was the headline for the second article in the New York Times series. It completely blows away the facade of an energetic, purposeful collective industry drive to get rid of csam from the internet.

Here are some extracts from the piece:

The companies have the technical tools to stop the recirculation of abuse imagery by matching newly detected images against databases of the material. Yet the industry does not take full advantage of the tools.

Specifically, we were told:

The largest social network in the world, Facebook, thoroughly scans its platforms, accounting for over 90 percent of the imagery flagged by tech companies last year, but the company is not using all available databases to detect the material… (emphasis added).

Apple does not scan its cloud storage… and encrypts its messaging app, making detection virtually impossible. Dropbox, Google and Microsoft’s consumer products scan for illegal images, but only when someone shares them, not when they are uploaded.

…other companies, including… Yahoo (owned by Verizon), look for photos but not videos, even though illicit video content has been exploding for years.

According to the Times:

There is no single list of hashes of images and videos all relevant companies can use.

Google and Facebook developed tools for detecting csam videos. They are incompatible. A plan to create a process for sharing video “fingerprints” (hashes to speed up detection) seemingly has “gone nowhere”.
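
The underlying problem is simple to illustrate. Two services that fingerprint the same video with different algorithms end up with digests that cannot be compared, so neither can check its detections against the other’s database. A toy sketch, using ordinary cryptographic hashes as stand-ins for the companies’ proprietary video-matching tools:

    import hashlib

    video = b"the same video file, encountered by two different services"

    digest_a = hashlib.sha256(video).hexdigest()  # service A's fingerprint scheme
    digest_b = hashlib.md5(video).hexdigest()     # service B's fingerprint scheme

    # Identical content, but the fingerprints have nothing in common: without
    # a shared standard (or a common hash list), A's database is useless to B.
    print(digest_a == digest_b)  # False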

There’s more

Tech companies are far more likely to review photos and videos and other files on their platforms for… malware detection and copyright enforcement. But some businesses say looking for abuse content is different because it can raise significant privacy concerns.

Amazon, admittedly not a member of the Technology Coalition but the world’s largest provider of cloud services, scans for nothing.

A spokesman for Amazon… said that the “privacy of customer data is critical to earning our customers’ trust”… Microsoft Azure also said it did not scan for the material, citing similar reasons.

At some point it will be interesting to deconstruct what “customers’ trust” really means.

And we know all this because…

How did we learn all this? Did it emerge as the result of open declarations by tech companies? Obviously not. Following careful analysis by a dedicated team of academics? No. Has the truth been exposed by a law enforcement body, an NGO or a governmental agency that finally decided omertà was not in the public interest? No.

We have gained these insights because the management of the New York Times decided to give two journalists, Michael Keller and Gabriel Dance, the space and the resources to pursue a self-evidently important story.

I met these guys for the first time when I visited the New York Times offices last Monday. However, I had first spoken to them in June. They had been investigating csam since February, flying around (literally), talking with a multitude of people, piecing things together from on the record and off the record sources.

It was a gigantic effort that made a commensurate splash on the paper’s front page. It seems to be having the desired effect.

A letter from five Senators

One immediate consequence of the New York Times articles emerged last week when five US Senators (two Democrats, three Republicans) wrote an impressively detailed letter to thirty-six tech companies. These include all members of the Technology Coalition and plenty more besides. The Senators want answers by 4th December.

Let’s see how the companies respond. The letter contains all the right questions. They are precisely the kind tech businesses should be legally required to answer. Once the UK election is over, let’s hope we can move swiftly to establish a strong regulator who can ask them, confident they will receive truthful replies. Any hesitation or refusal by the US companies to respond to the Senators’ letter will only add to a sense of urgency here.

The New York Times has helped children the world over

Children the world over owe Keller and Dance and their bosses a lot but it is little short of scandalous that it took a newspaper to let in the light. Where is the public interest body that has the resources and the ability to track and report consistently over time on matters of this kind? It does not exist. It should.

I have been arguing for ages that we need a Global Observatory, among other things to do routinely what the New York Times has just done as a one-off. Somewhere there needs to be a properly resourced independent agency that has children’s interests at its heart and the high tech industries in its sights. But such a body needs to be sustainable over time. That is a big and expensive thing to do. I’m going to have another go at doing it.

Posted in Child abuse images, Internet governance, Regulation, Self-regulation

Good news from the EU

I have just heard that the Finnish Presidency of the EU has decided NOT to press for a resolution of the ePrivacy Regulation before the end of the year. I think we can say “hurrah!” This means the status quo remains, i.e. there is no objection to the continued use of PhotoDNA for known images, or of other tools with similar technology-based child protection aims, in the messaging space. When the process starts again in the New Year, with a new Commission, we will need to dig in and make sure it does not go off the rails again.

In the medium to longer term we also need to address the evident lack, among privacy lawyers and privacy professionals, of an understanding of how different parts of the technology space can affect young people’s health and safety.

Posted in Child abuse images, Default settings, Internet governance, Privacy, Regulation, Self-regulation