More shocking insights

Here’s another great piece from the New York Times. It’s the third in the series. The headline tells you what the focus is this time around.

Video Games and Online Chats Are ‘Hunting Grounds’ for Sexual Predators

Children are the prey.

Once more the quality and depth of the research shine through, as does the amount of time and other resources the reporters devoted to smoking out the truth: talking to victims, their parents, law enforcement agencies and the companies themselves.

Internet usage among children in many countries is marching towards 100% from age four upwards. One in three of all internet users in the world is a child, rising to more than one in two in some places. Whatever else we might imagine, want or believe the internet to be, it is unquestionably also a medium for children and families.

Paedophiles go where children go. Every internet company should keep that fact at the front of its mind. Very obviously, right now it isn’t.

Too many companies are hiding behind the laws which give them immunity from civil and criminal liability. In fact, the immunity creates a legal incentive for them to sit back. If the platforms did not have that immunity, the services causing the problems for children would be very different and almost certainly a lot safer.

You need to know who your customers are

On top of the immunity, and part of the larger problem, is the fact that the same platforms are under no obligation to know who their customers are or to verify any of the information customers provide about themselves. It’s a lethal cocktail.

The platforms collect enough data to serve ads but not enough to keep children safe. They need to take greater responsibility for knowing who their customers are so that, if something bad happens, suspected wrongdoers can, with proper authority, be swiftly and inexpensively identified. “Swiftly” and “inexpensively” are the key words there. If we can establish a new culture of accountability online, crimes against children will fall.

People who object to this idea cite the existence of totalitarian states as the reason why we need to defend the status quo.

Political problems in some parts of the world are therefore being used as a pretext for doing nothing, or too little, in all parts of the world. Children are an unfortunate price they are willing to pay. Not me.

And of course, the supreme irony is that the people who benefit most from this are the shareholders of the very companies that created the problem in the first place. They created “surveillance capitalism” and stood by, or actively aided and abetted, as it predictably morphed into a weapon of the “surveillance state”, vastly increasing the powers of oppressive regimes.

But not all regimes are oppressive. A great many do adhere to the Rule of Law. They do honour all the important human rights laws.

It is impossible to engage with someone who believes you cannot distinguish between the Governments of, say, Norway and North Korea.

Good news from the south

I have never met Annie McAdams but obviously we are soulmates. McAdams is a personal injury lawyer from the Lone Star State. She is trying to use product liability law as a way of subverting the immunities the platforms have enjoyed hitherto. She has cases going in California, Georgia, Missouri and dear old Texas itself.

Facebook is fighting them. But then they would, wouldn’t they?

Posted in Default settings, Internet governance, Privacy, Regulation, Self-regulation

More evidence about the dangers to children posed by encryption

Earlier this week the NSPCC published the results of a series of Freedom of Information requests it made to the police in England and Wales (so not the whole of the UK).

They asked the police how many cases they had dealt with in the past year that involved online grooming behaviour directed at a child, or the distribution of child sex abuse material, on Facebook, Instagram or WhatsApp. These services are all owned by one company: Facebook.

Only 32 out of 43 forces replied, and some of those said they did not know which platform or service had been used. Let that pass for now. These types of crimes are under-reported anyway but, among the reported cases where the police did identify the platform or service, out of a total of 9,259 instances 22% were on Instagram, 19% on Facebook or Facebook Messenger and 3% on WhatsApp. Across Facebook’s three services, that works out at roughly 11 cases per day.
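
The arithmetic behind that headline number is easy to miss, so here is a quick back-of-envelope check, on the assumption (mine, not a method the NSPCC spelled out) that the 11-a-day figure refers to the combined share of the three Facebook-owned services:

```python
# Back-of-envelope check of the NSPCC figures quoted above.
# Assumption: "11 cases per day" refers to the combined share of the three
# Facebook-owned services out of the 9,259 recorded instances in one year.
total_instances = 9_259
facebook_share = 0.22 + 0.19 + 0.03  # Instagram + Facebook/Messenger + WhatsApp

facebook_cases = total_instances * facebook_share  # about 4,074 cases
per_day = facebook_cases / 365                     # about 11.2 cases per day
print(f"{facebook_cases:.0f} cases on Facebook-owned services, "
      f"roughly {per_day:.0f} per day")
```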

WhatsApp is already encrypted. That probably accounts for its relatively low percentage, but Facebook have announced they intend to encrypt everything on all three services.

What did the NSPCC have to say about this?

“Instead of working to protect children and make the online world they live in safer, Facebook is actively choosing to give offenders a place to hide in the shadows and risks making itself a one stop grooming shop.

“For far too long Facebook’s mantra has been to move fast and break things but these figures provide a clear snapshot of the thousands of child sex crimes that could go undetected if they push ahead with their plans unchecked.

“If Facebook fails to guarantee encryption won’t be detrimental to children’s safety, the next Government must make clear they will face tough consequences from day one for breaching their Duty of Care.”

I could hardly have put it better myself.

Posted in Child abuse images, Privacy, Regulation, Self-regulation

Let there be light

In September the New York Times produced the first in a series of articles in which they focused on the internet industry’s response to the explosive growth in the detection of online child sex abuse material (csam).

They started with statistics supplied by the National Center for Missing and Exploited Children (NCMEC). In 1998 NCMEC received 3,000 reports of csam. 2018’s number was 18.4 million, referencing 45 million still pictures and videos of csam.

We were informed in a later article that in 2013 fewer than 50,000 csam videos had been reported, whereas in 2018 the figure was 22 million. Video has been the major area of growth. The Canadian Centre for Child Protection and the UK’s Internet Watch Foundation have witnessed similar increases.

Shocking though these numbers are, what they probably demonstrate is simply the increased proactive deployment, and the effectiveness, of the csam detection tools used by a comparatively small number of internet companies.

However, what the New York Times articles principally showed was the inadequacy of the wider internet industry’s response and indeed the inadequacy of the response of some of the industry’s leading actors. We have been led up the garden path.

If child safety and security were really embedded in a company’s culture, stories of the kind published by the New York Times would simply not be possible. Yet they have been appearing for years, if never before with such forensic detail.

The Technology Coalition

In 2006 the Technology Coalition was established. Here is its stated mission:

Our vision is to eradicate online child sexual exploitation. We have invested in collaborating and sharing expertise with one another, because we recognize that we have the same goals and face many of the same challenges.

This is the standard rubric. You hear it all the time. From everybody. It isn’t true.

Child Abusers Run Rampant as Tech Companies Look the Other Way

That was the headline for the second article in the New York Times series. It completely blows away the facade of an energetic, purposeful collective industry drive to get rid of csam from the internet.

Here are some extracts from the piece:

The companies have the technical tools to stop the recirculation of abuse imagery by matching newly detected images against databases of the material. Yet the industry does not take full advantage of the tools.

Specifically, we were told:

The largest social network in the world, Facebook, thoroughly scans its platforms, accounting for over 90 percent of the imagery flagged by tech companies last year, but the company is not using all available databases to detect the material… (emphasis added).

Apple does not scan its cloud storage… and encrypts its messaging app, making detection virtually impossible. Dropbox, Google and Microsoft’s consumer products scan for illegal images, but only when someone shares them, not when they are uploaded.

…other companies, including… Yahoo (owned by Verizon), look for photos but not videos, even though illicit video content has been exploding for years.

According to the Times:

There is no single list of hashes of images and videos all relevant companies can use.

Google and Facebook have each developed tools for detecting csam videos, but the tools are incompatible. A plan to create a process for sharing video “fingerprints” (hashes that speed up detection) seemingly has “gone nowhere”.
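
For readers unfamiliar with what “matching against databases” involves, here is a minimal sketch of hash-based screening. It is an illustration only, assuming a hypothetical shared file of known-image hashes (known_hashes.txt): production systems use perceptual hashes such as Microsoft’s PhotoDNA, which also match visually similar copies, whereas the cryptographic hash used below catches only exact byte-for-byte duplicates.

```python
import hashlib
from pathlib import Path

def load_known_hashes(path: str) -> set[str]:
    """Load a hypothetical shared database of known-image hashes,
    one hex digest per line."""
    return {line.strip() for line in Path(path).read_text().splitlines()
            if line.strip()}

def is_known_image(image_path: str, known_hashes: set[str]) -> bool:
    """Hash the uploaded file and check it against the shared list.
    SHA-256 stands in here for a perceptual hash like PhotoDNA."""
    digest = hashlib.sha256(Path(image_path).read_bytes()).hexdigest()
    return digest in known_hashes

# Screen a newly uploaded file against the shared list.
known = load_known_hashes("known_hashes.txt")
if is_known_image("upload.jpg", known):
    print("Match found: block the file and report it.")
```

The incompatibility the Times describes amounts to each company maintaining its own fingerprint format and its own list, so a match on one service means nothing to another.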

There’s more:

Tech companies are far more likely to review photos and videos and other files on their platforms for… malware detection and copyright enforcement. But some businesses say looking for abuse content is different because it can raise significant privacy concerns.

Amazon, admittedly not a member of the Technology Coalition but the world’s largest provider of cloud services, scans for nothing.

A spokesman for Amazon… said that the “privacy of customer data is critical to earning our customers’ trust”… Microsoft Azure also said it did not scan for the material, citing similar reasons.

At some point it will be interesting to deconstruct what “customers’ trust” really means.

And we know all this because…

How did we learn all this? Did it emerge as the result of open declarations by tech companies? Obviously not. Following careful analysis by a dedicated team of academics? No. Was the truth exposed by a law enforcement body, NGO or governmental agency that finally decided omertà was not in the public interest? No.

We have gained these insights because the management of the New York Times decided to give two journalists, Michael Keller and Gabriel Dance, the space and the resources to pursue a self-evidently important story.

I met these guys for the first time when I visited the New York Times offices last Monday, although I had first spoken to them in June. They had been investigating csam since February, flying around (literally), talking with a multitude of people, piecing things together from on-the-record and off-the-record sources.

It was a gigantic effort that made a commensurate splash on the paper’s front page. It seems to be having the desired effect.

A letter from five Senators

One immediate consequence of the New York Times articles emerged last week when five US Senators (two Democrats, three Republicans) wrote an impressively detailed letter to thirty-six tech companies. The recipients include all members of the Technology Coalition and plenty more besides. The Senators want answers by 4th December.

Let’s see how the companies respond. The letter contains all the right questions. They are precisely the kind of questions tech businesses should be legally required to answer. Once the UK election is over, let’s hope we can move swiftly to establish a strong regulator who can ask them, confident of receiving truthful replies. Any hesitation or refusal by the US companies to respond to the Senators’ letter will only add to the sense of urgency here.

The New York Times has helped children the world over

Children the world over owe Keller and Dance and their bosses a lot, but it is little short of scandalous that it took a newspaper to let in the light. Where is the public interest body with the resources and the ability to track and report consistently, over time, on matters of this kind? It does not exist. It should.

I have been arguing for ages that we need a Global Observatory, among other things to do routinely what the New York Times has just done as a one-off. Somewhere there needs to be a properly resourced independent agency that has children’s interests at its heart and high tech industries in its sights. But such a body needs to be sustainable over time, and that is a big and expensive thing to do. I’m going to have another go at doing it.

Posted in Child abuse images, Internet governance, Regulation, Self-regulation

Good news from the EU

I have just heard the Finnish Presidency of the EU has decided NOT to press for a resolution of the ePrivacy Regulation before the end of the year. I think we can say “hurrah!” This means the status quo remains, i.e. no objection to the continued use of PhotoDNA for known images, or of other tools with similar technology-based child protection aims, in the messaging space. When the process starts again in the New Year, with a new Commission, we will need to dig in and make sure it does not go off the rails again.

In the medium to longer term we also need to address the evident lack, among privacy lawyers and privacy professionals, of an understanding of how different parts of the technology space can affect young people’s health and safety.

Posted in Child abuse images, Default settings, Internet governance, Privacy, Regulation, Self-regulation

The bad news just keeps on coming

Hard on the heels of the New York Times piece, the UK’s Daily Telegraph has published, for the first time, figures provided by the IWF showing how social media and other services operating on the open web in the UK are failing to deal with csam. Twitter is the worst offender.

Posted in Child abuse images, Regulation, Self-regulation

Voluntarism just isn’t working. Again.

Another excellent major story has appeared in The New York Times. It details the failure of high tech businesses to address the scourge of child sex abuse still images and videos on the internet. The failures of voluntarism are again manifest.

If child safety and security were really embedded in a company’s culture, if it were indeed a top priority, stories like these would simply not be possible. Yet they have been appearing for years.

Truly, when technology companies cannot get this right, what confidence does it inspire in their ability to get anything right?

Too many companies appear only to shift when a judge or smart journalists finally nail them. The NYT deserves a medal for giving these journalists, Gabriel Dance and Michael Keller, the space and resources to pursue the story, which they have been doing for a great many months.

The only parallel I can think of in the UK is the support given by The Guardian to Carole Cadwalladr’s reporting on Cambridge Analytica and, earlier, to the work done around the Snowden revelations. Why is British journalism in this state? Because high tech businesses have been winning the advertising revenues that previously supported solid journalism. What you might call an unvirtuous circle.

Historically, journalists have been an important pillar of democracy, but if tech neuters or reduces the capacity of journalism, where does that leave us? Who benefits?

Posted in Child abuse images, Facebook, Google, Microsoft, Regulation, Self-regulation

A sad sequel

I have every sympathy for Nicky Morgan MP who, last night, announced she was standing down from Parliament, essentially because she has had enough of being a politician. The abuse and the toll it was taking on her family life were the principal reasons given. I completely get that. However, only two weeks ago the same Nicky Morgan, as Secretary of State at DCMS, announced a delay in implementing age verification for commercial pornography web sites.

Children who could have been protected from viewing horrific, abusive images will now be exposed to them for maybe a further year or two, no one knows; at any rate, for longer than need have been the case.

That is entirely down to Nicky Morgan, and now she is walking away. Something about that does not feel right. In the forthcoming General Election campaign I look forward to hearing Boris Johnson’s views on children’s exposure to pornography. How will he explain and justify his Government’s decision not to act when everything was ready to go?

Posted in Age verification, Pornography, Regulation, Self-regulation