Warriors win. Children win

Three days ago, on Wednesday 9th June, the Canadian Centre for Child Protection (C3P) published a brilliant report containing details of their work over the two-year period 2018-2020. I blogged about it the next day.

In the two years covered in their report we learned C3P had seen and verified 5.4 million child sex abuse images and, in respect of them, issued take down notices to over 760 Electronic Service Providers in all parts of the world. 97% of all the images were hosted on the clear web, not hidden away anywhere, and 48% of the notices issued related to images which had already been flagged at least once to the hosting company concerned.

We also learned, inter alia, the Canadians identified a single telecoms company, based in France, responsible for fully 48% of all the child sex abuse material referenced. They named the company. It is called “Free”.

On Thursday 10th June Forbes published an article based on the Canadian report. In the Forbes story they named French billionaire Xavier Niel and published his picture while informing us he owned 70% of Free’s parent group, a company called Iliad.

Free had hosted 1.1 million csam files in 2018-2020. These had triggered 2.7 million notices. The most likely root cause of the problem was Free’s policy of allowing anonymous users. Obviously there was no suggestion Niel personally or senior executives were aware of any of this, but that hardly constitutes a defence when you think of the pain and misery their lack of diligence had caused and continued to cause over many years.

Yesterday, 11th June I received an email from Wonder Woman, whose real world identity is CEO of C3P. Not many people know that so keep it to yourselves.

In the email I was informed:

“As of yesterday, Free’s file-hosting service no longer allows anonymous users to upload content — only Free account holders have access to the service now. We believe this has effectively eliminated this service as a means of online CSAM distribution. In addition to this, all 6,500 archive files, containing more than 2 million images and over 35,000 videos, that were still live prior to the release of the report have been deleted from their file-hosting service.”

What can I say?

Could someone remind me why it is important to “play nice” and refrain from naming and shaming? Yet again we see truth is our most important weapon and truth loses its potency, power and purpose if it is kept hidden away.

This is not just a feather in the cap of C3P. It’s a whole Golden Eagle. And well done Forbes for riding shotgun. Children in general and survivors in particular are forever in the debt of both.

Posted in Child abuse images, Internet governance, Regulation, Self-regulation

Canada puts another nail in the coffin of self-regulation

The Canadian Centre for Child Protection (C3P) is justly famous for many things. One of them is the quality of their research. It is based on two vital, interdependent pillars.

First is a deep understanding of the position and needs of survivors of child sex abuse, particularly those who have had the additional misfortune of appearing in a picture or video depicting their abuse which has then been circulated online.

Second is C3P’s exceptionally strong grounding in the technologies used to make, publish and distribute child sex abuse material (csam) over all parts of the internet.

More evidence of C3P’s top class work became available yesterday when they published their long-awaited report: “Project Arachnid: Online availability of child sex abuse material”. Its nearly 60 pages do not make easy reading (there is an Executive Summary as well) but it is essential reading for anyone engaged in trying to rid the world of the scourge of csam.

The period covered is 2018-2020. Not a vast span, but that only serves to underline the scale of what we are facing. And when you look at the report, see what C3P say about their backlog. Scary stuff.

Enormous numbers

In the two years under review C3P examined and verified 5.4 million images and issued take down notices to over 760 electronic service providers (ESPs) in all parts of the world.

What are you going to do, Monsieur le Président?

Astonishingly, C3P found that fully 48% of all the material identified was linked to a single French telecommunications company. The G7 is starting in the UK today. President Macron will be there. I wonder if any journalists will tackle him on this and, if so, what he will say. I expect he will be absolutely horrified because there is no doubt his Administration has been making several moves in the right direction and we expect to see even more.

A dark web problem? Emphatically not

You see all kinds of people rolling their eyes and talking about the dark web, encryption and a variety of subterranean exotica as if they were therefore already resigned to being powerless to do anything about csam. But the unvarnished truth is 97% of the csam detected by C3P was on the clear web. So, far from being intractable, online csam is highly tractable. What has been missing is the will on the part of far too many ESPs.

And a massive number of repeats

Perhaps even more shockingly, 48% of all the take down notices issued by C3P related to images which they had already flagged as illegal to the same provider. That is truly shameful because the technology exists, and is widely available, which would allow any ESP to detect already known images and prevent them ever seeing the light of day again, at least not on a publicly accessible web page. Table 7 of the report (p 38) shows “recidivism” rates going up, not down. And clock Table 7.3 for the names of the companies involved.

Why don’t more companies use the technology that would allow them to detect csam in milliseconds? Because they don’t have to. No law requires it and this, maybe more than anything else, reminds us why self-regulation – voluntarism – has had its day.
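
For the avoidance of doubt about what detection “in milliseconds” involves, here is a minimal sketch of a known-image check run at upload time. It is my own illustration, not C3P’s or any vendor’s code; production systems use perceptual hashes such as PhotoDNA, which also match re-encoded or lightly altered copies, whereas the exact-match cryptographic hash below only catches identical files.

    import hashlib

    # Hash list of already-verified csam, as distributed to ESPs by hotlines
    # such as C3P, NCMEC or the IWF. Real deployments use PhotoDNA-style
    # perceptual hashes; SHA-256 here is a simplification.
    known_hashes: set[str] = set()  # populated from a hotline's hash list

    def screen_upload(file_bytes: bytes) -> bool:
        """True = already-known illegal image: block the upload, file a report."""
        return hashlib.sha256(file_bytes).hexdigest() in known_hashes

A set-membership test like this completes in microseconds, even against millions of known hashes, which is why “milliseconds” is no exaggeration.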

Too slow, too slow

C3P says that, following the issue of a take down notice, the median removal time for the item concerned is less than 24 hours. Bravo! But in 10% of cases it took more than seven weeks for the illegal material to go.

That is utterly unacceptable and again is a product of voluntarism. And by the way, it seems the delays are longest where the images concerned involve older adolescents. This conjures up several unpleasant thoughts about non-expert sysadmins second-guessing child protection experts while leaving a sexually abusive image of a child on view until they conclude their own internal interrogation. Not on.

Change is gonna come

From page 48 onwards, C3P’s recommendations will be instantly recognisable to children’s advocates in the UK, the EU and many other parts of the world.

Big Tech obstructionism created space for too many bad guys to flourish

Any and every major thinker in the internet space has known this moment would arrive – the end of voluntarism – but too many industry leaders were determined to drag things out as long as possible to keep the mountains of cash rolling in. It was this obstructionism by Big Tech and their deliberate delaying tactics which created the space for myriad unsavoury characters to hide in the shadows. Until now. Thank you C3P. Keep it up.

Posted in Child abuse images, Internet governance, Regulation, Self-regulation

Facebook must change its mind

Over 90% of the reports shown in the National Center for Missing and Exploited Children’s graph below came from Facebook-owned properties, principally Facebook Messenger and Instagram Direct. Yet the company intends to avert its eyes from these Apps. It will do so by introducing strong encryption.

Facebook does not claim the act of closing its eyes will have any impact on the level of illegal activity. They acknowledge as much, although if anything one might expect the level of illegal behaviour to increase as more people realise they can never be caught.

To excuse or justify their intentions, Facebook are saying two main things.

One is that a lot of the publishing of the images which are then reported to NCMEC is not being done by “bad people”. Rather it is being done by people who are expressing their outrage or disgust, or who think it is some kind of joke.

Hmm. These are not defences known to law and for the victims it makes zero difference. Whether or not people should be arrested and prosecuted for a lapse in good taste or for being outraged or disgusted is a separate question. What never changes is the urgency of getting the images removed and stopping them from being circulated further.

Facebook will not be able to do that if it blinds itself.

The second is that Facebook says it will take other steps to identify miscreants and intervene to kick them off the platform, e.g. through analysing metadata and patterns of behaviour. Shouldn’t they be doing that anyway? It is not an alternative because it does not get around the fundamental point. Whereas now Facebook can see things, in future….

There may also have been a riff in a minor key about the number of duplicate reports, somehow implying the problem isn’t as large as it might otherwise first seem. Wrong. But anyway see the numbers in the graph.

Through their own appalling behaviour Facebook substantially created what they say is people’s new desire for greater privacy. The company has come up with the wrong answer. It is an answer which inevitably will harm children.

They need to find a better one. If they don’t I am afraid I for one will leave Facebook and all its Apps. It will be inconvenient, at least for a while, but I doubt I will be alone and I think it will become very difficult for children’s advocates or groups to continue thinking of or describing Facebook as a “partner”.

As the Victorians used to say, if it goes ahead Facebook will be “putting itself outside the boundaries of polite society” and no amount of largesse, grants or free flights to, and high-class hotels in, exotic locations, no PR front, will be able to cover that up.

I can see Facebook feel unjustly treated. They are uniquely exposed because of their size and because they have been transparent in the past, whereas other platforms have escaped similar scrutiny and criticism because they are smaller or because they already deploy strong encryption, so no reports, or fewer reports, reach NCMEC linked to their name. I get that. But it does not alter the basic facts.

Finally, let’s be clear: this decision will be taken by a single person. Mark Zuckerberg. I very much hope he makes the right one.

Posted in Apple, Child abuse images, Facebook, Regulation, Self-regulation

Government in a muddle over porn

On the morning of 11th May the Queen’s Speech was delivered and published. In the afternoon, Caroline Dinenage MP appeared before the Communications and Digital Committee of the House of Lords. Ms Dinenage is the Minister of State responsible for what has now been renamed the “Online Safety Bill”. In response to a question from Lord Lipsey, she said the following (scroll to 15.26.50):

“(the Bill) will protect children by not only capturing the most visited pornography sites but also pornography on social media sites.”

That is simply not true.

As currently drafted the Online Safety Bill applies only to sites or services which allow user interactivity, that is to say sites or services allowing interactions between users or allowing users to upload content. These are what are commonly understood to be social media sites or services. However, some of the “most visited pornography sites” either already do not allow user interactivity or they could easily escape the clutches of legislation written that way simply by disallowing it in the future. That would not affect their core business model in any significant way, if at all.

You could almost hear the champagne corks popping in Pornhub’s offices in Canada.

Now scroll forward to around 12.29.40, where the Minister also says:

“(according to research published by the BBFC in 2020) only 7% of children who accessed pornography did so through dedicated porn sites….even children intentionally seeking out pornography did so predominantly through social media.”

This too is simply untrue, as this table shows:

The above is taken from research conducted for the BBFC by Revealing Reality (and note what it says in the body of the report about children seeing porn online before they had reached the age of 11). Bear in mind the table shows the three key routes to children’s pornography access. They are neither exhaustive nor mutually exclusive. A child could have seen porn on or via a search engine, a social media site and a dedicated porn site. Or they may have seen porn on social media once, but be visiting Pornhub every day.

Other research published the week before the Queen’s Speech looked at the position of 16 and 17 year olds. It found that while 63% said they came across porn on social media, 43% said they had also visited porn web sites.

Part 3 of the Digital Economy Act 2017 principally addressed the “most visited pornography sites.” These are the commercial ones, the likes of Pornhub. In explaining why the Government did not implement Part 3 and now intended to repeal it, I was astonished to hear the Minister say it was down to Part 3 falling victim to the “speed of technological change” as it had not included social media sites.

Does the Minister truly believe the issue of porn on social media sites has only cropped up as a serious matter in the past four years or so? I’m almost tempted to say “if so, I give up”.

When the Digital Economy Bill was going through Parliament the children’s groups and others lobbied for social media sites to be included but the Government flatly refused to countenance it. I will not mention that, at the time Part 3 received Royal Assent, Boris Johnson was a Cabinet Minister in the Conservative Government of the day. Nor will I allude to what I believe are the real reasons why the Tories did not want to proceed with any form of restriction on online porn before the Brexit General Election was out of the way.

Secretary of State and Julie Elliott to the rescue

Two days after the Minister of State appeared in the Lords, the DCMS Select Committee of the House of Commons met with Secretary of State Oliver Dowden MP. In her contribution (scroll forward to 15:14.10) Julie Elliott MP got straight to the point and asked Mr Dowden to explain why the Government had chosen to exclude commercial pornography sites from the scope of the Bill.

The Secretary of State said he believed the biggest risk of children “stumbling” over pornography was via social media sites (see above) but, whether or not that is true, “stumbling” is not the only thing that matters here, particularly for very young children.

He also said he “believed” the “preponderance” of commercial pornography sites do have user generated content on them so therefore they would be in scope. I have never seen any evidence to support that proposition but see above. A few mouse clicks by the site’s owner could remove interactive elements. Revenues are likely to remain substantially unaffected and in one bound the porn merchants would free themselves of the cost and trouble of having to introduce age verification as the only meaningful way of restricting children’s access.

How could this happen?

Were the Minister of State and the Secretary of State poorly briefed or did they just not grasp and understand the briefs they were given? Whatever the explanation it is a remarkable state of affairs given how much attention this subject has received in the media and in Parliament over several years.

But the good news was Dowden said if a “commensurate” way could be found to include the kind of sites that were previously covered by Part 3 then he was open to accepting it. He reminded us that one might emerge from the joint scrutiny process which will shortly begin.

I am reaching for my commensurate pencil. I keep it in a special drawer.

Bravo Julie Elliott for getting the kind of clarity we all need.

Posted in Age verification, Facebook, Google, Internet governance, Pornography, Privacy, Regulation, Self-regulation

Move slow and leave things broken

On 20th December, 2020, a new EU-wide law took effect. An unintended consequence appeared to call into question the legality of companies continuing to look for child sexual abuse materials being exchanged over messaging platforms. It also appeared to call into question the legality of companies identifying images which were likely to be child sexual abuse material or behaviour suggesting a child might be being groomed for a sexual purpose. Companies had been doing some or all of this on a voluntary basis since 2009 so it was a bit of a bombshell.

Over 90% of all the reports pertaining to these kinds of threats to children that were made to the authorities in the USA originated either on Facebook Messenger or Instagram Direct but the child victims involved came from every corner of the globe. Messenger and Instagram have enormous reach.

Facebook, the company that owns both platforms, has one of the worst reputations among Big Tech for law breaking or for contesting the law if it does not suit their business model. But not here. At the stroke of midnight on 19th December, 2020, everything ground to a halt. Eventually this led to a 58% drop in reports coming from EU Member States. There was no suggestion any fewer children were being abused or exploited. The only difference now was the police had no possibility of intervening because the reports had dried up.

Other companies that had been in exactly the same boat did not stop looking. These included Google, Microsoft and Snapchat. Not exactly online minnows. Their level of reporting remained unaltered. As Sophie in’t Veld MEP pointed out during a debate in the European Parliament, after 20th December there was never any real legal risk attaching to any company carrying on protecting children as they had done before.

On 29th April the EU announced an “interim derogation” (temporary suspension to you and me) of the potentially troubling law. The expressly stated purpose of the suspension was to restore what everyone had understood to be the status quo ante.

Nearly two weeks on I enquired if Facebook had shown the same eager speed and determination to restart the measures they had abandoned so punctiliously.

The answer is not only “no they have not”, it is also the case that as of now they have no definite date by which they will. Why?

Several reasons were given to me. None of them are credible. First, they have not yet seen the final, official text. That needs to be endorsed and published in the Official Journal of the EU. However, like the rest of us they have seen the text that appeared on the EU’s official web site.

Then, seemingly, they need to study the official text to ensure they are meeting its terms.

After that the company’s engineers have to be engaged because, while turning it off could be done by flipping a switch, apparently turning it back on cannot.

Facebook’s unconvincing, ostensible turn to lawfulness, its abundance of caution, comes at a price. Children are paying it.

“Move slow and leave things broken”. How does that sound as a new company motto?

Have they really no shame?

Posted in Child abuse images, Facebook, Google, Internet governance, Regulation, Self-regulation

Great news for children coming out of Brussels

Last night in Brussels it was announced that a political agreement had been reached on the interim derogation. In plain English what that means is, pretty much immediately, we can go back to the position we all thought we were in on 19th December 2020 (the day before the new, bad e-Privacy law kicked in).

The suspension is operative for up to three years, during which time we will all need to roll up our sleeves to formulate a longer-term framework. Watch this space.

Thus, the EU has paved the way to allow companies to recommence scanning messaging platforms for child sexual abuse material, grooming behaviour and the use of “classifiers” to detect images not yet determined to be child sexual abuse material but likely to be.

This is a great outcome. A huge pat on the back is due to everyone who had a hand in it. It reflects the enormous amount of work done by many MEPs, Commissioners, Commission staff, children’s groups and child advocacy organizations across the world.

If I have drawn one major conclusion from this whole unfortunate episode it is this: children’s groups and children’s advocates need to engage more closely with privacy lawyers in particular and privacy activists in general.

I share many of the privacy community’s concerns and worries – I think we all do – but a handful of ideologically motivated individuals with a talent for catching media headlines showed they are not above resorting to outright lies and misinformation to achieve their desired end. In a world bedevilled with often quite intimidating legal and technical language, in a world of zero trust in Silicon Valley and declining trust in Governments, many people, too many people, fell for the scaremongering propaganda.

We cannot let that happen again. We need to find new and better ways to improve public and the media’s understanding of the issues because from that will flow a more grounded and sustainable understanding by policy makers.  Watch this space.

I won’t repeat everything in the Commission’s press release but here’s a first, quick look at the detail of last night’s announcement:

  1. The definition of what constitutes qualifying child sexual abuse material or activities is explicitly aligned with the 2011 Directive.
  2. Companies and organizations need to have an appeals mechanism to cater for potentially erroneous decisions. It would be strange if they didn’t already but hey.
  3. There needs to be “human oversight” of the processing of personal data. Potentially problematic given the scale on which the systems operate on larger platforms but it depends how one defines “human oversight”. Expressly there is no requirement for prior authorization before reports can be made or illegal content is taken down.
  4. The tech used needs to be the least privacy-intrusive. That should already be the case.
  5. Companies and organizations need to consult data protection authorities on how they work in this area and the European Data Protection Board will issue guidelines to assist the data protection authorities. Fine, as long as these guys stop thinking along tram lines and learn how to speak in a language the majority of us can understand. They should embrace a mission to explain and heighten public understanding of privacy and not allow themselves to be manipulated into a position where they become identified in the public mind as enemies of common sense who provide shelter to criminals who harm children (and others).
  6. A public register will be established of public interest organizations with which online service providers can share personal data. Sounds OK.  Presumably it will be aligned with the GDPR and changes in Europol’s mandate.
  7. Annual transparency and accountability reports will be required. Hugely important but it cannot be left to companies to mark their own homework. Proportionality will be important here, as it is everywhere else, but so is the idea that everyone can have confidence that the transparency and accountability reports speak the relevant truth and nothing but the relevant truth. I am too polite to repeat the story about how you grow mushrooms.
Posted in Child abuse images, Default settings, E-commerce, Facebook, Google, ICANN, Internet governance, Privacy, Regulation, Self-regulation

What can aeroplanes teach us?

The other day I was talking to the CEO of a tech company, expressing my frustration at the way scaremongering misinformation seems to have taken hold in relation to the way various child protection tools operate online.

We are talking about three types of tools:

PhotoDNA and similar tools detect known examples of child sex abuse material (csam). Every image in this category is by definition illegal and represents an egregious infringement of the right to privacy and human dignity of the child depicted.

The second type is the so-called classifier. Classifiers flag images which are likely to be csam.

The third type addresses grooming behaviour, that is to say behaviour which is likely to lead to a child being sexually abused.

In essence the misinformation circulating about each of these tools implies or expressly states they “scan” all private communications thereby creating the impression that, duplicitously and hiding behind the name of child protection, the police or the security services, the companies themselves, and goodness knows who else, are reading everything you send or receive as a message. Or they could, if that took their fancy.

The simple truth is if any illegal reading or examination of messages is taking place it has nothing whatsoever to do with any of the child protection tools I have mentioned.

Howsoever or wheresoever it originated, the miasma of falsehood enveloping the child protection tools is proving to be astonishingly tenacious. Why?

Like many conspiracy theories and other lies that get read and repeated over the internet, the smokescreen of misinformation has been able to take hold because it exploits an underlying lack of trust in or suspicion of “them”.  In this case “them” are some of the major actors in the drama: Big Tech, Governments, law enforcement and the security services.

But there is another set of actors playing an important role in this tragedy. I am referring to parts (stress: parts) of the tech community and privacy activists who think each of the interests I listed is as bad as the others. Noblesse oblige: they alone, therefore, have a self-proclaimed, unique responsibility to look out for the rest of us.

Anyone who objects or takes a different view is pitied, marginalized or completely ignored because they obviously don’t understand the complexities of the issues. It’s a kind of techno evangelical paternalism. “Forgive them for they know not what they do.”

And those aeroplanes?

Back to my CEO. He compared the emergence of the internet with the emergence of international air travel. Aeroplanes were unquestionably a new and revolutionary technology that changed the world. Initially air travel was the preserve of a small, rich elite but as technology advanced and prices fell it became a global industry which in turn fed and helped create a whole number of others, not the least of which was tourism.

Then came a prolonged spate of terrorist hijackings. These destroyed consumer confidence in air safety. Tourism collapsed, planes were empty or did not fly. Relatively rapidly the world got together and agreed international standards and systems to make air travel safer. Did it stop all terrorist hijackings? No. But the new system of checks at airports self-evidently reduced the number of hijackings very substantially, and acted as a major reassurance to people waiting in line to catch a flight. Consumer confidence returned. Planes started going up again.

What was the magic ingredient that did the trick at airports and has now been extended to a great many public and other buildings around the world? Metal detectors.

Can they be fooled? Yes. Do they seem to work well enough? Yes. Does anyone feel their privacy is being invaded by having to pass their body or their bags through a detector arch or by having a wand passed over them? No. Can the wand or arch operatives see what, if any, underwear you have on? Can they make any other deductions or infer anything else from your movement, or your suitcase’s or briefcase’s movement, past or through the detector? No.

Yet that’s exactly how the child protection tools work. They look for something which says “metal is present, take a closer look.” Nothing more. Nothing less. If the bleeper bleeps, someone opens the potentially offending item. If no threats to children are found everything carries on as before and as intended. The idea that you only use a metal detector if a suspect comes into your building or your airport is absurd.
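
To make the analogy concrete, here is a minimal sketch of that flag-then-review pattern. The names and threshold are hypothetical, my own illustration rather than any platform’s actual code:

    from dataclasses import dataclass

    @dataclass
    class Item:
        item_id: str
        score: float  # output of a hash match or classifier, 0.0 to 1.0

    # Hypothetical threshold, set high so false alarms stay rare.
    REVIEW_THRESHOLD = 0.98

    def triage(items: list[Item]) -> list[Item]:
        """The metal-detector step: flag, don't read.

        Items below the threshold are not read, stored or inferred from.
        Only items that 'bleep' go to a trained human reviewer, just as
        only the flagged bag gets opened at the airport.
        """
        return [item for item in items if item.score >= REVIEW_THRESHOLD]

The point of the design is that the detector’s only output is a flag; nothing about unflagged traffic ever leaves the turnstile.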

The challenge is: how do we convince people that what I have described is the case? The even larger challenge is how we create systems of accountability and transparency which will give all stakeholders – and here we must include parents and children – the confidence that that is all that is happening. Nothing more. Nothing less.

Posted in Uncategorized

Money counts. Children don’t. And 163

From around 2009 various online platforms voluntarily started using smart technical tools to detect, delete and report actual or likely child sex abuse images and detect and address potential paedophile behaviour.

When the European Electronic Communications Code took effect on 20th December 2020, an unknown number of companies stopped doing it. This was an unintended consequence of parts of the Code becoming law.

In July 2020, a few months before the December deadline, having realised what was going to happen, the European Commission announced their intention to propose an “interim derogation” (temporary suspension) of the relevant clauses. In September they published a legislative proposal which would have achieved that.

Had the proposal been accepted, what everyone believed to be the  status quo ante would have been restored, without a blip or a hitch. There was a widespread expectation this would happen, rooted in the equally widespread belief that no substantial interest wanted to overturn or change the existing, longstanding arrangements.

How wrong we were. Nine months later reports of threats to children coming out of EU Member States have fallen by 51%.

Why?

Under the co-decision legislative processes of the EU all three elements – the Commission, Council of Ministers and  the European Parliament – have to agree a single text. The Council of Ministers substantially supported the Commission’s text. Not the Parliament.

The LIBE Committee had and still has lead responsibility for handling this matter on behalf of the European Parliament.

At a meeting of the LIBE Committee on 4th February, 2021 the Committee’s Rapporteur, Birgit Sippel of the German SPD, acknowledged (at 15:40) there was a procedure which would have allowed the process to be speeded up but she went on to say it is  “normally only used” for more technical matters and, if I understood her correctly, because it would have  entailed “giving away all the powers of political groups and individual MEPs” there was no support for it from other political groups on LIBE. Later Sippel spoke vigorously in defence of the “democratic hard work” of MEPs and about “not calling into question the legitimate rights and duty of this house to properly scrutinise  proposed legislation.”

We may never know why, at any point in the previous twelve years, Sippel or these same political parties failed to stir themselves sufficiently in relation to the very issues they suddenly said they were so concerned about. This makes the current, lamentable state of affairs look more like an opportunist power grab.

For LIBE the need to restore the status quo ante to preserve a child’s right to safety took second place to the (self-evidently) pick-and-choose rights of political parties and individual MEPs.

Throughout her leadership on the derogation Ms Sippel has been vocally supported in her stance by a member of a German far left party who is also on LIBE (Cornelia Ernst) and by the only member of the German Pirate Party in the entire Parliament (Patrick Breyer). He too is on LIBE. I’ll come back to this. Soon.

The tourism industry fared differently

Last week the Commission produced a proposal to establish a system of “vaccination passports”. It was tabled in the Parliament on Thursday.

Manfred Weber, Chair of the EPP Group, asked for the proposal to be put on a fast track, as did the Commission. They both invoked Rule 163 of the European Parliament’s Rules of Procedure. Sippel spoke against adopting Rule 163, suggesting her Committee should be left to do the job. She assured her colleagues they would complete the work by June. If only children could be so lucky.

However, in plenary session Sippel’s objections to using the emergency procedure were rejected by more than 2:1. She was defeated. Vaccination passports will be fast-tracked.

Why?

The Governments of places like Greece, Spain, Italy and Portugal moved straight away to impress on all their MEPs how badly their local tourist industries need vaccine passports. This would give them at least some chance of welcoming back visitors in the Summer. Money counts. Children don’t.

But really, another obvious question is why no one from the Commission or any of the several qualifying groups sought to invoke Rule 163 for the interim derogation.

This is what Rule 163 says:

“A request to treat a debate on a proposal submitted to Parliament pursuant to Rule 48(1) as urgent may be made to Parliament by the President, a committee, a political group, Members reaching at least the low threshold, the Commission or the Council. Such requests shall be made in writing and supported by reasons.”

Is it now too late for this to be done so as to bring this tragi-farce to an end? On a straight vote in the Parliament I am pretty sure I know who would win.

How many times did the word “German” appear above?

It is very striking how three of the most energetic and vocal obstructionists on the child protection agenda  – the ones ensuring children remain in danger – are all from German political parties. I wonder if Sippel’s position is not, therefore, in some way related to internal German politics? Is this the reason she could not get agreement to the fast tracking she referred to on 4th February?

If there is anything in this theory it makes the matter even more disgraceful than it already is. It would mean the whole Parliament  – the whole might of the European Union – has not found a way to exert itself to overcome what is, in effect, an internal argument taking place within a narrow spectrum of the politics of a single country.

And children in the EU are paying the price. Not children in any other part of the world. Only in the EU’s 27 Member States. Shame. Shame.

Posted in Child abuse images, Default settings, Privacy, Regulation, Self-regulation

Time to vaccinate against porn-fuelled violence against women

I am pleased to welcome guest blogger Baroness Tanni Grey-Thompson who speaks about the  threat posed to women by the violent porn which is commonplace on the internet and about the British Government’s failure to address it. This is particularly apposite today because of a vote which will be taken in the House of Lords this afternoon.

 

 

It’s time to vaccinate society against the porn-fuelled pandemic of violence against women

 

We are dealing with another pandemic – one that also spreads in the open air and in the home.  That pandemic is violence by men against women and girls.

We are not as good at sequencing the genome of the causes of this abhorrent behaviour as we have been for the Wuhan, Kent or South African strains of Covid-19, but if we all take a step back, it is crystal clear that there is a very short list of influences on the behaviour of some men in our society which lead to assault and, tragically, even murder, and widespread access to extreme, violent pornography is at or near the very top of that list. One grandparent got in touch to tell me how their young grandchild had been exposed to another child talking about incredibly graphic violent pornography.

We saw on Monday how quickly the Prime Minister promised, rightly, to take action on stalkers, following our vote in the House of Lords to put them on an offenders’ register. Again today, peers have the opportunity to urge the Government to be even quicker in making a practical difference by enforcing a law which is already on the statute books, to deal with what is a well-documented driver of the attitudes of some men towards women, girls and sex: extreme pornography. The government itself published research only a month ago proving that this kind of nasty pornography is associated with domestic violence.

Parliament passed the Digital Economy Act four years ago, to give the British Board of Film Classification the power to block access in the UK to websites which host the sort of extreme pornography the BBFC would never allow to be sold from an adult sex shop, let alone shown in a cinema with an unrestricted rating – which, in effect, is what the internet is.

Two years ago, the government quietly dropped this plan. Had Ministers come back to Parliament and asked us to repeal that legislation, and instead to wait three, four or even five years more for a new law they hope will be a bit more effective by tackling social media as well as porn sites, but which we now know may not even apply to a large proportion of the websites in question because of the way the government plans to draft it, they would have been sent packing.

So the Government did not do that.  It just quietly shelved it, and has now had to come up with arguments for why it did so – but these simply do not stand up to the sort of scrutiny the House of Lords applies.

Ministers have made a technical argument that changes in how we navigate the internet might make blocking websites harder at some time in the future by encrypting some web traffic. But women want action now, and those changes are still years away. Nor do these changes excuse internet service providers from their responsibilities to help block access to violent pornography. We know that site blocking is possible now and will still be possible in the foreseeable future. And given we accept that this is only an interim measure, to be applied while we wait for a new Online Safety Bill over the next few years, the new law can replace the existing one in plenty of time to deal with technical evolution.

The evidence of how compulsive use of internet pornography can affect the brain and decision-making faculties of a compulsive user over time is something that we have to take seriously. I know there is no single cause of violence towards women but there is a short list of variants of this terrible virus and today we have the opportunity to administer a vaccine which has already been developed in the Digital Economy Act of 2017.  As Baroness Benjamin put it so clearly when she proposed today’s amendment, “we have to stop creating a conveyor belt of sexual predators who commit violence against women and girls.”

In time, we may develop a better vaccine that is more comprehensive and deals with more variants, as the Government claims its new Online Safety Bill will, but that is not a good reason not to give society a jab now that will help to stop the spread of this deadly disease, be that in the open air in a park or within a family home. That’s why We Can’t Consent To This, CEASEUK and Women’s Aid all support this action.

This vaccine is ready to go now, and could be rolled out within a few months simply by re-designating the BBFC as an interim regulator until Ofcom is ready to take over. It is nothing short of immoral not to use the vaccine we have available today in the hope of a better vaccine, which we have yet to even see designed, arriving at some point in the future.

If the government truly wished to take some action, rather than generate spurious arguments that it will take 27 months to implement an existing law, they could do it within weeks by re-starting where they left off.

Let’s start our vaccination programme against the virus of violence towards women and girls today by restricting access to extreme pornography right away.

Posted in Age verification, Internet governance, Pornography, Regulation, Self-regulation

Trends and Facebook on manoeuvres

Last Wednesday the USA’s National Center for Missing and Exploited Children (NCMEC) published its numbers for 2020. 16.9 million reports received in 2019  grew to 21.7 million in 2020. That’s up over 25%. Messaging platforms remain the largest source.

21.4 million of the 2020 reports came directly from online businesses themselves, the balance from members of the public. The latter represents a threefold increase on 2019. Strikingly, there was a year-on-year increase of almost 100%  in reports of online enticement. A consequence of large scale lockdowns around the world? Probably.

The 21.7 million reports, among other things, contained 31,654,163 video files and 33,690,561 still picture files. A single report can reference more than one item.

Thus, within the total number of reports there is an overwhelming focus on dealing with illegal images of one kind or another but the 120,590 “other files”  shown in NCMEC’s chart also represent serious threats to children.

With 2,725,518 reports, India once again heads the country list. The Philippines, Pakistan and Algeria come next, a long way behind but still all above the 1 million mark.

Good news or bad news? 

People opposed to proactive scanning for child sex abuse on messaging platforms sometimes point to these numbers and say because they are always going up this proves scanning is not a useful deterrent. Some say we should even call the policy “a fail”.

Because criminals steadfastly refuse to complete annual returns faithfully declaring what they did last year while outlining their plans for the coming 12 months, we have never known and can never know just how much csam is, has been or is likely to be out there, or how many attempts have been or will be made to engage children online in a sexually abusive way. NCMEC’s new numbers could therefore simply be telling us we are getting better at detection. What they definitely do not do is provide a mandate to abandon this area of crime-fighting, deserting the victims, declaring victory for the abusers and conceding the unmanageability of the online space.

The tools we have at our disposal today are just better than they used to be and are being more widely and energetically deployed. And of course there are more internet users this year than there were last year. There is bound to be a part of the increase which is attributable solely to this sort of organic growth.  It can be expected to continue for some time as  the availability of wifi and broadband expands and more and more of the world goes online.

In any and every area of crime, detecting and addressing criminal behaviour after the event is, or ought always to be, only one part of a larger strategy in which prevention through education and awareness-raising is always to be preferred. But the idea that you should refuse to try to mitigate the effects of criminal behaviour wherever and whenever you can, particularly where children are concerned, is both heartless and an insult to the victims. Actions speak louder than words, and no action speaks louder still.

Meanwhile in the EU

The previous week NCMEC published statistics showing reports received from EU Member States were down by 51% since the December, 2020 date when the European Electronic Communications Code took effect.

Set against an overall global rise, the fear must therefore be that European kids are faring even worse than children in other parts of the world. Commissioner Johansson pointed out that, in the EU, 663 reports per day are not being made that otherwise would have been. That would be true if the level of reporting had remained constant. Evidently that is not so. The real number of absentee reports will probably be north of 663.
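
Here is a back-of-envelope illustration of why (the assumptions are mine, not the Commissioner’s: that the 663/day shortfall was computed against a flat baseline, and that EU reporting would otherwise have grown in line with NCMEC’s global figures):

    # Rough, illustrative arithmetic only.
    drop = 0.51                     # fall in EU-origin reports
    missing_flat = 663              # daily shortfall assuming a flat baseline
    baseline = missing_flat / drop  # implied pre-Code level: ~1,300 a day

    growth = 21.7 / 16.9            # global year-on-year rise, roughly 28%
    counterfactual = baseline * growth
    actual = baseline * (1 - drop)
    print(round(counterfactual - actual))  # ~1,032 missing reports per day

On those assumptions the true daily shortfall is closer to a thousand than to 663.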

And still the European Parliament paralyses the process of reform.

Facebook on manoeuvres

Let us recall last December when the new European Electronic Communications Code kicked in. Facebook, a notoriously litigious, combative company, decided it would break ranks with industry leaders by stopping scanning for child sex abuse. Facebook could have fought it or, like their peers, ignored it. They did neither.

Cynics have suggested the company’s decision to roll over like an obedient puppy dog was inspired by a desire to pave the way for their long declared ambition to introduce strong encryption to Messenger and Instagram Direct. If there is no legal way to scan messaging platforms whether or not the platforms are encrypted almost ceases to matter.

Facebook’s  December decision certainly appeared to legitimise opposition from groups who have always been against scanning for content and behaviour that threatens children.

The effrontery of the most privacy-abusing business in the history of Planet Earth performing a complete volte-face, and doing so at the expense of children and law-abiding citizens generally, takes your breath away. No warm words can wash that away.

Hold that thought for a moment.

A matter of timing?

Facebook has recently conducted research into child sex abuse activities on their platforms. The results have just been published in a blog.

There were two separate studies. Both raise doubts about, or question, the value of proactive scanning to protect children.

This is a radical break with Facebook’s past. They proudly and repeatedly used to declare their commitment to proactive scanning for content and activity which threatens children. In fact, to their credit, they have continued scanning for signs of people likely to engage in self-harm and suicide, although quite how they square that with what they are doing in relation to child sex abuse momentarily eludes me.

Who could be against research? Not me. But the same cynics I referred to earlier were not slow to point out that the timing of the release of this research does make one wonder if it was done with the purest of motives (see below). Did the people who actually did the work or who decided when to publish pause to wonder if they were being manipulated?

A surprise

The first of the two studies found that in October and November of 2020, 90% of all the content found on their platform and reported to NCMEC concerned material that was identical or very similar to previously reported material.

I’m guessing those of us who have worked in the field for a long time might be surprised it was as low as 90%. I had always understood the percentage of repeats would be in the very high 90s. High percentages show the proactive tools are doing their job. This is why their continued use is so important, particularly to the victims depicted in the images. The fact that an image is repeated only underlines and magnifies the harm being done to the child. Most certainly it does not diminish it.

In asserting their legal right to privacy and human dignity, victims want every instance of the image gone, no matter how many times or where it  appears.

Publishing a number like “over 90%” without explaining this kind of context is likely to lead an ill-informed observer, e.g. someone in a hurry with lots of papers to read, to wonder what all the fuss is about.

Note that in NCMEC’s report they refer to having received reports of 10.4 million unique images, specifically distinguishing them from the repeats which, we are asked to believe, make up 90% of the payload in Facebook’s research.

More potentially misleading impressions

In the same blog, and referring to the same study, Facebook goes on to tell us “only six” videos were responsible for “more than half” of all the reports they made to NCMEC. Apart from being left to speculate about how many videos made up the other half, the obvious question is “And your point?”

My guess is what will stick in busy people’s minds is “six”. Six and 90%. Headline numbers. Watch out for them being repeated by, well, you know who.

The second study

Taking a different timeframe (why?) – July-August 2020 and January 2021 – and a different, much smaller cohort (only 150 accounts), we are told that, of the people who uploaded csam which was reported to NCMEC, 75% did so without apparent “malicious intent”. On the contrary, the research suggests the individuals committing the crime of uploading csam acted out of a “sense of outrage” or because they thought it was funny. “75%”. That’s another headline number that will stick and be repeated.

Maybe there is a paper somewhere which explains how Facebook concluded there was no “malicious intent”. I cannot find it, but it is not hard to work out the net effect of Facebook’s various self-serving, timely manoeuvres.

The target audience is politicians and journalists

Facebook wants people – and by that I mean principally politicians and journalists – in Europe, the USA and elsewhere, to start thinking the problem of online child sex abuse is different from, and a lot smaller than, they might previously have believed, and that it is substantially down to (excusable?) human idiocy.

Yet the unalterable truth is the images need to be gone. That’s the beginning and end of it. If we have the means to get rid of illegal images of children’s pain and humiliation, why wouldn’t we?  Why would we, instead, deliberately hide them? Money is the only answer I can come up with and it is not good enough.

Poor substitutes

In the third part of the same blog Facebook tells us about other things it plans to do to address people’s apparent lack of good taste in jokes or their stupidity.

So far they have come up with two pop-ups. Bravo. Facebook should put them out anyway. Neither gets anywhere close to compensating for their plans on encryption. In any other walk of life, if a group of people combined to hide evidence of crimes, my guess is they would be arrested and charged with conspiracy to obstruct the course of justice.

Facebook’s numbers in 2020

The results of Facebook’s research came out in the middle of  the row in the EU and right up against the publication of NCMEC’s new numbers.

In 2019 NCMEC received 16,836,694 reports of which 15,884,511 (94%) came from Facebook-owned platforms, principally Messenger and Instagram. In 2020, of the 21.7 million, 20,307,216 came from the same places (93%).

Although I am extremely critical of Facebook we should not forget two important qualifiers: they are by far the biggest platform in the social media space and we only know so much about them because data are available.  This is because Messenger and Instagram Direct are not (yet) encrypted.

You therefore have to wonder what is happening on other messaging platforms that are already encrypting their services and so can produce almost no data. Actually, we  need not wonder all that much.

A glimpse behind an encrypted door

Last Friday The Times revealed that in 2020 UK policing received 24,000 tip-offs from Facebook, meaning mainly Messenger and Instagram, but only 308 from WhatsApp, which is already encrypted.

With 44.8 million users, the UK has the third highest number of Facebook customers in the world, behind India and the USA. All 44.8 million will have Messenger because it is integrated into Facebook, and on top of that Instagram has 24 million UK users. Obviously there is likely to be a large overlap between Messenger and Instagram. WhatsApp has 27.6 million users in the UK.

It’s impossible to say what the WhatsApp number “should have been” – too many imponderables – but the ratio of 308:24,000 looks a little off. If anything you would expect the traffic in illegal images to be greater on WhatsApp precisely because it is already encrypted. Think about that.
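
For a crude sense of scale, here is the per-user arithmetic on the figures just quoted. It ignores the Messenger/Instagram overlap and every other imponderable; purely illustrative:

    # Crude per-user comparison of the 2020 UK tip-off figures quoted above.
    fb_tips, fb_users = 24_000, 44_800_000 + 24_000_000  # Messenger + Instagram
    wa_tips, wa_users = 308, 27_600_000                  # WhatsApp

    fb_rate = fb_tips / fb_users * 1_000_000  # tip-offs per million users
    wa_rate = wa_tips / wa_users * 1_000_000
    print(round(fb_rate), round(wa_rate))     # roughly 349 vs 11 per million

On those rough numbers, WhatsApp generates about one thirtieth of the tip-offs per user that the unencrypted services do.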

 

Posted in Child abuse images, Facebook, Internet governance, Privacy, Regulation, Self-regulation, Uncategorized