Great news for children coming out of Brussels

Last night in Brussels it was announced that a political agreement had been reached on the interim derogation. In plain English, what that means is that, pretty much immediately, we can go back to the position we all thought we were in on 19th December 2020 (the day before the new, bad e-Privacy law kicked in).

The suspension is operative for up to three years, during which time we will all need to roll up our sleeves to formulate a longer-term framework. Watch this space.

Thus, the EU has paved the way for companies to recommence scanning messaging platforms for child sexual abuse material and grooming behaviour, and to resume using “classifiers” to detect images not yet confirmed as child sexual abuse material but likely to be.

This is a great outcome. A huge pat on the back is due to everyone who had a hand in it. It reflects the enormous amount of work done by many MEPs, Commissioners, Commission staff, children’s groups and child advocacy organizations across the world.

If I have drawn one major conclusion from this whole unfortunate episode it is this: children’s groups and children’s advocates need to engage more closely with privacy lawyers in particular and privacy activists in general.

I share many of the privacy community’s concerns and worries – I think we all do – but a handful of ideologically motivated individuals with a talent for catching media headlines showed they are not above resorting to outright lies and misinformation to achieve their desired end. In a world bedevilled by often quite intimidating legal and technical language, in a world of zero trust in Silicon Valley and declining trust in Governments, many people, too many people, fell for the scaremongering propaganda.

We cannot let that happen again. We need to find new and better ways to improve the public’s and the media’s understanding of the issues, because from that will flow a more grounded and sustainable understanding by policy makers. Watch this space.

I won’t repeat everything in the Commission’s press release but here’s a first, quick look at the detail of last night’s announcement:

  1. The definition of what constitutes qualifying child sexual abuse material or activities is explicitly aligned with the 2011 Directive.
  2. Companies and organizations need to have an appeals mechanism to cater for potentially erroneous decisions. It would be strange if they didn’t already but hey.
  3. There needs to be “human oversight” of the processing of personal data. Potentially problematic given the scale on which the systems operate on larger platforms but it depends how one defines “human oversight”. Expressly there is no requirement for prior authorization before reports can be made or illegal content is taken down.
  4. The tech used needs to be the least privacy intrusive.  That should already be the case.
  5. Companies and organizations need to consult data protection authorities on how they work in this area and the European Data Protection Board will issue guidelines to assist the data protection authorities. Fine, as long as these guys stop thinking along tramlines and learn how to speak in a language the majority of us can understand. They should embrace a mission to explain and heighten public understanding of privacy, and not allow themselves to be manipulated into a position where they become identified in the public mind as enemies of common sense who provide shelter to criminals who harm children (and others).
  6. A public register will be established of public interest organizations with which online service providers can share personal data. Sounds OK.  Presumably it will be aligned with the GDPR and changes in Europol’s mandate.
  7. Annual transparency and accountability reports will be required. Hugely important but it cannot be left to companies to mark their own homework. Proportionality will be important here, as it is everywhere else, but so is the idea that everyone can have confidence that the transparency and accountability reports speak the relevant truth and nothing but the relevant truth. I am too polite to repeat the story about how you grow mushrooms.

What can aeroplanes teach us?

The other day I was talking to the CEO of a tech company, expressing my frustration at the way scaremongering misinformation seems to have taken hold in relation to the way various child protection tools operate online.

We are talking about three types of tools:

PhotoDNA and similar detect known examples of child sex abuse material (csam). Every image in this category by definition is illegal and represents an egregious infringement of the right to privacy and human dignity of the child depicted.  

The second are so-called classifiers. These flag images which are likely to be csam.

The third address grooming behaviour, that is to say behaviour which is likely to lead to a child being sexually abused.
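
For readers who want to see the mechanics, here is a deliberately simplified sketch (in Python) of how the first category of tool works in principle: a fingerprint, or “hash”, of an image is compared against a list of fingerprints of known csam, and only a match is flagged for human review. This is not PhotoDNA itself – PhotoDNA uses a perceptual hash that survives resizing and re-compression, whereas the stand-in below only catches exact copies – and the function names and placeholder values are mine, purely for illustration.

    import hashlib

    # Fingerprints of known, already-confirmed csam. In a real deployment
    # these come from bodies such as NCMEC or the IWF; the values here are
    # placeholders.
    KNOWN_CSAM_HASHES = {"<hash-1>", "<hash-2>"}

    def fingerprint(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash such as PhotoDNA. An exact hash
        # like SHA-256 only matches byte-identical copies; PhotoDNA-style
        # hashes also match resized or re-encoded versions.
        return hashlib.sha256(image_bytes).hexdigest()

    def check_attachment(image_bytes: bytes) -> bool:
        """True if the image matches a known item and should be escalated
        for human review. No match, no action: nothing is 'read'."""
        return fingerprint(image_bytes) in KNOWN_CSAM_HASHES

The point the sketch makes is the one that keeps getting lost: the system answers a single yes/no question per item – rather like the metal detector I come to below – it does not expose the content of anyone’s messages.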

In essence the misinformation circulating about each of these tools implies or expressly states they “scan” all private communications thereby creating the impression that, duplicitously and hiding behind the name of child protection, the police or the security services, the companies themselves, and goodness knows who else, are reading everything you send or receive as a message. Or they could, if that took their fancy.

The simple truth is if any illegal reading or examination of messages is taking place it has nothing whatsoever to do with any of the child protection tools I have mentioned.

Howsoever or wheresoever it originated, the miasma of falsehood enveloping the child protection tools is proving astonishingly tenacious. Why?

Like many conspiracy theories and other lies that get read and repeated over the internet, the smokescreen of misinformation has been able to take hold because it exploits an underlying lack of trust in or suspicion of “them”.  In this case “them” are some of the major actors in the drama: Big Tech, Governments, law enforcement and the security services.

But there is another set of actors playing an important role in this tragedy. I am referring to parts (stress: parts) of the tech community and privacy activists who think each of the interests I listed is as bad as the others. Noblesse oblige: they alone, therefore, have a self-proclaimed and unique responsibility to look out for the rest of us.

Anyone who objects or takes a different view is pitied, marginalized or completely ignored because they obviously don’t understand the complexities of the issues. It’s a kind of techno evangelical paternalism. “Forgive them for they know not what they do.”

And those aeroplanes?

Back to my CEO. He compared the emergence of the internet with the emergence of international air travel. Aeroplanes were unquestionably a new and revolutionary technology that changed the world. Initially air travel was the preserve of a small, rich elite but as technology advanced and prices fell it became a global industry which in turn fed and helped create a whole number of others, not the least of which was tourism.

Then came a prolonged spate of terrorist hijackings. These destroyed consumer confidence in air safety. Tourism collapsed, planes were empty or did not fly. Relatively rapidly the world got together and agreed international standards and systems to make air travel safer. Did it stop all terrorist hijackings? No. But the new system of checks at airports self-evidently reduced the number of hijackings very substantially, and acted as a major reassurance to people waiting in line to catch a flight. Consumer confidence returned. Planes started going up again.

What was the magic ingredient that did the trick at airports and has now been extended to a great many public and other buildings around the world? Metal detectors.

Can they be fooled? Yes. Do they seem to work well enough? Yes. Does anyone feel their privacy is being invaded by having to pass their body or their bags through a detector arch or by having a wand passed over them? No. Can the wand or arch operators see what, if any, underwear you have on? Can they make any other deductions or infer anything else from your movement, or your suitcase’s or briefcase’s movement, past or through the detector? No.

Yet that’s exactly how the child protection tools work. They look for something which says “metal is present: take a closer look.” Nothing more. Nothing less. If the bleeper bleeps, someone opens the potentially offending item. If no threats to children are found, everything carries on as before and as intended. The idea that you would only use a metal detector if a suspect came into your building or your airport is absurd.

The challenge is: how do we convince people that what I have described is the case? The even larger challenge is: how do we create systems of accountability and transparency which will give all stakeholders – and here we must include parents and children – the confidence that that is all that is happening. Nothing more. Nothing less.


Money counts. Children don’t. And 163

From around 2009 various online platforms voluntarily started using smart technical tools to detect, delete and report actual or likely child sex abuse images and detect and address potential paedophile behaviour.

When the European Electronic Communications Code took effect on 20th December 2020, an unknown number of companies stopped doing it. This was an unintended consequence of parts of the Code becoming law.

In July 2020, a few months before the December deadline, having realised what was going to happen, the European Commission announced their intention to propose an “interim derogation” (temporary suspension) of the relevant clauses. In September they published a legislative proposal which would have achieved that.

Had the proposal been accepted, what everyone believed to be the  status quo ante would have been restored, without a blip or a hitch. There was a widespread expectation this would happen, rooted in the equally widespread belief that no substantial interest wanted to overturn or change the existing, longstanding arrangements.

How wrong we were. Nine months later reports of threats to children coming out of EU Member States have fallen by 51%.

Why?

Under the co-decision legislative processes of the EU all three elements – the Commission, Council of Ministers and  the European Parliament – have to agree a single text. The Council of Ministers substantially supported the Commission’s text. Not the Parliament.

The LIBE Committee had and still has lead responsibility for handling this matter on behalf of the European Parliament.

At a meeting of the LIBE Committee on 4th February, 2021 the Committee’s Rapporteur, Birgit Sippel of the German SPD, acknowledged (at 15:40) there was a procedure which would have allowed the process to be speeded up but she went on to say it is  “normally only used” for more technical matters and, if I understood her correctly, because it would have  entailed “giving away all the powers of political groups and individual MEPs” there was no support for it from other political groups on LIBE. Later Sippel spoke vigorously in defence of the “democratic hard work” of MEPs and about “not calling into question the legitimate rights and duty of this house to properly scrutinise  proposed legislation.”

We may never know why, at any point in the previous twelve years, Sippel or these same political parties failed to stir themselves sufficiently in relation to the very issues they suddenly said they were so concerned about. This makes the current, lamentable state of affairs look more like an opportunist power grab.

For LIBE the need to restore the status quo ante to preserve a child’s right to safety took second place to the (self-evidently) pick-and-choose rights of political parties and individual MEPs.

Throughout her leadership on the derogation Ms Sippel has been vocally supported in her stance by a member of a German far left party who is also on LIBE (Cornelia Ernst) and by the only member of the German Pirate Party in the entire Parliament (Patrick Breyer). He too is on LIBE. I’ll come back to this. Soon.

The tourism industry fared differently

Last week the Commission produced a proposal to establish a system of “vaccination passports”. It was tabled in the Parliament on Thursday.

Manfred Weber, Chair of the EPP Group, asked for the proposal to be put on a fast track, as did the Commission. They both invoked Rule 163 of the European Parliament’s Rules of Procedure. Sippel spoke against adopting Rule 163, suggesting her Committee should be left to do the job. She assured her colleagues they would complete the work by June. If only children could be so lucky.

However, in plenary session, by more than 2:1, Sippel’s objections to using the emergency procedure were rejected. She was defeated. Vaccination passports will be fast-tracked.

Why?

The Governments of places like Greece, Spain, Italy and Portugal moved straight away to impress on all their MEPs how badly their local tourist industries need vaccine passports. This would give them at least some chance of welcoming back visitors in the Summer. Money counts. Children don’t.

But really another obvious question is why did no one from the Commission or any of the several  qualifying groups seek to invoke Rule 163 for the interim derogation?

This is what Rule 163 says:

“A request to treat a debate on a proposal submitted to Parliament pursuant to Rule 48(1) as urgent may be made to Parliament by the President, a committee, a political group, Members reaching at least the low threshold, the Commission or the Council. Such requests shall be made in writing and supported by reasons.”

Is it now too late for this to be done so as to bring this tragi-farce to an end? On a straight vote in the Parliament I am pretty sure I know who would win.

How many times did the word “German” appear above?

It is very striking how three of the most energetic and vocal obstructionists on the child protection agenda  – the ones ensuring children remain in danger – are all from German political parties. I wonder if Sippel’s position is not, therefore, in some way related to internal German politics? Is this the reason she could not get agreement to the fast tracking she referred to on 4th February?

If there is anything in this theory it makes the matter even more disgraceful than it already is. It would mean the whole Parliament  – the whole might of the European Union – has not found a way to exert itself to overcome what is, in effect, an internal argument taking place within a narrow spectrum of the politics of a single country.

And children in the EU are paying the price. Not children in any other part of the world. Only in the EU’s 27 Member States. Shame. Shame.


Time to vaccinate against porn-fuelled violence against women

I am pleased to welcome guest blogger Baroness Tanni Grey-Thompson, who speaks about the threat posed to women by the violent porn which is commonplace on the internet and about the British Government’s failure to address it. This is particularly apposite today because of a vote which will be taken in the House of Lords this afternoon.

It’s time to vaccinate society against the porn-fuelled pandemic of violence against women

We are dealing with another pandemic – one that also spreads in the open air and in the home.  That pandemic is violence by men against women and girls.

We are not as good at sequencing the genome of the causes of this abhorrent behaviour as we have been for the Wuhan, Kent or South African strains of Covid-19, but if we all take a step back it is crystal clear that there is a very short list of influences on the behaviour of some men in our society which lead to assault and, tragically, even murder. Widespread access to extreme, violent pornography is at or near the very top of that list. One grandparent got in touch to tell me about their young grandchild’s experience of having to listen to another child talk about incredibly graphic violent pornography.

We saw on Monday how quickly the Prime Minister promised, rightly, to take action on stalkers, following our vote in the House of Lords to put them on an offenders’ register. Again today, peers have the opportunity to urge the Government to be even quicker in making a practical difference by enforcing a law which is already on the statute books, to deal with what is a well-documented driver of the attitudes of some men towards women, girls and sex, and that is extreme pornography. The government itself published research only a month ago showing that this kind of nasty pornography is associated with domestic violence.

Parliament passed the Digital Economy Act four years ago, to give the British Board of Film Classification the power to block access in the UK to websites which host the sort of extreme pornography the BBFC would never allow to be sold from an adult sex shop, let alone be shown in a cinema with an unrestricted rating – which, in effect, is what the internet is.

Two years  ago, the government quietly dropped this plan.  Had Ministers come back to Parliament and asked us to repeal that legislation, and instead to wait for three, four or even five years more for a new law they hope will be a bit more effective by tackling social media as well as porn sites, but which we now know may not even apply to a large proportion of the websites in question because of the way the government plans to draft it, they would have been sent packing.

So the Government did not do that.  It just quietly shelved it, and has now had to come up with arguments for why it did so – but these simply do not stand up to the sort of scrutiny the House of Lords applies.

Ministers have made a technical argument that changes in how we navigate the internet, by encrypting some web traffic, might make blocking websites harder at some time in the future. But women want action now, and those changes are still years away. Nor do these changes excuse internet service providers from their responsibilities to help block access to violent pornography. We know that site blocking is possible now and will still be possible in the foreseeable future. And given we accept that this is only an interim measure, to be applied while we wait for a new Online Safety Bill over the next few years, the new law can replace the existing one in plenty of time to deal with technical evolution.

The evidence of how compulsive use of internet pornography can affect the brain and decision-making faculties of a compulsive user over time is something that we have to take seriously. I know there is no single cause of violence towards women but there is a short list of variants of this terrible virus and today we have the opportunity to administer a vaccine which has already been developed in the Digital Economy Act of 2017.  As Baroness Benjamin put it so clearly when she proposed today’s amendment, “we have to stop creating a conveyor belt of sexual predators who commit violence against women and girls.”

In time, we may develop a better vaccine that is more comprehensive and deals with more variants, as the Government claims its new Online Safety Bill will, but that is not a good reason not to give society a jab now that will help to stop the spread of this deadly disease, be that in the open air in a park or within a family home. That’s why We Can’t Consent To This, CEASEUK and Women’s Aid all support this action.

This vaccine is ready to go now, and could be rolled out within a few months simply by re-designating the BBFC as an interim regulator until Ofcom is ready to take over. It is nothing short of immoral not to use the vaccine we have available today in the hope of a better vaccine which we have yet even to see designed at some point in the future.

If the government truly wished to take some action, rather than generate spurious arguments that it will take 27 months to implement an existing law, they could do it within weeks by re-starting where they left off.

Let’s start our vaccination programme against the virus of violence towards women and girls today by restricting access to extreme pornography right away.


Trends and Facebook on manoeuvres

Last Wednesday the USA’s National Center for Missing and Exploited Children (NCMEC) published its numbers for 2020. The 16.9 million reports received in 2019 grew to 21.7 million in 2020. That’s up over 25%. Messaging platforms remain the largest source.

21.4 million of the 2020 reports came directly from online businesses themselves, the balance from members of the public. The latter represents a threefold increase on 2019. Strikingly, there was a year-on-year increase of almost 100%  in reports of online enticement. A consequence of large scale lockdowns around the world? Probably.

The 21.7 million reports, among other things, contained 31,654,163 video files and 33,690,561 files containing still pictures. A single report can reference more than one item.

Thus, within the total number of reports there is an overwhelming focus on dealing with illegal images of one kind or another, but the 120,590 “other files” shown in NCMEC’s chart also represent serious threats to children.

With 2,725,518 reports India, once again, heads the country list. The Philippines, Pakistan and Algeria come next, a long way behind but still all above the 1 million mark.
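
A quick back-of-envelope check of those headline figures (my arithmetic, using only the numbers quoted above):

    # Sanity-checking the headline figures quoted above.
    reports_2019 = 16_900_000
    reports_2020 = 21_700_000
    growth = (reports_2020 - reports_2019) / reports_2019 * 100
    print(f"Year-on-year growth: {growth:.0f}%")  # ~28%, i.e. "up over 25%"

    video_files = 31_654_163
    still_files = 33_690_561
    files_per_report = (video_files + still_files) / reports_2020
    # ~3.0 files per report on average, consistent with a single report
    # referencing more than one item.
    print(f"Average files per report: {files_per_report:.1f}")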

Good news or bad news? 

People opposed to proactive scanning for child sex abuse on messaging platforms sometimes point to these numbers and say that because they are always going up this proves scanning is not a useful deterrent. Some even say we should call the policy “a fail”.

Because criminals steadfastly refuse to complete annual returns faithfully declaring what they did last year while outlining their plans for the coming 12 months, we have never known and can never know just how much csam is, has been or is likely to be out there, or how many attempts have been or will be made to engage children online in a sexually abusive way. NCMEC’s new numbers could therefore simply be telling us we are getting better at detection. What they definitely do not do is provide a mandate to abandon this area of crime-fighting, deserting the victims, declaring victory for child abusers and conceding the unmanageability of the online space.

The tools we have at our disposal today are just better than they used to be and are being more widely and energetically deployed. And of course there are more internet users this year than there were last year. There is bound to be a part of the increase which is attributable solely to this sort of organic growth.  It can be expected to continue for some time as  the availability of wifi and broadband expands and more and more of the world goes online.

In any and every area of crime, detecting and addressing criminal behaviour after the event is, or ought always to be, only one part of a larger strategy in which prevention through education and awareness-raising is always to be preferred. But the idea that you should refuse to try to mitigate the effects of criminal behaviour wherever and whenever you can, particularly where children are concerned, is both heartless and an insult to the victims. Actions speak louder than words, and no action speaks louder still.

Meanwhile in the EU

The previous week NCMEC published statistics showing reports received from EU Member States were down by 51% since the December 2020 date when the European Electronic Communications Code took effect.

Set against an overall global rise, the fear must therefore be that a percentage fall in reports from EU Member States means European kids may be faring even worse than children in other parts of the world. Commissioner Johansson pointed out that, in the EU, 663 reports per day are not being made that otherwise would have been. That would be true if the level of reporting had remained constant. Evidently it has not: the real number of missing reports will probably be north of 663 per day.

And still the European Parliament paralyses the process of reform.

Facebook on manoeuvres

Let us recall that last December, when the new European Electronic Communications Code kicked in, Facebook, a notoriously litigious, combative company, decided it would break ranks with industry leaders by stopping scanning for child sex abuse. Facebook could have fought it or, like its peers, ignored it. It did neither.

Cynics have suggested the company’s decision to roll over like an obedient puppy dog was inspired by a desire to pave the way for its long-declared ambition to introduce strong encryption on Messenger and Instagram Direct. If there is no legal way to scan messaging platforms, whether or not the platforms are encrypted almost ceases to matter.

Facebook’s  December decision certainly appeared to legitimise opposition from groups who have always been against scanning for content and behaviour that threatens children.

The effrontery of the most privacy-abusing business in the history of Planet Earth performing a complete volte-face, and doing so at the expense of children and law-abiding citizens generally, takes your breath away. No warm words can wash that away.

Hold that thought for a moment.

A matter of timing?

Facebook has recently conducted research into child sex abuse activities on their platforms. The results have just been published in a blog.

There were two separate studies. Both raise doubts about, or question, the value of proactive scanning to protect children.

This is a radical break with Facebook’s past. They proudly and repeatedly used to declare their commitment to proactive scanning for content and activity which threatens children. In fact, to their credit, they have continued scanning for signs of people likely to engage in self-harm and suicide, although quite how they square that with what they are doing in relation to child sex abuse momentarily eludes me.

Who could be against research? Not me. But the same cynics I referred to earlier were not slow to point out that the timing of the release of this research does make one wonder if it was done with the purest of motives (see below). Did the people who actually did the work or who decided when to publish pause to wonder if they were being manipulated?

A surprise

The first of the two studies found that in October and November of 2020 90% of all the content found on their platform and reported to NCMEC concerned material that was identical or very similar to previously reported  material.

I’m guessing those of us who have worked in the field for a long time might be surprised it was as low as 90%. I had always understood the percentage of repeats would be in the very high 90s. High percentages show the proactive tools are doing their job. This is why their continued use is so important, particularly to the victims depicted in the images. The fact that an image is repeated only underlines and magnifies the harm being done to the child. Most certainly it does not diminish it.

In asserting their legal right to privacy and human dignity, victims want every instance of the image gone, no matter how many times or where it  appears.

Publishing a number like “over 90%” without explaining this kind of context is likely to lead an ill-informed observer, e.g. someone in a hurry with lots of papers to read, to wonder what all the fuss is about.

Note that in NCMEC’s report they refer to having received reports of 10.4 million unique images, specifically distinguishing them from the repeats which, we are asked to believe, make up 90% of the payload in Facebook’s research.
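
If it helps to see what a “repeat rate” statistic actually measures, here is a minimal sketch (my own, purely illustrative): count how many reported items share a fingerprint with something reported earlier, and divide by the total.

    from collections import Counter

    def repeat_share(fingerprints: list[str]) -> float:
        """Fraction of reported items that duplicate an earlier item.
        A high value means the tools keep re-finding known images -
        which is their job - not that the problem is smaller."""
        counts = Counter(fingerprints)
        repeats = sum(n - 1 for n in counts.values())  # copies after the first
        return repeats / len(fingerprints) if fingerprints else 0.0

    # Toy example: ten reports, three distinct images.
    print(repeat_share(["a"] * 6 + ["b"] * 3 + ["c"]))  # 0.7 -> "70% repeats"

And remember: every one of those “repeat” detections still corresponds to a fresh circulation of a real child’s image.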

More potentially misleading impressions

In the same blog, and referring to the same study, Facebook goes on to tell us “only six” videos were responsible for “more than half” of all the reports they made to NCMEC. Apart from being left to speculate about how many videos made up the other half, the obvious question is “And your point?”

My guess is what will stick in busy people’s minds is “six”. Six and 90%. Headline numbers. Watch out for them being repeated by, well, you know who.

The second study

Taking a different timeframe (why?) – July–August 2020 and January 2021 – and a different, much smaller cohort (only 150 accounts), we are told that of the people who uploaded csam that was reported to NCMEC, 75% did so without apparent “malicious intent”. On the contrary, the research suggests the individuals committing the crime of uploading csam acted out of a “sense of outrage” or because they thought it was funny. “75%”. That’s another headline number that will stick and be repeated.

Maybe there is a paper somewhere which explains how Facebook concluded there was no “malicious intent”. I cannot find it but it is not hard to work out the net effect of Facebook’s  various self-serving timely manoeuvres.

The target audience is politicians and journalists

Facebook wants people – and by that I mean principally politicians and journalists – in Europe, the USA and elsewhere, to start thinking the problem of online child sex abuse is different from, and a lot smaller than, they might previously have believed, and that it is substantially down to (excusable?) human idiocy.

Yet the unalterable truth is the images need to be gone. That’s the beginning and end of it. If we have the means to get rid of illegal images of children’s pain and humiliation, why wouldn’t we?  Why would we, instead, deliberately hide them? Money is the only answer I can come up with and it is not good enough.

Poor substitutes

In the third part of the same blog Facebook tells us about other things it plans to do to address people’s apparent lack of good taste in jokes or their stupidity.

So far they have come up with two pop-ups. Bravo.  Facebook should put them out anyway. Neither gets anywhere close to compensating for their plans on encryption. In any other walk of life if a group of people combined to hide evidence of crimes my guess is they would be arrested and charged with conspiracy to obstruct the course of justice.

Facebook’s numbers in 2020

The results of Facebook’s research came out in the middle of  the row in the EU and right up against the publication of NCMEC’s new numbers.

In 2019 NCMEC received 16,836,694 reports of which 15,884,511 (94%) came from Facebook owned platforms, principally Messenger and Instagram. In 2020, of the 21.7 million, 20,307,216 came from the same places (93%).

Although I am extremely critical of Facebook we should not forget two important qualifiers: they are by far the biggest platform in the social media space and we only know so much about them because data are available.  This is because Messenger and Instagram Direct are not (yet) encrypted.

You therefore have to wonder what is happening on other messaging platforms that are already encrypting their services and so can produce almost no data. Actually, we  need not wonder all that much.

A glimpse behind an encrypted door

Last Friday The Times revealed that in 2020 UK policing received 24,000 tip-offs from Facebook, meaning mainly Messenger and Instagram, but only 308 from WhatsApp, which is already encrypted.

With 44.8 million users the UK has the third highest number of Facebook customers in the world, behind India and the USA. All of the 44.8 million will have Messenger because it is integrated into Facebook and, on top of that, Instagram has 24 million UK users. Obviously there is likely to be a large overlap between Messenger and Instagram users. WhatsApp has 27.6 million users in the UK.

It’s impossible to say what the WhatsApp number “should have been” – too many imponderables – but the ratio of 308:24,000 looks a little off. If anything you would expect the traffic in illegal images to be greater on WhatsApp precisely because it is already encrypted. Think about that.
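
To see why the ratio looks off, normalise the tip-offs by user numbers. A rough sketch using the figures above; it ignores the Messenger/Instagram overlap, so treat it as an order-of-magnitude comparison only:

    # Back-of-envelope comparison using the UK user figures quoted above.
    facebook_reports, facebook_users = 24_000, 44_800_000  # Messenger + Instagram
    whatsapp_reports, whatsapp_users = 308, 27_600_000

    fb_rate = facebook_reports / facebook_users * 1_000_000  # per million users
    wa_rate = whatsapp_reports / whatsapp_users * 1_000_000

    print(f"Facebook: {fb_rate:.0f} reports per million users")  # ~536
    print(f"WhatsApp: {wa_rate:.0f} reports per million users")  # ~11
    print(f"Ratio: {fb_rate / wa_rate:.0f}x")                    # ~48x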

 


The shame continues

Last Thursday afternoon the LIBE Committee of the European Parliament met. The proceedings are viewable.

The meeting was scheduled to take two hours. It had several items to discuss. The earlier ones overran. When the question of the derogation and children being sexually abused on the internet was reached, the Chair of the Committee opened with the following words:

“Let’s see if we can manage in 10/15 minutes the three points left on our agenda”.

The discussion of the derogation began at 15.35. It ended at 15.51, so it got more than its fair share of what was supposed to be the remaining time but, even so, many will feel that, after all the energy that has gone into this issue, to get 16 minutes was, well, disappointing. But perhaps no more disappointing than the reasons why the discussion was necessary in the first place.

How did we get here?

Back in 2018 the EU adopted the European Electronic Communications Code (the Code). The law was due to take effect on 20th December, 2020.

In the early part of 2020, Commission officials realised some might interpret its provisions to mean it was no longer lawful for companies to continue voluntarily scanning messaging services looking for child sex abuse content or activity. 

Why? Because it became apparent the Code would be subject to the overarching provisions of the GDPR. This raised the possibility that end-user consent was therefore required before the relevant data processing could take place. Alternatively, as a condition precedent to allowing the child protection tools to continue being used as before, a much more detailed and broader legal framework needed to be put in place.

Both these points are contested, but the key question here is: had anyone in the Parliament, Council of Ministers or Commission spotted and considered any of this when the 2018 measure was being drafted, debated and finally adopted? No.

Children were forgotten or overlooked. Out of sight. Out of mind.  Simple as that. And not for the first time.

Did the European Data Protection Supervisor or his predecessor step in at any point during the co-legislative process or very soon after it concluded specifically to draw attention to these matters? No.

If anything, when the Supervisor did make an appearance, he contrived to make things worse by issuing an Opinion in which he omitted to mention a child’s right to privacy or Article 24 of the EU’s Charter of Fundamental Rights which, just to remind you, reads as follows:

  1. Children shall have the right to such protection and care as is necessary for their well-being…
  2. In all actions relating to children, whether taken by public authorities or private institutions, the child’s best interests must be a primary consideration.  

There are several EU-specific and other legal instruments which likewise make clear children are in a separate and special class, and that they require extra care and attention which would not apply to other classes or groups. This fact appears to have escaped the Supervisor who, far from considering the particular position of children, felt free to say:

“The EDPS wishes to underline that the issues at stake are not specific to the fight against child abuse but to any initiative aiming at collaboration of the private sector for law enforcement purposes. If adopted, the Proposal, will inevitably serve as a precedent for future legislation in this field.”

Actually the issues at stake are specific to the fight against child abuse. Children should not be lost in a muddled conflation.

In other words, at the relevant time, all the European Institutions, including the political parties within the Parliament, failed to consider the Code’s impact on children’s rights and welfare.  They also failed to consider the express legal burden placed upon them to take account of the unique position of children. If any “precedent” was to be set it would be one about children. Nothing else. The scrutiny processes which are said to be fundamental to the way new laws take shape in a democracy let children down. Badly.  

Protecting children through scanning was not being done covertly

Please bear in mind none of the companies scanning messaging services to keep children safe made any secret of it. On the contrary they spoke about it openly and often.  They were proud of it and wanted people to know.  They thought it reflected well on them. They were right. It did.

The companies were frequently praised for their efforts by the most senior figures in our European Institutions, including in the Parliament.

In some jurisdictions these facts alone would entitle the companies to invoke the doctrine of estoppel. And even if they did not do that, they would almost certainly provide a strong defence against any or many forms of hostile legal action.

Unintended, unforeseen and unmentioned

There is no getting away from the fact, for the reasons just given, this is all a terrible mess. A legislative accident. As far as I am aware absolutely nobody wanted us to be in the position we are now in. When accidents happen people generally pull together, at least to get things back to where they thought they were before.

Not here.

Let’s not forget, when the Commission proposed a solution, the temporary derogation, they were not seeking to close down discussion or debate. They were not seeking to make a final, irrevocable decision. They only wanted to create a breathing space so as not to interrupt or give anyone an excuse to interrupt child protection work which had in some cases been going on since 2009.

But the LIBE Committee would not co-operate. They would not help to put things back where everyone thought they were before.  And they did it in the name of defending the rights of political parties to comment and scrutinise. See above for how well that worked last time out.

The result?

A 46% drop in reports of child sex abuse followed this robust assertion of the rights of political parties.

Two wrongs do not make a right

True enough, two wrongs do not make a right. Just because EU institutions botched the process in the run-up to the 2018 decision, and since, is no reason to allow them to botch it again, if that is what you think happened. But that would only be true if LIBE were being asked to endorse a permanent or longer-term position. Which they weren’t. They were only being asked to agree to a stopgap.

It is clear why, under existing arrangements, LIBE took the leading role in shaping the Parliament’s decision; nevertheless, they are not the most obvious place one would look first when a matter so intimately connected with children’s rights is at issue.

Perhaps the Parliamentary authorities could reflect on that. Could the Intergroup on Children’s Rights be converted into a full Committee of the Parliament in an effort to ensure nothing like this can happen again? This might help guarantee every legislative measure is considered expressly in terms of its impact on children. 

Threat to money: we fight. Threat to kids: we don’t fight 

At the LIBE meeting Sophie in ’t Veld made one of the most telling – I would say crushing – points when she criticised the transparent hypocrisy of Facebook.

On 21st December Facebook immediately stopped scanning. In ’t Veld pointed out that on a great many previous occasions, with its enormous war chest and great phalanx of lawyers, Facebook had gone to court at the drop of a hat to contest a point if it looked like it might interfere with their ability to make money.

When it came to defending the good work they had been doing for years to protect children they rolled over without a whimper on Day 1.

Preparing the way for end-to-end encryption

A great many people saw Facebook’s decision to stop scanning as being linked to their larger ambition to introduce end-to-end encryption. If there are no tools which can be legally used to scan, it renders otiose the discussion on encryption.  Despite their protestations to the contrary, by saying they agreed the Code made the use of the tools illegal, Facebook were providing legitimacy, succour and comfort to forces who want to see the tools banned forever. 

Simply to believe Facebook could have made a calculation of that kind tells you a lot about how low people’s opinion of them has sunk. In that sense whether it is actually true or not barely matters. And we still do not know when, or indeed even if the decision to stop scanning was discussed internally at the so-called “Safety Advisory Board”. Maybe they should rename it the “Danger Enabling Board”. Just a thought.

In ’t Veld contrasted Facebook’s decision with that of Microsoft, Google, Roblox, LinkedIn and Yubo. These companies decided the legal risk was minimal to non-existent, so they carried on scanning as before.

Complying with the industry standards we like. Disregarding the others

Appearing recently before British Members of Parliament, Facebook told the MPs one of the reasons they wanted to proceed with end-to-end encryption is that it is now an “industry standard”. Really? Well what about the industry standard established by Microsoft, Google, Roblox, LinkedIn and Yubo?

I would dearly like to know who in Facebook made the decision to stop scanning. One has to imagine it was Zuckerberg personally, and if it was then I would say the prosecution can rest its case.  In his Harvard dorm he had a great idea and manifestly Zuckerberg is talented at making decisions which generate tons of cash. But he is plainly not fit to run one of the world’s largest and most important companies as a personal project, empowered as he is by his majority shareholding. 


Canadians, ethics and porn

A great many people in Canada are extremely angry and more than a little embarrassed at the fact that they provide a home and operational base for MindGeek, owners of the world’s largest pornography web sites, in particular Pornhub. That anger rose to a crescendo following the publication of a major article in the New York Times in December. The title of the article was “The Children of Pornhub”.

Self-evidently the good northern folk are not standing idly by gazing into the middle distance, twiddling their thumbs. They are now scrutinising MindGeek from a variety of different angles. And if you need reminding why, just listen to and watch the testimony of two people who appeared earlier this week before the Canadian Parliament’s Standing Committee on Access to Information, Privacy and Ethics. The topic of the hearing? “Protection of Privacy and Reputation on Platforms such as Pornhub”. Roll forward to around 12:28.

The hearing opens with the harrowing evidence of an exceptionally brave young victim, Serena Fleites. Her case was prominently set out in the New York Times piece.

Now 19 years old, when Serena was 14 the person she thought of as her (first ever) boyfriend bullied her into making and sending a sexual video. Then he betrayed her. The video ended up going all around the school, neighbouring schools, and finally on to Pornhub. This was unquestionably an illegal video, containing child sex abuse material. Its publication had catastrophic consequences for Serena. But she did not retreat into the shadows. She fought back. Boy did she fight back.

Disgraceful behaviour

After Serena’s protracted initial battle to get the video removed, it kept getting re-uploaded. Again and again she had to go through the same hoops to have it taken down. Each time, the delays in removing the material were unconscionable but, as we learn later, Pornhub might often remove the video yet leave the tags up, so anyone searching the wider internet for that kind of thing would be brought back to their site. I’ll leave you to think about the “ethics” of that.

Serena reached out to Mike Bowe, a lawyer. He also appears before the Committee, giving detailed, broadly based forensic evidence which he put together, documenting years of deceitful if not illegal conduct by MindGeek.

Bowe’s evidence starts at 12:40. It is utterly astonishing and depressing. Why depressing? Because we also learned later, or were reminded by the company themselves, that they have been in existence since 2007, yet it is only since the article appeared in the New York Times a couple of months ago that they appear, or at any rate say, to be getting serious about dealing with illegal content.

Why anyone would believe anything these guys say is another matter and that applies equally to any so-called “transparency reports” they claim they are going to produce.

If ever there was a case for a strong, independent, legally-based public interest regulator to oversee online companies’ affairs it was made out in the Canadian Parliament this week. Well done to the MPs for keeping at it and well done to the Canadian Centre for Child Protection and others who never dropped the baton.

MindGeek speak for themselves

MindGeek executives appeared before the Committee yesterday. I note in passing they say they employ 1,800 people (1,000 in Canada) and that 50% of their income is derived from ads. They frequently mention how many people, especially how many Canadians, visit their site, as if this somehow gets them off whatever hook the Honourable Members of the Committee might think they are on. Of course it doesn’t. Or maybe MindGeek were threatening the MPs?

“Dudes. We are very popular with your voters. Back off.”

Here is what appears to be a rather full newspaper account of what transpired when MindGeek appeared before the Committee but, please, go back to the earlier link, tune in and listen. In particular listen to the MPs’ interrogation. Mr Erskine-Smith absolutely skewered them by asking (around 13:20) one of the simplest and most obvious questions:

“How many times in 2020, 2019, 18 or 17 did individuals reach out to you to ask for a video to be taken down because it had been uploaded without their consent?”

Neither of the MindGeek executives who appeared before the Committee said they knew the number “off the top of their heads”. They could not or would not confirm that Serena Fleites had ever approached them.

They went to a public hearing of the Canadian Parliament and showed complete contempt for its proceedings.

The MindGeek executives made several references to how many social media platforms faced problems with child sex abuse content being uploaded.  That is true, they do, but none of the ones mentioned are established with the sole purpose of providing explicit sexual content which inevitably takes you to the edges of legality.

All social media companies have an ethical obligation to guard against illegal content, but if you are a porn based social media company it seems to me the obligation on you is that much higher. It is an obligation MindGeek did not meet.

My scribblings

I had submitted a brief note to the Canadian Committee. In it I focused not on the ethics of porn itself but on the ethics of setting up in and continuing as an online porn business. In the great scheme of things not particularly important, I suppose, but I reproduce it below in case any of you find any of its arguments useful in your ongoing battles.

“To many people the very idea of discussing whether or not a porn site can operate within an ethical framework will seem absurd because the porn industry itself is founded on unethical premises. I do not discuss that point. I limit myself to discussing the behaviour of porn companies, from an ethical standpoint.

It is well established that not everything that is legal is also necessarily ethical. In the case of PornHub, and all commercial online porn companies of which I have any knowledge, what we typically see are investors realising there was a gap or an ambiguity in the law and in public policy and then deciding to exploit it.

Many of the investors or their advisers were already in the porn business but all that does is underline the knowing nature of their ethical transgression. This is because,  and here my comments apply to amateur and commercial sites alike, they all knew or ought to have known that if they set up as porn providers in the physical world they would have been caught by and would have had to comply with long established and (relatively) effective, enforceable rules.

Thus,  by choosing to operate over the internet,  they were intentionally or recklessly ignoring and by-passing those limitations and norms. In effect they were smirking and saying “Catch me if you can”.

Cynically latching on to the new libertarian spirit of the internet age they, equally cynically, often donned the cape of free speech and artistic expression when really, all along, at least for the larger commercial players such as Mindgeek who are the main concern of your Committee,  it was just about making money. Alternatively porn publishers might claim, or it was claimed on their behalf, they had a valuable role to play in providing sex education.  It would be difficult to conjure up a more grotesque proposition.

The policy and legal gap or ambiguity which porn companies exploited emerged solely because the speed of technological change had completely outpaced the capacity of public policy makers and law makers to keep up. Some of the more far thinking porn merchants likely calculated that, eventually, public policy and the law would draw level but they would make a lot of hay while the sun continued to shine.

So while, in most jurisdictions,  porn providers may not have behaved illegally, they most certainly behaved unethically.

The reasons which lie behind the previously established  real world rules about access to porn did not vanish, nor were they reduced or materially altered just because the mode of delivery changed. On the contrary, the way the internet massively increased indiscriminate and unlimited access rather added to the ethical burden, a burden porn companies failed to discharge.

In the UK we had an analogous child protection problem in the late 1990s and early 2000s when  children started using online gambling web sites.  The legal age limit for gambling was and is 18. The same as it is to buy porn or go to a public cinema showing porn. Instances of a child being able to place a bet at a racecourse, at a football match or in a bookie’s shop were extremely rare for the simple reason the child could be seen and proof of age demanded. Penalties for failure to comply were severe.

When the internet arrived and gambling companies set up in cyberspace, every one of them acknowledged they were aware that children were using their services, placing bets via debit cards banks issued to account holders aged 12 or above. The gambling companies all said they were “very concerned” about the problem but actually almost all of them did nothing until the law compelled them to introduce age verification.

Once they introduced age verification we never heard of another case of a child simply ticking a box to say they were an adult and proceeding to gamble. The fact that a handful of gambling sites did take some steps to limit children’s access e.g. by disallowing debit cards they knew could be used by children,  rather amplified the ethical shortcomings  of the majority, who did nothing, claiming that asking everyone to tick the box showed they were doing their best.

The law requiring age verification to be introduced on gambling sites was passed in 2005 and became operative on 1st September 2007. Since that moment no company, or indeed any business providing “adult content” or age restricted goods such as  alcohol, tobacco and the like, had an ethical leg to stand on, at least not in any of the many countries where data sources exist which are similar to those in the UK.

Where such data sources do not exist porn companies and others could have invested in creating them as a prior condition of establishing or continuing to do business. Alternatively they could have ceased trading until they had developed an ethically sound system for keeping children away from their sites, and for preventing adults from accidentally landing on their home page. They did not do that. Like the UK’s gambling companies, the world’s porn companies are waiting to be forced to improve their behaviour. I hope Canada succeeds in bringing that about.”

Regrets? I’ve had a few….

Last week (20th January) the UK Parliament’s Home Affairs Select Committee interviewed representatives of Facebook, WhatsApp, Twitter, Google, Snap and TikTok.

The Chair of the Home Affairs Select Committee is Yvette Cooper, an intellectual heavyweight of the first water. You had to feel a modicum of sympathy for the hapless folk the companies fielded. But only a modicum. A mini modicum.

Inevitably, on Inauguration Day in the USA, much of the Committee’s focus was on Trump, Trumpism and the post-truth world that helped create and sustain both. 6th January figured large.

To their credit, none of the company representatives sought to deny or minimise the role social media businesses played in the events leading up to and including 6th January. The air was full of regrets for not acting sooner or differently. Phrases like “we are still learning” and “we know we must do better” peppered the replies to MPs’ questions. All this put me in mind of Professor Sonia Livingstone’s aside to me in correspondence about the importance of

“breaking the cycle of

  1. Putting a new product or service into the  market
  2. Waiting for civil society to spot the problems and families to experience them
  3. Taking belated action.”

I might have added:

4. Then being ready with self-deprecating comments like “we know we must do better” and “we are still learning“.

The disarming humility and contrition are doubtless genuinely meant at the time by the people speaking for their employers, but humility and contrition butter no parsnips. Particularly when similar things keep on keeping on. There is a limit to the price societies can be expected to pay to allow companies the “freedom to innovate”. We are about to find out where that boundary lies. s230 is heading for the exit.

Facebook and end-to-end encryption

Yvette Cooper and others also raised questions about Facebook’s plans to introduce end-to-end encryption (E2E). In particular Cooper wanted to know what impact Facebook themselves  thought this would have on their own ability to detect child sex abuse images currently being exchanged via Messenger and Instagram Direct.

Monika Bickert’s reply was certainly truthful, in a literal sense, but it was also incomplete to the point of being deceptive. Her answer to Cooper’s question was:

“I don’t know but I accept the numbers will go down”

Future hypotheticals

Bickert added that she thought the numbers would probably go down anyway because of other measures the company was taking. In other words the drop in numbers that is coming if things go ahead as planned may partly be down to Facebook simply being more effective in discouraging illegal behaviour which threatened or harmed children. Cooper exposed this as self-exculpating baloney.

It turns out this largely hinges on planned educational initiatives designed to help children avoid abusive individuals and situations in the first place. Not exactly mind-blowing or revolutionary. In fact it is the kind of thing they are already doing, and if all Bickert is saying is that they will do more of it, or do it better, then bring it on. It is welcome, even though a tad oblique as compared with straightforward detection, deletion and reporting, which was the main thrust of Cooper’s questioning. Cooper was not asking about images that might never be created or exchanged, or paedophiles who might be avoided.

46% decline in 21 days

Cooper referred to numbers published some time ago by NCMEC. These suggested that if Facebook went ahead with E2E there could be a 70% drop in images being detected, deleted and reported. That’s globally.

What Cooper evidently did not know, but Bickert must have, was that the day before the Select Committee meeting NCMEC had published new data showing the known, actual effect of Facebook ceasing to detect child sex abuse in the manner it had hitherto.

Because of the fiasco with the European Electronic Communications Code, on 20th December Facebook stopped scanning for child sex abuse material in all EU Member States. Stopping scanning has exactly the same effect as introducing E2E.

On 19th January, NCMEC’s newly published numbers showed that in the 21 days immediately following 20th December there had been a 46% drop in reports from EU countries.

Excluding the UK, in the three weeks prior to 20th December NCMEC received 24,205 reports linked to EU Member States. In the three weeks afterwards the figure dropped to 13,107. We will never know which children were in the images behind the missing 11,000 reports. How many were new images, never seen before, with all that that entails?
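
For anyone who wants to check the sums behind the headline figure, here is the arithmetic, using only the numbers NCMEC published (a quick sketch, in Python for convenience):

    # NCMEC's published figures for EU-linked reports (excluding the UK)
    before = 24205  # three weeks before 20th December
    after = 13107   # three weeks after
    missing = before - after       # 11,098 reports that were never made
    drop = missing / before        # 0.4585...
    print(missing, f"{drop:.0%}")  # prints: 11098 46%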

So when Cooper asked, as she did twice, about the likely effect of introducing end-to-end encryption, Bickert was truthful when she said she couldn’t say, but she might at least have mentioned the numbers NCMEC had just published. Then she could have explained why a 46% drop, or worse, concretely and not hypothetically, is a price worth paying.

Facebook blames their customers

Cooper persistently challenged Bickert as to why they were going ahead with E2E at all when they knew it would mean more children being put in harm’s way and more perpetrators going uncaught and unpunished. Bickert’s answer was, er, “surprising”.

Bickert referred to a survey of British adults who, seemingly, listed privacy-related concerns as their “top three”. I am not sure which survey Bickert had in mind, she didn’t say, but if it was the 2018 Ofcom one she might have read a little further and seen that “the leading area of concern” is the protection of children. But even if that was not the case, whether or not children were “listed” in the top 50 concerns expressed by adults, teens or stamp collectors for that matter, what was Bickert really saying?

“Don’t blame us. We’re only doing this because it’s what the dudes want and our job is to give it to them.”

An industry standard?

Bickert and her colleague from WhatsApp shifted their ground a little, saying “strong encryption is now the industry standard”, as if this were the key justification for going ahead with, or retaining, E2E. Cooper pointed out that Facebook was a major part of the industry, so that amounted to rather transparent, self-serving circular reasoning. Moreover, in other areas Facebook has repeatedly shown it is willing to strike out alone and not just follow the herd. They cannot now shelter behind the actions of others.

The underlying reasons?

First, suggesting something is an “industry standard” is simply a less vulgar, less pointed way of saying “our revenues will likely be badly impacted if we don’t do this”. It’s a variation on the dudes theory expounded earlier. In other words, it is about money.

Second, how did we get to a point where the dudes seemingly feel they need E2E? Isn’t it because of the previous actions and admitted failures of companies like Facebook?

So first they create the problem and then they come up with the wrong answer to it. Chutzpah on stilts.

Facebook’s “pivot to privacy” is alliteratively admirable but not admirable in any other way. It is about Facebook trying to repair its appalling image in the privacy department, an image built on a history of not respecting or taking sufficient care of its users’ privacy. It is acting now in order to carry on generating gigantic quantities of moolah.

Towards a very dark place

We may never know what role encrypted messaging services played in organizing and orchestrating the events of 6th January, but few can doubt that their unchecked growth is taking us towards a very dark place. A place where child abusers, as well as fascist insurrectionists, feel safe.

In and of itself strong encryption is not a bad thing. Indeed it is now essential in many areas. But in the wrong hands, used for the wrong purposes, it can facilitate a great deal of serious damage. We have to find a way to ensure that does not happen. If companies like Facebook do not find a way of doing that, they will have one thrust upon them. The Silicon Valley experiment has run its course. It will soon look different.


Adding insult to irony

If, like me, you were brought up a Catholic or in another Christian denomination, you will likely know 6th January is celebrated by the faithful as the “Feast of the Epiphany”. Well, in a secular sense, 6th January 2021 definitely was an epiphany for us all: a moment of profound revelation.

Witness five dead bodies in or around the US Congress and a televised attempt to frustrate the outcome of an election in order to preserve in office a liar and a cheat who openly incited violence while actively seeking to undermine the Constitution.

Of course the events of that day did not come out of nowhere. They reflect a deeper malaise and deeper divisions. Rooted in disillusionment, in a sense that the American Dream is not delivering for them, a great many angry people found confirmation of their biases in the constant stream of falsehoods and distortions fed to them by Trump and his fellow conspirators. Never have the consequences of allowing a “post-truth” society to emerge and grow been more clearly in evidence.

Can there be any real doubt about the role social media companies played in creating, sustaining and amplifying the societal fissures that brought us to 6th January? Let’s not get into the practical, organizing role social media also played in orchestrating the murderous assault. That’s for another day. Will we ever know how much was done through strongly encrypted channels? Probably not.

It doesn’t stop there

The aftermath of 6th January 2021 then saw private entities (companies) silencing the President of the United States and effectively shutting down a speech app (Parler), very substantially if not altogether. In so doing Silicon Valley added insult to irony.

They gave Trumpism a megaphone and, in the name of free speech, timorously stood back, letting it blossom as the dollars rolled in. It was only when Trump went almost foaming-at-the-mouth insane, and the scenes of 6th January were televised, that the inescapable and repeatable logic of the laissez-faire s.230 nightmare was fully, unavoidably exposed.

Then some of the same companies decided to shut Trump up. It’s hard to think of this either as a step too far or as a step in the right direction because in a better and more rational world the need for it to be taken at all would never have arisen. What started as a benign experiment with technology brought the USA, and therefore the world, to the edge of disaster.

The amount of sympathy I have for Trump or Parler can be measured only in large minus quantities. That is not the point. The point is the egregious presumption of private bodies deciding to make public policy in areas of fundamental importance to our whole way of life. De haut en bas they float above us mere mortals, telling us when we meet their exacting standards and punishing us when we don’t. Sadly they are constantly deflected by the desire to earn money. This clouds their vision from time to time.

Fundamentally this is a failure of governance

A great many idealistic people, disgusted with the shortcomings of mainstream politics in their own country, globally, or both, saw the internet as a way of establishing a whole new set of possibilities.

Then the money moved in. The money saw different opportunities and did something really smart. Cynical, but smart. Not only did they get s.230 adopted in the USA and copied elsewhere, they also managed to implant in people’s minds the idea that the absence of regulation was the same as “freedom”. Any attempt to regulate the internet (meaning them or their businesses) was portrayed as an actual or potential attack on “freedom”. Politicians and judges stepped back, unsure of themselves. In truth the absence of regulation was just another way of creating room to make more cash.

After the money came the totalitarians. They learned a lot from what they observed elsewhere. In particular they learned from surveillance capitalism. Often the very same companies and engineers that helped Palo Alto were now helping Pyongyang.

Meanwhile we have a UN body called the Internet Governance Forum which, since 2006, has pretended to have some influence on matters of the kind discussed here. I predict it is not long for this world. That has been coming for a while; 6th January sealed its fate. It’s a shame in many ways, because the Forum has great strengths.

Mozilla’s plans to encrypt DNS queries in Firefox 

What has the main argument I am making in this blog got to do with child protection? Everything. If you doubt that, just read a consultation document published by Mozilla. In particular look at this sentence:

“Numerous ISPs today provide opt-in filtering control services, and (we intend) to respect those controls where users have opted into them.” (emphasis added).

To put that slightly differently, Mozilla has decided not to “respect” those controls where users have not “opted into them”.
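
For readers unfamiliar with the mechanics, the point is easy to demonstrate. With DNS-over-HTTPS the browser sends its DNS questions inside an ordinary encrypted web request to a resolver of its own choosing, sailing straight past whatever filtering sits on the ISP’s resolver. Here is a minimal sketch of such a lookup, using Cloudflare’s public DoH JSON API purely by way of illustration (my sketch, not Mozilla’s code; Firefox’s own implementation differs in detail):

    import json
    import urllib.request

    # Illustrative only: resolve a hostname via DNS-over-HTTPS (DoH).
    # The question travels inside an encrypted HTTPS request, so a filter
    # operating on the ISP's own DNS resolver never gets to see it.
    RESOLVER = "https://cloudflare-dns.com/dns-query"

    def doh_lookup(hostname):
        req = urllib.request.Request(
            RESOLVER + "?name=" + hostname + "&type=A",
            headers={"Accept": "application/dns-json"},
        )
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp)
        # Type 1 records in the Answer section are IPv4 addresses.
        return [r["data"] for r in answer.get("Answer", []) if r["type"] == 1]

    print(doh_lookup("example.com"))

Once the lookup is wrapped in HTTPS like this, the decision about whether a family’s filtering preferences are honoured moves from the ISP to the browser maker. That is precisely the shift Mozilla’s consultation is about.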

A self-appointed techno-priesthood has decreed that one approach to child protection is acceptable and another is not. Can I resist pointing out that Mozilla’s global HQ is in a place called “Mountain View”? No I cannot. I do so as a service to those wondering where the latter-day Olympus is to be found.

Inertia is at the root of many evils in the internet space, particularly among the less literate and less knowledgeable, people who are often also among the most vulnerable, e.g. children. Whatever an individual ISP may have lawfully decided to do, Mozilla seems willing to expose children to the risk of harm unless and until their parents get their act together and choose to opt in to protective filters. Wrong answer. By a mile.

Mozilla’s consultation document was written before 6th January 2021. What it truly shows is that Zeus needs to go back to the drawing board.


Absurdities and atrocities

Voltaire famously said: “Those who can make you believe absurdities can make you commit atrocities.” History is littered with examples, and last week in the USA we saw the same thing played out again.

Lies and travelling trousers

In the age of the internet never was it more true that a lie can be halfway around the world before the truth has got its trousers on. The more fanciful or ridiculous (absurd) the lie, the faster it is likely to spread through social media platforms. Eyeballs mean money, and money is the name of their game.

We urgently need to get over the initial, marvellous hippy notion that in the internet we created something that enables everyone to be a publisher, a journalist, a doughty warrior concerned only to make the world a better place. That is true, we have.

But it is now abundantly clear we have also created something which threatens that very idea. Last week was the proof, played out on TV.

How far are we willing to go to defend the world that emerged from and through the post-War settlement? The fate of Weimar, and what followed, should not be forgotten.

Oh the irony!

I am not the first person to note or comment on the irony. Governments have threatened to regulate social media platforms but now we see social media platforms doing something that looks very like regulating Governments.

Of course in a narrow way you could argue that depriving Trump of his Twitter account, or banning him from Facebook until he ceases to be President, is not directly regulating a Government as such, but it is so close you would be hard-pressed to insert a Rizla paper between the two.

Too little too late

Obviously, I approve of what Twitter and Facebook did, but that isn’t the point. One might ask why they didn’t do it a lot sooner. But the larger questions are how it ever came to this in the first place and whether it could happen again.

Trump and his cronies incited the mob in an assault on democracy, but they could only get to a point where that was possible because social media platforms and elements of the mainstream media helped build him up. The USA is now on national alert because of fears that similar acts will be repeated in State Capitols on 20th January, Inauguration Day.

The intimacy, immediacy and scale of the internet made a “post-truth” society possible. We have had lying politicians and lying campaigners before, but in modern times we have never had lying politicians or campaigners with the financial backing and tools such as the internet, with its handmaiden, profiling, to enable dangerous demagogues to reach and manipulate the lumpen, the alienated and dispossessed, the angry and the frightened.

Preserving liberal values is about preserving decency

So, yes, of course we should have serious discussions about what free speech means in the age of the internet. But if liberal democracy and liberal values are threatened, where does it say we must stand by and let them die because we are paralyzed by anxiety and by laws drawn up for entirely different times? We need a legal framework which comprehends and embraces life in the early 21st century. And we need it sooner, not later.
