Once more unto the breach

The dust has settled on the European elections. The President of the Commission has been appointed: congratulations to Germany’s Ursula von der Leyen. However, we don’t know who the individual Commissioners will be, much less how the portfolios will be separated or combined before being distributed. The process of appointing the Commissioners starts soon, but nobody officially takes up their position until 1st November. Meanwhile, Euro-business is getting going again, with expert working groups busying themselves tying up loose ends from the last Commission or preparing the ground for expected initiatives.

The e-Privacy Regulation swings back into view 

A new draft text has been issued for our old friend the “Proposal for a Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)”.

It’s like the famed curate’s egg. There are good bits and not so good bits.

Page 4, paragraph 6, is a good bit. It says:

“With regard to the child imagery issue, the Presidency has introduced in art. 29(3) a provision [3(b) iv] that allows providers to continue processing electronic communications data for the sole purpose of detecting, deleting and reporting material constituting child pornography (sic), if they started such processing before the entry into force of the ePrivacy Regulation and the technology used fulfils a number of conditions listed in the provision. The end date for this provision is to be discussed…”.

I wish the EU would find a way to drop that horrible phrase, “child pornography”. Nobody who works in the area and has any sensitivity to or understanding of the issues uses it. We are talking about child sex abuse material (csam).

Thus, when the Regulation comes into force, companies already engaged in attempting to detect, report and delete csam using hash-based technology, e.g. PhotoDNA, will be allowed to continue doing so. Previous versions proposed a blanket ban. The latest text therefore represents progress. Bravo.
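For readers unfamiliar with how this works, here is a minimal sketch of the hash-matching idea. It is illustrative only: real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas this toy version uses an exact cryptographic hash, and the database entry shown is obviously made up.

```python
import hashlib

# Hypothetical database of hashes of known, confirmed csam images, as
# distributed to participating companies. (The entry is a placeholder;
# real schemes use perceptual hashes such as PhotoDNA, not SHA-256.)
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def scan_upload(image_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a known hash and should
    therefore be deleted and reported to the appropriate authority."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```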

Unintended and unacceptable consequences

But I doubt I am the only person feeling puzzled.

If relevant companies are not engaged in trying to “detect, delete and report” material constituting csam before the Regulation comes into force, shame on them, but why prevent them from choosing to do so after the Regulation has become operative?

We shouldn’t be talking about this as if we were looking for a compromise to protect existing investments. We should be talking about a matter of principle, and the principle is simple: if technical tools are available which can help a business detect, delete and report csam, then the business should be free to use them. No ifs, no buts. Some might argue they ought to be required to use them.

The proposal as it stands opens up the possibility of unevenness and divergence in children’s safety as between otherwise identical online services, the only difference being when an online business saw the light. What might be the unintended consequences of that? Criminals shifting from old-school platforms to new ones?

And they are already contemplating its demise

Elsewhere, Article 29 of the draft Regulation (see page 87) addresses the timing of when the Regulation comes into force and its application. But it also discusses when this part of it will end. The date is as yet unspecified.

I note, though, that in para 2 on the same page there is a crossed-out provision suggesting that at one point somebody was thinking it might endure for only 24 months. That is alarming.

Moreover, it does not square with Article 28, which speaks of evaluating the measure’s effectiveness every three years. That is a great idea and very welcome but, naturally, one would expect the systems to have been working for at least three years, otherwise a proper evaluation will not be possible, at least not within that timeframe.

Effectiveness matters

Obviously it is right that the continuation, amendment or even termination of the new law, maybe of any law, should be based on the outcome of an evaluation of the effectiveness of its operation. By the same token, fixing a forward date for ending it, without knowing how well it has worked, does not add up.

A further amendment is needed

There is more.

Article 29(3)(b)(iv) forbids anyone storing “….. electronic communications data, except in the cases where material constituting child pornography (sic) has been detected by virtue of a hash.”

I assume this is intended to provide a legal basis for retaining any data necessary to allow the appropriate authorities to investigate a crime and bring a case to court, and for companies to assist in that activity. If so, that is good, but it needs widening.

Technologies are available now which, using Artificial Intelligence, can identify and flag items which are very likely to be csam but are waiting in a queue to be confirmed as csam (or rejected because they are not). Google’s system is called the “Content Safety API”. Facebook is known to have its own tool for doing something similar, although it is not yet available outside the company.

Whether or not an image is found, reported and confirmed as csam depends on a great many variables. But that is only the first step in a process. After being confirmed as csam, the image has to be hashed and, crucially, entered into a usable database. It is this database of hashes that allows others to detect, delete and report “material constituting child pornography (sic)”. Until now, adding to, managing and distributing hashes has been the product of a substantial international effort.

But this second stage is also dependent on a number of factors, one of the most important of which is the availability of human resources.

That is because those responsible for administering the major hash databases insist that, before the hash of an image can be introduced to the database, the image itself has to be seen by three sets of human eyes, i.e. two sets of eyes in addition to those provided by the original confirming organization. This is to ensure consistency and quality control.
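Expressed as code, the admission rule might look something like the sketch below. The names and data structures are my own invention, purely to make the “three sets of eyes” logic concrete; no actual hash-sharing scheme is being described.

```python
import hashlib
from dataclasses import dataclass, field

REQUIRED_REVIEWS = 3  # the "three sets of eyes" rule

@dataclass
class Candidate:
    """An image flagged as likely csam but not yet confirmed."""
    image: bytes
    confirmed_by: set = field(default_factory=set)  # confirming organizations

# The shared database of hashes that other services query.
hash_database = set()

def record_review(candidate: Candidate, reviewer_org: str, is_csam: bool) -> None:
    """Register one organization's independent judgement. The hash only
    enters the shared database once three distinct organizations,
    including the original confirming one, have seen the image itself."""
    if not is_csam:
        candidate.confirmed_by.clear()  # rejected: it leaves the queue
        return
    candidate.confirmed_by.add(reviewer_org)
    if len(candidate.confirmed_by) >= REQUIRED_REVIEWS:
        hash_database.add(hashlib.sha256(candidate.image).hexdigest())
```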

Enormous backlogs

At the moment there are enormous backlogs of images in the queue. For example, the Canadian Project Arachnid has so far captured 10.9 million images that were initially thought to contain csam or other sexualised material harmful to individual children.

Via a collaborative endeavour by a network which includes several hotlines, some of them in the EU, so far only 850,000 images have been through the “three sets of eyes” process. Of these, nearly 300,000 meet the INTERPOL baseline definition of csam and have therefore been transferred into a usable database of hashes. Around 10 million images are still waiting.

Another unintended consequence?

Is it the Commission’s intention to make the operation of such systems illegal? Does the Commission intend to make it unlawful for companies or other organizations to try to detect and store or otherwise process images which are very likely, eventually, to go into a database of csam hashes but which are not, at a given moment, confirmed as csam? That is how I fear the current proposal will be interpreted.

Surely the Commission does not want to stop organizations looking for material which is likely to be csam and then storing it pending an evaluation which would determine whether or not it is, in fact, csam?

Wouldn’t this mean only companies or organizations outside the reach of EU law could do this kind of work? Companies or organizations inside the EU would be made to wait until someone else had done the work of finding qualifying images, converting them to hashes and putting them in a database. Surely this is another unintended, and in this case also ridiculous, consequence?

We need some new words to clear this up. Urgently.


Something to think about

All of the early thinking about the internet – thinking which determined how it was set up at a fairly fundamental level – was framed by a belief that the future users would overwhelmingly be highly educated, therefore literate and numerate, responsible adults. However, we now know that across the world one in three of all human internet users is a child. This rises to about one in two in many lower-income countries, several of which have large populations. In the UK and other richer nations the proportion hovers around one in five. Whichever way you look at it, everywhere children are a substantial and persistent body of internet users.

The 5Rights Foundation is doing a great job of getting us all to think about the data protection implications of children being such a huge constituency. In doing so it points out that it is only following what the GDPR, in Recital 38, declared to be self-evident: “Children merit specific protection with regard to their personal data”.

In its latest publication, 5Rights tells us these matters have become urgent. Why? Because the UK’s Information Commissioner has issued its proposed “Age Appropriate Design Code”, as required by the Data Protection Act 2018.

There has been a lot of industry pushback in respect of parts of it, but 5Rights suggests this is to some degree a product of confusing issues about age-appropriate content with issues of age-related data protection standards. Is this a distinction which is easy to see at a conceptual level but hard to implement at a practical one?

Thinking caps on please. 5Rights has proposed an interesting way out.


Predatory paedophile behaviour – a supplementary

Within one hour of my last blog going up – addressing the emergence of “DDLG” as a threat to children’s well-being – someone sent me a link to the case of Dominic Nielen-Groen, a 39-year-old divorced father of two. He travelled 167 miles from Wolverhampton to Devon to meet up with a 15-year-old girl.

They got to know each other initially courtesy of an Instagram “community” (?) called “Daddy Dom Little Girl” (DDLG), where Nielen-Groen presented himself as “Papa Bear”.

Their real-life rendezvous in Devon was in a play park. After 40 minutes of hugging and talking, Nielen-Groen tried to persuade the child to put a dummy in her mouth. When police later went to his home they found a collar with the words “Little Girl” written on it, and another dummy. He was convicted of a grooming offence.

They also found “a 50 Shades style contract setting out rules and punishments. The punishments included spanking, self spanking, slaps, cold baths, being tied up, as well as time out and no television or internet. Some rules and penalties were overtly sexual, including bans on masturbation and taking part in sex acts at the behest of ‘Daddy’.”

Nielen-Groen will be sentenced next month. The judge has told him he can expect a substantial prison term.

What also emerged at the trial is that the now 16-year-old first joined the group because she was lonely and wanted to find new friends. She ended up with over 3,000 followers and, having incorporated PayPal into her profile, she also made money selling pictures of her “chest and bum”. This is on Instagram. Not some obscure social media outlet established in a lawless land far, far away.

Along with links to groups with the same or similar DDLG theme, I have also been sent a series of images of very young people posing while wearing medical appliances. The sort one would normally expect to see only in the Accident and Emergency Department of a large hospital, or in the room of a patient being treated following a bad fall or a car accident.

I rest my case.

Sometimes I wonder if the world is ready for the internet.


The new currency of predatory paedophiles

Not quite every day, but almost, I thank my lucky stars I don’t have to look at csam or any other form of illegal content. The people who do so in the name of protecting children or others are among the unsung heroes of my world. However, sometimes I get sent stuff which, while not obviously illegal, nevertheless makes your stomach churn.

One such category goes by the acronym DDLG: Dominant Daddy Little Girl. There are variants of DDLG, such as CG/L (CareGiver/Little), but they share a common rubric as a subset of BDSM (Bondage, Dominance, Sadomasochism). Children are being drawn or enticed into it.

Pathetic sexist stereotypes

The pathetic, sexist stereotypes within DDLG I will leave on one side for now. One web site provides a pictogram to explain the basic idea. Someone is dominant – the Daddy figure – and someone is submissive, typically the “little girl”, although it could be DDLB if a young boy is involved.

The images I have seen almost invariably involve a young woman who might be hovering around 18 years old, but some look way younger, so there is no “adult agency” here.

The female is dressed up as a child, or as a baby. Frequently there will be a “pacifier” (“dummy” to Brits) in her mouth, secured by a bondage gag. Or the woman/child might be wearing a nappy (“diaper”). Whatever is going on, the woman/child is either receiving the seemingly affectionate attention of an older man or is inviting an older man to embrace her and act out “affectionate dominance” (did I just write that?).

Sexualising childhood

There is no question in my mind that DDLG is an extreme form of sexualising childhood. It is likely to draw children towards producing sexual images of themselves. As tokens of “friendship”, presents or money are being offered in return for submission in all things, including sex. Who knows how many children might already be caught up in it, now too scared or embarrassed to come forward to find a way out. A perfect trap for predators.

There is a growing number of reports to hotlines linking the use of DDLG images directly with grooming activity by paedophiles. If you go to Google Trends you will see DDLG has been moving sharply upwards globally since late 2014, and in the UK likewise since 2016.

DDLG is the new currency of predatory paedophiles and more children are being ensnared. There is even a page containing a so-called “kink test”, inviting you to take the “BDSM test” which, we are told, “will be great for beginners who are looking to enrich their erotic lives and introduce new kinks to it”.

Children’s profiles on social media sites are being “decorated” with extracts from this so-called “test”. Memories of Cambridge Analytica: I wonder what is happening to the data that is being collected?

DDLG has a page on Facebook and there appear to be several groups devoted to the subject. I just did a search and found, for example, “DDLG Forever – welcome all Doms and Subs” and another which proclaims the group is for 18+ before proceeding to be very explicit about its sexual nature. I am not going to repeat its language here.

Facebook, let me remind you, is open to people who are 13 years old, and we know children well below that age are on it. There is no age verification taking place. Instagram is a sister company. In the EU their (unverified) lower age limit is 16, but outside the EU it remains 13 in most jurisdictions (why the difference?). Yesterday I did a search for DDLG on Instagram and got 3.4 million hits. I found DDLG on YouTube and Twitter, both open to 13-year-olds. This is just not good enough.

The internet is no longer an adults only playground

Let me repeat a point I have made many times: I have no interest in what consenting adults get up to with their bodies. It’s none of my business. But this type of content, and access to groups that propagate it, should be confined to adults. It should not be hosted or visible on public platforms that allow 13-year-olds, or indeed any children, to be members. It should only be accessible behind an age verification gateway.

The images are not illegal so they stay up.

Because the images are not illegal, the hotlines receiving reports, the police and the courts are in no position to require them to be removed. But companies are not powerless. Knowing the way these images are being used by predatory paedophiles, and thinking about the corrosive messages they project, why would any social media platform choose to do nothing? Tumblr didn’t. They cleared it all out. Bravo.

Defining what is harmful content that should not be on unrestricted public view certainly has the potential to throw up difficult or edge cases, but I humbly submit DDLG is not one of them. The internet is an open space where one in three of all human users in the world is a child. We have to think of everything we do online against that backdrop.

Companies should either follow Tumblr’s lead and ban this stuff altogether or they should insist that it goes behind an age gateway. If they cannot or will not do the latter, they should do the former. And they should do it now.


Facebook, Google and data about porn

Facebook and Google have very strict rules about porn. Essentially it is banned from both platforms. Here is what Google says:

Sexually Explicit Material

“Do not distribute sexually explicit or pornographic material. Do not drive traffic to commercial pornography sites”. (emphasis added)

Here is Facebook’s policy:

Adult nudity and sexual activity

“We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content. Additionally, we default to removing sexual imagery to prevent the sharing of non-consensual or underage content.” (ditto)

And yet

Leaving aside Facebook’s absurd, transparently phoney use of “our community”, these policies are reasonably clear. Yet, as research published last week shows, they do not seem to have stopped either company collecting data on a significant scale from porn sites. The data are collected via trackers they themselves put there.

I cannot imagine many, possibly any, users of a pornographic site knowingly consenting to Facebook or Google picking up information about their porn habits. On the contrary, if they thought there was any possibility those data could be linked to other aspects of their online lives, particularly their online lives with Facebook and Google, they would vigorously object. If these companies know this, why do they do it? On what legal or ethical basis? I cannot imagine it is happening within the EU (see below) and I will ask both companies to confirm that is the case, but should it be happening in any jurisdiction? No.

As you will see, Google is by a country mile the largest collector of data of this kind though, to be fair, it is probably the largest collector of data across every category of web site.

I’m sure I won’t be alone in wondering, given their stated policies, what Google and Facebook actually do with the data they collect from such expressly forbidden places.

Have psychoanalytics reached the point where knowing a person’s sexual interests, or the details of the frequency and timing of their visits to particular types of sexual sites, allows one to infer that they are likely to respond to advertisements for scuba diving holidays or cookery books? Answers on a postcard please, to the usual address.

New Scientist reveals all!

Maybe I could have chosen a better sub-heading? Being my own editor is not easy.

I digress.

An article in this week’s New Scientist caught my eye with this rather striking headline: “Most online pornography sites leak user data” (the headline in the online article is different – it says “Thousands of pornography sites leak data to Google and Facebook”). I am not sure “leak” is the right word if trackers are deliberately put in place. I mean, Facebook and Google are not hacking.

Being aware that New Scientist has not always been a reliable witness on the question of porn on the internet, I went to the original source, a research article by Jennifer Henrichsen of the University of Pennsylvania, Timothy Libert of Carnegie Mellon and Elena Maris of Microsoft Research. The research was carried out in March 2018 using a computer based in the USA. That was pre-GDPR, but since the test machine was in the USA the GDPR would not have applied anyway.

Here is the opening abstract:

“This paper explores tracking and privacy risks on pornography websites. Our analysis of 22,484 pornography websites indicated that 93% leak user data to a third party (ditto). Tracking on these sites is highly concentrated by a handful of major companies, which we identify. We successfully extracted privacy policies for 3,856 sites, 17% of the total. The policies were written such that one might need a two-year college education to understand them.

Our content analysis of the sample’s domains indicated 44.97% of them expose or suggest a specific gender/sexual identity or interest likely to be linked to the user (ditto). We identify three core implications of the quantitative results: 1) the unique/elevated risks of porn data leakage versus other types of data, 2) the particular risks/impact for vulnerable populations, and 3) the complications of providing consent for porn site users and the need for affirmative consent in these online sexual interactions.”
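To make the mechanics concrete: detecting this kind of “leakage” amounts to loading a page and noting which third-party domains its embedded resources call home to. The researchers’ actual toolchain was far more sophisticated, executing pages in an instrumented browser; the sketch below is my own crude, static-HTML approximation of the idea.

```python
import re
import urllib.request
from urllib.parse import urlparse

def third_party_domains(page_url: str) -> set:
    """Fetch a page and return the third-party hosts referenced by its
    src/href attributes. This misses dynamically injected trackers,
    which a real crawl in an instrumented browser would catch."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    first_party = urlparse(page_url).hostname
    hosts = set()
    for match in re.finditer(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(match.group(1)).hostname
        if host and host != first_party:
            hosts.add(host)
    return hosts

# A result containing hosts like "www.google-analytics.com" or
# "connect.facebook.net" would indicate exactly the kind of
# third-party data flow the study measured.
```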

Not so incognito 

Brace yourself for the authors’ introductory paragraph:

“One evening, ‘Jack’ decides to view porn on his laptop. He enables ‘incognito’ mode in his browser, assuming his actions are now private. He pulls up a site and scrolls past a small link to a privacy policy. Assuming a site with a privacy policy will protect his personal information, Jack clicks on a video. What Jack does not know is that incognito mode only ensures his browsing history is not stored on his computer.  The sites he visits, as well as any third-party trackers, may observe and record his online actions. These third-parties may even infer Jack’s sexual interests from the URLs of the sites he accesses. They might also use what they have decided about these interests for marketing or building a consumer profile. They may even sell the data. Jack has no idea these third-party data transfers are occurring as he browses videos.”

Sexual privacy

“Sexual privacy sits at the apex of privacy values because of its importance to sexual agency, intimacy, and equality. We are free only insofar as we can manage the boundaries around our bodies and intimate activities… It therefore deserves recognition and protection, in the same way that health privacy, financial privacy, communications privacy, children’s privacy, educational privacy, and intellectual privacy do.”

That’s a quote cited in the main article. There’s a lot in it that makes sense, but does “sexual privacy” truly sit at the apex of privacy concerns? Maybe not, but it definitely should rank equal with the others mentioned. In fact in the EU it probably already does: unless someone has given “explicit consent”, under Article 9 of the GDPR collecting or otherwise processing information about someone’s “sex life or sexual orientation” is prohibited. The researchers appear to approve of the GDPR’s provisions but note (a) they do not apply worldwide and (b) it is still too early to say what impact they will have.

Where does this leave age verification?

When the UK children’s organizations began their campaign to advance children’s well-being by restricting under-18s’ access to porn sites, one of the arguments most frequently trotted out by the anti-age verification (av) lobby was that av would inevitably lead to “Ashley Madison” scenarios: people with minority or very particular sexual appetites would be rendered especially vulnerable.

These suggestions were based on the idea that porn companies themselves, or hackers, could and would make unauthorised linkages between data rendered to an av supplier and data collected by porn publishers. And if the porn publisher and the av supplier appeared to have any sort of business or other connection with each other then, well, what more needed saying? A whole profile of your sexual preferences could be built, with potentially terrible consequences, even if Ashley Madison never reappeared.

The fact that making such linkages is illegal in the EU, and probably in many other places, was glossed over or ignored. As was the fact that with some of the available av solutions – perhaps the ones that will come to dominate the av market – such linkages will be technically impossible even if anyone tried.
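It is worth spelling out why linkage can be technically impossible under at least one plausible av design. Suppose the av provider verifies age out of band and then issues a digitally signed token saying nothing more than “over 18”: no name, no account number, no record of which site will receive it. The site checks the signature against the provider’s public key and learns only that some adult is present. The sketch below is my own illustration of that pattern, not a description of any vendor’s actual product; it uses the third-party Python “cryptography” package.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- At the av provider, after a successful age check ---
provider_key = Ed25519PrivateKey.generate()
token = b"over18:" + os.urandom(16)   # random nonce; carries no user identity
signature = provider_key.sign(token)

# --- At the porn site, which holds only the provider's PUBLIC key ---
public_key = provider_key.public_key()
try:
    public_key.verify(signature, token)       # raises if forged or tampered with
    age_verified = token.startswith(b"over18:")
except InvalidSignature:
    age_verified = False

# Production schemes would add expiry times and blind signatures so that
# not even the nonce could be used to correlate issuance with use.
```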

Where were those same voices before we started trying to defend children by campaigning to get av introduced? Where was the searching critique of the status quo? Everything was fine with porn sites until we hove into view? Porn sites as they exist today speak of liberty and liberalism? We are the forces of reaction? I don’t think so. Even if nothing else changed, how exactly would av make things worse than they are now and have been for very many years?

If you value your privacy stay away from porn sites

The great majority of porn sites describe themselves as being “free”. They aren’t. You just pay in a different way: with your data, not cash upfront. As the research shows, 93% of sites are collecting and passing on information about your porn consumption. I am surprised 7% of sites seemingly aren’t, but either way the porn-consuming public will be shocked at what the research shows.

If you value not only your “sexual privacy” but privacy of any kind, porn sites are probably the last places you should go. They are selling you, if not down the river, then certainly to entities paddling in its watery and muddied margins.

Approached correctly, av not only offers to protect children, it could also open a pathway to a greater degree of user privacy than has ever existed for people who visit porn sites. That’s never been one of my major objectives in life, but then it’s funny how things can turn out.

What is to be done?

In descending order of threat to the existing, data-driven business model of porn sites: perhaps they could be required to run large, unmissable banner headlines on their landing pages, with reminders every five minutes, telling viewers, if it is the case, that on this “free” site information is being collected about what they are looking at, and making clear that it may be used to build or add to an advertiser’s profile of them. It could be argued this should happen on every web site that is linked to sensitive data, and I’d be OK with that.

Perhaps porn companies could be required to provide a prominently displayed one-click tool as an option to prevent any personally identifiable information being transferred to or collected by anyone. Either of these could destroy or radically reshape the current predominant business model. I sense there is a certain inevitability about this, and the smart purveyors of porn will already be working out what to do next to stay alive.


More detail on that FTC decision on Facebook

When the UK Government published its White Paper on Online Harms, one of the provisions which caused hearts to start beating faster, either from anxiety or joy depending on where you stood, was the possibility of directors of internet companies being made criminally or otherwise liable for the actions of their companies.

I guess this is only an extension of the legal maxim “ignorantia juris non excusat”. It is not hard to guess what those Latin words mean: “ignorance of the law is no excuse”. Here Her Majesty’s Government was simply updating the concept: “ignorance of what is being done online in your name is no excuse”.

Is there a Bolshevik cell buried inside DCMS (the lead Ministry for the White Paper within HMG)? Did they somehow manage to worm their way into the team responsible for writing the White Paper with the express intention of undermining one of the capitalist world’s greatest success stories of recent times? If so, the cell also has a counterpart in the FTC. Just look at some of the words the FTC used in its now-confirmed US$5 billion fine of Facebook:

“To prevent Facebook from deceiving (emphasis added) its users about privacy in the future, the FTC’s new 20-year settlement order overhauls the way the company makes privacy decisions by boosting the transparency of decision making (ditto) and holding Facebook accountable via overlapping channels of compliance.

The order creates greater accountability at the board of directors level (ditto). It establishes an independent privacy committee of Facebook’s board of directors, removing unfettered control by Facebook’s CEO Mark Zuckerberg (ditto) over decisions affecting user privacy. Members of the privacy committee must be independent and will be appointed by an independent nominating committee. Members can only be fired by a supermajority of the Facebook board of directors.

….. Facebook will be required to designate compliance officers who will be responsible for Facebook’s privacy program. These compliance officers will be subject to the approval of the new board privacy committee and can be removed only by that committee—not by Facebook’s CEO or Facebook employees. Facebook CEO Mark Zuckerberg (ditto) and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties (ditto)”.

Wow. I guess that’s pretty clear. Mr Zuckerberg and pals have had their knuckles rapped and their wings clipped, big time. They are also on notice that they could face prison, or at any rate a criminal record, if they do not up their game in the truth department.

I’d say whoever wrote and approved those words has a pretty low opinion of the company’s relationship with veracity, or the care it takes with it.

Yet there will still be people in the global, jet-setting fantasy world of internet governance and the institutions it has spawned who say self-regulation wrapped around multistakeholderism is the best possible way to proceed.

There are none so blind as those whose wages depend on not seeing.

 


Alarming noises from Google

My clever radio alarm clock normally wakes me up by blasting out the BBC’s “Today” programme. This is the UK’s prime, agenda-setting current affairs programme. Pretty much the whole of our political class tunes in. And me.

Four days ago, in a still befuddled state, I was certain I heard a reporter say that, rather than comply with some aspects of the internet regulation foreshadowed in the Online Harms White Paper, Google were putting it about that they might withdraw from the UK, or at any rate that YouTube could cease operations here.

I had heard something like it said before, a while ago, but on that occasion I dismissed it as airy bluster from someone who was momentarily unhinged. Now I know there’s more than one person at YouTube who has lost touch with reality.

While Mark Zuckerberg and Nick Clegg appear to be demanding “more help” policing harmful content, Ben McOwen Wilson, YouTube’s UK boss, appears to be concerned he might be about to be offered too much.

YouTube’s press officers or government relations people probably won’t let Wilson out on his own again. I wouldn’t if I were them.

He did an interview with the BBC and his replies to the journalist gave rise to those alarming comments which dragged me from the arms of Morpheus.

You can watch the whole interview here if you wish. It’s not very long but I have plucked out a few bits.

Referring to the Online Harms White Paper, the interviewer asked:

“What for you, in a regulatory sense, is the worst case scenario that the UK Government could come up with?”

Wilson’s reply? First he abandoned his big, global capitalist corporate hat and adopted the mantle of “the British public” or the “listeners and viewers”. A mistake. Schoolboy error, but let’s press on.

He is anxious about

“…decisions being taken in a darkened room by unnamed individuals around who gets a right to speak and make their thoughts and views available to the public. I think that is what we should all be afraid of. We might end up in those places and at that point we would have to take a decision on whether it would be appropriate to continue to operate.” (emphasis added)

There was no roll of drums or shrieks in the background as the “darkened room and unnamed individuals” were conjured up. Maybe that can be inserted in a future edit.

And just in case you wondered what all this might mean, and where Wilson thinks we could be heading in the “worst case scenario” he fears “could” be contrived by a British Government, he goes on to say:

“We’re not available in China. We’re not available in Sudan.” (emphasis not added here because it is utterly superfluous).

Seemingly these are examples of “markets” (note the word) where YouTube is not available because they’ve been unable to agree rules about “censorship”.

Where to begin deconstructing this?

Today Boris Johnson became Leader of the Conservative Party, which means tomorrow he will become Prime Minister. Fighting off my deeply depressed state of mind, I must acknowledge that not even he is going to turn the UK into another China or Sudan in terms of free speech, and the comparison, or the suggestion that such a thing is even on the furthest of horizons, will be seen by many as deeply insulting and arrogant beyond words.

In the larger interview, reported elsewhere on the BBC, referring to the outcome of the discussions now taking place on the Online Harms White Paper, Wilson digs a deeper hole when he says:

“There are regimes out there who will mirror – in their own ways – the position that they view the UK has taken.”  (emphasis added). He calls that a “risk”.

You hear this kind of nonsense a lot. But that’s all it is: nonsense. The idea that Britain only has permission to defend its children or other classes of citizens in the way it thinks fit when it can be confident North Korea can’t or won’t twist our words and use them in their own ways to justify the oppression of the people of North Korea is just facile.

 
