All of the UK’s mobile phone networks run a system of filtering to keep web-based adult content away from children who access the internet via a mobile phone handset. The policy was first introduced voluntarily back in January 2004. It is still in place as a voluntary measure. The filter is applied by default to the great majority of mobile phone accounts. I call it an “adult bar”. I am not wedded to that description. I’d happily call it something else if it helped make clear that the filtering the networks do is not just about pornography. It never has been.
Earlier this week the Open Rights Group and the LSE Media Policy Group (LSE ORG) published a report with a pointedly dramatic title: “Mobile censorship: What’s happening and what we can do about it.” The report discussed, or rather it traduced, the adult bar.
That said, much of the smaller print is fine. The report acknowledges that filtering software can play a legitimate part in helping to protect children online. The authors’ main gripes seem to be about the quality of the software, i.e. how well it performs its stated tasks, and about when and how it gets turned on. All perfectly reasonable points.
The LSE ORG document also contains several genuinely new and original findings. It shines a light on and makes a number of trenchant observations about the lack of transparency and consistency in relation to the way the adult bar has been implemented on some networks. The networks should address these as soon as they possibly can.
Thus, while I strongly endorse some of the LSE ORG report’s comments and detailed recommendations, I regret to say its principal allegation, that the mobile phone networks are involved in censorship, is demonstrably false. This alone fatally undermines its credibility but, misery upon misery, on top of that it contains other manifest and not insignificant weaknesses and errors.
In the remainder of this blog I’m going to focus mainly on those weaknesses and errors. As a work of scholarship the LSE ORG report misses the mark by quite a distance. Even as a campaigning pamphlet, which is what I think it really is, it fails because its flaws are so numerous and so easy to expose.
I will not go through every point with which I disagree or that I believe to be inaccurate or incomplete. This response is way too long as it is. What follows are my comments only in respect of what I think are the LSE ORG report’s more egregious shortcomings.
I am trying to set the record straight.
Ours not to reason why?
One very obvious lacuna is the lack of any mention of the reasons why the mobile networks introduced the adult bar in the first place. And why did the fixed line ISPs not follow suit? There is a history. There is a context. But LSE ORG do not relay it.
The mobile phone networks’ reasoning was important at the time and it remains relevant today. But you are left to guess at what it might have been. Clearly a desire to protect children was part of it, however that was not the whole story, not by a long chalk.
Bad language
Perhaps even more remarkable is the absence of any definition of what is meant by the word “censorship”. This is no small failing in a document that uses such a highly charged, loaded, attention-grabbing word in its masthead.
Using your favourite search engine take a look at most of the coverage the report received in the immediate aftermath of its publication. The idea that the UK’s mobile phone companies practise censorship has been successfully broadcast around the world. One has to assume this was done wilfully and deliberately. What a shame it isn’t true.
I double-checked with the offline Shorter Oxford and the online edition of the Oxford English Dictionary. When the word “censor” is used as a verb it means:
Examine (a book, film, etc.) officially and suppress unacceptable parts of it.
An example of what the lexicographers had in mind is given:
In the national interest the letters she received were censored.
When used as a noun, i.e. to describe “a censor”, the definition runs as follows:
An official who examines books, films, news, etc. that are about to be published and suppresses any parts that are considered obscene, politically unacceptable, or a threat to security
Again a specimen sentence is provided to illustrate the intended import:
The report was approved by the military censors.
My guess is these definitions accord with what most people think censorship is about. The idea carries with it a great deal of baggage, none of it good. It conjures up sinister images of tinpot dictatorships, zealots, soldiers, clerics, repression. If you use the “c” word you are playing into that space and those emotions.
The underlying reasons for engaging in censorship typically are purposeful and deliberate e.g. they are connected to the censor’s religious or political beliefs, their desire to preserve a group’s grip on power or authority, their view of what is in the national interest or what is acceptable in terms of taste and decency.
Through cutting, redaction, altering or refusing to release, censorship is about keeping images, words, films, radio broadcasts, books, speeches, articles, whatever, from reaching the eyes and ears of the general population over which the censor exercises control. The effects of the censor’s actions are usually meant to be permanent or at any rate to last for a long time, or cover a particular period the censor thinks is relevant.
What the mobile phone companies do is nothing like this. It is not even close.
A special meaning?
In the context of the report is the word “censorship” meant to have a special meaning, one that is outside the normal frame of reference or common understanding? If it is you wouldn’t know it. The matter is not discussed. Elision is the order of the day.
Right at the beginning of the report, in the Introduction on page 5, we are offered a definition of the word “block” which, in the very next sentence, effortlessly transforms into “filter”. This pattern repeats. “Censor” merges with “block”, melds with “filter”.
Thanks to Word’s ability to count such things I can tell you that “censorship” or “censor” appears ten times in the report, including the title and contents page. However, in the body of the text “filter” or “filtering” and “block” or “blocking” appear 116 and 139 times respectively. Filtering is blocking is censorship is the insistent beat of LSE ORG’s drum.
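As it happens, you don’t need Word to reproduce that sort of tally. Here is a minimal sketch of how one might do it; the filename and the variant lists are my assumptions, and exact counts will depend on how you treat the title page, hyphenation and so on:

```python
import re
from collections import Counter

# Minimal sketch: count each term and its variants in a plain-text copy
# of the report. "report.txt" is a placeholder filename, not a real file.
with open("report.txt", encoding="utf-8") as f:
    counts = Counter(re.findall(r"[a-z]+", f.read().lower()))

terms = {
    "censor*": ["censor", "censors", "censored", "censoring", "censorship"],
    "filter*": ["filter", "filters", "filtered", "filtering"],
    "block*":  ["block", "blocks", "blocked", "blocking"],
}
for label, variants in terms.items():
    print(label, sum(counts[v] for v in variants))
```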
No attempt is made to convey any nuanced shades of difference. In the Daily Star this would not be surprising. Yet “filter” and “block” can have quite distinct and particular meanings. These meanings are not coterminous with each other, much less with the common usages of either “censorship” or “censor”. Terms such as these should not be mixed up in the way they are.
More bad language
In the fifth paragraph of the Introduction the authors say
Mobile internet filtering blocks too much content, and applies to too many people, meaning it *effectively* adds up to a system of censorship across UK networks. (emphasis added by me)
So is it censorship or is it something else which, though different, amounts to the same thing? Without a definition it’s hard to know what sort of qualification “effectively” is meant to convey. We are on shifting sands.
On page 8, last sentence, the report puts “censor” between inverted commas, indicating perhaps that we should not read it too literally. But since we have no idea what literal meaning the authors want us to have in our heads to begin with it is difficult to know what aspect of that meaning we should disregard. Are the authors suggesting that what they are talking about isn’t really censorship at all (which indeed it isn’t)? If that is the case, ought they not to have been more careful with their choice of title and much of the rest?
What the mobile phone companies do
LSE ORG tell us on page 5 what they mean by “block”, i.e. blocking is what happens when a user is prevented from connecting to a given site. Er, that’s it. This appears to be the platform on which a great deal of their edifice is built.
But what LSE ORG do not say at this point is that this inaccessibility is a contingent state. We are not told that it can be reversed, removed altogether almost in the twinkling of an eye. The adult bar is not an iron curtain. It’s more a silken veil on a well-lubricated drawstring. We have to wait until page 9 before we are told about this important qualifying fact. Even then it is briefly, baldly and crisply stated and swiftly left. No elaboration. No discussion. Nothing.
Why do I say that what the mobile phone companies do is not censorship as it is normally understood? Precisely because anyone can get access to anything and everything that is behind the adult bar. They merely have to express a wish to do so and prove they are over 18. This is common practice in many other areas of British life and has been for countless years. No one is saying that any web site or any content should be suppressed or redacted or that anything is immoral, obscene or a threat to national security. All that is being said is that kids should not have ready access to it.
At worst it could be a minor irritation
At worst, or at best depending on where you are coming from, all you can say is that the mobile phone networks have put a few bumps in the road, some chicanes or traffic control measures in place. These are no more than a minor inconvenience or an irritation. Once you have negotiated them you are out on the highway with no restrictions of any kind.
Once in your life
And by the way you only have to do this negotiation once in your life. Not every time you use your phone to connect to the internet. Typically it will take about 30 seconds, perhaps a tad more if the network is running slowly that day. This is because your adult/not adult status is linked to the SIM card, not the handset. Thus unless you change your telephone number or your mobile phone network you will never have to go through this process again. Ever. It might have been helpful if this had been explained by LSE ORG. It wasn’t.
The process is reversible. If you had the adult bar lifted once and you want to have it reimposed you can. You could then get it lifted again later although, and I now need to amend my earlier statement, it will cost you another 30 seconds to repeat and complete the age verification process.
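To put the mechanics in toy form, here is a sketch, entirely my own construction and not any network’s actual system, of the state-keeping this amounts to. The flag lives against the phone number, so handsets can come and go:

```python
# Toy model (my own construction, not any network's real system): the
# adult-bar flag is held against the SIM/phone number, not the handset,
# so it survives upgrades and changes only when the account holder asks.
class MobileAccount:
    def __init__(self, phone_number: str):
        self.phone_number = phone_number
        self.adult_bar = True  # applied by default on most networks

    def lift_bar(self, age_verified: bool) -> None:
        if age_verified:       # the one-off, roughly 30-second check
            self.adult_bar = False

    def reimpose_bar(self) -> None:
        self.adult_bar = True  # reversible on request, at any time

account = MobileAccount("+447700900123")  # placeholder number
account.lift_bar(age_verified=True)       # done once, persists...
account.reimpose_bar()                    # ...unless you ask for it back
```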
Quite why someone might want to have the adult bar reimposed and lifted again later is not obvious. I suppose they could be in the habit of lending their phone to someone else and that someone else is a child. But while changing handsets or passing handsets on to a third party, e.g. following an upgrade, might be quite common, I wonder how frequently people lend out or temporarily abandon their telephone number? Not very often I imagine. Again this point might, with profit, have been mentioned by LSE ORG. It wasn’t.
I know many adults who are perfectly well aware that the bar is in place on their phone. They know they could easily get the bar lifted but they choose to leave it in place. They like the idea that, without them having to do more, porn, violence and the like cannot get through because of an inadvertent slip-up when typing something into Google.
How does this compare?
In many other parts of their lives, in order to gain access to particular types of content, places or services, adults accept much more onerous encumbrances than those imposed by the mobile networks in relation to this area of policy. A few grumpy old curmudgeons aside, these hurdles are generally accepted as part of a series of socially negotiated policies designed to shield children from exposure to potentially harmful experiences. I’m thinking about the rules surrounding entrance to licensed premises, to cinemas showing 18-rated movies, TV watersheds, access to sex shops, gaining a driving licence. Opening a bank account can involve an enormous amount of time and paperwork yet we all submit to it at least once.
To portray what the mobile companies are doing as being in any way sinister or illiberal is tendentious and completely unwarranted. It smacks of propaganda which is ideologically rooted in another era, when the internet was something very different from what it is now.
Selective quotes
The report selectively quotes the Byron Review and the Bailey Review. It misses out, for example, the following extract taken from paragraph 46 on page 38 of Bailey:
….. it is not acceptable to expect parents to be solely responsible for what their children see online, and industry must take greater responsibility for controlling access to adult material online in the same way as they do when providing this sort of content through other channels, such as cinema, television, DVDs or adult magazines. We believe that there is no logical reason for not bringing internet-enabled devices into line with other platforms in order to protect children from inappropriate material.
Or this from paragraph 4.60 on page 94 of the 2008 Byron Review:
For these reasons I do not recommend that the UK pursue a policy of blocking non-illegal material at a network level at present. However, this may need to be reviewed if the other measures recommended in this report fail to have an impact on the number and frequency of children coming across harmful or inappropriate content online.
Note the date. Four years ago.
The Ofcom Review? The BlackBerry Affair? The IWF?
Bizarrely the LSE ORG report makes no reference to the 2008 Ofcom Review of the mobile networks’ operation of the adult bar. The Ofcom review runs completely counter to the impression given elsewhere in the LSE ORG document that the whole policy is being run by a bunch of out-of-control cowboys who do what they like with no accountability to anyone or anything.
Equally the lessons learned from the “BlackBerry Affair” might have been worth a sentence or two, and ditto the system for blocking illegal content linked to the IWF’s list. Nobody can ask for or get the IWF block lifted because the list contains only illegal content. Neither is this censorship: you cannot “censor” items that should not be published in the first place.
None of the three topics featured in this section’s heading were mentioned in the LSE ORG report. Why?
The policy does not apply only to pay-as-you-go phones
Another niggle, but not a small one: on page 5, second paragraph of the Introduction, the report suggests the adult bar is applied only to pay-as-you-go phones. It isn’t. It never has been. The policy applies almost as comprehensively to monthly accounts as well. Of all the mobile networks in the UK only “3” does not now apply the adult bar by default to monthly account holders. In the case of 3, if you have a monthly account and you want the adult bar, you have to ask for it to be applied. This was also the case with Orange until they merged with T-Mobile to form Everything Everywhere. Orange has already fallen, or soon will fall, into line with the majority in relation to monthly accounts.
Evidence from Down Under? I don’t think so
Now I turn to some of the “evidence” which is adduced in LSE ORG’s report. It purports to address the question of mistakes.
The Queensland newspaper the “Courier-Mail” is not a source which I imagine features regularly in the footnotes of academic publications. Yet it makes an appearance here in footnote 9 on page 11 linked to the following paragraph:
When the wrong content or site is blocked by a filtering system, it is called “over-blocking”. In Australia, for example, it was reported that a “Queensland dentist, a tuck shop convenor and kennel operator have been included in a secret ‘blacklist’ of sites to be banned by Australia’s communications watchdog.”
Before looking at this (non) evidence more closely I want to refer briefly to page 15, third paragraph, where LSE ORG have a little dig at how “current” the mobile networks’ policy framework and documentation are. The authors are hinting that in the high-tech world it is important to be up to date, and suggesting that the networks probably aren’t.
Now back to Oz and the Queensland Courier-Mail. You wouldn’t know it from the main body of the LSE ORG report but anyone clicking on the link provided in the footnote would see that the story they refer to actually appeared in March 2009. What they still wouldn’t see or probably know, because the newspaper does not reveal this either, is that the incident which gave rise to the coverage happened in early 2008.
A lot of bits and bytes have flowed under countless virtual bridges since then but let that pass. I still cite Cicero on occasions. Ancient history has its place but it would be good to know, upfront, that that is what it is if it isn’t obvious from the context, which in this case it isn’t. I appreciate that only a super-nerd like myself actually clicks on footnotes and reads them. I apologise for being a swot, but there you go.
LSE ORG correctly describes what was reported in the Queensland Courier-Mail. What LSE ORG neglected to do, however, was establish and report the whole truth. That is not to be found on the pages of the Queensland Courier-Mail.
The Australian communications watchdog referred to in the article, the one that constructed the list of web sites being complained of, is the Australian Communications and Media Authority (ACMA). I contacted them. I’ll happily forward a copy of their reply to anyone who asks for it.
The upshot is this: the three URLs referred to had indeed been blacklisted. At the relevant time the website owners were not actively using the domain names. This left them ripe for hijacking. And that’s what happened. They did contain child abuse images at the time they were investigated. In May 2008, when they were re-checked by ACMA, the child pornography had disappeared so the URLs came off the blocking list. The sites were then free of any and all restrictions. I’d say that was an important bit of the context or background to the Courier-Mail “revelation”.
In the UK the IWF administers a similar system. It blacklists URLs found to contain child abuse images. URLs on that blacklist are blocked. The IWF checks every 12 hours to see if the status of a blocked URL which contained illegal content has changed. If the illegal content has gone the URL comes off the list. I don’t know if ACMA now does something similar in Oz but various agencies around the world do. It is now regarded as best practice. If it was a problem back in early 2008 it needn’t be now. Couldn’t this dimension have been mentioned?
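Purely by way of illustration, and emphatically not a description of the IWF’s or ACMA’s actual systems, the logic of such a recheck cycle is simple to sketch. The hash comparison below merely stands in for whatever content-matching the real bodies perform:

```python
import hashlib
import urllib.request

# Illustrative sketch only, not the IWF's real implementation: re-fetch
# each blocked URL and keep it listed only while the offending content
# (identified here by a hash, a stand-in for real matching) is present.
def page_hash(url: str):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return hashlib.sha256(resp.read()).hexdigest()
    except OSError:
        return None  # unreachable: leave listed until the next cycle

def recheck(blocklist: dict) -> dict:
    """blocklist maps each URL to a hash of the content that got it listed."""
    return {url: bad for url, bad in blocklist.items()
            if page_hash(url) in (bad, None)}
```

Run something like that every 12 hours and stale entries fall off the list as soon as the hijacked content disappears, which is exactly what happened, eventually, in the Australian case.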
“Over 60” blocked sites
The LSE ORG report presents other evidence to support its argument.
They created a special web site, www.blocked.org.uk. The landing page is that of the “Open Rights Group Campaign” where we are told:
Network operators are default censoring the mobile Internet, in case children access adult material. But often the wrong websites are blocked. Help us report when that happens.
As we shall see, that’s an interesting use of the word “often”.
The site seeks information from the public about URLs that they think have been wrongly made inaccessible. During the period January – March 2012 they received “over 60” such pieces of information. OK, I know they could have received 59 on one day and only one more during the whole of the rest of the period, but I’m going to live on the edge a bit here and venture that they received an average of 20 per month. I trust that relying on that number will not lead me wildly astray.
Whilst it is accepted that even a single instance of mistaken, unjustified or otherwise erroneous blocking is regrettable and ought to be investigated, and generally I am not a numbers-game person, at 20 per month, or even 60 per month, I am still struggling to get my head around the scale of the outrage being suggested.
I have had a little difficulty hunting down the following numbers so I will explain exactly what I did. Experian, one of the UK’s largest credit reference agencies, runs a web site called “Hitwise”. It monitors and reports on web traffic.
According to Hitwise analyst James Murray, Facebook receives “over 1.3 billion visits per month” from UK-based internet users. And in December 2011, visits to Facebook amounted to 15% of “all UK internet page views”. My Babbage difference engine tells me that means altogether in the month of December 2011 there were roughly 8.7 billion internet page views that originated with requests made from within the UK. The number of attempts to reach URLs was very likely a lot higher than that but let’s stick with what we know for now.
According to another excellent web site run by a company called Tecmark, at the end of July 2011 about 12.6% of all web traffic could be traced to mobile phones.
Tecmark’s site is showing exceptionally rapid growth in the use of mobiles to access the internet so 12.6% almost certainly understates the real percentage for the relevant period. By January-March 2012 it would have been higher but I’m happy to work with the lower percentage even though it will hurt rather than help my case.
Assuming 12.6% is not very far off the mark, it suggests that, on average around that time, over 1 billion visits to web sites were made from the UK per month using a mobile phone browser. Again what I have not been able to discover, perhaps to set Blocked.org.uk’s 20 per month in its proper context, is how many attempts on mobiles fail anyway for any of several possible reasons, probably the most common being caused by mistyping the address, timeouts arising from network congestion or server overload. Whatever that number of failures turns out to be, it will dwarf 20. Twenty seems not so much like small beer as sub-atomic. When dealing with those kinds of volumes 20, or “over 60”, barely computes.
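To make the back-of-the-envelope sums explicit, mixing “visits” and “page views” as loosely as I have above and treating every input as the rough figure it is:

```python
# Rough arithmetic only, using the figures quoted above.
facebook_visits = 1.3e9   # UK visits to Facebook per month (Hitwise)
facebook_share = 0.15     # Facebook's share of all UK page views
mobile_share = 0.126      # share of UK web traffic from mobiles (Tecmark)
reported_blocks = 20      # wrongful blocks reported to Blocked.org.uk monthly

total_views = facebook_visits / facebook_share  # ~8.7 billion a month
mobile_views = total_views * mobile_share       # ~1.1 billion a month
rate = reported_blocks / mobile_views           # ~2 per 100 million

print(f"{total_views:.1e}  {mobile_views:.1e}  {rate:.1e}")
```

On those numbers the reported error rate works out at roughly two complaints per hundred million mobile page views.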
The one does not explain or justify the other
I hear the howls. One doesn’t explain or excuse the other. The fact that phones fail to connect for technical reasons is no justification for ignoring or allowing another series of failures to be layered on top, especially if they are policy-based, avoidable, or evil. Moreover LSE ORG did not claim to be doing a quantitative study of the rates of failure to connect which were due to inappropriate decisions about classification. They were merely gathering examples of the same.
I buy all that but nonetheless LSE ORG are still presenting us with a rather slender reed. Can it bear the weight of the enormous ask they are making of the whole of the British mobile phone industry? LSE ORG are suggesting they abandon something they have been doing for over eight years, something they chose to do voluntarily and for good reason. The industry spent tens of millions of pounds to put the measures in place. It might cost the same or more to pull them out. On this point too, cost, the report is silent.
And since 2004, when all this began, has the quality of democratic, literary, artistic, or political life in Britain in any detectable way been worsened by the policy? By other things, maybe. By this policy? No. But some children, perhaps many children, have been spared exposure to things that no one of their age should have to or be able to witness. “Over 60” mistakes does not substantiate the charge of censorship.
Pyongyang is put in charge of internet policy
One of the more ludicrous suggestions in the report is that by using filtering here in the UK we somehow provide succour and comfort to tyrants overseas. If I really thought that Kim Jong-un or any of his illustrious predecessors gave a flying banana about what the UK’s mobile phone networks do to protect children…It is too ridiculous for words.
We cannot hand over control of policy on the internet to the most barbaric, reactionary, anti-democratic governments on Earth. Neither can we give them a veto over how we best protect our children. Either you agree with this line of policy or you don’t. It stands or falls on its own merits, or lack of them. Enough already with intellectually bankrupt arguments about non-existent slippery slopes.
But herein lies a tension within the report. On page 21 the following passage appears:
Furthermore, if online censorship is widespread and accepted with little opposition as a way to implement a broad range of public policy issues, it becomes far harder to argue for Internet freedom elsewhere. Other governments and companies around the world use the same technologies to restrict access to online material and offer the same arguments about taste, decency and citizens’ safety.
I’m tempted to add that the task is not made any easier by people who misrepresent what is being done to protect children by falsely calling something censorship when it is not. The report at this point is getting pretty close to saying all filtering is bad or pointless or both, a view that I know is widely held in some quarters.
Networks’ inconsistencies and incoherence
As mentioned earlier, what the LSE ORG report revealed about the inconsistencies, indeed the incoherence, of the way some of the mobile phone networks appear to operate parts of their filtering policy, particularly around the practice of age verification and appeals against wrongful inclusion on an adult list, is genuinely new and important. Strike that: it is very important, and I will be acting on the basis of what they have revealed.
If LSE ORG had limited themselves to saying only that they thought the mobile companies should take more care to explain the basis of their policy (to everyone), take more care to explain how to get the adult bar lifted, run age verification policies that are applied properly and consistently, and get better at swiftly adjudicating on any claims of unjustified or mistaken classification, correcting them where necessary, then I would be alongside them, agreeing with them.
We want good and better filtering. We are against bad filtering. We should be seeking to build on the systems we already have in place, not simply throwing our hands up and abandoning the task altogether, saying all filtering is rubbish and always will be. To be fair, LSE ORG appear to be agreeing with me when they call for more granular filtering e.g. on page 22.
Filtering should never be used, either overtly or covertly, to deny anyone their human rights, including children. No rational person could want children not to be able to access sites which might provide them with guidance about sexual health or indeed any information about what’s going on in the world. Everyone I work with encourages children to assert themselves, to know or find out about and claim their rights. The web, the internet, is a perfect vehicle for doing that. Freedom of association and expression are cardinal values which again are greatly enhanced by what the internet makes possible, for young and old alike.
Only a fool would ever say filtering alone is enough to keep a child safe when they go online. Teaching children how to use the internet, helping parents to understand how young people use the technology so they, in turn, can engage constructively with their children, these are the preferred routes and they are the best routes for most children.
But education and awareness will not work for everyone. Technical measures can help, often help a lot, particularly with younger children. Perhaps, in the end therefore, the difference between me and the authors of the LSE ORG report is in relation to what I consider to be a tolerable level of inefficiency in filtering when set beside the advantages it offers.
Techies often take the view that unless something works with 100% certainty all of the time it is “broken”. I don’t. I think we have to be more pragmatic. While we struggle on in the quest for perfection, for 100% certainty all of the time, I want to know how we can make things better now.
With LSE ORG I share the aim of wanting zero instances of incorrect classification. I also want zero instances of adults having to live with the adult bar for a moment longer than is necessary, consistent with the underlying aim of the policy. But you can get all that without abandoning what the networks are currently doing or turning it completely on its head.
An even bigger target
It is pretty clear why LSE ORG have jumped into this debate at this particular moment. We get to it around page 29 of the report where Jeremy Hunt’s speech to the Royal Television Society in September 2011 is quoted. Here he promised, inter alia, to bring forward new measures to protect children online. The authors are fearful that, in the wake of the report of Claire Perry MP, the current discussions on “Active Choice” will move towards default filtering on fixed-line services, perhaps installed at network level, thereby copying what the mobile networks do.
I can see there is little point publishing a paper designed to influence policy twelve months after all the decisions about the policy have been taken. But if you are going to step into a fight of that kind you need to make sure you’ve done your homework and your thinking a little more thoroughly than LSE ORG appear to have managed on this occasion.
Gains and losses
The authors do not at any point discuss how you balance out the acknowledged, if minor, inconvenience or irritation which the mobile phone companies’ measures might cause an adult for approximately 30 seconds once in their lifetime, against the potential benefits claimed for the safety of children.
Furthermore the related debate about the link between risk and harm does not get an airing. My own view is, if you are a parent, that discussion can be reduced to nought if your child has just been harmed, especially if he or she has been badly harmed. It is very hard to judge what constitutes an acceptable level of risk but it is very easy to judge what is an acceptable level of harm: nil. Telling a parent after the event that the risk was quite low so, in effect, their traumatised child was just unlucky I suspect will butter no parsnips.
Feminists’ objection to the way pornography objectifies women and contributes to the over-sexualisation of society, tending to reduce or demean the status of women as well as coarsening society, is certainly partly linked to concerns about the impact on children, particularly boys, but that is not the sole or principal driver.
Consequently, even if you were not wholly convinced that the advantages of default filtering are real or significant, or would necessarily help with any of the points just made, isn’t there nonetheless enough surrounding uncertainty to persuade you that there could be something in the claims being made? And since, other things being equal, all that is at stake is about 30 seconds of your whole life, where’s the beef? You probably waste 30 seconds every day just wondering which version of sudoku to play after lunch.
As a matter of fact I am convinced about the potentially serious risk of harm that exposure to certain types of pornography and violent images can do to children, particularly younger children, and I can see the case for restricting children’s access to sites promoting alcohol, tobacco and similar. That is why I favour default filtering, as one measure among many others. But I am also convinced there is unlikely ever to be a level of settled, universally accepted or mathematical certainty about the impact these things have on young people’s psycho-social development, at least not such as would satisfy every academic or politician on the planet.
In this area of policy everything hinges on the context and the single most important variable in that context is the child himself or herself. How can you legislate for that? It’s simple. You can’t. But that does not support a position where you end up doing nothing. Default filtering provides a minimal safety net, no more than that.
As the internet becomes a more mature and ubiquitous presence in our lives it is vital that we continue to track its impact on society, which means its impact on people in all their glorious and inglorious varieties. The more we know the more we can do to refine policy and make it better. At least that’s the hope although there are some questions, and this may be one of them, which are so overlaid with cultural, political, historic, legal, religious, technical and economic considerations that I sometimes wonder if my belief in the value of research is too naively optimistic.
Perhaps all we can ever do is go with our best estimates and judgements based on the evidence we have available to us at the time and what we think is right or what we think will do the least amount of harm to “non-believers” consistent with achieving a given social policy goal? But who decides how this balance is to be struck?
Do we need a new institution?
In a recent blog Professor Sonia Livingstone, also of the LSE, suggested that maybe what we need is a trusted independent body of some kind to step up, e.g. an Ofcom, a BBFC or similar: a body that is used to dealing with complexity and weighing competing claims against each other, reaching a considered view with which (almost) everyone is prepared to live. Leaving it to the rough house of headline writers, financially motivated calculations in company boardrooms, religious leaders or free speech activists, yes even children’s rights campaigners, is no way to run a brewery, much less decide a key facet of public policy.
I buy that. Even though I completely reject the idea that what the mobile companies are doing could in any way properly be called censorship I do accept that people can have reasonable apprehensions about the possibility of computer based systems getting things wrong, whether by accident or otherwise.
My anxieties about the potential political manipulation of such a set-up by a UK Government of the day are close to zero because that is not part of our tradition, but even so I am not stupid enough to think everyone sees things that way. “Trust me, I’m a politician” is not a winning line. The Government should have no involvement in the operational side of anything like this although, in order to do the work, Parliament may need to add to the powers of an existing body or create a new one.
Livingstone’s idea therefore has a great deal of merit. Such an organization would need to draw on people from all of the stakeholder communities, of whom the free speech and civil liberties lobby is undoubtedly a major one.
All families and all children are not the same
Earlier I said “education and awareness will not work for everyone”. This brings me to another of my major criticisms of the LSE ORG report. It is blithely written as if, in one key respect at least, all families were the same, all children were the same.
In the world that LSE ORG inhabit, all parents are or could be equally adept at installing or triggering filters. Looked at through their rose-tinted glasses, every family gathers around a cosy hearth after dinner of an evening to have genial, measured discussions, led by a confident techno-aware parent, about how to behave, if not generally then certainly in relation to the internet. The complexity of the technology and the power of the Leviathans of Silicon Valley hold no terrors for them. They have seen through the hegemonic, libertarian, zeitgeisty razzmatazz that the rich people on the West Coast have so assiduously promoted. Likewise their children are capable of understanding all parental pronouncements, which they appreciate have been determined with only their best interests at heart, underpinned by the wisdom of Solomon.
I am tempted to say “Give me a break”, but that might seem a bit peremptory or rude.
If we know with absolute certainty, and here I think we do, that “out there” there is an enormous range of personal circumstances and levels of self-confidence about dealing with technology, and that these are more or less randomly distributed across all families and all children, what is our responsibility? Is it to ensure that Guardian readers in NW3 (of whom I am one) are not discommoded to any degree?
The authors of the LSE ORG report say they would be happier if the mobile networks took off all the controls and put more effort into informing parents how to turn them on or use them. Why not do it the other way around? Leave the controls on and put more effort into ensuring people know how to get the controls removed? Internet devices should be as safe as they can be by default. People should have to jump through hoops to render products less safe, not safer. This is more or less how it works in every other part of the consumer electronics market.
Now that the internet is part of that market why should it be any different? People “buy the internet” in supermarkets these days, along with the baked beans. Two of our biggest ISPs are essentially TV companies that bundle the internet with the Disney Channel.
The precautionary principle
To conclude, if there were any hard evidence at all that the mobile phone companies’ policy might actually do some harm to children, that would be a serious point against the current practice. No such evidence exists or has ever been produced.
Other objections to the policy of default filtering are often couched in terms of people’s worries about how the policy impacts on adults’ rights and sensibilities, or about how it could be deliberately manipulated for unstated, dark reasons. Glib, and in the Australian case inaccurate, or unsubstantiated references are sometimes made about such policies’ potential to interfere with economic activity or innovation.
Of course these latter points are not unimportant considerations but, even if there were any evidence to support them, which there isn’t, civilized society almost always gives the benefit of the doubt to its young.
In other words, unless adults’ rights or sensibilities were to an appreciable degree being injured or compromised, or unless it could be shown that economic activity or innovation were being materially harmed or reduced, I’m afraid in my book they cannot trump a child’s right to protection. A 30-second wait is not a big ask.