With steam coming out of my ears

Today was a terrible day for children in the UK and the Government is 100% to blame.

However, because of the drama and uproar over Brexit what the Government has done may struggle to get adequate attention in the media tomorrow and in the next few days. Some say that is precisely why the Government chose now to make the announcement.

Following a Manifesto pledge and publication of a Government Bill, Parliament said it wanted to inaugurate a system to compel commercial online pornography companies to introduce age verification so as to keep children off their sites.

The porn companies didn’t like it, tried to stop it, but the Government’s view prevailed. It became law in the Digital Economy Act 2017.

Since then a new Regulator has been putting everything in place to ensure the policy works smoothly. They spent millions gearing up and they are ready to go now. Today.

The porn industry likewise spent millions getting ready. They are ready to go now. Today.

New age verification companies and older ones spent millions getting ready. They are ready to go now. Today.

What did Secretary of State Nicky Morgan do today? She called a halt and kicked the whole thing into the long grass.

Morgan says the policy needs wrapping up with the response to the Online Harms White Paper (OHWP). This means nothing will happen for two, more likely three, or even four years, conceivably more.

In other words Nicky Morgan has condemned Britain’s children to being exposed to horrific scenes of sexual violence for a further two, three, maybe four or more years.

Morgan could have taken a different course by laying the necessary orders before Parliament so they would become operative 40 days from now. She didn’t.

Once the policy was working, if adjustments needed to be made they could have been made as part of the roll-out of the OHWP but, no.

Weeks into her new job Morgan decides to ignore and override years of work done by other people with a huge amount of knowledge and expertise. You have to ask: who got at her? And why?

This is an absolute disgrace. She must be persuaded to change her mind. The children’s organizations are livid and I imagine a lot of other people will be.

The press is already full of suggestions the Government did this because it no longer thinks the policy will work. Either someone in Government is briefing against the official policy – not unheard of – or it is the usual mischief makers indulging themselves. What a pity the Government gave them the opportunity.

Posted in Age verification, Regulation, Uncategorized

Is the Internet Governance Forum to be reborn?

At the moment there is a “High Level” review going on within the United Nations. It was commissioned by the Secretary-General of the UN, who appointed Melinda Gates and Jack Ma as co-chairs. With this type of backing it ought to have great potential. The dynamic duo have produced “The Age of Digital Interdependence”, otherwise known as the “Report of the UN Secretary-General’s High-level Panel on Digital Co-operation” (the Report).

Inevitably the Report covers a lot of territory but in one key section it looks at the future of the Internet Governance Forum (IGF).

The origins of the IGF lie with the earlier World Summit on the Information Society (WSIS) process, the UN’s first major foray into internet policy. The first meeting of the IGF was in Athens in 2006. It has met annually since. I wasn’t in Athens but I have not missed an IGF meeting since. Some of the thinking behind the IGF initiative contained noble ideas that were very much of their time. Yesterday.

Comments are being sought on the Report (see link above). Below is an edited version of the ones I submitted on behalf of the European NGO Alliance for Child Safety Online.  These comments focus principally on the internet governance aspects.

Introduction

From the perspective of children’s usage, the internet we have today is barely recognisable when compared with the internet as it existed at the time of WSIS and the beginnings of the IGF. The upside of the growth and the changes in the internet which have taken place since then is readily apparent, but so too is the downside, which mutes, dilutes and deflects from what should otherwise have been a glittering success story.

The problems and difficulties faced by children in the context of the modern internet can be set out under three broad headings:

  1. Those which pose a direct threat to their well-being or unfairly exploit them.
  2. Those which deny them their legal rights to privacy, to be heard and to participate in processes which result in decisions on matters affecting them.
  3. Those which deny them good quality access to the internet and to its vital companion, media literacy; too many children still lack both.

The Report

While the Report is very definitely welcome we are afraid its overall tone and much of its content are rooted in a historic approach which has demonstrably failed children in a number of fundamental respects.

The responsibility of companies

In 2019 the overall impression most people are left with is that while a comparatively small number of companies and their shareholders have profited enormously by building large and successful businesses off the back of the internet explosion, these actors have not devoted anything like the same amount of attention or ingenuity to the problems which are now manifest, including those that are a by-product, an unintended consequence, of their success.

Does the present state of affairs exist because of inherent, even insurmountable technical difficulties? What part do the values, and hence the priorities, of the owners of the businesses play in determining such matters? Is it all about money? Put simply, are the economic incentives so poorly aligned that businesses will not move away from their current ways of working without external prompting, e.g. via binding rules and regulation?

The responsibility of Governments

A similar comment might be made in respect of the Governments and public institutions in jurisdictions which have the largest concentrations of the successful tech companies within their borders. They have benefitted from the tax take on profits and the wider boost to their economies in terms of the jobs created, but otherwise they have been bystanders as online perils to children developed and multiplied. Have they just yielded to an obvious conflict of interest?

The responsibility of both

In the end it has to be said that while companies, and the internet eco-system of decision-making bodies which they dominate, must bear a substantial responsibility for where we are today, Governments, inter-governmental agencies and other public institutions do not escape criticism either: they failed to find an alternative course of action which would have obliged or led to different and better outcomes.

Multistakeholderism

Multistakeholderism is referred to many times in the Report, but not sufficiently critically. There was a time when multistakeholderism, linked to a belief in and support for the superiority of self-regulation as a way of tackling any emerging difficulties with the new technology, was the only option available. Few politicians, civil servants and police officers and only a small number of civil society organizations and policy makers had any kind of deep understanding of how these new exciting cyber businesses operated. And they, by which I really mean “we”, were dazzled by their apparent promise. Or should that be “blinded by the light”? (with apologies to Manfred Mann and Bruce Springsteen).

The cool disrupters who didn’t wear suits

At the beginning of the mass consumer internet, layered on top of the challenges public bodies and others faced in understanding it, the companies at the forefront of the internet revolution somehow managed to identify with a counter-cultural, insurgent liberal spirit. They promoted themselves as wholly different types of ventures, principally driven by social goals rather than more traditional commercial ones. They wanted to make life better, overturning old-fashioned, clunky, time-consuming and expensive ways of doing things. Since many of the tremendous products the leading firms were providing at that time appeared to be “free” to the end user at the point of use, this helped cement a benign, almost philanthropic view of the internet in the public’s and the media’s consciousness.

The new orthodoxy

The new orthodoxy consequently centred on a belief that the only important thing was to keep Governments out of the way. Multistakeholderism meant everyone, all the “stakeholders”, would talk to each other and a consensus would emerge, but that was it. Regulation became a dirty word. Innovation and market forces would take care of everything. This would be a wholly virtuous circle. Industry was not only given pretty much a free hand, states even went as far as to give them special exemptions from certain types of liability, e.g. the EU’s e-Commerce Directive and s.230 of the USA’s Communications Decency Act 1996.

Multistakeholderism looks good but isn’t working 

The Report remains strongly wedded to the idea of multistakeholderism. Its theoretical attractions are clear but the actual experience of it is a long way from being satisfactory. Multistakeholderism without concrete and deeply embedded measures to ensure a greater equality of arms between the participants is simply another way of creating a platform which allows those with the deepest pockets to shout loudest and block or delay change while the cash keeps rolling in.

Particularly for children

Turning more specifically to the position of children, there are several excellent references in the Report, but save in respect of a passing comment about “children’s agency” (page 17) the document as a whole makes no explicit mention of the importance of children’s rights to participate and their right to be heard in respect of matters affecting them. This subject deserves a much larger exposition, not least because children now constitute one in three of all human internet users.

NetMundial did not even mention children

It is unfortunate that the Report notes with approval the NetMundial statement, a statement in which children are not referred to even once. How did that happen? The same way it normally happens in an unequal multistakeholder environment. When the NetMundial statement was being drafted and adopted nobody was in the virtual or physical room with a specific brief to watch out for and advance children’s interests. Obviously this does not mean everyone else engaged in the process was hostile towards children or children’s interests. They just weren’t in the process with children’s interests uppermost in their minds, or they lacked the expertise, knowledge or confidence to make the case for children in the context of digital technologies in general or internet governance institutions in particular. This must change.

Not just about children’s groups in lower income countries

The Report makes no explicit mention of the practical difficulties of engaging with multistakeholder institutions and environments and how this affects not just groups in the lower and middle income countries but also groups in higher income countries.

Children’s groups are usually faced with a choice. Do they spend scarce time or money helping a child or a family in need of immediate help, or do they buy an airline ticket to a distant location with expensive hotels so they can visit a conference centre where they will sit cheek by jowl with representatives of some of the world’s richest companies, against the possibility that somewhere down the line, maybe never or maybe ten years from now, a digital behemoth might tweak an algorithm? There is only one possible decision a typical children’s group can take. They stay at home.

Similar comments might be made in respect of the deluge of correspondence and the conference calls at strange times of the day or night to engage with people you have never met and will never know. All these are part of multistakeholderism in the online space. Many of the commercial companies that take part hire lobbyists and lawyers or employ staff dedicated solely to such matters. For the reasons given earlier, children’s groups cannot do that.

Money talks

Even Governments and inter-governmental institutions can be at a severe disadvantage as compared to the commercial entities which have a major stake in the business opportunities presented by the internet.

It is unlikely there will ever be a completely level playing field as between governmental and inter-governmental bodies, civil society and business, but at present the field is tilted so far in favour of business interests it makes a mockery of the very idea of multistakeholderism.

The Multistakeholder Advisory Group (MAG)

The MAG is the body charged with organizing the annual IGF meeting. There are two major flaws in the arrangements currently pertaining to its selection and operation.

  1. The selectors favour individuals or organizations already “dug in” to the IGF environment. The complexity and arcane nature of the language used does not help attract new people or groups. Neither do the financial and time costs of participation.
  2. The process of selecting members of the MAG is obscure, and the credentials of those seemingly there to represent particular constituencies often appear not to be scrutinized with any great care. Nor is the efficacy of MAG members’ reporting back to, or working with, their constituencies.

COGOV

The document devotes a section to the “Distributed Co-governance Architecture” (“COGOV”). While its recommendations in respect of how future arrangements might be better reconfigured are welcome, they are nevertheless imbued with a profound sense of unreality. Elements of COGOV are absolutely central to many of the issues the Report discusses elsewhere. They are not in any sense marginal or minor.

While the Report notes how difficult it is to trace any concrete connection between the IGF and any real world consequences for the way the contemporary internet is run, that is absolutely not the case with, for example, the IETF, ICANN and IEEE.

IETF

Decisions they take can have very direct and immediate real world consequences, yet there is little doubt their decision-making processes are very heavily influenced by the commercial interests that engage with them. Look, for example, at the way DNS over HTTPS (DoH) evolved within the IETF.
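For readers unfamiliar with the protocol, here is a minimal sketch of a DoH lookup, using Cloudflare’s public JSON resolver (the endpoint and field names are Cloudflare’s; this is an illustration of the technique, not anything from the IETF debate itself). The point is that the DNS question travels inside ordinary encrypted HTTPS traffic, which is precisely why its standardisation carried such commercial significance.

```python
# A minimal DNS-over-HTTPS (DoH) lookup against Cloudflare's public
# JSON resolver. The DNS question rides inside an ordinary HTTPS
# request, so intermediaries see only encrypted traffic to
# cloudflare-dns.com, not the names being resolved.
import requests

def doh_lookup(hostname: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))  # e.g. ['93.184.216.34']
```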

ICANN

The way ICANN has intentionally downgraded the importance of maintaining the accuracy of WHOIS data suggests they never had any real intention of honouring the promise they made when they signed the Affirmation of Commitments in 2009.

While there are many ways in which the DNS can be exploited by bad actors, undoubtedly one of them relies upon the ease with which they can acquire a domain without having to render any robust proof of their real world identity and contact details. Whatever the rules might be about how and by whom WHOIS data are accessed, it is hard to imagine a single domain would be used to distribute or promote child sex abuse material if the owner or person linked to it knew their true real world details had been captured and stored by anyone, anywhere on the planet.
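By way of illustration, this is all a raw WHOIS lookup involves (RFC 3912): a plain-text query to a registry’s server on TCP port 43. The registrant details in the reply are exactly the data whose accuracy is at issue; the server named below is Verisign’s .com registry and is given only as an example.

```python
# A minimal raw WHOIS query (RFC 3912): connect to a registry WHOIS
# server on TCP port 43, send the domain name, read the plain-text
# reply containing the registration (registrant/contact) details.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

print(whois_query("example.com"))
```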

In 2012 ICANN decided to allow the creation of new gTLDs. .Bank, .Pharmacy and .Insurance eventually emerged as Verified Top Level Domains. They are called “verified” because the entity responsible for them enquires about the credentials, qualifications and suitability of persons or companies seeking to acquire a domain under one of those headings. This severely restricts the possibility of bad actors being able to pass themselves off as legitimate, but when it came to the creation of .kids, zero meaningful stipulations or restrictions were made to try to protect children from being drawn towards domains within .kids that might be owned or operated by persons who wish to harm children.

IGF and IGF Plus

While its then zeitgeisty, utopian underpinnings should also be acknowledged, a key reason why the IGF was created in the first place was to avoid a diplomatic rupture between States involved in the WSIS process in respect of how parts of the internet were to be managed at a global level.

There was never any intention of allowing the IGF to be anything more than a talking shop. Talking shops have their value, no doubt, but to say they are linked in any meaningful way to questions of “governance” is dubious.

The IGF today is a bit like a cross between a trade fair for people who work in and around internet policy and a week back at university, where a vast array of interesting seminars is laid on by lots of equally interesting people who are there to deliver papers or participate in the discussions. Marvellous, but not “governance” by any commonly understood meaning of the word; or rather, if it has any impact on “governance” it is incredibly diffuse and tenuous, and perhaps of much less importance than discussions which take place elsewhere in other forums.

Whether it is necessary to have such elaborate or expensive mechanisms to organize a week of seminars linked to a trade fair is debatable, but it would be a pity if the annual gathering disappeared because there is nothing else like it.

Thus the proposals to create an “IGF Plus” are welcome, but they fall a long way short of what is needed if the public interest across the whole internet governance eco-system is to be adequately safeguarded.

Posted in Internet governance, Regulation, Self-regulation

On encryption and child protection

A company is normally driven by a desire and a legal obligation to build “shareholder value”. In the case of one company, Facebook, Mark Zuckerberg owns a majority of the voting stock so when looking at its big decisions we are not talking about a “company” in the way it is generally understood. We are talking about decisions taken by one person.

In recently leaked transcripts of an internal staff Q&A session Zuckerberg acknowledged that it is only because he owns a majority of the voting stock that he is still in post, because “some of the things I have done would otherwise have got me fired several times over.”

This uncomfortable fact of ownership and control matters hugely at the moment because Facebook, meaning Zuckerberg, has announced an intention to introduce end-to-end encryption (e2e) for Facebook Messenger.

12 million reports in 2018

In 2018 Facebook Messenger’s automated systems identified, deleted and reported 12 million instances of child sex abuse related activity or material. Any images of child sex abuse thus detected typically were gone within minutes or hours. Bravo.

Yet this will end if Zuckerberg persists with his plans.

In anticipation of introducing e2e on Messenger, Facebook said:

“We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can’t see the content of the messages, and we will continue to invest in this work.”

Any alternative approach which maintains or improves on the status quo in terms of protecting children will be welcomed by everybody. Anything that changes the status quo in the wrong direction will not be. If Facebook cannot actually see the content, it is difficult to imagine how, for example, they will be able to spot illegal images, therefore they will not be able, as now, to delete or report them in fast time or prevent their further distribution. This will compound and expand the harm already done to the child in the image and limit the possibility of her or him being rapidly identified and located in real life.
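To make the mechanics concrete, here is a minimal sketch of why server-side matching goes dark under e2e. It is emphatically not Facebook’s actual system: real deployments use robust perceptual hashes such as PhotoDNA rather than the SHA-256 used here, and Fernet below simply stands in for “encrypted on the client, opaque to the server”. Once only ciphertext passes through the platform’s servers, the check that works today returns nothing.

```python
# Sketch: content scanning works only while the server can see content.
import hashlib
from cryptography.fernet import Fernet

image = b"<bytes of a known illegal image>"                 # stand-in payload
KNOWN_BAD_HASHES = {hashlib.sha256(image).hexdigest()}      # stand-in for an industry hash list

def server_side_check(payload: bytes) -> bool:
    """What a platform can do when it can see what passes through it."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(server_side_check(image))          # True: content visible, match found

key = Fernet.generate_key()              # in real e2e only the endpoints hold keys
ciphertext = Fernet(key).encrypt(image)
print(server_side_check(ciphertext))     # False: the server sees only ciphertext,
                                         # which matches nothing on any hash list
```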

The alternative being offered

As a way of ameliorating the freely acknowledged adverse impact of going encrypted, I understand, inter alia, Facebook is suggesting that where they find behaviour which suggests a connection with a bad actor the individual’s account will be closed down. Let’s not dwell on the obvious implications of this. They are not the main point.

In addition, seemingly Facebook will hand over to law enforcement the metadata of the person. 12 million reports? A deluge of data will be added to the pre-existing deluge.

Has Zuckerberg had an irony bypass?

The irony of Facebook seeking to position itself as a champion of privacy will not be lost on those who have documented its persistent failures in that field. But already Zuckerberg’s strategy is paying dividends. Just look at the long list of free speech and similar organizations that have signed a letter praising Facebook’s decision and urging them on.

Few people will believe Zuckerberg’s Damascene moment was prompted by anything other than a calculation about the future profitability of the good ship Facebook. Here’s my analogy. None of the porn companies and online gambling outfits active in the UK market wanted to introduce age verification until everyone did. They didn’t want less fastidious competitors to eat their lunch.

Similarly here, Zuckerberg has seen the likelihood of e2e services growing in importance so he has to find a way to move his major messaging services (Instagram gets caught up in this as well) into that space as quickly as possible.

If there is a sustained, public fight to bring that about, so much the better. The company once seen as the enemy of privacy will be able to burnish its reputation as a champion of it. Brilliant. But wrong. Wrong in principle but also wrong because it is too short-sighted.

Zuckerberg’s potential or actual motives, in truth, are irrelevant. What matters is the idea itself. It is a bad one that will not survive although it may not disappear quickly. Why? Because Facebook’s decision will prompt the US Congress to start off on a path which ultimately will lead to new, bi-partisan Supreme Court-friendly laws limiting what US-based entities can do with encryption, at least on mass messaging services. But before getting to that point Facebook and other businesses could find their devices and services banned in many different countries. Not all of these will be totalitarian dictatorships.

“Back doors” are a bad idea.

And here is the point: nobody I know wants or supports the creation of “back doors” into encrypted services. That implies the police, security services or others, could covertly access a person’s account without proper authorisation, be that a warrant or a court order. Such an approach is completely beyond the pale. But right now courts are issuing orders and they have no effect. Subpoenas and warrants are ignored or are not capable of being acted upon. That is not right. It is a trend that must be halted and reversed.

It was these sorts of concerns that were behind the US Government’s decision to call a conference yesterday under the title “Lawless Spaces: Warrant-Proof Encryption and its Impact on Child Exploitation Cases”. Senior Ministers from the UK and Australia attended. I cannot recall any event like it devoted to the protection of children online.

Cynics say the US and other governments are showing fake concern about children when what they are really about is an undeclared intention to get to a position where they can spy on any and all of us in the online world as easily as they can in the physical world. Even if that were true it would not obviate the need to address the point about harming children, unless you are willing to accept that children are collateral damage, a sacrifice to be made on the altar of a different cause.

Companies or organizations providing encrypted services must be required to maintain the means whereby, on production of a properly authorised warrant or court order, they can produce a clear version of every piece of content they helped transmit. The businesses don’t have to hand over the decryption keys to anyone. They can do it all themselves.
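One conceivable way of “doing it all themselves” is sketched below, purely as an illustration of the idea; it is not a proposal from any company and it glosses over serious key-management questions. The sending client wraps each per-message key with an escrow public key whose private half the provider keeps under strict internal controls, so only the provider, and only on production of a valid warrant, can recover cleartext.

```python
# Sketch: provider-held key escrow alongside end-to-end encryption.
# Uses the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Provider's escrow key pair (private half held offline, under strict controls).
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Sender's client: encrypt the message, then wrap the message key for escrow.
message_key = Fernet.generate_key()
ciphertext = Fernet(message_key).encrypt(b"hello")
wrapped_key = escrow_public.encrypt(message_key, oaep)
# The provider stores (ciphertext, wrapped_key); it can read neither as-is.

# Later, on production of a properly authorised warrant, the provider itself
# unwraps the key and produces the cleartext; no third party touches the keys.
recovered_key = escrow_private.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'hello'
```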

Private organizations and the public interest

It is completely unacceptable for companies, or indeed any other types of private organizations, to decide that it is ok to create spaces which are completely beyond the reach of the law. Society is entitled to take a view on the balance to be struck between the public interest and the private interest. Facebook isn’t.

We should not base every decision we make about the internet solely on the basis of whether or not it helps or hinders paedophiles or puts children at risk without regard to any other factors. But equally I completely reject the idea that the protection of whistleblowers, political dissidents and the like trumps any and all other considerations. It’s back to that question of balance and who decides how and where to strike it.

The pendulum has swung too far. It is time for a correction.

Posted in Child abuse images, Facebook, Privacy, Regulation, Self-regulation

A startling statistic

In a live broadcast from Washington DC the Assistant Attorney General of the USA just announced that in 2018 Apple reported 8 instances of csam on their platform. Facebook Messenger reported 12 million. Either Apple’s users are uniquely virtuous or encryption is hiding what is happening from the company’s eyes. Which do you suppose it is? And what are the consequences?

Posted in Apple, Child abuse images

There is something strange going on

I expect many of you will by now have seen the piece in the weekend edition of the New York Times. I will be writing more fully about it quite soon but, meanwhile, there was something that rather leapt off the page.

In the article we learn that in 2018 NCMEC received 18.4 million reports of child sex abuse material. This was up from “only” 1 million in 2014.

18.4 million is one third of all the reports NCMEC has received since its CyberTipline was launched in 1998. Clearly this gargantuan and seemingly still growing volume is attributable to the increased use of automated systems to identify and report csam. Incidentally, since you asked, the 18.4 million reports contained 45 million images and videos flagged as csam.

In the later blog post I mentioned, I will return to what these numbers tell us about the progress being made in combatting csam. It might not be all bad news. Key word: might.

What hit me between the eyes?

Here is what the article says:

Of the 18.4 million reports received by NCMEC, 12 million came from a single source: Facebook Messenger.

Almost two thirds (12 ÷ 18.4 ≈ 65%) from a single app.

Massively disproportionate

I know Facebook Messenger is popular but their contribution to the total is massively disproportionate. What it suggests to me is not that there is an over-concentration of bad actors using Facebook Messenger. Instead:

Either

(a) Facebook is sweeping up and reporting to NCMEC everything identified as being contrary to its policies, so its reports include a lot of material which may be against Facebook’s policies but is not actually illegal

Or

(b) A great many companies are not being as diligent or as serious as Facebook.

It could be a bit of both but I suspect it is much more (b) than (a).

Not knowing the truth is unacceptable

The case for greater transparency is overwhelming. As long as online businesses believe their identities will be shielded from public view by a good guy in the middle, they will feel under no pressure, or less pressure, to up their game. We have known this for a while but the New York Times article puts it beyond dispute.

NCMEC in the USA, the IWF in the UK and, internationally, INHOPE should voluntarily publish the fullest possible account of the sources of the illegal images they are processing.

And if they won’t do it voluntarily they should be compelled to do so by law.

There was a lot of other stuff in the Times article but I hope you agree that will do for now.

Posted in Child abuse images, Internet governance, Privacy, Regulation, Self-regulation

Brussels is listening

Good news from the EU. They have been listening to children’s organizations’ concerns about the e-Privacy Regulation. I think supporting Option 2 is the right way to go, although I am not sure keeping such an exclusive focus on hashing technology is the best answer. Ends, not means? Hashes matter today, but if something better comes along tomorrow, what then? Is there a form of words which would make clear it is OK to use hashing technologies (within the limitations specified) but that if other or new technologies help achieve a similar result they should also be allowed (again while keeping the limitations mentioned in the draft Article)?
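For context, the core of the technique the draft Article has in mind is no more than the sketch below. Real deployments use perceptual hashes such as Microsoft’s PhotoDNA, which survive resizing and recompression, whereas a cryptographic hash like the SHA-256 used here for simplicity is defeated by a single changed pixel. That gap between today’s tool and tomorrow’s is exactly why wording tied to “hashing” specifically may age badly.

```python
# A minimal sketch of hash matching: flag a file if its digest appears
# on a list of hashes of known child sex abuse material supplied by a
# body such as NCMEC or the IWF. Real systems use perceptual hashing
# (e.g. PhotoDNA) so that trivially altered copies still match.
import hashlib

KNOWN_CSAM_HASHES: set[str] = set()  # populated from an industry hash list

def matches_known_material(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_CSAM_HASHES
```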

Posted in Child abuse images, Internet governance, Regulation, Self-regulation

Facebook moving in the right direction

That is not a headline I thought I would write any time soon. However, I have to say I applaud Facebook’s announcement earlier this week concerning the standards the company will adopt in relation to content published on its platforms.

Facebook have been saying for a while that they alone should not have to adjudicate on a range of “significant” and “difficult” questions concerning the type of content they should permit or forbid.

One person’s free speech can be seen as a threat or as a huge insult by another. How and where do you strike the balance particularly when you, as a business, have a financial interest in the outcome? Facebook thinks it has found a way. I think they may be right.

An independent Oversight Board

Contained in a charter are the details of how Facebook is going to establish an “Oversight Board”. Here is how their principal spokesperson put it:

“The content policies we write and the decisions we make every day matter to people. That’s why we always have to strive to keep getting better. The Oversight Board will make Facebook more accountable and improve our decision-making. This charter is a critical step towards what we hope will become a model for our industry.”

If you read the detailed proposals, you will see how the first Board is to be chosen. There will be high levels of transparency around how it operates and, once fully established, the Board will have the power to set its own rules and appoint additional members.

An intermediary body, a Trust, will handle all the money aspects so Board members will be isolated as well as they can be from any sense of depending on keeping Facebook executives happy in order to retain their position on the Board. Facebook the company, Facebook users, and the Board itself can refer matters for consideration.

As ever the proof of the pudding will be in the eating but I am taking Facebook at face value and in that context I cannot fault their approach. Fingers crossed it will work. If it does it will inevitably set the standard others will have to follow. Several phoney Advisory Boards will cease to exist. I’m not sure Facebook will be thanked for that but from Facebook’s point of view this is smart.

Artificial Intelligence is unlikely ever to be good enough on its own

A lot of people have invested heavily in the idea that AI would solve everything. All we need to do is set down clear rules, the algorithms will be instructed accordingly and we can all sit back and relax. Moderators will be spared having to look at terrible stuff or take decisions. Mathematics will set us free. This is only partially true.

If only life were so simple.  Nuance can be important. Contemporary mores change over time and with geography. Context matters.  If a particular group has been under attack one might interpret certain postings close to the time of the attack in a different way compared with six months later, assuming things have calmed down.

We are a very, very long way from being able to entrust machines to make good decisions about matters of this kind.

A question of scale and speed

The obvious challenges are going to be around scale and speed. Of the two, scale is going to be the easier one to solve. I say that because it is plainly going to be impossible to look at every complaint or request for takedown that comes in, or to initiate major enquiries into everything of interest, so some rational system is going to have to be devised to determine the workflow.

While the Board is bound to look at individual cases, particularly edge cases, what I imagine they will want to do, at any rate early on, is consider those which raise issues of wider significance so that their decisions can guide the company’s moderation/content policies. It might be a while before they look at some topics but you have to start somewhere.

In the beginning there will therefore be areas of uncertainty but that is unavoidable when building any new system. We will face exactly the same challenges in the UK when our new Regulator starts developing codes of practice. However, as a body of decisions starts to evolve, a form of jurisprudence will also evolve, and in time that will provide greater certainty and predictability.

You cannot deliver a fully finished system with all the bells and whistles on Day 1. As long as there is scope for external judicial review we need to give Facebook some space.

As for the speed at which issues are identified and decisions are taken, that is going to be altogether more challenging.

It will be important for the Board to have sensitive and smart “point people” linked to systems to spot things bubbling up in a particular jurisdiction. Then there will need to be Board members available to give a view within a reasonable, i.e. rapid, timescale.

Any repeat of the delay in deciding the fate of the picture of the Vietnamese girl or the postings on the Rohingya crisis could fatally undermine the Board’s credibility.

Mission creep?

It will be interesting to see if the Board reach a view on how Facebook determines content questions, as it were, separately from wider operational principles e.g. would they ever say “This content would be OK if we could be sure only adults were viewing it?”

Nothing ventured, nothing gained. I wish this project every possible success. I don’t imagine I will like every decision they are going to make but that is not the point. If the process is seen to be fair, independent and in the hands of trusted and respected individuals I can learn to live with it and so will the vast majority of people.

Posted in Default settings, E-commerce, Facebook, Internet governance, Privacy, Regulation, Self-regulation