The Aussies showing the way. Again.

Peiter Zatko was the Head of Security at Twitter. Last week he filed a complaint alleging the company had been following “reckless and negligent cybersecurity policies”. He also accused Twitter of deception in relation to the detection and deletion of fake or spam accounts, including those which may have been used for misinformation, for example by foreign powers wishing to influence the outcome of elections. All this must be music to Elon Musk’s ears, but that can’t be helped.

I haven’t read the whole of Zatko’s lengthy submission, but extracts like these hardly inspire confidence in other aspects of Twitter’s publicly proclaimed corporate hygiene standards, some of which may impinge on children’s safety and welfare.

But maybe that’s only one of the points. Transparency is the larger one. Transparency promises a wide range of benefits. To children, yes of course, but to other groups and interests as well. As far as children are concerned, the Australians are on it. Read on.

How can we know who to believe?

Zatko was fired by Twitter in January and it seems the company is now energetically putting it about that this was because of “underperformance”. Given Zatko’s history they may have difficulty making that charge stick, but launching a counter-narrative which undermines the unwanted story is a familiar tactic whenever a company wants to distract attention from something or minimise it.

Without transparency we are always on the outside looking in

We are left wondering whom or what to believe. We may hesitate. The trap closes. The story moves on. Yesterday’s news isn’t news.

Zatko therefore illustrates the transparency point extremely well. He reminds us why systematic and firmly rooted transparency is essential. The public, in particular parents and children, shouldn’t have to wait for accidental or deliberate leaks, or apparent outbreaks of pique or conscience, to have reliable information about what is going on behind the carefully constructed corporate veneer of the businesses which attract so many children as users. Neither should everything have to hang on the vagaries of cash-strapped law enforcement agencies.

The status quo just isn’t good enough. Too many legal systems currently provide too many perfectly legal ways for businesses to cover things up. All you need is money and a capacity for “creative PR”.

Money no object

For example, in 2018 Facebook settled out of court in a case involving a child in Northern Ireland. We don’t know how much it cost them because the deal was wrapped in a confidentiality clause. The public interest was sacrificed to spare… to spare what exactly? We know what happened to the child. The media covered the story in great depth. Because of the out-of-court settlement, what we never heard was Facebook’s explanation of the how, or what monetary value they placed on the damage done. Similarly, we never learned what concrete steps they were going to take to make sure it didn’t happen again to another child.

Then last week it emerged Mark Zuckerberg had done the same again, only this time we know they forked out US$5 billion, just days before he and Sheryl Sandberg were otherwise due to give testimony under oath in relation to Cambridge Analytica. Should I have put “US$5 billion” in bold caps? Either way we are once more denied the full story.

Transparency matters across a range of headings

While people who do not live inside the online policy bubble will simply see it as common sense for there to be transparency requirements in respect of internet businesses, we should not underestimate the difficulties of getting a satisfactory and sustainable system up and running. I say this at least in part drawing on my experience as a member of the Internet Commission which, for several years, has been working with various companies developing voluntary transparency models.

A key reason for the difficulty in getting statutory, mandatory systems up and running is the undoubted fact that, of the people who will or might work with or for actual or potential regulators, very few will ever have worked in senior positions within major tech companies. Or if they did, it was a long time ago, and both the things being done and the people doing them may have changed a lot.

Also, even quite senior people in a company may honestly not be in full possession of all the relevant facts. They may never have been, and simply never knew it.

They will have been told what the company’s policy is. They will have been armed with arguments to defend or explain the corporate line but, actually, even in the largest businesses the people who call the shots and have the full 360-degree picture are rarely the ones who appear in public to answer questions. And they are willing to go to great lengths to keep things that way. US$5 billion-type lengths.

Einstein and Wendell Holmes would struggle

So how will a regulator know what questions to ask, or how to evaluate the answers? It is not difficult to postulate a series of desired outcomes which can be universally applied to all platforms; after all, we know about the harms being done to children. But each platform is in fact different, so when trying to make a judgement about the reasonableness of a company’s efforts it will be important to assess properly the context within which that firm is operating. That’s the hard part.

Many businesses have, intentionally or otherwise, constructed systems so complex that it would be difficult for a person blessed with the brains of Albert Einstein and the jurisprudential insights of Oliver Wendell Holmes Jr to pick the bones out of them.

The regulator can expect little or no help from the company itself. Almost by definition the business will be in an actual or potentially adversarial position vis-a-vis the regulator. Its lawyers will almost certainly advise staff to do only the absolute minimum and provide no extra help or assistance, unless they can see a way to steer it to their advantage.

You can expect company lawyers to be looking for any failing, even the smallest procedural one, with which to frustrate the regulator and kill the action.

With self-regulation a dead duck, scales having finally fallen from our all-too-trusting eyes, for good or ill this is what the future is going to look like for everyone fortunate enough to live in a democracy where their elected Government is not cowed by or in the pocket of Silicon Valley. Which brings us back to Australia.

The Aussies act

On 29th August the Australian eSafety Commissioner issued letters to Apple, Meta (including WhatsApp), Microsoft (including Skype), Snap and Omegle. The letters were issued under the Online Safety Act 2021, arising from complaints made to the Commissioner since 2015 about child sex abuse material found on their platforms. The reasons for issuing the letters were clearly set out, and then came the crucial bit (IMHO). Referring to her transparency powers, the Commissioner explained:

“My interest in issuing the notices is to better understand the true scale of this issue, and in so doing shine a light on online safety standards, practices, and procedures.”

Quite so. And it is extremely refreshing finally to see a statutory actor moving in respect of child sex abuse material, trying to get to the bottom of the systems which allow or facilitate its distribution.

Each company in receipt of the eSafety Commissioner’s letter will have 28 days to respond. Failure to comply risks penalties of up to 555,000 Australian dollars a day. That’s about £325,000 Sterling or US$380,000. It’s not US$5 billion, but you have to believe no company will want to rack up a serious tally of days, whatever the price tag. And if that proves to be wrong I am sure Canberra will be willing to look again.

Developing a sound transparency regime which meets the needs of the moment and can evolve as technologies change is going to be an iterative process. It will require a lot of smarts and steely determination. They have both in abundance Down Under. If anyone can do it they can, which is why it is such good news for children all over the world that the Aussies have chalked up another global first.

About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International, Technical Adviser to the European NGO Alliance for Child Safety Online, which is administered by Save the Children Italy and an Advisory Council Member of Beyond Borders (Canada). Amongst other things John is or has been an Adviser to the United Nations, ITU, the European Union, the Council of Europe and European Union Agency for Network and Information Security and is a former Executive Board Member of the UK Council for Child Internet Safety. He is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John has advised many of the world's largest internet companies on online child safety. In June, 2012, John was appointed a Visiting Senior Fellow at the London School of Economics and Political Science. This was renewed in 2018. More: http://johncarrcv.blogspot.com