Some bullet points

  • It is entirely possible, at a technical level, to build systems which can be inspected and audited so as to provide reassurance to any reasonably minded person that a particular App (or other programme) is only doing what it claims to be doing and nothing else, i.e. not in any meaningful way collecting, storing, analysing, processing, profiting from or otherwise engaging with any other data.
  • Transparency and accountability mechanisms can be put in place to underpin or reinforce such systems. These can be further supported by being encased within a clear legal framework.
  • Apps or other programmes, e.g. anti-virus software and firewalls, have been in use for a great many years to detect and prevent malware attacks. They do this by looking for known signs or signatures in emails, messaging Apps, or uploads to social media and other platforms.
  • In like manner, Apps and programmes have been devised which can identify known child sexual abuse images or images which are likely to contain depictions of child sexual abuse. Some can identify grooming behaviour. (A simplified illustration of how matching against known images works appears after this list.)
  • In relation to already known images, the Apps and programmes facilitate rapid action to get the images deleted. From the victim’s perspective this is of fundamental importance: the longer the images are allowed to continue circulating, the greater the danger to the child and the greater the harm.
  • The same is true if the image is identified as being likely to depict child sexual abuse, although here an intermediate step is necessary: the image must be examined to determine whether or not the content is in fact illegal.
  • Where grooming activity is identified a range of swift interventions are possible.
  • Some of these Apps have been in use for several years, in one instance since at least 2009, with no known errors or problems.
  • The difficulty is that the use of these Apps and programmes has been voluntary. The Australian e-Safety Commissioner’s recent report confirmed what many of us had previously suspected: their use is very patchy. In one case, notoriously, possibly the world’s biggest tech company, Apple, announced they were going to start using one or more of the tools, then changed their mind.
  • As the Australian e-Safety Commissioner put it (ibid):

some of the biggest and richest technology companies …. are turning a blind eye, failing to take appropriate steps to protect the most vulnerable from the most predatory

  • Yet the use of these Apps and programmes entails no breach of anybody’s right to privacy, but even if it did, the breach would be of such a minimal nature that it would have no material legal standing.
  • There has been a string of cases in UK courts which, specifically in relation to alleged breaches of the GDPR, have reaffirmed an ancient legal principle (“de minimis non curat lex”), roughly translated as “the law does not take account of trifles”. In other words, for an action to progress there has to be demonstrable, actual harm, not “mere upset”. These decisions reflect similar thinking in EU jurisprudence.
  • In a related vein, in Illinois v Caballes the US Supreme Court decided there was no “legitimate privacy interest in possessing contraband”.
  • Courts in every other jurisdiction would be bound to take a similar view. Nobody can claim that a right to privacy affords them protection against reasonable, proportionate steps to detect or prevent crimes, in this case criminal behaviour which harms children.
  • Look around the modern world. Similar types of proactive, anti-crime or public safety measures are commonplace.
  • Think about a dog sniffing for drugs, explosives or other illegal items at an airport or railway station, or a machine which scans your luggage, your clothes and your body before you are allowed to enter a place.
  • Most physical postal and parcel delivery systems have mechanisms in sorting offices which look for signs of prohibited content. The mail or parcel service provider, or a law enforcement agency, or both, generally have a right to open and examine particular envelopes, packages or containers if appropriate signals are received by their monitoring equipment.
  • The inconveniences or minor delays such measures can sometimes cause, perhaps even the mild embarrassment of having the contents of your suitcase or handbag examined by complete strangers, are accepted because people understand and value the underlying social purpose.
  • It is exactly the same with measures which can protect children. Or it ought to be.
  • If there are technical tools available which can help keep children safe the only question most people will ask is “So why isn’t everybody using them?”
  • Why would a company intentionally not use such tools? And if a company intentionally chooses not to use available tools why does the law not step in and compel them?
  • Quite.
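
For readers who want a concrete sense of what “matching against known images” means, here is a minimal sketch in Python. It is not how PhotoDNA or any particular vendor’s tool actually works: real systems use perceptual hashes that survive resizing and re-compression, and the hash list would be supplied by a hotline or clearing house, not hard-coded. The folder name, hash value and function names below are invented purely for illustration.

```python
# Simplified sketch of "known image" matching: each file is reduced to a
# fingerprint and compared against a list of fingerprints of material already
# confirmed as illegal. Exact-match SHA-256 is used here only for illustration;
# production tools use robust perceptual hashes instead.

import hashlib
from pathlib import Path

# Hypothetical hash list. In practice this comes from an authoritative source,
# e.g. a hotline, and is kept up to date.
KNOWN_HASHES = {
    "9f2feb0f1ef425b292f2f94bcbf6a45648c6c5deee7cbfeb1a2d5da50987ce1b",
}

def fingerprint(path: Path) -> str:
    """Return a SHA-256 fingerprint of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(upload_dir: Path) -> list[Path]:
    """Return the files whose fingerprints appear on the known-image list."""
    if not upload_dir.exists():
        return []
    return [
        p for p in upload_dir.iterdir()
        if p.is_file() and fingerprint(p) in KNOWN_HASHES
    ]

if __name__ == "__main__":
    # "incoming_uploads" is a stand-in for whatever content a service receives.
    for match in scan(Path("incoming_uploads")):
        # A real service would block the upload, preserve evidence and file a
        # report to the relevant authority, rather than simply print.
        print(f"Match against known-image list: {match}")
```

The point of the sketch is simply that the check is a lookup against fingerprints of already confirmed material: nothing about the rest of a user’s data needs to be read, stored or analysed for it to work.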

About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International and is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John is, or has formerly been, an Adviser to the Council of Europe, the UN (ITU), the EU and UNICEF. John has advised many of the world's largest technology companies on online child safety. John's skill as a writer has also been widely recognised. http://johncarrcv.blogspot.com