Just what the doctor ordered

The information held in doctors’ records about individuals’ medical histories is priceless in (at least) two ways. First and obviously, it is priceless to the person concerned. It helps guide clinicians’ decisions about current and future treatments for that individual as well as allowing for a review of past decisions and conditions.

Second, it is priceless as a source of data for medical research. But information about an individual’s health is highly confidential, for all kinds of reasons. It is also, in aggregate, potentially extremely valuable financially, particularly to pharmaceutical companies and a range of others. Horrible Governments around the world would also like to know the kind of intimate stuff kept in doctors’ records, especially about key people or specific groups: identified “heretics”, political opponents, whistle-blowers, you name it.

So how does a society manage patient data in a way which allows us all, and here “us” is both national and global, to benefit from the riches medical research can yield, without compromising any individual’s right to privacy?

Just before Christmas I listened to Professor Ben Goldacre describe the “Trusted Research Environments” established within and by our National Health Service. The arrangements, unquestionably, are elaborate, but perhaps no more elaborate than they need to be given the sensitivity of the subject-matter. The transparency attaching to the processes is outstanding. More to the point, for the purposes of this blog, these arrangements have gone substantially unchallenged by what we might loosely call “the usual suspects” in the privacy lobby.

I’m guessing by now you have worked out where this is going.

If we can devise systems which satisfy people’s reasonable concerns about privacy in respect of health data, is it not within the bounds of possibility that we can also devise systems which are satisfactory for other areas where privacy matters?

Ways which reassure us the systems cannot be used for any purpose other than those which are lawful, declared and auditable?

Ways which no company could be bullied into misusing because the transparency and audit processes would make that politically impossible, at least in those countries where voters have the ultimate say and a free press exists?

A world of zero trust

We all know there is zero trust in Governments and Big Tech, and we all know this has come about, even in the liberal democracies, at least in part because of the historic bad or unlawful behaviour of Governments, geeks and Big Tech, particularly when the virtual world was still young.

Who should pay the price today for these past shortcomings? Not us. Not children. Not when there are reliable ways of making up the trust deficit.

To put that slightly differently, who gave whom the right to decide otherwise?

Does anyone truly believe the jurists and politicians who framed our human rights laws anticipated and knowingly willed the possibility for systems to be created which, at scale, would allow child rapists and other serious criminals to operate with impunity, for all practical purposes beyond the reach of the courts? I don’t.

About John Carr

John Carr is one of the world's leading authorities on children's and young people's use of digital technologies. He is Senior Technical Adviser to Bangkok-based global NGO ECPAT International and is Secretary of the UK's Children's Charities' Coalition on Internet Safety. John is now or has formerly been an Adviser to the Council of Europe, the UN (ITU), the EU and UNICEF. John has advised many of the world's largest technology companies on online child safety. John's skill as a writer has also been widely recognised. http://johncarrcv.blogspot.com