Can a coalition of the willing be put together to do this?

Here’s an idea for how we might use algorithmic and machine learning tech for good. It is not the first time I have floated it. Others may also have done so before and since, but now seems as good a time as any for me to have another go.

A few years back senior people at two very large West Coast companies told me what I was proposing was impossible because of insurmountable methodological and privacy issues. Being told something, anything, is impossible by Big Tech generally does not put me off, but other issues piled in, life got in the way… You get the picture.

Then I met Frances Haugen at the NSPCC’s offices and listened to her speak about the intelligence-gathering, intimate analytical power of one particularly large online platform (no names, no pack drill). By coincidence (!!) it was one of the two West Coast companies I referred to earlier. Listening to Frances and her passion made me beat myself up for not having persisted. Mea culpa, mea culpa, mea maxima culpa.

Collect profiles of the largest possible number of child sex abusers 

Put simply, my idea was to construct the largest possible dataset of the lives and habits of the largest obtainable cohort of convicted child sex abusers, covering a range of typologies, cultural backgrounds and so on. The dataset should encompass online shopping, postings and any other known or discoverable behaviours as evidenced by online activity. Could it be supplemented by information obtained from other sources? Maybe.
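To make that a little more concrete, here is a minimal sketch, in Python, of what one standardised record in such a dataset might look like. Every field name and category below is my own assumption, offered purely for illustration, not a proposed specification.

```python
from dataclasses import dataclass, field

# Hypothetical shape of one standardised profile record.
# All field names and categories here are illustrative assumptions.
@dataclass
class OffenderProfile:
    subject_id: str        # pseudonymised identifier, never a real name
    source_platform: str   # pseudonymised label, e.g. "platform_A"
    typology: str          # e.g. "csem_only" or "contact", as established by the courts
    purchases: list[str] = field(default_factory=list)   # categorised shopping activity
    postings: list[str] = field(default_factory=list)    # categorised posting behaviour
    other_signals: dict[str, float] = field(default_factory=dict)  # any other discoverable features
```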

The probation services, the courts and the police would need to be involved in identifying who should be included. They could also be a source of valuable additional data about each individual on the list. Whatever emerges from here, plus whatever the online platforms can discern, would need to be channelled to an agreed research team in a standardised, anonymised format. The team would ensure it was properly secured before starting to work on it. The anonymisation should apply both to the source platform and to the individuals.
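How might that anonymisation work in practice? One minimal sketch, assuming a keyed hash (HMAC) whose secret is held by a trusted intermediary rather than by the platforms or the research team; the function names are mine, for illustration only:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a real identifier (person or platform) with a stable pseudonym.

    The same identifier always maps to the same pseudonym, so records can be
    linked across sources, but without the secret key the mapping cannot be
    reversed. The key would sit with a trusted intermediary, not with the
    platforms or the researchers.
    """
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

def anonymise_record(record: dict, secret_key: bytes) -> dict:
    """Strip direct identifiers before the record leaves the platform."""
    clean = dict(record)
    clean["subject_id"] = pseudonymise(record["subject_id"], secret_key)
    clean["source_platform"] = pseudonymise(record["source_platform"], secret_key)
    return clean
```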

Then the maths and machine learning techniques should be let loose to do their magic.
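The “magic” could start out quite mundane: for instance, a supervised classifier trained to separate the cohort’s behavioural features from those of a matched sample of the general adult population. A sketch, assuming scikit-learn and a feature matrix already built from the standardised records; the framing is my assumption about how a research team might pose the question:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def measure_distinctiveness(X, y):
    """X: one row of behavioural features per pseudonymised individual.
    y: 1 for the convicted cohort, 0 for the matched general-population sample.
    """
    model = GradientBoostingClassifier()
    # Cross-validated AUC: a score near 0.5 would support the
    # "too widely diffused" worry discussed below; anything well above it
    # would mean something distinctive is there to be found.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    return scores.mean()
```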

Is this so very far from what is already happening in other areas?

Let’s face it: given online platforms analyse everything we do online, my suggestion isn’t so very far removed from what is already going on. It is just that, at the moment, the result is merely that most of us get sent ads for holidays in Greece or funky trainers.

Sticking with this theme, to underline its proximity to current practice, let’s not forget at least some platforms say they already ban from membership anyone who is on a Sex Offender Register. How do they know who these people are? How well is that working? Which platforms are and are not doing it? In the latter case, why not? Another point for the new transparency regimes we are being promised?

By the way, to be clear, I am suggesting something more targeted than “everybody on a Sex Offender Register”. I am interested in actual or potential child sex offenders who are on platforms where children are allowed to be present or are known to be present. If a platform is adults-only and has an efficient way of ensuring only adults can get in, then I step away.

Too widely diffused?

Is it likely child sex abusers are so thoroughly dispersed or diffused into the general population that nothing useful or distinctive about them would or could emerge? Possibly. But possibly not. Maybe it would be worth doing this just to get that confirmed.

Serendipity favours the inquisitive. The other day I spoke to the inestimable Professor Ethel Quayle, late of this parish, now at the University of Edinburgh. Professor Quayle directed my attention to a study published online only last month. 

Entitled “Self-perceptions and cognitions of child sexual exploitation material offenders” (the study), it might not make it into the best-sellers list this year but, among other things, the study is meant to

“…provide potential treatment targets, including behavioral areas that may be pathways to CSEM offending.”

Loosely translated, I think that means, among other things, the data are meant to be useful in determining therapeutic and preventative strategies which will help keep children safe in the future.

While the study focuses on the self-perceptions of people convicted of CSEM offences, it suggests there are observable differences between them and the general adult population. Can we latch on to, explore and expand on those differences to create red flags linked to them? It also suggests there are differences between CSEM-only offenders and contact offenders, but maybe that is not so significant in this context.

The Edinburgh study was extremely small, and at least part of the data relied upon came from self-selected individuals, so my idea could, at scale, confirm, amplify and solidify the Edinburgh findings and/or suggest alternative research paths. Equally, one cannot rule out the possibility it will find something which contradicts the proffered conclusions. That’s the scientific method for you.

We can also go one better

We should also carry out a mirror exercise in respect of victims, the data, once more, being properly anonymised and secured.

One immediate practical benefit of doing this could be to allow platforms to set up alerts so that, if anyone with a red flag tries to contact a child in the vulnerable group, a klaxon sounds somewhere, triggering an intervention.
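Mechanically, the alert itself need not be exotic. A sketch, assuming each platform holds two lookup sets derived from the research output, one of red-flagged pseudonyms and one of pseudonyms for children in the vulnerable group; all the names below are mine:

```python
def check_contact_attempt(sender_id: str, recipient_id: str,
                          red_flags: set[str], vulnerable: set[str]) -> bool:
    """Return True if this contact attempt should trigger an intervention.

    sender_id and recipient_id are the platform's own pseudonymised
    identifiers; red_flags and vulnerable are lookup sets derived from the
    (anonymised) research output. What the klaxon actually does - blocking,
    human review, referral - is a policy decision, not a technical one.
    """
    return sender_id in red_flags and recipient_id in vulnerable
```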

Ideally, a way might be found to hold the data in common so that platforms of different sizes could utilise it.

Other advantages flowing from having data of this kind? Perhaps children in the vulnerable cohort could be regularly targeted with extra positive messages and support. 

Again, we are told things like this already happen in respect of children where platforms pick up activity indicating suicidal ideation, anorexia, other forms of self-harm and extremism. Thus, one way of looking at this is that we are “simply” trying to establish another protected category.

Why always wait for a disaster to happen?

It seems weird that, too often, we have to wait until a disaster has happened before anyone does anything. Clearly we should be educating and helping all children, but if we can get closer to those with greater vulnerability that has to be a good thing. No? Isn’t that what happens in school classrooms and in families? Not every child needs exactly the same support or attention all of the time.

Obviously there are a host of questions that would need to be resolved. One would need to proceed with care but, much as I like holidays in Greece and funky trainers, I cannot help feeling we are still under-utilising the power of tech to protect children, largely because nobody has worked out a way of making money from it.

My final word for now: why do I have this nagging feeling one or more of the big platforms has probably already done a study like the one I have described but is just not speaking publicly about it?

That’s an easy question to answer, which is precisely why we need a coalition of the willing working with and through anonymised data sources.