I recently heard Professor David Finkelhor make the point, most forcefully, about how little independent evaluation is done in the online child protection space. This restricts the opportunities for people to learn from each other, allowing poor or ineffective programmes to continue or even grow while better initiatives are overlooked.
Then last week, at Parentzone’s excellent conference, Professor Andy Przybylski made the remarkable claim that only around 30% of the programmes he had looked at or was aware of could be replicated and achieve results similar to those claimed by their authors. The implication was clear: there could be a lot of dodgy science going on.
Now, we all know that some things are difficult to measure and that, applying the precautionary principle, you don’t need absolute certainty about everything before common sense points you towards doing something. Evaluation also costs money, and not everybody wants their work looked at too closely by outsiders. Without it, however, there is always a risk that the activities in question are little more than a PR gimmick, truly designed to serve a different and undeclared purpose, or a money grab with no higher objective than meeting the wages bill. That wasn’t meant to sound as pious as it came out. Having been a freelancer for many years, I am probably more aware than most of the precarious nature of this business.
More importantly, though, the absence of sound evaluations suggests a lack of seriousness. It is inconceivable that in the field of medical research, for example, or in other areas where public safety or welfare is at stake, there would not be repeated testing and assessment before something was given any kind of official blessing or imprimatur, much less recommended to third parties.