[BPSDB]The computer system at my workplace, a state secondary school, was down for three days last week. As a result, the usual practice of presenting lessons using interactive whiteboards had to be abandoned in favour of more traditional methods. Some teachers reckoned that student behaviour improved during these lessons. This got me thinking.
First off, this is anecdotal data and, as I am fond of saying, anecdote is not evidence, so there is no real evidence that more traditional teaching methods result in better behaviour. Equally, however, there is no evidence that using IT routinely in lessons results in better behaviour, or better ‘learning outcomes’, to use the current jargon. It has simply been assumed by the powers that be that this will be so; there has never been any rigorous trialling of this teaching method. The same applies to other teaching innovations.
This approach to innovation is not confined to education. Health and policing, to name but two, are subject to periodic reforms which seem based more on the enthusiasms of the relevant Government minister than on any evidence that they will be better than what has gone before. In one case where there was evidence – the harm caused by recreational drugs – the evidence (and the person giving it) was dismissed because it did not fit in with Government policy.
This is classic bad science; the conclusion is drawn up first and the data is hammered to fit. Nor is this attitude confined to the current ruling party. Not so long ago, the Tories produced a document which claimed in part that 54% of girls in poorer areas had at least one child by the time they were 18. It turned out that the incidence was actually 54 per thousand – 5.4%, an overstatement by a factor of ten. A rational approach to this would be to re-examine the data and draw new conclusions. A human reaction might be to shove the report into a desk drawer and pretend it had never been written. The Tories, however, were neither rational nor human: their reaction was to say that this did not alter their conclusions. Why the hell not?
The answer is that it chimed in with David Cameron’s ‘Broken Britain’ meme – that Britain is on the point of social disintegration. Forget the tabloid headlines and take a stroll round where you live and work. Are they on the point of social disintegration? Yet Cameron clearly plans to base social policy on this mistaken assumption.
This is not to say that there are not areas of social deprivation – sink urban estates and isolated rural communities – but possible approaches to these problems, such as parenting classes and public investment, are dismissed as ‘the nanny state’. Social service interventions are given particularly short shrift, but social workers are still blamed when cases such as Baby P slip through the net.
To devise social policy we first need to know if the problem is real. We then need to trial the proposed solution properly by, for example, using it in one community and comparing the outcomes with those of a similar community where it was not used. Also, just as pharmaceutical researchers must report side-effects of trialled drugs, all outcomes of the trialled policy should be reported. For example, a policing policy to reduce street crime might have some success – but that might be at the expense of excessive stop-and-search, which many might regard as being as much of a problem as the risk of having one’s bag snatched.
I am not saying that trialling policy changes will lead to perfection, but it would be a step in the right direction.