Here’s a scenario. You are an early years teacher. You’ve worked hard to secure some extra money to improve the English language skills of the children you teach. You desperately need this to work; you have lots of children whose parents don’t speak English, and a few children with special educational needs. If these children are to be ready for formal schooling in a few years’ time, they must learn to understand complex commands (“before you sit down, please hang your coat on your peg”) and to express themselves in full sentences (“please, Miss, I don’t understand how to work out this sum”). You need an intervention program that is easy to implement – you can only really spare a couple of nursery workers to run it – and you need one that is inexpensive; we’re in the age of austerity, after all.
Which program do you choose? One option – the bad one – is to enter “language and communication interventions” into Google. This is a bad option because it’s not a good idea to choose an intervention program like you’d choose a holiday: because it has a convincing website, because your colleagues recommend it, or because you read about it on Twitter. Anyone can write anything they like on a website; it doesn’t actually have to be true. Colleagues can be mistaken, or can be affected by their own prejudices and biases. And the media, especially social media, is notorious for promoting exciting-sounding, but ultimately ineffective solutions to problems (see Ben Goldacre’s take on the Omega-3 trials here). This option could lead you straight into the open arms of the snake oil merchants.
The other option – the good one – is to choose a program in the same way that your GP chooses which medicine to prescribe: by checking whether there is clear, robust evidence that it will actually work, and by determining whether it is right for the person or population concerned.
But this is easier said than done. How do you – an early years teacher – judge what counts as a good evidence base? Well, luckily you don’t have to. Organisations like The Communication Trust (TCT) and the Early Intervention Foundation are doing it for you. Check out TCT’s What Works database or the Early Intervention Foundation’s report on parenting programs. And don’t forget other English-speaking countries; a lot of work has been done in America – see, for example, the Rand Corporation’s review of early intervention programs.
But it’s not a bad idea to do your own checks too. So here are a few rules of thumb that I presented at a recent event hosted by the Big Lottery Fund’s A Better Start Project:
a. First, make sure that the intervention program you choose has been shown to work. A good first step is simply to scrutinise the website. Does it list publications that give the program a solid evidence base? The website of the Nuffield Early Language Intervention is a great example; the studies that test whether the program works are listed prominently in a “publications” section on the home page.
b. Second, check whether the cited publications actually test if the program works. Some websites have an evidence section, but when you click on it you find that it contains only personal endorsements. It’s quite interesting to hear that a nursery worker somewhere “found the program really easy to implement” or “noticed a difference in the children straight away”, but this isn’t good enough to justify spending taxpayers’ hard-earned cash. The best evidence will test whether the children who took part improved significantly compared to a control group, and it will be published in peer-reviewed journal articles. “Peer-reviewed” is an important label, because it tells you that the research has been rigorously scrutinised by independent experts before publication.
c. And finally, check how well the program works. You’d be surprised at the number of published studies reporting that programs don’t work, or have only minimal effects on children’s language. It’s important that these studies are published, because we need to know which programs don’t work too. But you shouldn’t spend good money on such programs. As a general rule, programs that report so-called effect sizes above 0.3 – a standardised measure of how big a difference the program made – are likely to be effective. It’s even better if the program has been tested in a meta-analysis, which tells you whether it has worked across a number of different studies.
By Caroline Rowland, ESRC LuCiD Centre
This article is based on a talk that the author gave at A Better Start: Focus on Language and Communication in June 2016. Access the slides from the full talk at http://abetterstart.org.uk/content/focus-language-and-communication-event