You have looked at the data, examined the books, worked with the children, and it has become apparent that you need an alternative approach. What to do? Then, as if by magic, a beautifully produced, glossy brochure arrives in the post. It is advertising a new intervention, claiming to be the answer to every problem you have. It is professionally presented, has testimonials all over it and seems very tempting – it sounds like the solution to everything. But how do you know it will work? As budgets are reduced, it becomes even more essential that schools make well-informed decisions about how to spend their diminishing resources. It makes sense that we become more sophisticated in our use of research evidence to inform our decision making. But the evidence isn’t always easy to read, or simple to act upon – we need to delve a little deeper into the details before handing over the cash.
So, what does make good evidence? How might we know if something will be effective or not?
Asking for evidence of effectiveness is an important first step. A glossy brochure is nice. Testimonials from other professionals can be convincing, but they are based on opinions, and good teachers will always make anything work in their classrooms. A solid, robustly conducted piece of evaluation carries far more weight. It is important to mention that there are many different types of educational research – they are all interesting and relevant; there isn’t one that is better than another. Each style of study is designed to collect data and research a topic in a different way, from a different angle. The style of research used should follow from the question being asked. So, for example, if you wanted to know about ways to support children to develop their vocabulary skills, you might ask the question “In what ways can the expressive vocabulary of children be developed?” This type of question would encourage you to find out about everything that has ever been written about expressive vocabulary. In contrast, if you are looking for evidence of the effectiveness of an intervention or programme, the question should be more specific and the trial should be designed to find out whether it works better than current practice. In this case, a study with a control (“business as usual”) or comparison group helps establish whether the new glossy programme really does work better than anything else.
Once you have got the evidence, it requires a critical eye. Questions to ask might include:
- Are there clearly stated aims and outcomes? Is there a clear convincing theory behind the development of the intervention and what it aims to support?
- Has a robust approach been used to measure the effectiveness of the intervention? For example: do the assessments used measure what they should? Are they fair to the children in the comparison group, or do they test only things the intervention children have practised? Has a control group been used? How many children started the trial and how many finished?
- Is there clear and obvious evidence of impact?
- Have the authors acknowledged the limitations of their study? Do they identify what they don’t know, as well as what they have found out?
- Has the research been conducted in a context similar to yours? Are the children of a similar age, gender and socio-economic status? Is the school context similar?
- Do the findings seem believable? If the findings seem too good to be true, they probably are!
If any of the answers are no, then that gives pause for thought… If, finally, you are convinced, go for it… but don’t forget to check that the programme is working for you. You may just be the one exception to the rule.
Megan is the Director of Literacy for the Aspire Educational Trust and the Director of the Aspirer Research School (one of the first 5 Education Endowment Foundation/Institute for Effective Education Research Schools). You can find out more at https://aspirer.researchschool.org.uk/ and follow her on @DamsonEd or @AspirerRS.