The quick answer to this question is to point you towards the various ‘standards of evidence’ that evaluators have developed to rate research in terms of quality and validity.
Within these standards of evidence, a common theme is the importance of control or comparison groups, where service users are compared to non-users (ideally selected randomly, as in a drug trial). The argument is that such studies allow us to be confident about what impact can be attributed to a project, and that they are the best way to obtain ‘clinching evidence’—which is highly prized by funders and policymakers (possibly because it makes their decisions more straightforward).
It is hard to deny the logic of control groups, but I don’t think they should be the only determinant of ‘good evidence’. Running these studies is difficult for charities with few or no resources for research, and if they are treated as the only game in town, then more achievable improvements in evaluation quality can be overlooked.
As such, at NPC, through our day-to-day work on evaluation issues across the charity sector, we have developed a broader set of evaluation ‘principles’ which acknowledge some of the more modest progress organisations can make. The principles are not technically ‘standards’ (like those mentioned above) because they do not form a strict hierarchy. But I have tried to present them in the order in which they might be tackled—from the more basic to the more advanced.
Your programme theory: You describe what your projects do, in terms of long-term impact, intermediate outcomes, outputs and activities, in the form of a ‘project theory’, using an approach such as theory of change, logic modelling or a planning triangle.
Getting to grips with the existing research: Your project theory is supported by a review of the relevant academic literature and other research on your client group and your intervention.
Prioritisation: You have thought carefully about what you actually need to evaluate. Crucially, if your project is already well evidenced by academic research then you only need to collect enough data to check you are delivering it effectively.
Service users: You are clear about the particular needs and characteristics of your target group and quantitatively track attendance and engagement in the project.
Qualitative research: You collect good quality qualitative evidence from samples of users and stakeholders. You can show that the people you’ve spoken to are representative of your target group and have not been cherry-picked to highlight where you have had the most impact.
Quantitative research: You measure the progress of service users longitudinally (i.e. before and after your intervention) using a tool that records both the ‘soft’ and ‘hard’ outcomes relevant to your project theory. You collect this data in a database and analyse it.
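For organisations taking first steps with this kind of analysis, the before-and-after comparison can be sketched very simply in code. The scores and the 0–10 wellbeing scale below are purely hypothetical examples; in practice you would use a validated outcomes tool and, ideally, an appropriate statistical test:

```python
# A minimal sketch of a 'before and after' (pre/post) analysis.
# Each service user has a pre- and post-intervention score on some
# outcome measure; all figures here are hypothetical illustrations.
from statistics import mean, stdev

# Hypothetical wellbeing scores (0-10) for the same eight users.
before = [3, 4, 2, 5, 4, 3, 6, 2]
after = [5, 6, 4, 6, 5, 4, 7, 4]

# Per-user change, so each person acts as their own baseline.
changes = [a - b for a, b in zip(after, before)]

print(f"Mean change: {mean(changes):.2f}")
print(f"Spread of change (std dev): {stdev(changes):.2f}")
```

Working with per-user change scores, rather than comparing group averages before and after, keeps the link between each individual’s starting point and their outcome—though, as the control-groups principle below notes, it still cannot tell you what would have happened without the intervention.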
Control groups: You have considered the counterfactual for your project and compared your results to any available data that helps you understand it. Opportunities to conduct a comparison group study have been considered and, where feasible, pursued (in an education context this often means accessing school or government data). Whatever methodologies are available to you, you should always try to ask ‘what would have happened if we had not delivered this programme?’
Transparency: You publish a report of your results which presents impartially the best quality evidence you have on: a) your impact; and b) what you have learned, along with an honest and comprehensive description of your research methods.
I don’t claim that this is the last word on what makes good evidence. But these principles reflect what we think is important across our work with charities. If you want to know more about what we do then take a look at the various resources available at http://www.thinknpc.org/
By James Noble, Deputy Head of Measurement and Evaluation at NPC