Anybody who has even dipped a toe into the waters of public engagement recently will know what I mean when I talk about the dreaded ‘i’ word.
It seems to be everywhere – in funding applications, at conferences, even (for those of us fortunate to work in higher education) in the REF case studies. Impact is the word of the day, and proving that you have it is everybody’s goal. After all, why fund something that isn’t having an appreciable effect? Why spend time and resources embedding something into your practice if it isn’t going to change hearts and minds?
The problem, of course, is how to measure this. Evaluation is impact’s much talked-about but highly misunderstood little sibling. Sure, we need to evaluate our projects, but not just any evaluation will do. This is why I have massively stepped back the evaluation I do of my programmes, all but eliminating the usual gamut of questionnaires and surveys that used to be a must-have for any robust initiative.
Think about it this way: have you ever gotten a truly surprising answer to ‘did you enjoy this activity/event/project?’ Most people will have enjoyed it, a few people won’t, and that tells you… precisely nothing. Sure, if you’re developing something particularly new or experimental it might be worth checking if your audience enjoyed it, but nine times out of ten you’ll be able to tell how enjoyable something was without asking.
Same with ‘did you learn anything today?’ The facts and figures people might be able to recall and parrot back five minutes after finishing your event are all but worthless in measuring whether you had a real impact on their knowledge. I can memorise a phone number that I need to call – that doesn’t mean I learned it or that I’ll remember it tomorrow, much less in a year’s time.
True evaluation of impact is going to take a lot more effort and a lot more care than what we’re used to. We need to look at long-term changes, all the while understanding the many complex and intersecting factors at play when it comes to affecting people’s attitudes about science. Groups like the British Science Association and Wellcome have started undertaking studies into the longer-term impact of STEM projects, among other things, but it will still be many years before we have the data we need to know what makes a good, impactful project.
Despite the click-baity title, this isn’t a call to stop all evaluation ever. But think about the questions you’re asking and what they’re telling you. Are they really informing best practice and proving impact, or are they just a waste of your audience’s time – and yours?