Evaluation can seem like a dry subject – looking at what happened and trying to evidence impact. However, evaluation is not just audit (did we do what we said we would?). It should also be about learning. Evaluations should help us improve future activity and approaches.

Within Leapfrog we are trying to build evaluation into the tools we are developing, to generate valuable learning not only for us, but also for our partners and a wider audience.

We are developing these tools in collaboration (often called co-creation), and this is a fundamental part of our approach (the “How” of Leapfrog). As such, our evaluation needs to explore not just what worked, what didn’t and why, but also whether the way we did things was as important as what we did.

Evaluation, therefore, needs to do more than measure the final outcome (did we, and our partners, achieve our objectives?) and identify which tools and approaches were most effective (what worked, what didn’t, how efficient was it, and so on). It also needs to consider broader, harder-to-measure and qualitative elements such as greater trust or strengthened collaboration. Ideally it will also tell us something about how such changes happened. Not least, evaluation needs to pick up on negative aspects – where approaches didn’t work, and why such problems might have occurred.

Established approaches to evaluation (and indeed research) rely on observation, surveys, structured interviews, focus groups and so on. These methods might still be relevant, but they require a lot of time, energy and resources. So one of the most exciting parts of Leapfrog is developing evaluation that is embedded within the engagement tools, rather than being a (costly) add-on. If evaluation is integrated in this way, then organisations, communities and researchers can learn at the same time, and help us learn even more as they adapt the tools for their own use.

It is quite an exciting challenge!