Since joining the Leapfrog project only a couple of months ago, I’ve learnt of the deep complexity of the challenges it tries to address and the genuinely exciting progress the project has made through co-designing tools. As the lead for evaluation on the project, coming in for its final six months, I face a truly intimidating task: not only ensuring the project’s impact is accounted for, but also that the complexity of the value created is explored and understood.

For this reason, the Summer School couldn’t have come at a better time or provided better stimulus for discussion, as many, possibly all, participants arrived with the question of evaluation at least partly on their minds. It was an environment ripe for debate on the topic, not least from the excellent selection of keynote speakers. The general opinion acknowledged that evaluating such relational contexts is not easy. Evaluation is about measurement: measuring where you add value, where you’re making a difference. This is traditionally a numbers game, but the deeply qualitative nature of many of the stories of change that come from community engagement means that numbers alone can’t do them justice. This seemed to be understood across the cohort at the Summer School.

One keynote speaker, Stéphane Vincent, director of La 27e Région, shared how their work innovating policy-development practices in France quickly revealed the need for evaluation grounded in the changes they developed. Prof. Rachel Cooper drew on her vast experience of delivering complex research collaborations, oriented around identifying the simple drivers behind key actors. For academics, for example, funding and research are what drive them; for professionals, it’s knowledge in practice. From such simple starting points you can begin to articulate where value is being provided across the network.

Among the delegates, the discussion moved largely between the value being provided to the communities they engaged and the cultures or structures they were attempting, and often struggling, to influence. From NHS services constrained by extensive targets, to researchers anxious not to impose university or project agendas on their participants, there seemed a shared desire to make space for evaluation that is sensitive to its context. Should universities take a more activist role to address the political challenges in their contexts? Are there more immediate methods of capturing the effects of community engagement, so as not to lose those valuable moments of insight and progress? Many complex and important questions emerged over the three days.

Such rich discourse truly served to develop our thinking on evaluation for Leapfrog. In addition to embarking on an extensive programme of interviews to capture as many stories of influence, benefit or failure as possible, we also want to create a meaningful space for learning in the evaluation process. To aid in this, we are developing a process to iteratively model the complex stories that emerge from each Leapfrog project so that they can become inter-relatable. By structuring the motivations of key actors alongside the changes in process or outcome, each story can remain unique while still making sense alongside the others. The aim is to discover the qualitative units that evidence added value within Leapfrog. We see this partly as a creative process of co-evaluation: if such modelling doesn’t make sense to the stakeholders and partners we engage, then it can serve no meaningful evaluative purpose. The ambition is that it will, and that we’ll share it beyond the scope of Leapfrog, so watch this space…