Last week Leapfrog participated in a working seminar focusing on the evaluation of intermediaries in the public sector. It was organised by IRISS (Institute for Research and Innovation in Social Services), an independent research institute funded by the Scottish Government, together with HIS (Healthcare Improvement Scotland) and CELCIS (Centre for Excellence for Looked After Children in Scotland). The seminar gave us an excellent opportunity to meet new people, share our perspectives and experiences on evaluation, and discuss the challenges we face in evaluating intermediary organisations and communities.
We made contact with some interesting organisations who participated in the event, to name a few: Evaluation Support Scotland, Glasgow Centre for Population Health and NHS Education Scotland. These organisations could be valuable for Leapfrog, as they were keen to engage their community partners using creative evaluation tools; equally, Leapfrog can learn a great deal from them about the context of the communities in which the tools are deployed, i.e. the Highlands and Islands.
In the morning session, each organisation was given an opportunity to present who they are, what they do and what evaluation challenges they face. It was notable that most of the organisations talked about some common evaluation challenges they face:
- Capturing meaningful information from partners through creative evaluation, compared with face-to-face interviews.
- Establishing causality: how do we draw connections from the basket of evidence we gather?
- Building feedback loops among stakeholders.
- Identifying contextual factors that influence social change.
- Avoiding the ‘Hawthorne effect’, in which participants respond differently simply because they know they are being observed.
- Evaluating the use of tools by people who have downloaded them from our website (people we may never meet).
- Evaluating hundreds of instances of tool use across the UK.
In the afternoon session, two expert practitioners shared their perspectives on evaluation issues in projects that aim to create social change. They also highlighted the importance of understanding the context in which a project is set. Following that, we worked with other participants to discuss our experiences of:
- Making decisions on research design for evaluating intermediaries.
- Defining outcomes when evaluating data.
- Collecting data.
- Effective interventions, effective implementations/methods, enabling context and socially significant outcomes.
- Embedding/sustaining evaluation practices within organisations.
This was a timely event for Leapfrog, as our engagement tools are starting to propagate from both short and major projects. We aim to better understand the impact our tools have made in communities, to examine how well we can capture the outcomes of the organisations we collaborate with, and to share our creative evaluation approach with those who might benefit most from using it.
A key insight taken from the day was the potential for both contribution analysis and implementation science to help inform this challenging aspect of the Leapfrog project. Contribution analysis recognises that direct causal relationships are unlikely to be evident in complex, real-world interventions. Instead, it focuses on the part that an intervention, project or other activity has played in larger overall changes in context. Implementation science was new to us; it looks at how a successful pilot intervention can be translated into real organisational change. We are still finding out about this, so watch this space.