Three things come to mind.
One - we need to ensure that the program or policy you are working to get into a system is actually implementable. This means it has to be adequately described. Unfortunately, particularly in social services, a lot of programs are described only in big-picture, broad statements. These are important but insufficient to enable practitioners and supervisors to understand what they are supposed to do, how they can continually get better at it, and how to monitor whether it’s happening. We often need to assess the implementability of interventions and improve how they are described in order to improve training and coaching in the field.
This also means that you need innovation-specific capacity – somebody who knows, at a very detailed level, about the program that you’re trying to implement. Take, for example, SVA’s project Restacking the Odds, which is creating an evidence-based measurement framework to help improve services for children experiencing intergenerational disadvantage. In this case, you need staff with expertise in, and a good understanding of, the parenting, early childhood, and health-focused programs that the project intends to implement; staff who know what’s workable and what’s not.
Two - we need to fund more than just training. As mentioned earlier, coaching is the most effective way to help people deliver a new approach well. We need to move beyond only giving workers training, because training by itself has been shown to be largely ineffective, no matter how good it is. Again, it is important but insufficient to achieve the change.
Three - data. We need to align what we collect with what will help us understand three things:
Reach: Are you reaching the population that you intend to serve? (This means really understanding the characteristics and the needs of the people you’re serving.)
Implementation: Are you implementing in the way you intended? (This is often referred to as program adherence and fidelity.)
Outcomes: As a result of your implementation efforts, are you achieving the outcomes that you set out to achieve?
If we are able to collect quality data across these three areas, then we are better able to monitor how we’re going and continuously improve.
The data piece is important because your assessment of how well the implementation has gone is only as good as the data you collect. And focusing on continuous improvement is key to working in an implementation-science-informed way. For example, when implementing a mental health program, are we using a quality measure of mental health that’s valid and reliable, so we can trust that it’s measuring what it intends to measure? Many organisations use home-grown measures in their programs. Implementation science tries to help organisations move away from that and towards the use of measures that are more valid and reliable.