The wisdom of the dictum “you cannot manage what you cannot measure” holds true. Thus the PATH methodology is recommended as a systematic method to design, test, measure, learn from and scale working solutions in an evidence-based manner. PATH in particular has been chosen because of its roots in applied social psychology: the issues being dealt with here are not just technical but also intimately shaped by human responses under real-world conditions.
As such, interventions have to be designed to account for the many possible variations in human behaviour and decision-making, including partial or even zero compliance. Unlike purely technical solutions, logical design will not necessarily lead to predictable and controllable outcomes, since these interventions involve the participation of humans across a multitude of socio-cultural contexts. Finally, even when acting on the same issue towards common goals, successful implementation of interventions will likely differ across regions, local cultures and individual attitudes.
PATH itself is an acronym for the four steps of the model: Problem, Analysis, Test and Help. The following text describes the PATH methodology at a high level for consideration.
In the first step, the core problem within the context of each initiative needs to be identified and articulated as a clear problem definition. To reach a solid problem definition, some amount of preliminary research about the problem, its background and the stakeholders is needed. Some triggering questions could be:
1. What is the central problem to be addressed?
2. Why is this perceived as a problem in the first place?
3. For whom is this a problem?
4. What resources and whose help are available to solve this problem?
Based on the preliminary problem definition, target outcome variable(s) are specified. Outcome variables are comparable to key performance indicators (KPIs), so they must be relevant to the problem, follow logically from the problem definition, and be specific, concrete and measurable. Outcome variables that are too broad may undermine the ultimate effectiveness of the intervention. At this point, there are two phases to follow through – a divergent phase and a convergent phase.
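To make concrete what “specific, concrete and measurable” means in practice, an outcome variable can be pinned down as a small data record with a unit, a baseline and a target. This is a minimal illustrative sketch; the household-recycling example, field names and numbers are invented for illustration and are not part of the PATH literature.

```python
from dataclasses import dataclass

@dataclass
class OutcomeVariable:
    name: str          # what is measured
    unit: str          # how it is measured
    baseline: float    # value before the intervention
    target: float      # value the intervention aims for

    def progress(self, observed: float) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return 0.0 if gap == 0 else (observed - self.baseline) / gap

# Hypothetical outcome variable for a recycling intervention.
recycling = OutcomeVariable(
    name="household recycling rate", unit="% of waste recycled",
    baseline=30.0, target=50.0)
print(recycling.progress(40.0))  # halfway to target -> 0.5
```

Forcing each outcome variable into this shape makes vague KPIs (“improve awareness”) immediately visible: if no unit, baseline or target can be written down, the variable is too broad to measure.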
1. Divergent phase: In the divergent phase, intervention designers aim to generate as many explanations as possible for why the defined problem might have arisen. Creative association and brainstorming techniques are recommended for this task. In addition, intervention designers may also conduct stakeholder interviews, surveys or observations to gain further insight.
2. Convergent phase: In the convergent phase, intervention designers narrow the long list generated previously – clustering and combining where necessary – down to a handful of highly convincing explanations that exert significant influence on the outcome variable(s). These explanations may have roots in academic literature and/or extensive field experience.
In the Test phase, intervention designers take the narrowed-down list of plausible explanations and formulate it as a mental model or framework that tells a systematic story connecting input conditions to the outcome variable(s). Typically this is represented as a flowchart with the outcome variable on the right, while the rest of the space depicts the influences and interactions that combine under real-world conditions to exert direct or indirect influence on the outcome variable. There may also be feedback loops that reinforce or inhibit the behaviour of the system.
As a general rule of thumb, a process model should have no more than 10 variables, and a maximum of four steps between any explanatory variable and the outcome. Most importantly, the process model should be communicable and convincing to the stakeholders in the process, supported by theory as well as empirical evidence.
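The rule of thumb above can be sketched as a mechanical check on a process model represented as a directed graph: count the variables, and verify that no explanatory variable sits more than four steps from the outcome. The example model (a behaviour chain with a reinforcing feedback loop) and all variable names are invented for illustration and do not come from the PATH literature.

```python
def check_process_model(edges, outcome, max_vars=10, max_steps=4):
    """Check the rule-of-thumb limits on a process model.

    edges: list of (cause, effect) pairs; outcome: name of the outcome variable.
    """
    nodes = {n for edge in edges for n in edge}
    if len(nodes) > max_vars:
        return False
    succ = {}
    for cause, effect in edges:
        succ.setdefault(cause, []).append(effect)

    def longest(node, seen):
        # Longest simple path (in steps) from node to the outcome;
        # -1 if the outcome is unreachable from this node.
        if node == outcome:
            return 0
        best = -1
        for nxt in succ.get(node, []):
            if nxt not in seen:
                d = longest(nxt, seen | {nxt})
                if d >= 0:
                    best = max(best, d + 1)
        return best

    return all(longest(n, {n}) <= max_steps for n in nodes if n != outcome)

# Hypothetical process model with a feedback loop back into attitude.
edges = [("social norms", "attitude"), ("attitude", "intention"),
         ("intention", "behaviour"), ("behaviour", "feedback"),
         ("feedback", "attitude")]
print(check_process_model(edges, outcome="behaviour"))  # True
```

A check like this is obviously no substitute for the model being convincing to stakeholders, but it catches models that have quietly grown beyond the point of being communicable.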
With the process model, the intervention itself is ready to be designed. In this step, the intervention designers prioritize the explanatory variables most amenable to action, depending on factors such as timeline, budget, resource availability and process familiarity. Explanatory variables that are not easily modifiable – typically factors such as personality traits, deeply held values or the incumbent political environment – are also highlighted to inform the nuances that the intervention must navigate. Based on this information, intervention approaches can be designed.
The following is a suggested process for intervention design, known in the literature as Intervention Mapping.
1. Needs assessment: If the PATH method has been followed, the thrust of this step should already be covered by the activities under the Problem, Analysis and Test phases. On the basis of these analyses, concrete goals can be defined. Practically, this may take the form of target metrics for selected variables, including the outcome variable(s), in the process model.
2. Programme objectives: In this step, intervention designers ask “What does it take to decrease the effect of problem-reinforcing variables and increase the effect of problem-inhibiting variables?” The objectives may target both humans and their contextual environment, answering who is going to do what behaviour, and why.
3. Methods and strategies: Intervention designers then select methods and strategies through which the programme objectives might be met, with the goal of achieving the target metrics within a certain timeline/budget.
4. Programme development: The methods and strategies selected above are further detailed in this step, organising the approach into a deliverable programme, taking into account the materials, the infrastructure, the stakeholders and the touchpoints. The endpoint of this step should be an intervention programme ready for implementation.
5. Implementation plan: Here, intervention designers develop a plan for systematically and successfully implementing the programme. It is wise at this point to identify potential logistical bottlenecks, and it is also a good opportunity to revisit the PATH model in light of new information about the stated goals and the detailed programme design. The end product should be an implementation plan with target objectives, success factors, and detailed instructions for programme adopters/implementers.
6. Evaluation plan: Finally, intervention designers are tasked with building evaluation frameworks into the programme, with heartbeat moments, tracking of milestones, and systems to record and collect empirical data on real-world behaviour and interactions. This is a critical step, because, as the dictum goes, “you cannot manage what you cannot measure”.
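The evaluation machinery described above – heartbeat moments, milestone targets and recorded observations – can be sketched as a simple log that compares what was observed at each milestone against what was targeted. The metric name, dates and numbers below are invented for the example; this is a minimal sketch under those assumptions, not a prescribed implementation.

```python
from datetime import date

class EvaluationLog:
    """Record observations at heartbeat moments and check milestones."""

    def __init__(self, metric, milestones):
        self.metric = metric
        self.milestones = milestones   # {date: target value}
        self.observations = {}         # {date: observed value}

    def record(self, when, value):
        self.observations[when] = value

    def report(self):
        """Return one (date, target, observed, on_track) row per milestone."""
        rows = []
        for when, target in sorted(self.milestones.items()):
            observed = self.observations.get(when)
            on_track = observed is not None and observed >= target
            rows.append((when, target, observed, on_track))
        return rows

# Hypothetical quarterly milestones for a recycling intervention.
log = EvaluationLog("household recycling rate (%)",
                    {date(2024, 3, 31): 35.0, date(2024, 6, 30): 42.0})
log.record(date(2024, 3, 31), 36.5)
for row in log.report():
    print(row)
```

Even a log this simple enforces the discipline the step calls for: targets are fixed in advance, observations are dated, and a missing observation shows up as an off-track milestone rather than silently disappearing.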
It is important to understand that PATH is not intended to be applied in a linear or rigid manner but rather iteratively and participatively, moving back and forth as needed between problem and intervention, involving the appropriate stakeholders where necessary, and adapting the implementation strategy based on new information. Of course, PATH is merely one of many possible frameworks, so it should be seen as just an illustrative example to highlight key requirements for systematic programme design and evaluation. The key is to have a structured process in place to design human-centric initiatives, continuously learn from the performance of these initiatives, share these learnings among stakeholders, and be able to systematically refine successful initiatives over a long period of time in an evidence-based manner.
Buunk, A. P. and Van Vugt, M. (2008), Applying Social Psychology: From Problems to Solutions, SAGE
Ruiter, R. and Van Vugt, M. (2013), “Applying Social Psychology to Understanding Social Problems”, Social Psychology of Social Problems, ch. 13