Update 2020 03 17: There is now a 12-minute YouTube video explaining how ParEvo works here.
Planning a ParEvo exercise is important. At almost every stage described below, there are design choices which can make a significant difference to how the exercise proceeds, and the value it provides to the participants.
1. Clarifying the aim of a ParEvo exercise
These are of two kinds: (a) within-exercise aims, and (b) post-exercise aims
1.1 Within-exercise aims
Two types of objectives can be pursued, in parallel:
- Content objectives: These are about the nature of the contents of the storylines that are to be developed
- Process objectives: These are about the ways in which the participants might be expected to interact, and the effects of those interactions.
Regarding content objectives, ParEvo can be used to develop alternative views of:
- What might happen in the future, or
- What has happened in the past
Alternative futures can be of two types:
- Forecasting, where there is no prior view of what the desired end state is, for any particular time in the future.
- Backcasting, where there is an agreed end state, which any scenarios being developed should lead to.
Here is a Wikipedia explanation of the difference between forecasting and backcasting.
Process objectives can be about how people participate. For example:
- Identifying future scenarios which have maximum ownership by all participants
- Identifying which participants are particularly good at making contributions valued by others, and vice versa
- Identifying which participants are most similar and most different in their perspectives on a given issue
- Doing research on what forms of participation are associated with the development of scenarios that are positively evaluated on criteria like probability and desirability, or the opposite
Meta-cognition: This is a third type of objective that may be relevant, one that bridges the content and process distinction. Meta-cognition is thinking about how we think – in this instance (and quite importantly) about how we think about the future (described by some as “futures literacy”). There are two ParEvo facilities that may help with this objective:
- The Comment facility, which can be used throughout a ParEvo exercise
- The Evaluation function, which would typically be used at the end of a ParEvo exercise
1.2 Post-exercise aims
These are all about how you want to make use of the exercise once it has been completed. For more on these expectations, skip to section 10, Using the products of the ParEvo process.
2. Identifying who will be involved
Three types of people will normally be involved:
- Participants, who generate the contents of scenarios within a ParEvo exercise, and who evaluate the overall results.
- One or more Facilitators, who invite participants, set up the framework within which they can participate, and provide continuing guidance throughout a ParEvo exercise.
- The ParEvo Administrator who approves requests from people to act as a Facilitator, and provides them with the parameters they can control. This is a background role thereafter. Technical support and advice can also be provided to facilitators on how to make the best use of ParEvo.
Postscript: Facilitators can now allow a fourth group, known as Observers, to view the contents generated by a ParEvo process, in real time and after completion, but not to add content in any way. This is done by sharing an exercise-specific hypertext link.
Participants can participate as individuals, representing their own views. Or they can take on roles, representing different stakeholders. Or, in each iteration, they can voice the views/behaviour of different actors who could be involved in the unfolding events. Or as individuals, participants can each represent a whole team or unit with a particular interest or perspective. Approaches which maximise the diversity of views are likely to be helpful. But all within the constraint that participants should be expected to have a shared interest in the scenarios being developed.
The minimum number of participants seems likely to be four. Larger numbers will generate more diversity of views; for more on this question see “How many is too many?”. Diversity is what drives the ParEvo process. With really large numbers it may be best for participants to be broken into a number of small teams, each acting as a quasi-individual. There is some evidence that diverse sets of teams (each with more homogeneous members) may be the best way to solve complex problems (Pescetelli et al., 2020).
As mentioned above, all participants must initially be invited by a ParEvo exercise Facilitator. They then register as participants on the ParEvo website, to obtain a password. This then enables them to log onto the ParEvo website thereafter and gain access to any of the exercises they are involved in.
2020 03 26: There needs to be some degree of fit between the characteristics of the group of participants and the purpose of a ParEvo exercise. The purpose has to be motivating in one respect or another. Asking people to speculate on alternate futures that they either know little about, or whose contents will have little consequence for them, may not be very productive.
3. Describing the starting point of the process
This is a seed paragraph of text, providing a common starting point for all subsequent scenarios. This can be a real event or an imagined event. Think of it as the opening paragraph in the first chapter of a novel.
4. Defining the endpoint
At the planning stage, this is an option, not a requirement. The endpoint can be defined as (a) a point in time and/or (b) a specific number of iterations of the ParEvo process (see below). As above, it may include a description of what is expected to have happened by that endpoint (backcasting), or leave this open (forecasting).
The optimal number of iterations will depend partly on the number of participants. If the number of iterations equals the number of participants + 1, then each participant will have had the opportunity to build on the contributions of each of the other participants on at least one occasion. This might represent a minimal ideal level of exploration of the diversity of ideas presented by the participants. In practice, only one of the exercises completed so far has extended this far.
5. The facilitator provides guidance to participants
In each iteration, from the beginning onwards, the facilitator needs to provide participants with some guidance. It can include the following:
- Minimal requirements for participants’ contributions:
- maximum length,
- plausibility/probability and consistency requirements,
- deadlines for contributions
- guidance on civility, etc
- Context-setting information:
- Reminding participants of the overall purpose of the exercise
- (Optionally) providing information on “surrounding developments” that the emerging storylines might need to take into account.
6. Participants make their contributions
In this first iteration, all participants:
- Receive and read the guidance from the facilitator and then
- Contribute an additional section of text describing what they think might happen next. This action develops the beginning of N storylines, where N = the number of participants. It contributes “variation”, one of the three essential parts of the evolutionary algorithm
7. Developing storylines are displayed and shared
When participants log onto the ParEvo webpage and access the particular exercise they are involved in, they will see a view like the one below, with five different parts:
- The Facilitator’s guidance in the centre top area, with the exercise title above it
- A graphic representing the exercise theme, on the top left
- The seed text, underneath the Facilitator’s guidance.
- A tree structure, on the left side, enabling participants to navigate along and between different storylines, while seeing how they connect to each other. This is supplemented by a scrollable column of text in the centre of the page, representing the currently selected storyline of interest.
- Though not shown here, there is also a space for comments on contributions.
The identity of the contributor of each paragraph to each storyline is not made visible. The intention is that the participants’ focus should be on the content of the contributions, uninfluenced by knowledge of who the contributors are. Their identity will be known to the Facilitator (see more below).
8. Re-iteration of 5,6,7
A new iteration only begins when all participants have contributed to the previous iteration and their contributions have been displayed.
Facilitator updates guidance to participants
This may or may not be needed at the beginning of each new iteration, depending on participants’ previous behaviour and the need to introduce any new information about the imagined “surrounding context”.
Participants add new contributions
At the beginning of each new iteration all participants are asked to look at each of the developing storylines and choose one which they would like to extend with a new contribution of their own. Each participant can only make one contribution, to one existing storyline, per iteration. But with each new iteration, they can change their mind about which storyline they now want to contribute to. As above, their text contribution could be a good, bad or neutral development, as seen from any stakeholder’s perspective. Participants can choose to add to any previous contribution, made by others or by themselves. As before, these contributions are anonymous.
Previously I had thought that fast iterations would be a good thing, making fewer time demands on participants and delivering results sooner rather than later. But a paper by Bernstein et al. (2018) titled “How Intermittent Breaks in Interaction Improve Collective Intelligence” suggests otherwise: delays between iterations might be beneficial.
Display and sharing of contributions
After all participants have made their next contribution, the display is updated to show the extended contents of each of the storylines. If more than one participant chooses to add their next contribution to the same existing storyline, then that storyline branches and becomes two (or more) storylines. On the other hand, if an existing storyline receives no new contributions it remains viewable but is now treated as “extinct”: it can no longer be added to in subsequent iterations. The number of live storylines after any iteration will always equal the number of participants.
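The branching and extinction rules just described can be sketched in code. This is a minimal illustration, not ParEvo’s actual implementation; the `Node` class, the `add_iteration` function, and the participant and storyline names are all invented for the example. Each contribution is a node in a tree, a storyline branches when two participants extend the same node, and any tip that receives no new contribution becomes extinct:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One contribution; a storyline is the path from the seed to a tip."""
    text: str
    parent: "Node | None" = None
    children: list = field(default_factory=list)

def add_iteration(choices):
    """choices maps each participant to (node_to_extend, new_text).
    Returns the new live tips; tips not extended become extinct."""
    new_tips = []
    for node, text in choices.values():
        child = Node(text, parent=node)
        node.children.append(child)
        new_tips.append(child)
    return new_tips

seed = Node("Seed paragraph")

# Iteration 1: four participants each start a storyline from the seed.
tips = add_iteration({"P1": (seed, "A"), "P2": (seed, "B"),
                      "P3": (seed, "C"), "P4": (seed, "D")})

# Iteration 2: two participants both extend "A" (it branches into two
# storylines); nobody extends "D" (it goes extinct but stays viewable).
tips = add_iteration({"P1": (tips[0], "A1"), "P2": (tips[0], "A2"),
                      "P3": (tips[1], "B1"), "P4": (tips[2], "C1")})

# Live storylines always equal the number of participants.
assert len(tips) == 4
```

Note how the invariant holds mechanically: each of the N participants adds exactly one new tip per iteration, so branching in one place is always balanced by extinction somewhere else.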
9. Evaluation of content
The progress and achievements of a ParEvo exercise can be assessed in three ways:
1. Participants’ Comments
Participants can be enabled to post Comments on the contributions that have been made. The use of the Comment facility is optional: the Facilitator decides if and when to allow Comments, and individual participants can choose if and when to make comments during any given iteration. They can make a maximum of one comment per storyline in a given iteration. This comment facility can be used as the second part of each iteration of the ParEvo process, but only after all new contributions have been received and displayed in that iteration. Comments are anonymised and all displayed at once, prior to the commencement of the next iteration.
2. Participants’ evaluations – within ParEvo
At the end of the ParEvo process, the facilitator triggers an evaluation stage, where participants are asked to rate the surviving storylines on two default criteria: (a) their probability of happening in real life, and (b) the desirability of their happening. See Figure 2 below.
The Facilitator can edit and change the default evaluation criteria. Other criteria, such as novelty, or observability, may be more useful in some circumstances. (See Pugh, 2016)
After all responses have been received, the aggregated responses of all participants to the built-in evaluation questions are shared with all participants, through a display as seen in Figure 3 below. The ratings of each storyline can be viewed by clicking on the left and right arrows above the evaluation panel.
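As an illustration of what that aggregation involves, here is a minimal Python sketch. The storyline names, the ratings, the 1–5 scale and the variable names are all invented for the example; this is not ParEvo’s internal code:

```python
from statistics import mean

# One (probability, desirability) pair per participant, per storyline,
# on an illustrative 1-5 scale. All names and numbers are invented.
ratings = {
    "Storyline A": [(4, 2), (5, 1), (4, 2)],
    "Storyline B": [(2, 5), (3, 4), (2, 5)],
}

# Aggregate each storyline's ratings into mean scores per criterion.
summary = {
    name: {"mean_probability": mean(p for p, _ in pairs),
           "mean_desirability": mean(d for _, d in pairs)}
    for name, pairs in ratings.items()
}

for name, stats in summary.items():
    print(name, stats)
```

The same per-storyline means are what a scatter plot of probability against desirability (as discussed later, for Figure 3) would be built from.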
3. Participants’ evaluations – via external survey
Survey Monkey (or similar) can be used to ask participants additional and more open-ended evaluation questions. See the design and results of such a survey, associated with the Alternate futures for the USA 2020+ exercise.
At the end of a ParEvo exercise, several kinds of data can be downloaded in Excel file format and subjected to further analysis.
10. Using the products of the ParEvo process
The contents generated by a ParEvo exercise can be used at three stages of a development project or intervention of some kind:
Carried out at an early stage, a ParEvo exercise can inform the design of how a programme is expected to work. ParEvo storylines can provide multiple alternate views on how a programme might work, or fail to work.
Where a programme design is being captured in the form of a Logical Framework, the contents of the Activities, Outputs, Outcomes and Impacts cells can be informed by the contents of ParEvo storylines rated as “most desirable”. On the other hand, the Assumptions column may need to be informed by the contents of storylines rated as “most probable” but also “least desirable”.
Where a programme design is being represented by more diagrammatic means (e.g. network structures as described in Davies, 2018) there will be more opportunities to use the contents of multiple storylines to inform the articulation of multiple alternate causal pathways. Normally, in most diagrammatic Theories of Change it is only the desired pathways that are mapped out, but there may also be an argument for including pathways representing undesired changes.
A more traditional use of scenario planning is to identify alternative futures that may need to be forfended – that is, prepared for well in advance – for example, by developing strategies for responding to scenarios should they actually happen. Two different types of scenarios can be considered, which correspond to a distinction often made between risk and uncertainty:
- Scenarios where some probabilities can be assigned
- Scenarios where it is not possible to do so
Note that with the existing evaluation feature built into ParEvo it is possible to identify three types of storylines: those where there is (a) agreement on most or least probability, (b) disagreement on most or least probability, or (c) less extreme but unidentified probabilities. The difference between (a) and (b)/(c) approximates the risk and uncertainty distinction. For more on these measures, see the section on using predefined evaluation criteria.
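One way to make that three-way distinction operational is to look at both the mean and the spread of participants’ probability ratings. The sketch below is an illustrative heuristic, not ParEvo’s built-in measure; the `classify` function, the 1–5 scale and the threshold values are arbitrary assumptions:

```python
from statistics import mean, stdev

def classify(ratings, extreme=4.0, spread=1.0):
    """Classify a storyline from its probability ratings (1-5 scale).
    Thresholds are illustrative: 'extreme' marks a high/low mean,
    'spread' marks how much disagreement counts as disputed."""
    m, s = mean(ratings), stdev(ratings)
    if s > spread:
        return "disputed probability"        # type (b)
    if m >= extreme or m <= 6 - extreme:
        return "agreed extreme probability"  # type (a)
    return "less extreme probability"        # type (c)

print(classify([5, 5, 4]))     # agreement that it is highly probable
print(classify([1, 5, 1, 5]))  # disagreement about its probability
print(classify([3, 3, 2, 3]))  # agreed, but middling, probability
```

Under this reading, type (a) storylines are closest to quantifiable risk, while types (b) and (c) are closer to uncertainty.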
Where these risks are identifiable, a specific strategy may be needed to respond to each different scenario of this kind. These different kinds of scenarios may vary not only in terms of their estimated likelihood but possibly also on other criteria with consequences that can be forfended, e.g. undesirability.
Experience with ParEvo exercises so far suggests that only a minority of storylines will have identified and uncontested probabilities. The same is the case for other attributes such as undesirability. So other complementary strategies may also be needed.
For the second type of scenario, characterised by uncertainty, what is needed are more generalised and robust strategies. Here is a useful idea from Sandra Mitchell’s book Unsimple Truths: Science, Complexity and Policy (2009).
Rather than maximise expected utility, Popper, Lempert and Bankes (2002:423) recommend identifying and adopting what they called the most robust strategies. These strategies might not have been the best possible option available as any one outcome, but their satisfactory outcomes occur in the largest range of future contingencies. Robustness analysis requires one to consider models [that] take into account what we do know without pretending that we have precise probability assignments for what we don’t know. Rather it analyses a range of diverse but possible scenarios and the ways in which a policy decision today would play out in each of them. As they put it, “a key insight from scenario-based planning is that multiple highly differentiated views of the future can capture the information we have about the future better than any single best estimate” (p. 93) [underlining added]
How could such robust strategies be developed? There are two options. One would be to treat all the remaining storylines as one group. Then to find a robust way of responding, regardless of which individual storyline actually occurred. The larger the number of storylines in this set, the more difficult this is likely to be.
Where this is too big a challenge the storylines could be split into two groups, where the members of each group would share some common characteristics which made them different from the other group. A common and feasible robust response might then be identifiable for each group. One way of doing this is by using participatory pile sorting exercises, where participants could be asked to “sort these storylines into two different groups, according to what you see as the most significant difference between them – which could make a difference to how we should respond”.
Regarding the above options for identifiable risks and for uncertainties, there may be some science that is relevant here, to do with “bet-hedging” strategies – a particular evolutionary strategy where there is a high level of environmental unpredictability.
“The difference between adaptive plasticity and bet-hedging is that plastic norms of reaction result in the expression of an optimal phenotype over a range of environments, whereas bet-hedging expresses a single phenotype (that may be a fixed level of diversification) that is neither optimal nor a failure across all environments” (Simons, 2011)
For more, see this Wikipedia article on bet-hedging in biology
2. Monitoring and adaptation
Once an intervention has begun, progress with implementation normally needs to be monitored, ideally along with the identification of events in the surrounding environment that may be necessary, supportive or obstructive.
Traditional monitoring and evaluation tools, notably the Logical Framework, have significant limitations. While they can be economical in terms of the number of events that need to be observed and measured, this economy comes at the price of tunnel vision.
In contrast, the branching structure of ParEvo storylines can provide a much wider view of possible developments that could have consequences for the success of the intervention. The ongoing scanning for signs of any of these developments being realised may not necessarily be that costly. Events simply have to be observed as happening or not, not repeatedly measured with some care.
Some of the events in ParEvo storylines might be of sufficient concern and probability that they warrant consistent attention, i.e. being continually looked out for. These could be, in effect, built into an organisation’s M&E system.
Other events might be less predictable, and less clearly identifiable, but still of concern. In these circumstances, it could make sense to make use of the Most Significant Change technique as a horizon scanning tool. The MSC question might ask “What was the most significant change that took place in the last x period that could have consequences for how our organisation/intervention works in the next y period?” The answer would be, as with any MSC application, a description of the event and an explanation of its significance. The organisation would then need to plan a management response, i.e. ideas/proposals for how the organisation should respond in anticipation of those consequences.
With some uses of MSC, one or more domains of change are identified in advance. These domains are added onto the end of the MSC question, as in “most significant change in… (domain description)…”. They give the search for significant changes a degree of focus without being too prescriptive. In the application proposed above, domains could be particular kinds of storyline, or kinds of events seen in the ParEvo storylines. These could be identified by third-party content analysts or by the participants themselves.
Responding to different kinds of scenarios
When a ParEvo exercise has finished there will be a set of storylines describing a number of alternative futures. As shown in Figure 3 here, these can be located in a scatter plot where the two axes are desirability and probability. Storylines in different quadrants may merit different kinds of responses. For example:
- Most undesirable and least likely storylines may not need a response
- Most undesirable and most likely storylines may need careful monitoring, and a carefully considered intervention
- Most desirable and least likely storylines may benefit from some monitoring and some form of intervention which may increase their likelihood
- Most desirable and most likely storylines are likely to already be the subject of designed interventions and monitoring activities. The ongoing concern here may be about sufficient alignment – between these designed activities and the most current understanding of what is a likely and desirable storyline
Nerd note: There is a correspondence here with the four cells of a Confusion Matrix
- Most undesirable and least likely storylines = True Negatives
- Most undesirable and most likely storylines = False Positives
- Most desirable and least likely storylines = False Negatives
- Most desirable and most likely storylines = True Positives
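The quadrant-to-confusion-matrix mapping above can be expressed directly in code. This is a toy illustration: the `quadrant` function, the 1–5 rating scale and the midpoint cut-off of 3 are all assumptions made for the example:

```python
# Each storyline is placed by its mean desirability and mean probability.
# Keys are (is_desirable, is_likely) relative to an assumed midpoint.
LABELS = {
    (False, False): ("undesirable & unlikely", "True Negative"),
    (False, True):  ("undesirable & likely",   "False Positive"),
    (True,  False): ("desirable & unlikely",   "False Negative"),
    (True,  True):  ("desirable & likely",     "True Positive"),
}

def quadrant(mean_desirability, mean_probability, midpoint=3.0):
    """Return the quadrant label and its confusion-matrix analogue."""
    key = (mean_desirability > midpoint, mean_probability > midpoint)
    return LABELS[key]

print(quadrant(4.5, 1.5))  # ('desirable & unlikely', 'False Negative')
```

Classifying every surviving storyline this way would give a quick overview of which of the four kinds of response listed above each one might merit.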
All interventions have a history, and any evaluation of those interventions will need to pay attention to that history. But where multiple different stakeholders have been involved then their views on the key events in that history, and the consequences of those events, may vary. The planning of an evaluation should be informed by knowledge of those differences. That means paying attention to all branches of the storylines generated by a ParEvo exercise.
As an evaluation progresses, attention may justifiably focus on a particular storyline, addressing what happened to a particular intervention. In those circumstances, the contents of the storyline might provide useful initial raw material for a more in-depth analysis of causal mechanisms using process-tracing.