Ten stages of a ParEvo exercise

This is a re-edited version building on the experience of two pre-tests. It is likely to go through further revisions.

In summary

1. Clarifying the aim of a ParEvo exercise

2. Identifying who will be involved

3. Describing the starting point of the process

4. Defining the endpoint

5. The facilitator provides guidance to participants

6. Participants make their contributions

7. Developing storylines are shared

8. Re-iteration of 5, 6, 7

9. Evaluation

10. Follow-up

In detail

1. Clarifying the aim of a ParEvo exercise

Two types of objectives can be pursued, in parallel:

  • Content objectives: These are about the nature of the contents of the storylines that are to be developed
  • Process objectives: These are about the ways in which the participants might be expected to interact, and the effects of those interactions.

Regarding content objectives, ParEvo can be used to develop alternative views of:

  • What might happen in the future, or
  • What has happened in the past

Alternative futures can be of two types:

  • Forecasting, where there is no prior view of what the desired end state is, for any particular time in the future.
  • Backcasting, where there is an agreed end state, which any scenarios being developed should lead to.

Look here for more on the distinction between forecasting and backcasting.

Process objectives can be about how people participate. For example:

  • Identifying future scenarios which have maximum ownership by all participants
  • Identifying which participants are particularly good at making contributions valued by others, and vice versa
  • Identifying which participants are most similar and most different in their perspectives on a given issue
  • Doing research on what forms of participation are associated with the development of scenarios that are positively evaluated on criteria like probability and desirability, or the opposite

The 2018-19 ParEvo pretest experience

With the first (“MSC”) pretest the aim was to explore different ways of implementing the Most Significant Change monitoring technique in a particular setting – in northern Nigeria. This kind of exercise can be seen as a form of scenario planning.

In the second (“Brexit”) pretest the aim was to explore participants’ views of what might happen if Britain did not secure an agreement with the EU by the end of March 2019. In my own view, this turned out to be more an exercise in imaginative fiction than in plausible planning. But time may well prove me wrong!

2. Identifying who will be involved

Three types of people will be involved:

  • Participants, who generate the contents of scenarios within a ParEvo exercise, and who evaluate the overall results.
  • One or more Facilitators, who invite participants, set up the framework within which they can participate, and provide continuing guidance throughout a ParEvo exercise.
  • The ParEvo Administrator who approves requests from people to act as a Facilitator, and provides them with the parameters they can control. This is a background role thereafter. Technical support and advice can also be provided to facilitators on how to make the best use of ParEvo.

Participants can be individuals or small teams. The minimum number is likely to be four. Larger numbers will generate more diversity of views, and diversity is what drives the ParEvo process. With very large numbers it may be best to break participants into a number of small teams, each acting as a quasi-individual. For more on this question see “How many is too many”.

Participants can participate as individuals, representing their own views. Or they can take on roles, representing different stakeholders. In either case, they should be expected to have a shared interest in the scenarios being developed.

As mentioned above, all participants must be invited by a ParEvo exercise facilitator. They then register as participants on the ParEvo website, to obtain a password.  This then enables them to log onto the ParEvo website thereafter and gain access to any of the exercises they are involved in.

The 2018-19 ParEvo pretest experience

The MSC pre-test started with 11 volunteer participants, sought via the Most Significant Change email list. Over the course of four iterations and a final evaluation stage, this number went from 11 to 10, 9, 8, and then 7. The participants were male (4) and female (7), from 7 countries in Europe, North America, Africa, and Asia.

The Brexit pre-test started with 12 volunteer participants, sought via the MandE NEWS email list. Over the course of four iterations and a final evaluation stage, this number went from 12 to 11, 10, 9, and then 10. The participants were male (7) and female (5), from 7 countries in Europe, North America, Africa, and Asia.

In both pre-tests, a list of reserve participants was available, but not used. While it can complicate the subsequent analysis of participation, there is no substantial reason why “drop-out” participants cannot be replaced by new ones.

3. Describing the starting point of the process

This is a seed paragraph of text, providing a common starting point for all subsequent scenarios. This can be a real event or an imagined event.

The 2018-19 ParEvo pretest experience

In the MSC pretest, the seed text was drafted by one of the participants. In retrospect, at six paragraphs, it was far too long for a quick and easy read by participants. In the Brexit pretest, a one-paragraph seed text was drafted by the facilitator.

4. Defining the endpoint

This is an option, not a requirement. The endpoint can be defined as a point in time and/or a specific number of iterations of the ParEvo process (see below). As above, it may include a description of what is expected to have happened by that endpoint (backcasting) or not (forecasting).

The 2018-19 ParEvo pretest experience

With the two pretests, it was initially planned to have up to 10 iterations of participant contributions, completed by the end of 2018. There was no prior definition of what period of time each iteration covered, though it was thought the 10 iterations could span a period as long as 5 years. It could be argued that the number of iterations should at least equal the number of participants, since this is the minimum needed to explore the diversity of combinations created by that number of participants, as can be calculated using one measure of diversity (see more on this under Measures).

With the two manually operated pre-tests, each iteration took much longer than expected, sometimes up to two weeks. This meant the process did not finish at the end of December but was still going on in late February. By then four iterations had been completed, plus an evaluation survey after that. The main reasons for delays were: (a) the very labour-intensive nature of running the process manually, and (b) the time participants took to respond to emailed survey requests.

In contrast, a pretest carried out in the 1990s with a group of secondary school children completed 4 iterations within one hour, the class duration. But this involved no discussion of the results.

It seems plausible that in a workshop setting, assisted by the web app currently under development, the process could be completed within a half-day or one-day workshop. Two versions of the process would run at different speeds: one that allowed comments on contributions at each iteration would be slower; the other, which did not, would be faster. Within a group of people working together but at a distance, individual iterations might need two days, to take account of international time differences.

5. The facilitator provides guidance to participants

In each iteration, from the beginning onwards, the facilitator needs to provide participants with some guidance. It can include the following:

  • Minimum requirements for participants’ contributions:
    • maximum length,
    • plausibility and consistency requirements,
    • deadlines for contributions
  • Context-setting information:
    • Reminding participants of the overall purpose of the exercise
    • (Optionally) providing information on “surrounding developments” that the emerging storylines might need to take into account.

The 2018-19 ParEvo pretest experience

In the MSC pretest, this guidance was initially presented within an email and was quite detailed:

This additional paragraph should read like part of an ongoing story. It should describe a sequence of events taking place, preferably in a short space of time (remember, this is only one part of a longer story). It should involve people doing things and/or talking to other people. These things could be seen as good, bad or neutral when seen from any stakeholder’s perspective. The description of what is happening can include both description and explanation, being spoken or thought about by one or more of the people mentioned in the paragraph. When doing this, you may want to try to express the views of a particular kind of stakeholder. Ideally, the paragraph should describe events that are possible, rather than science or fantasy fiction. But these events may vary from the probable to the unlikely. In this pre-test, the paragraphs will be limited in size to 100 words. This limit may be revised.

In the Brexit pre-test, the guidance was presented within a Survey Monkey online survey:

Please say what you imagine might happen next, in one or two sentences. Imagine you are writing a story some years later, remembering what happened at this time. You could be expressing the views of any stakeholder. The event you are reporting could be a good or bad development!

Guidance on contribution size varied across the pretests, from word limits to sentence limits to no guidance at all! When the pretest process began to be implemented via Survey Monkey (rather than email) there was no suggested limit on the size of the contribution, but the comment window in a Survey Monkey questionnaire was 3 lines deep and 100 characters wide.

In practice, in the Brexit pretest contributions ranged in size from 9 to 180 words, with an average of 50. More research into the pretest results may identify whether there is any correlation between text length and the likelihood of those texts being added to by other participants in subsequent iterations (see below), or whether there is any relationship between word length and how storylines are evaluated at the end of each exercise.
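
As a rough illustration of the kind of analysis suggested here, a correlation could be computed between each contribution’s length and the number of later contributions that extended it. This is only a sketch: the data below is invented, not taken from the pretest results.

```python
# Illustrative sketch of the suggested analysis; the data is invented,
# not taken from the actual pretest results.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

word_counts    = [9, 25, 42, 50, 75, 180]   # length of each contribution
times_extended = [0, 1, 2, 2, 1, 0]         # later contributions it received

r = pearson(word_counts, times_extended)    # a value between -1 and 1
```

A value of r near zero would suggest no linear relationship between contribution length and the likelihood of being extended.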

6. Participants make their contributions 

In this first iteration, all participants:

  • Receive and read the guidance from the facilitator and then
  • Contribute an additional section of text describing what they think might happen next.

7. Developing storylines are displayed and shared

When participants log onto the ParEvo webpage and then access the particular exercise they are involved in, they will see a view like the one below. This provides a tree structure, on the left side, to enable people to navigate along and between different storylines, while seeing how they connect to each other. This is supplemented by a scrollable column of text in the center of the page, representing the currently selected storyline of interest. On the right side, there is space for comments on these contributions; this is an optional feature of the overall process.


The identity of the contributor of each paragraph to each storyline is not made visible. The intention is that the participants’ focus should be on the content of the contributions, uninfluenced by knowledge of who the contributors are. Their identity will be known to the facilitator (see more below).

Participants do not get to see each other’s contributions until all have contributed to a given iteration and these are then displayed, as above.

The 2018-19 ParEvo pretest experience

In the two pretests, after all contributions were received, a web page was set up with a tree structure, showing how each new contribution linked to the original seed text. In one pretest, clicking on any node took participants to another web page containing the original seed text and a new contribution to that text. In the other pretest, and later on in both, holding a mouse over each node of the tree made visible the text that had been contributed at that point. Here is an example of a tree diagram after several iterations have been completed. Grey nodes show extinct storylines. Red text nodes show surviving storylines that can still be developed.

8. Re-iteration of 5,6,7

A new iteration only begins when all participants have contributed to the previous iteration, and their contributions have been displayed.

Facilitator updates guidance to participants

This may or may not be needed, depending on participants’ previous behavior and the need to introduce any new information about the imagined “surrounding context”.

Participants add new contributions

All participants are asked to look at each other’s most recent contributions and choose only one, which they would like to extend with a new contribution of their own. Each participant can only make one contribution, to one existing storyline, per iteration. But at the start of each new iteration, they can change their mind about which storyline they now want to contribute to. As above, this could be a good, bad or neutral development, as seen from any stakeholder’s perspective. Participants can choose to add to any previous contribution, made by others or by themselves. As before, these contributions are anonymous.

Previously I had thought that fast iterations would be a good thing, making fewer time demands on participants and delivering results sooner rather than later. But a paper by Bernstein et al. (2018), titled “How Intermittent Breaks in Interaction Improve Collective Intelligence”, suggests otherwise: delays between iterations might be beneficial. Read it to find out why.

Display and sharing of contributions

After all participants have made their next contribution, the display is updated to show the extended contents of each of the storylines. If more than one participant chooses to add their next contribution to the same existing storyline, then that storyline becomes two (or more) storylines. On the other hand, if some existing storylines did not receive any new contributions, they remain as viewable storylines but are treated as “extinct”. These storylines can no longer be added to in subsequent iterations of participant contributions.
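
The branching and extinction logic just described can be sketched in code. This is an illustrative model only, not the actual ParEvo web app implementation; all names here are invented.

```python
# Illustrative model only (not the actual ParEvo web app code) of how
# storylines branch and go extinct across iterations.

class Contribution:
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def run_iteration(live_tips, choices):
    """choices maps each participant to the live tip they chose to extend.
    Returns (new live tips, tips that went extinct this iteration)."""
    new_tips = []
    for participant, tip in choices.items():
        new_tips.append(Contribution(f"added by {participant}", parent=tip))
    # A tip no one extended this iteration can no longer be added to.
    extinct = [tip for tip in live_tips if not tip.children]
    return new_tips, extinct

seed = Contribution("seed paragraph")

# Iteration 1: three participants all extend the seed, so one storyline
# becomes three.
tips, gone = run_iteration([seed], {"A": seed, "B": seed, "C": seed})

# Iteration 2: A and B extend the same tip (it splits into two storylines),
# C extends another, and the third tip receives nothing, so it goes extinct.
tips, gone = run_iteration(tips, {"A": tips[0], "B": tips[0], "C": tips[1]})
```

Note that with a fixed number of participants, the number of live storylines stays constant at each iteration, even as some split and others go extinct.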

9. Evaluation

Evaluation of content

The progress and achievements of a ParEvo exercise can be assessed in two ways, using the web app now under development:

  1. Participants can be enabled to post comments on each contribution during the current iteration of the ParEvo process, but only after all new contributions are received and displayed in a given iteration. These comments are anonymised and all displayed at once, prior to the commencement of the next iteration.
  2. At the end of the ParEvo process, the facilitator triggers an evaluation stage, where participants are asked to rate the surviving storylines in terms of (a) their probability of happening in real life, (b) their desirability of happening, or any other relevant criteria. This survey data can then be downloaded and analysed, as is explained on the Evaluation page of this website.

The facilitator will then share the results of the evaluation with all participants, along with a link to the completed exercise with all storylines visible. This should remain available to participants after completion of the exercise, i.e. from this point on.
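
Once downloaded, such ratings could be summarised per storyline. The sketch below is illustrative only: the 1-5 rating scale, the data, and the field names are assumptions, not the actual export format.

```python
# Assumed illustration: each storyline has a list of (probability,
# desirability) ratings from participants, here on an invented 1-5 scale.

from statistics import mean

ratings = {
    "Storyline A": [(4, 2), (5, 1), (3, 2)],   # likely but undesirable
    "Storyline B": [(2, 5), (1, 4), (2, 4)],   # desirable but unlikely
}

summary = {
    name: {"mean probability": mean(p for p, _ in rs),
           "mean desirability": mean(d for _, d in rs)}
    for name, rs in ratings.items()
}
```

Comparing the two mean scores per storyline is what makes the results actionable: a high-probability, low-desirability storyline, for example, flags a risk worth planning for.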

The 2018-19 ParEvo pretest experience

A Survey Monkey questionnaire was used to seek participants’ evaluations of the completed MSC and Brexit pretests. The design of the questions can be seen on the Evaluation page. After the evaluation results were shared, follow-up open-ended interviews were carried out with a sub-set of the participants, to get a wider perspective on their views.

Evaluation of process

During a ParEvo exercise, two types of anonymised data are automatically collected about how people participated during each iteration:

  • Which participant contributes to which other participant’s most recent contribution
  • Which participant contributes to which developing storyline

The facilitator can download this data in a matrix format. It can then be analysed to generate measures that describe how participants have helped construct the different storylines. These are described in detail on the Measures page.
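
The two kinds of participation data could be tabulated as simple matrices of counts. The record format below is invented for illustration; the actual ParEvo download format may differ.

```python
# Invented record format for illustration: each tuple is
# (iteration, contributor, author of the contribution they extended,
#  storyline id). The real ParEvo export may be structured differently.

from collections import Counter

records = [
    (1, "P1", "seed", "S1"),
    (1, "P2", "seed", "S2"),
    (2, "P1", "P2",   "S2"),
    (2, "P2", "P1",   "S1"),
]

# Matrix 1: which participant built on which other participant's contribution
built_on = Counter((contributor, extended)
                   for _, contributor, extended, _ in records)

# Matrix 2: which participant contributed to which storyline
storyline = Counter((contributor, s)
                    for _, contributor, _, s in records)
```

The first matrix supports who-built-on-whom network measures; the second shows how participants spread their effort across storylines.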

10. Follow-up

Ideally, the completion of a ParEvo exercise should have some practical consequences.


Carried out at an early stage, a ParEvo exercise could inform the design or modification of a Theory of Change.

Each ParEvo storyline could be seen as describing a particular pathway within a network version of a Theory of Change.  Or in a simpler linear version of a Theory of Change, each storyline could be seen as an alternative version of that linear model.

Once a Theory of Change has been developed, branching storylines could suggest changes in the types of activities planned for the future. Or in the particulars of their implementation, so that they could be more easily adapted, should an alternate scenario become more likely.


A ParEvo exercise could also generate changes in the way progress is being monitored and evaluated. Most agencies have an M&E plan based on a particular scenario of how events will take place. But the branching perspective generated by a ParEvo exercise suggests that at different points in time there might be bifurcations, where real events follow a different route than expected. This possibility strongly suggests that M&E systems need to be looking to the right and left and not just straight ahead, so to speak. Alternative scenarios developed during a ParEvo exercise could suggest what else to look out for, as events unfold.


The repetition of a ParEvo exercise is likely to be of value, helping to update scenarios after the passage of time. This could be seen as a form of magnification, focusing on specific storylines closest to current reality and updating alternate expectations of what might then happen next.