Questions asked below:
- When will the app be publicly available?
- Where does expert knowledge come into the picture?
- What about what we don’t know?
- How many participants are too many?
- What sort of participant feedback would be most useful?
- What are some more innovative ways of using ParEvo?
- Could participation in a ParEvo exercise be open-ended?
When will the app be publicly available?
The app is already available at https://parevo.org. We are now looking for early adopters / beta-testers, i.e. people willing to try out the version that is now available. If you are interested, contact email@example.com. All early adopters will get as much Skype video-based tech support and advice as they need. We want to learn from your experience, as well as our own. A Facilitators Protocol is also available, which spells out all the steps a facilitator needs to follow to conduct a ParEvo exercise.
Where does expert knowledge come into the picture?
In many traditional approaches to scenario development and planning, expert knowledge is a key part of the scenario development process. More participatory approaches may involve trade-offs. In the Oteros-Rozas (2015) set of 23 case studies, “Nine cases cited the lack of quantitative information, statistical and data-based testing, or modeling to support trends analysis as weaknesses. Five cases reported as a relevant weakness the unavoidable trade-off between the accuracy requested by the science base, which includes high complexity of scientific information, versus the social relevance of the process”.
In ParEvo there are different opportunities for experts, however defined, to contribute:
- They can be represented as one of the groups of stakeholders participating, and/or…
- They can provide context-setting information, shared with all participants, by the facilitator, at the start of each new iteration, and/or…
- They can use the Comment facility after all contributions have been received in a given iteration, and/or…
- They can be involved in the evaluation of surviving storylines, especially in relation to plausibility and probability of the described sequence of events.
What about what we don’t know?
What if you are not impressed by the knowledge displayed by participants in a ParEvo exercise?
People have their limitations. Often we may not know very much about a subject of concern, so we should not necessarily expect the ParEvo process to always deliver impressive results in the form of creative and collectively constructed scenarios.
Perhaps we should also treat a ParEvo exercise as a means of explicating the limits of our collective knowledge. If so, this suggests that, almost as a default procedure, we should always ask a third party to examine what has been produced and identify what is missing.
How many participants are too many?
For some time I have been cautious about having too many participants, thinking that once the number gets large no one will be able to read all the current storylines and thus make an informed choice. There was also another argument: listening to Scott Page on “collective intelligence”, it seems there may be diminishing returns as the number of participants increases.
But I now think this was a mistaken approach. As the number of participants, and thus storylines, grows, participants will have to resort to sampling storylines (if they have not done so already). In the absence of cues about authorship, this may well be a quasi-random process. If so, this may not be a bad thing. “Lorenz et al. (2011) proved that the diversity of views of the group would decline when the group was fully exchanging information,” write Yu et al. (2018) in their literature review on “collective intelligence”. So taking multiple samples of the available information may avoid this trap.
There seems to be a parallel here with the use of “bagging” in ensemble methods of prediction modeling, where multiple different random samples of attributes in a data set are used to generate an aggregate prediction that is better than any model based on a single sample.
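As a rough illustration of how this quasi-random sampling might play out, here is a minimal Python sketch (all names are hypothetical, and this is not part of the ParEvo app). Each simulated participant reads only a random subset of the storylines before choosing one to build on, much as each model in a bagged ensemble sees only a sample of the data:

```python
import random
from collections import Counter

def sampled_choices(storylines, n_participants, sample_size, seed=0):
    """Hypothetical sketch: each participant reads only a random
    subset of storylines (quasi-random sampling, as in bagging)
    before picking one to build on."""
    rng = random.Random(seed)
    picks = []
    for _ in range(n_participants):
        sample = rng.sample(storylines, min(sample_size, len(storylines)))
        # 'key=len' stands in for whatever judgement a real participant applies
        picks.append(max(sample, key=len))
    return Counter(picks)
```

Because no participant sees everything, the picks are spread across more storylines than if everyone read, and converged on, the same full set.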
2019 04 10: For more on the effects of different sized groups on collective performance, see Vercammen, Ans, Yan Ji, and Mark Burgman. 2019. “The Collective Intelligence of Random Small Crowds: A Partial Replication of Kosinski et Al. (2012).” Judgment and Decision Making 14 (1): 91–98.
What sort of participant feedback would be most useful?
What would happen if participants could see how popular their contributions were compared to others’, e.g. by seeing how many people built on their contributions versus those of others? This would be a type of “gamification“. Would it have net positive or net negative effects? Could it lead to a ParEvo exercise becoming a collective bubble, where what is constructed always has to fit the collective preference?
Is there a way this risk could be monitored, using an appropriate measure?
What are some more innovative ways of using ParEvo?
Sea Rotmann (2017) has written about using “story spines” as a way of getting stakeholders in an issue to construct stories about what did or could happen. A story spine has consecutive elements that need to be filled in, e.g.:
- Once Upon a Time… [the background, where you outline the setting and who you are – including your mandate, your main stakeholder/s and your main restrictions]
- Every Day… [where you outline the problem and the End Users’ behaviors you/we are trying to change. It may include some of the End Users’ technological, social, environmental, etc. context/s – the ones that are most important to this issue]
- But One Day… [where you outline the idea/solution and how it is meant to change the End Users’ behaviors – concentrate on the specific tools you will bring to the table]
- Because of That… [where you outline the implementation of the intervention and the opportunities for success]
- But Then! [where you outline what can/will/has gone wrong and why]
- Because of That… [where you outline how you have reiterated the intervention because of what you have learned]
- Until, finally… [where you outline how you have measured the multiple benefits that accrued to you/r organisation / sector and what the main results are]
- And, Ever Since Then… [where you outline the wider (e.g. national) change that has occurred because of this intervention and any possible lessons going forward or future research that needs to follow]
These storyline elements could become the names of eight consecutive iterations in a ParEvo exercise.
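For concreteness, the mapping of spine elements to iterations could be represented as simply as this (a sketch only; the names come straight from the spine above):

```python
# The eight story-spine elements, used here as iteration names (sketch)
STORY_SPINE = [
    "Once Upon a Time…",
    "Every Day…",
    "But One Day…",
    "Because of That…",
    "But Then!",
    "Because of That…",
    "Until, finally…",
    "And, Ever Since Then…",
]

# Each iteration of the exercise takes its name from the next element
iterations = {i + 1: name for i, name in enumerate(STORY_SPINE)}
```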
2019 05 28: Sea and I are now in discussions about how to take this sort of application forward. Stay tuned!
Could participation in a ParEvo exercise be open-ended?
In its current form, any ParEvo exercise involves a defined number of participants. Up to now, some limited flexibility has been foreseen, e.g. replacing drop-outs with new participants. One way of extending participation would be to automate a drop-out process: for example, by only keeping those participants whose contributions were added to in the last iteration (by someone other than themselves). So not only would some storylines become extinct, but so, so to speak, would some participants! This process would require a backup queue of replacement participants. It would also introduce an element of gamification, which may or may not prove desirable. Surviving as a participant will probably be seen as a positive, but the effects of such rewards on the content of the emerging storylines are harder to anticipate. I suspect the purpose and context of an exercise may affect whether the influence is positive, negative, or somewhere in between.
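The retention rule described above could look something like this in code (a hypothetical sketch of my own; the function and data-structure names are not ParEvo’s):

```python
def retain_participants(participants, contributions, extended_by):
    """Hypothetical sketch of the drop-out rule: keep only participants
    whose last contribution was built on by someone other than themselves.

    contributions: {participant: contribution_id from the last iteration}
    extended_by:   {contribution_id: set of participants who built on it}
    """
    survivors = []
    for p in participants:
        builders = extended_by.get(contributions.get(p), set())
        if builders - {p}:  # extended by at least one person other than the author
            survivors.append(p)
    return survivors
```

Dropped participants would then be replaced from the backup queue, keeping the number of active slots constant.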
A more open version of selective participation would not require the number of participants to be defined either at the beginning or thereafter. On the surface this would be problematic, because limiting the number of contributions in any given iteration is a necessary part of implementing the evolutionary algorithm, i.e. selection. If the number of participants varied from one iteration to the next, so too would the selection pressure.
However, a hybrid form may be feasible. That is, the number of people contributing during any iteration could be subject to a standard limit, but who actually made those contributions could vary from one iteration to the next. One way of doing this would be to apply a filter, such as accepting the first x contributions. This would reward speed of response but could reduce quality. If combined with the selective retention of participants described above, though, that risk might be manageable.
Alternatively, say in a workshop setting, a large number of participants-to-be could form a queue, with new participants taken from the top of the queue and ex-participants added back to the bottom of the queue.
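The queue idea could be sketched as follows (hypothetical names; a deque keeps both the top and the bottom of the queue cheap to access):

```python
from collections import deque

def rotate_participants(active, queue, dropped, n_slots):
    """Hypothetical sketch of the workshop queue described above:
    dropped participants rejoin the bottom of the queue, and new
    participants are drawn from the top to refill the active slots."""
    queue = deque(queue)
    active = [p for p in active if p not in dropped]
    queue.extend(dropped)  # ex-participants go to the bottom of the queue
    while queue and len(active) < n_slots:
        active.append(queue.popleft())  # new participants come from the top
    return active, list(queue)
```

Run once per iteration, this keeps a fixed number of active slots while letting everyone in the room eventually take a turn.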