Questions asked below:
- When will the app be publicly available?
- Where does expert knowledge come into the picture?
- What about what we don’t know?
- How many participants are too many?
- How many participants can ParEvo work with, in practice?
- What sort of participant feedback would be most useful?
- Could participation in a ParEvo exercise be open-ended?
- How does the design of ParEvo relate to contemporary evolutionary theory?
When will the app be publicly available?
The app is already available, at https://parevo.org. We are now looking for early adopters / beta-testers, i.e. people willing to try out the version that is now available. If you are interested then contact rick.davies@gmail.com. All early adopters will get as much Skype video-based tech support and advice as they need. We want to learn from your experience, as well as our own. A Facilitators Protocol is also available, which spells out all the steps needed for a facilitator to conduct a ParEvo exercise.
Where does expert knowledge come into the picture?
In many traditional approaches to scenario development and planning, expert knowledge is a key part of the scenario development process. In more participatory approaches there may need to be trade-offs. The Oteros-Rozas (2015) paper on participatory scenario planning examines a set of 23 case studies: “Nine cases cited the lack of quantitative information, statistical and data-based testing, or modelling to support trends analysis as weaknesses. Five cases reported as a relevant weakness the unavoidable trade-off between the accuracy requested by the science base, which includes high complexity of scientific information, versus the social relevance of the process.”
In ParEvo there are different opportunities for expert knowledge, however defined:
- Participants can be given a pre-exercise briefing by relevant experts, and/or…
- Experts can be represented as one of the groups of stakeholders participating in an exercise, and/or…
- Experts can provide additional context-setting information, shared with all participants, by the facilitator, at the start of each new iteration, and/or…
- Experts can use the Comment facility after all contributions have been received in a given iteration, and/or…
- Experts can be involved in the evaluation of surviving storylines, especially in relation to plausibility and probability of the described sequence of events.
- Experts can be involved in subsequent content analysis of all the storylines, surviving and extinct.
What about what we don’t know?
What if you are not impressed by the knowledge displayed by participants in a ParEvo exercise?
People have their limitations. Often we may not know very much about a subject of concern. Perhaps we should not expect the ParEvo process to always deliver impressive results, in the form of creative and collectively constructed scenarios.
Perhaps we should also treat a ParEvo exercise as a means of explicating the limits of our collective knowledge. If so, this suggests that, almost as a default procedure, we should always get a third party to examine what has been produced, to identify what is missing.
How many participants are too many?
There are two reasons for asking this question. Firstly, for some time I have been cautious about having too many participants, thinking that once the number gets large no one will be able to read all the current storylines and thus make an informed choice.
But I now think this was a mistaken approach. As the number of participants, and thus storylines, grows, participants will have to resort to sampling storylines (if they have not done so already). In the absence of cues about authorship, this may well be a quasi-random process. There seems to be a parallel here with the use of “bagging” in ensemble methods of predictive modelling, where models built on multiple different random samples of a data set are combined to generate an aggregate prediction that is better than any model based on a single sample.
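As a loose illustration of the analogy (with invented numbers, not ParEvo data), here is a minimal Python sketch of the bagging idea: many small random samples each yield a noisy estimate, and aggregating them gives a steadier result than any single sample.

```python
import random

random.seed(42)

# Hypothetical data: each storyline "read" yields a noisy quality signal.
data = [random.gauss(10, 3) for _ in range(100)]

def single_sample_estimate(data, k):
    """One reader samples k items at random and forms a noisy estimate."""
    sample = random.sample(data, k)
    return sum(sample) / k

def bagged_estimate(data, k, n_readers):
    """Aggregate many independent small-sample estimates (bagging-style)."""
    estimates = [single_sample_estimate(data, k) for _ in range(n_readers)]
    return sum(estimates) / n_readers

print(single_sample_estimate(data, 10))  # one reader: noisy
print(bagged_estimate(data, 10, 30))     # 30 readers aggregated: less noisy
```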
The second reason is that while it may be technically possible to involve a large number of participants, it may not be the case that doing so is beneficial. Listening to Scott Page on “collective intelligence”, it seems that there may be diminishing returns as the number of participants increases, especially if there is extensive communication between the participants. “Lorenz et al. (2011) proved that the diversity of views of the group would decline when the group was fully exchanging information,” say Yu et al. (2018) in their literature review on “collective intelligence”. What seems to matter more is the way in which the group is structured. Performance seems to be better if a group consists of multiple teams who exchange more information within teams than between them. See Vercammen, Ans, Yan Ji, and Mark Burgman. 2019. “The Collective Intelligence of Random Small Crowds: A Partial Replication of Kosinski et al. (2012).” Judgment and Decision Making 14 (1): 91–98.
There is a connection here with developments and findings in evolutionary theory. “Modularity has become a central concept in evolutionary biology (Wagner et al. 2007). A system is modular if it can be divided into multiple sets of strongly interacting parts that are relatively autonomous with respect to each other … Wagner & Altenberg (1996) argued that modularity was important in facilitating the evolution of morphological diversity. If all features of an organism are completely integrated, the parts will be prevented from evolving independent adaptations. A modular variational structure permits the evolution of complexity and diversity as observed in the natural world.” See Melo, D., Porto, A., Cheverud, J. M., & Marroig, G. (2016). Modularity: Genes, development and evolution. Annual Review of Ecology, Evolution, and Systematics, 47, 463–486, and Pescetelli, N., Rutherford, A., & Rahwan, I. (2021). Modularity and composite diversity affect the collective gathering of information online. Nature Communications, 12(1), 3195.
How many participants can ParEvo work with, in practice?
Up to now, the maximum number I have worked with is 12. The structure of the ParEvo app layout may impose some upper limit. I have yet to test this out, but my guess is that it might be around 15. I will explore this possible constraint soon.
Assuming there is a practical constraint on numbers, how could this be managed or worked around? One approach, which I am keen to explore, is to treat any additional participants (e.g. number 16 or more) as members of a queue. If a storyline becomes extinct (i.e. is not added to in the current iteration) then the last contributor to that storyline drops out of the list of active participants and joins the bottom of the queue. In the next iteration, his/her place in the list of active participants would be taken by the person at the top of the queue. This approach could have the effect of increasing the diversity of contributions available for the participants to build on.
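As a minimal sketch of this queue mechanism, assuming participants and storylines are tracked in simple data structures (the names here are hypothetical, not the ParEvo app's own):

```python
from collections import deque

def rotate_participants(active, queue, extinct_storylines):
    """
    When a storyline goes extinct, its last contributor leaves the active
    list and joins the bottom of the queue; the person at the top of the
    queue takes their place. `active` maps participant -> their most
    recent storyline id.
    """
    for storyline in extinct_storylines:
        dropped = next((p for p, s in active.items() if s == storyline), None)
        if dropped is not None and queue:
            del active[dropped]
            queue.append(dropped)       # join the bottom of the queue
            newcomer = queue.popleft()  # top of the queue becomes active
            active[newcomer] = None     # has not yet contributed

# Example: participants 16+ start in the queue.
active = {"p1": "s1", "p2": "s2", "p3": "s3"}
queue = deque(["p16", "p17"])
rotate_participants(active, queue, extinct_storylines=["s2"])
print(active, queue)  # p2 replaced by p16; p2 now at the bottom of the queue
```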
This approach has a connection to the claim that “science progresses one funeral at a time”. In other words, it is not only the content of a scientific idea but also who carries it that makes a difference. See the recent supporting research by Azoulay et al. (2019).
PS: This approach could increase the total number of participants per exercise to between 150% and 260% of the original number, judging from extinction rates in the six exercises run to date. In real numbers, this would mean anywhere from 8 up to 15 participants in one exercise, and from 12 up to 31 participants in another.
What sort of participant feedback would be most useful?
The queue model proposed above provides the deselected participant with quite clear negative feedback. There are other less radical forms of feedback available.
Two are easily calculated, using data generated during a ParEvo exercise (see the sketch after this list):
- The number of contributions that others added on to one’s own contributions (i.e. excluding one’s own extensions of one’s own). An egotistical perspective, perhaps.
- The number of contributions one added on to others’ contributions. An altruistic perspective, perhaps.
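A minimal sketch of how these two measures could be computed, assuming contributions are stored as (id, author, parent) records; the field names are hypothetical rather than the app's actual data model:

```python
def feedback_scores(contributions):
    """Return (added_to_by_others, added_to_others) counts per participant."""
    author_of = {cid: author for cid, author, _ in contributions}
    by_others, to_others = {}, {}
    for cid, author, parent in contributions:
        if parent is None:
            continue                 # seed contributions have no parent
        parent_author = author_of[parent]
        if parent_author != author:  # self-extensions are excluded
            by_others[parent_author] = by_others.get(parent_author, 0) + 1
            to_others[author] = to_others.get(author, 0) + 1
    return by_others, to_others

contributions = [
    ("c1", "ann", None),  # seed
    ("c2", "bob", "c1"),  # bob builds on ann
    ("c3", "ann", "c2"),  # ann builds on bob
    ("c4", "ann", "c3"),  # ann builds on her own contribution (excluded)
]
print(feedback_scores(contributions))
# ({'ann': 1, 'bob': 1}, {'bob': 1, 'ann': 1})
```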
This information is already visible to individual participants, albeit with some effort. What is not automatically visible to each participant is the same information about the performance of the other participants.
What would happen if this information was collectively available? Making it so could be seen as a type of “gamification”. That is, it could affect the nature of the incentives affecting how people participate. Would this be a net positive or a net negative in its effects?
A feedback bar chart is now under development, which will enable all participants to see how well they are doing compared to others, in terms of the number of contributions others have added to theirs to date. The effects are yet to be seen. It may or may not reduce the frequency of participants extending their own contributions. In a current exercise these self-extensions were around 30% of all contributions, though they have been less frequent in other exercises.
Could participation in a ParEvo exercise be open-ended?
In its current form, any ParEvo exercise involves a defined number of participants. A more open version of selective participation would not require the number of participants to be defined either at the beginning or thereafter. On the surface, this would be problematic, because the limitation on the number of contributions during any given iteration is a necessary part of the implementation of the evolutionary algorithm, i.e. selection. If the number of participants varied from one iteration to the next, so too would the selection pressure.
However, a hybrid form may be feasible. That is, the number of people contributing during any iteration could be subject to a standard limit, but in each iteration a different group of people, taken from the top of a queue of all intended participants, could be the active participants. Previous active participants could be recycled through the bottom of the queue. Alternatively, contributions could be accepted up to a defined maximum number per iteration on a “first come, first accepted” basis. A sketch of the queue-based variant follows.
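A minimal sketch of that queue-based variant, assuming a fixed group size per iteration (again with hypothetical names, not the app's own):

```python
from collections import deque

def next_iteration_group(queue, group_size):
    """
    Draw a fixed-size group of active contributors from the top of a
    standing queue each iteration, then recycle them through the bottom,
    so selection pressure stays constant while membership rotates.
    """
    group = [queue.popleft() for _ in range(min(group_size, len(queue)))]
    queue.extend(group)  # recycle this iteration's group to the bottom
    return group

queue = deque(["p1", "p2", "p3", "p4", "p5"])
print(next_iteration_group(queue, 3))  # ['p1', 'p2', 'p3']
print(next_iteration_group(queue, 3))  # ['p4', 'p5', 'p1']
```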
More open-ended participation would probably require a less cumbersome and time-consuming procedure for accepting new participants than at present. One means would be to solicit contributions for a current iteration via a standing SurveyMonkey questionnaire, with the facilitator then responsible for transferring these responses into the web app. Then, after the maximum acceptable number had been posted there, all the posted contributions in the current iteration could be made visible via a dedicated link accessible to all known participants. That “shareable link” capacity is now under development.
How does the design of ParEvo relate to contemporary evolutionary theory?
Contemporary evolutionary theory is a really big area. One interesting sub-field is that of artificial life and evolutionary computation, and within that, research on how to design open-ended evolution. A simple but crude definition of open-endedness is the endless generation of novelty, as seems to be the case with evolution in the natural world.
Evolutionary algorithms, as embodied in computer software, are typically designed to search for and find an optimal solution to a particular problem. They involve convergent processes, which require the advance specification of performance criteria, i.e. when a particular solution will be deemed satisfactory. While they make use of diversity, that diversity is progressively reduced as their results get closer to the best available solution.
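For contrast with what follows, here is a toy Python example of such a convergent algorithm; the fitness function and parameters are invented purely for illustration:

```python
import random

random.seed(0)

# Toy goal: evolve a list of ten 1s. A single fitness measure drives
# selection, so the population converges and diversity shrinks.
def fitness(genome):
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]             # keep the fitter half
    population = parents + [
        [bit if random.random() > 0.1 else 1 - bit for bit in p]  # mutate
        for p in parents
    ]

print(max(fitness(g) for g in population))  # close to the optimum of 10
```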
This is not the case with ParEvo. There is no convergence because there is no overall performance measure driving the process. Instead, diversity is sustained at a relatively constant level throughout an exercise. All a storyline has to do in order to survive is to be of sufficient interest to participants that at least one new extension is added to it in each new iteration. However, this is within the context of other storylines which must also meet the same criterion in order to survive.
There are strong parallels here with a new approach called Minimal Criterion Coevolution (Brant & Stanley, 2017), developed by researchers in the field of artificial life. The important thing about a minimal criterion approach is that evolving entities only have to meet a minimal level of performance in order to survive and replicate. Because the criterion is a threshold rather than an optimum, substantial diversity remains amongst the storylines or other entities that are evolving even after it has been met.
The other interesting feature of this approach is that the minimal criterion is defined in reference to another population that is also evolving. The above research looked at the co-evolution of a population of maze-exploring robots with a population of mazes. A maze-exploring robot could only reproduce if it could find its way to the end of at least one of the current set of mazes. And on the other hand, a maze could only reproduce if it could be explored to the end by at least one robot.
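A minimal sketch of that selection step, with the maze-solving test reduced to a hypothetical threshold comparison rather than an actual simulation:

```python
# After Brant & Stanley (2017): an entity survives if it meets the
# criterion against at least one member of the other population, so
# many diverse survivors remain rather than a single optimum.
def solves(robot, maze):
    # Stand-in for running the robot in the maze; details are invented.
    return robot["skill"] >= maze["difficulty"]

def mcc_survivors(robots, mazes):
    surviving_robots = [r for r in robots if any(solves(r, m) for m in mazes)]
    surviving_mazes = [m for m in mazes if any(solves(r, m) for r in robots)]
    return surviving_robots, surviving_mazes

robots = [{"id": i, "skill": s} for i, s in enumerate([2, 5, 9])]
mazes = [{"id": i, "difficulty": d} for i, d in enumerate([3, 8, 12])]
print(mcc_survivors(robots, mazes))
# Robots with skill 5 and 9 survive; mazes with difficulty 3 and 8 survive.
```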
This raises the question of the nature of co-evolution in a ParEvo design. My interpretation is that the storylines are co-evolving in an environment which could be described as the mental worlds of the participants. The minimal survival criterion is defined by the content of the other storylines present in the current iteration. The nature of the “interest” that sets the bar is not immediately obvious; identifying it would require interviewing the participants about why they chose to extend one storyline rather than another.