Inspiration

Heffernan, Margaret. 2020. Uncharted: How to Map the Future Together. Simon & Schuster.

Scenarios use all the material that our experiments and wandering minds have gathered.  The stories we make about where we are and where we could be are forms of preparedness, be they positive (what we might hope to achieve) or negative (what we fear for ourselves and from others).  The need to do this work is particularly acute in organisations, each of which can expect ‘plastic straw’ moments of their own: occasions on which a supposedly solid ground disappears from beneath our feet.

Making the future is a collective activity because no one person can see enough.  No one can have an adequate argument alone or in an echo chamber.  So the capacity to see multiple futures depends critically on the widest possible range of contributors and collaborators.  Leave perspectives out and the future is incomplete or invisible.  This isn’t only a democratic imperative but a frank acknowledgement that those not involved in making the future will have knowledge and motive to upend it.  The emergence of open strategy, in which individuals at all levels can contribute to mapping their organisation’s future, is much more than a tacit acknowledgement that insight and intelligence exist everywhere.  For decisions to be credible, seen as being for the good of the whole, they must be developed by more than a few authorities or experts.  That this process can be intelligent, meaningful and productive is demonstrated repeatedly in deliberative polling, citizen juries and assemblies.  Engaging diverse minds is a more robust way of capturing the complexity of the environment in which decisions, policies and laws succeed or fail.  Pages 315/6

Stanley, Kenneth O., Joel Lehman, and Lisa Soros. 2017. ‘Open-Endedness: The Last Grand Challenge You’ve Never Heard Of’. O’Reilly Media. 19 December 2017.

The scope of research into open-endedness further broadened with the introduction of the novelty search algorithm (by Stanley and Lehman; see the introductory paper and website). In the context of open-endedness, novelty search is particularly notable for disentangling open-endedness from any particular “world” or problem domain—it introduces some of the flavor of open-endedness as a generic algorithm that can be applied to almost anything. To see this point, it’s helpful to contrast it with the more closed process of conventional evolutionary algorithms (EAs), which generally push evolution toward a particular desired outcome. In this conventional kind of algorithm, the opportunities for unhampered discovery are limited because selection pressure directly seeks to improve performance with respect to the problem’s objective (which could be to walk as fast as possible). For example, the algorithm might find a stepping stone that could lead to something interesting (such as a precursor to wings), but because that preliminary discovery does not directly increase performance, it is simply discarded. The idea in novelty search is to take precisely the opposite approach—instead of selecting for “improvement,” novelty search selects only for novelty. That is, if a candidate born within an evolutionary algorithm is novel compared to what’s been seen before in the search, then it’s given greater opportunities for further reproduction. In a sense, novelty search is open-ended because it tends to open up new paths of search rather than closing them off.

At first, it may seem that this approach is closely related to random search, and therefore of little use, but actually, it turns out to be much more interesting than that. The difference is that computing the novelty of a candidate in the search space requires real information (which random search would ignore) on how the current candidate differs behaviorally from previous discoveries. So, we might ask how a current robot’s gait is different from its predecessors’ gaits. And if it’s different enough, then it’s considered novel and selected for further evolution. The result is a rapid branching (i.e., divergence) and proliferation of new strategies (e.g., new robot gaits in a walking simulator). In fact, we discovered that this kind of divergent search algorithm actually leads to the evolution of walking from an initial population of biped robots unable to walk! Not only that, but the walking strategies evolved by novelty search were on average significantly superior to those from a conventional attempt to breed the best walkers. So, the open-ended novelty search style of exploration can actually find some pretty interesting and functional solutions, ones also likely to be diverse even in a single run.
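To make the mechanics of the excerpt concrete, here is a minimal sketch of a novelty-search loop. It is not the authors’ implementation: the genome, the toy behaviour() descriptor, and all parameter values are placeholders, and novelty is scored in the common way as the average distance to the k nearest behaviours in an archive of past discoveries.

```python
import random

GENOME_SIZE = 8      # illustrative genome length
POP_SIZE = 50        # candidates per generation
K_NEAREST = 10       # neighbours used when scoring novelty
GENERATIONS = 100

def behaviour(genome):
    # Stand-in behavioural descriptor; a real system would run a simulation
    # (e.g. a walking gait) and summarise the outcome, such as final position.
    return (sum(genome), sum(abs(g) for g in genome))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def novelty(descriptor, archive):
    # Novelty = average distance to the k closest previously seen behaviours.
    if not archive:
        return float("inf")
    nearest = sorted(distance(descriptor, other) for other in archive)[:K_NEAREST]
    return sum(nearest) / len(nearest)

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
              for _ in range(POP_SIZE)]
archive = []

for gen in range(GENERATIONS):
    # Score each candidate by how different its behaviour is from past discoveries,
    # not by how well it performs on any objective.
    scored = [(novelty(behaviour(g), archive), g) for g in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # The most novel candidates get the reproduction opportunities, and their
    # behaviours join the archive so the search keeps pushing outward.
    parents = [g for _, g in scored[: POP_SIZE // 5]]
    archive.extend(behaviour(g) for g in parents)
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
```

Swapping behaviour() for a real simulation measurement, such as where a simulated biped ends up after a trial, turns this loop into the kind of divergent gait search the excerpt describes, with no explicit “walk faster” objective anywhere in the code.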

 

By the same author: a video presentation.
