Research on open-ended evolution, at least in the context of artificial life, is usually focused on reproducing the expansive generative power of Earth’s vast biological evolution within the small, metallic confines of a digital computer. Since the invention and widespread use of simulation technologies, we have been able to build silicon worlds that mimic the world around us. But what is it about living physical matter (as opposed to non-living) that “compels” it to evolve into a seemingly endless supply of seen and unseen species? This question is at the core of open-ended evolution research.
The applications of this kind of knowledge appear as endless as open-endedness itself. But despite the many advances researchers have made in understanding open-endedness, we, the ALife community, lack consensus as to what degree of open-endedness we’ve achieved. In our pursuits, we strive to create life-like systems that produce things as interesting to study as biology, but in an artificial context. We do not necessarily aim to exactly replicate every detail of terrestrial phenomena, but to identify key mechanisms necessary for their simulation and synthesis.
Achieving these goals requires researchers to carefully consider what is structurally essential for the natural world to generate new kinds of things continually. How can we compare our world to counterfactual ones that operate on different principles? Making such comparisons is something that, barring the discovery of life elsewhere in the universe, requires the simulation and analysis of artificial worlds. Even then, researchers must interpret the outcomes of those worlds. What was produced? How did it happen? In what ways are any differences meaningful to understanding the reality of our own biology in particular, and these classes of phenomena in general?
Researchers have debated how to objectively recognize and measure the phenomena of open-endedness for nearly as long as they’ve been debating which mechanisms to incorporate into their simulations. However, whenever we’ve defined metrics and created systems to optimize them, the results have never been satisfying enough to claim victory, even when the metric in question has been satisfied. As a result, we’ve tended to redefine open-endedness just as often as we’ve made attempts at building open-ended systems. Unfortunately, it seems that “interesting-ness” does not naturally follow from the kind of open-endedness we are able to operationalize [1].
So while it’s possible to engineer systems that generate open-ended behavior according to a slew of definitions, we are still disappointed by their inability to evolve and change like real, living systems, and to match this notion of interesting-ness. This phenomenon is familiar from both the speculative sciences and economics: Goodhart’s Law [2] seems to hold quite well. We can define open-endedness in terms of a system that never stops producing new configurations, then realize that a simple random number generator producing giant vectors satisfies this definition. But now that we’ve understood this, the generator has become uninteresting.
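To see how easily such a definition can be gamed, here is a minimal sketch (function and parameter names are illustrative, not from any established benchmark): a bare random number generator passes a naive never-repeats novelty metric while remaining utterly uninteresting.

```python
import random

def novelty_stream(dim=8, seed=0):
    """Endlessly yield fresh random vectors."""
    rng = random.Random(seed)
    while True:
        yield tuple(rng.random() for _ in range(dim))

def never_repeats(stream, n=1000):
    """A naive 'open-endedness' metric: no configuration recurs in n samples."""
    seen = set()
    for _, v in zip(range(n), stream):
        if v in seen:
            return False
        seen.add(v)
    return True

# The bare RNG 'passes' the metric, yet is utterly uninteresting.
print(never_repeats(novelty_stream()))  # True
```

The metric is satisfied with probability effectively 1, which is exactly the problem: novelty of configurations alone says nothing about whether anything meaningful is being produced.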
To make matters worse, what’s “interesting” is deeply steeped in subjectivity – a concept from which scientists have traditionally steered away, to safeguard the objective quality of the scientific method.
How might we make concepts like “interesting-ness” and subjective open-endedness more concrete? We take the point of view of an observer [3], either within the system or without, and then look at the relationships formed between observers and how those relationships might evolve along with the system. In a biological system, organisms each experience their environment through their own perspective – which things nurture them and which things harm them, which things may be interacted with or utilized. Each lineage, too, is an observer with its own subjective perspective and its own relationship with the other elements of the world. Patterns generated within the natural world are meaningful or not on the basis of what consequences they hold for the inhabitants of that world, at each of their scales of operation and contexts.
The dynamics of the natural world therefore produce not just new, complex structures but also new, complex perspectives that relate to these structures. In this network of relationships, patterns are no longer arbitrary. In turn, we are part of this world too, and this makes the diversity of biological structures personally meaningful – and thereby interesting – to us as well. A given molecule is not just an arbitrary pattern, but a scent indicating nutritious, edible food; a childhood memory of first having that food; etc.
That sort of connection cannot be captured by purely objective analyses which do not take into account the role and nature of the external observer as well. So if we wish to in some way formally pursue “interesting-ness”, we must consider the elements of human psychology which encourage us to form grounding connections with the systems we study.
Therefore, we scrutinize the belief that there is an all-encompassing objective phenomenon of open-endedness that can be measured in a way that isn’t grounded in human (or non-human) experience.
More often than not, evaluating a system for its capacity to be open-ended gets entangled with its ability to express ever-increasing complexity – and, of course, with the question of how to measure that complexity. This endeavor immediately presents a challenge: complexity that fails to be recognized is often mistaken for randomness. Mathematically, a system with high entropy has many bits of information, but this information doesn’t do anything useful. Once that information becomes tied to a particular use or ability to do something, we are able to distinguish it from noise.
But only if we are able to do it.
This kind of contextualizing, surrounding information with a method that fits bits of information to some sort of “meaning,” a way to decode any possible hidden messages, is a kind of groundedness. It’s tying abstract “amounts of information” to a generating function. In some sense, this kind of groundedness is subjective with respect to the elements of the system that couldn’t exist without those functions. In other words, the fine line between complexity and randomness may very well lie in our own subjective ability to decode information.
Our pursuit of open-endedness, and perhaps of artificial life in general, is ultimately grounded in our experiences as living beings on Earth–the properties of being “lifelike” depend on a possible perception of “life as it is” or “life as it could be.” In the natural world, the information corresponding to the products of open-endedness may be grounded not only directly in the physics of the world, but also in the other members of the overarching system. As an example, consider that specific DNA sequences are only meaningful in light of the corresponding transcription and translation systems; change the mapping between base pairs and amino acids and those sequences would cease to represent enzymes and structural elements and enablers of reproductive function.
Earth contains many similar examples, from biofilms and multicellular organisms, to signaling molecules and regulatory hormones, all of which only make sense because other elements of those aggregates interpret them and behave accordingly. In large-scale ecologies and food webs, the function of one organism is only apparent in light of other organisms to which it is adapted. For example, flowers attract pollinators, and toxins are evolved to harm the predators of a creature but not the creature itself. And in the emergence of cognition and societies of cognitive agents, so much of language and art (and even the organization of economies) is only meaningful because of the context that the rest of those societies creates for it.
Groundedness in the world gives way to groundedness in itself. But the fact that the grounding of OEE systems is generally based on internal relationships between elements rather than relationships between the OEE system and its outside poses a problem, should we want to observe such systems in an artificial and potentially alien context. If those things are scaffolded by contextual meaningfulness, then why would we expect to find an open-ended system that we are fundamentally apart from to be at all comprehensible or interesting? If we do not participate in a common context with the system, why would it be anything but noise to us?
Open-ended systems have internal things that “matter” to other internal things, but generally escape or sacrifice having shared points of relevance with human researchers (e.g., a specific externally relevant “task” tends to disrupt open-ended cascades). Participatory work like PicBreeder may be resistant to this. In general, systems with a transducer-like design continually bring in external reference points via directed participation by human researchers, rather than relying on full autonomy.
Let’s think about what actually drives people to be interested in observing natural and artificial systems for long periods of time. As we have yet to create a system capable of doing so, we may wonder whether we are missing something conceptual. If so, where are we missing it? To that end, we look at the phenomenology of what makes things feel open-ended, whether or not they actually are open-ended in any formally measurable sense, and identify a number of mechanisms and motifs that seem to contribute to that subjective experience of open-endedness.
Active inference

Active inference is a process of gaining information about the world via intentionally selected actions; we make choices about what to explore and thereby enrich our mental models of the world. Systems that trigger this process may lead observers to believe, even after making an observation, that there is still something to be learned or discovered, either authentically, or as part of a kind of illusion or adversarial stimulus.
• Example: Deceptive experiment design. An informal goal of open-endedness research might be to create a system that continually produces new and exciting innovations. Science, too, can lead one to believe that new and exciting results are forthcoming that would teach us more and more about the world. However, an experiment may seem inexhaustible while in fact generating no novel results.
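The explore-to-reduce-uncertainty loop behind active inference can be sketched as expected information gain over hypothetical probes of a system. All names and numbers below are illustrative; this is a minimal Bayesian sketch, not a full active-inference implementation.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """Expected uncertainty reduction from one probe, where
    likelihoods[h][o] = P(observation o | hypothesis h)."""
    gain = 0.0
    for o in range(len(likelihoods[0])):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o for h in range(len(prior))]
        gain += p_o * (entropy(prior) - entropy(posterior))
    return gain

# Two hypothetical probes of a system: probe A distinguishes the two
# hypotheses, probe B does not. An active-inference observer picks A.
prior = [0.5, 0.5]
probe_a = [[0.9, 0.1], [0.1, 0.9]]  # informative
probe_b = [[0.5, 0.5], [0.5, 0.5]]  # uninformative
print(expected_info_gain(prior, probe_a) > expected_info_gain(prior, probe_b))  # True
```

A system that keeps offering probes with positive expected information gain (or convincingly appears to) is exactly the kind that makes an observer feel there is always more to discover.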
Anticipation

The perceived feeling of infinity can be achieved by playing with the sense of anticipation. In particular, we call attention to processes or phenomena whereby each occurrence stretches out the expected spatio-temporal scale of the next occurrence. A prime example is the Mind Time Machine art piece by Ikegami: it feels open-ended because we know something is coming. The logarithmic scale makes an observer feel that new things are forthcoming forever, or disappearing at the horizon.
• Example: Auditory illusions. Experiential analogue of Shepard Tones, Risset Rhythms, etc. (From Wikipedia: “A Shepard tone... creates the auditory illusion of a tone that seems to continually ascend or descend in pitch, yet which ultimately gets no higher or lower.”)
• Example: Idle games. Idle (or incremental) games reward the player at ever-lengthening intervals of interaction – the player might have to, for example, wait 1 minute (or click 1 time) to progress the underlying game system, then 10 minutes/clicks, then 100 minutes/clicks, and so on and so forth. Examples of such games include Cookie Clicker and AdVenture Capitalist. The player may not evaluate the game as truly endless, but the game does increase their tolerance or expectations for delays until progression.
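The pacing in the idle-game example is easy to sketch: with geometric cost growth, successive waits look evenly spaced on a logarithmic scale, so the next reward always feels about as close as the last one did. Parameter names and values below are illustrative.

```python
import math

def progression_schedule(base_cost=1, growth=10, steps=5):
    """Cost (in clicks or minutes) of each successive upgrade,
    growing geometrically as in idle/incremental games."""
    return [base_cost * growth ** i for i in range(steps)]

costs = progression_schedule()
print(costs)  # [1, 10, 100, 1000, 10000]

# On a log scale, each wait is the same apparent size as the previous
# one, sustaining the anticipation of the next occurrence indefinitely.
print([math.log10(c) for c in costs])  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

The same logarithmic stretching of expected scales is at play in the Mind Time Machine example above: events remain “forthcoming” because each one pushes the horizon for the next further out.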
Transducers

A transducer is a concept from engineering, referring to “a device that converts energy from one form to another. Usually a transducer converts a signal in one form of energy to a signal in another.” [4] Note that not all transducers necessarily transform the original signal; some merely reflect it, as in the case of a mirror. The concept of transducers can be employed for our purposes to represent systems or processes that reflect the complexity or open-endedness of the things that interact with them, thereby embedding external sources of open-endedness in the core system.
• Example: Massive Data Flows. This example refers to joint work by Takashi Ikegami and Mizuki Oka. Massive Data Flow (MDF) systems are embedded in an external world and display properties of self-organization that are not only internal to the system itself. “Composed of many interacting heterogeneous elements, MDF systems exhibit self-referential, self-modifying, and self-sustaining dynamics, that can enable door-opening innovation. While the web may be the best example of an MDF system, the concept is generic to natural/artificial systems such as brains, cells, markets and ecosystems.”
Transducers seem particularly effective when they force re-interpretation of the external signal, or actually transform it:
• Example: Video game modding. Video games provide interactive experiences encapsulated in some virtual universe defined by game designers. However, at least some games can be modified by players to add additional features. In particular, a game could be modded to incorporate elements from another game and in this way expand the size of the original game’s “universe”. Baldur’s Gate is a series of video games based on the Dungeons and Dragons universe. Modding the later Dragon Age video game allowed players to recreate Irenicus’s Dungeon from Baldur’s Gate II, transforming an instance of one game into an instance of the other.
• Example: Text-to-image models. Systems such as Stable Diffusion allow users to transform written verbal input into the visual domain.
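In the most minimal computational sense, a transducer is a process whose outputs are exactly as novel as its inputs: it contributes no novelty of its own, but faithfully re-encodes whatever the external world feeds it. The sketch below is purely illustrative; the transform (string reversal) stands in for any signal conversion.

```python
def transducer(external_stream, transform=lambda x: x[::-1]):
    """A minimal 'transducer': re-encode each external signal.
    Its output is only as open-ended as its input stream."""
    for signal in external_stream:
        yield transform(signal)

# Whatever variety exists outside is reflected inside, one-to-one.
external = ["abc", "abd", "xyz"]
print(list(transducer(external)))  # ['cba', 'dba', 'zyx']
```

The point of the sketch is structural: the internal stream inherits the diversity of the external one, which is exactly how transducer-like systems embed external sources of open-endedness without generating it autonomously.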
Completeness

Observing systems exploring “complete” spaces makes us feel like they could accomplish arbitrary goals, or function as arbitrary tools. Of course, this feeling is merely evocative – whether the system actually achieves arbitrary functionality depends on its dynamics.
• Example: Open-ended video games. While some video games are primarily defined by progression mechanics (where one task must be completed, then another, then another, etc. until the end of the game is reached), others allow or even require the player to build, create, or design freely. Sandbox games such as Minecraft, or even tool-like “games” such as the Super Nintendo game Mario Paint belong in this category. In these software systems, the fact that little is actually required creates the sense that anything is possible.
• Example: Programming languages and Turing Completeness. In computer science theory, a Turing Complete machine is a system that, given enough time and resources, could conceivably execute any arbitrary computational task. Most modern programming languages are Turing Complete. We can additionally consider systems like Conway’s Game of Life, in which simple rules can give way to a possibility space so large that people are still finding ways to explore it decades later.
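As a concrete taste of that possibility space, here is a minimal Game of Life step on an unbounded grid (a common set-based formulation, not tied to any particular library): four steps return a glider to its own shape, translated one cell diagonally.

```python
from collections import Counter

def life_step(cells):
    """One update of Conway's Game of Life on an unbounded grid.
    `cells` is the set of live (x, y) coordinates."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is currently alive.
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Three rules, a handful of cells, and already a self-propagating structure – the kind of simple-substrate richness that keeps people exploring such spaces decades later.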
Completeness may also be related to the concept of transducers, introduced above. Complete spaces make good substrates for transducers because there is some guarantee that the external signal to be converted can also be represented within the transducer system.
In this post, we have identified a number of phenomena that can create subjective perceptions of open-endedness, not all of which have anything to do with ‘real’ autonomous open-endedness. What should be done with this information? Proving that autonomously open-ended systems are possible was important for refining the initial understanding of open-endedness, but now that this has been done in various ways, that constraint can be relaxed for the purpose of exploring alternative (i.e. subjective) framings of the phenomena. Let’s follow our intuitions at the same time that we advance the science, and go beyond open-endedness that reduces to mere numbers, toward systems that genuinely astonish us in satisfying ways.
We ask to what extent these considerations can or should be fused with our existing approaches to building open-ended systems, and we wonder what sort of possibilities will arise. And finally, we raise the question of what it is we are really trying to achieve.
1. Wikipedia defines operationalization as “defining the measurement of a phenomenon which is not directly measurable, though its existence is inferred by other phenomena.”
2. “...when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.” – Mario Biagioli
3. According to NECSI: “An observer is a person who makes measurements (observations) on a system to gain information about it”.
4. Definition from Wikipedia, https://en.wikipedia.org/wiki/Transducer