Towards an Evolutionary Technology

The introduction of causal operations provides a source for new complexity

George Kampis

JAIST

 

Introduction: Rendezvous with Evolution

In A. C. Clarke’s epic science fiction novel, Rendezvous with Rama, a hollow world inside an alien spaceship is depicted. The world is activated in front of the eyes of the visiting astronauts, and it soon fills with a variety of “biots”, or machine beings, but also with other gadgets, drawn from the “Illustrated Catalogue of Raman Artifacts” (a kind of “wish list”) and produced from scratch by means of some unknown mechanisms. Rama is an enigmatic story, where imagination has to fill in the details. Are the beings produced by some fast, controlled evolutionary process? Or are they the result of “total synthesis” – a process in which nothing but raw materials and directed steps are used, as in a chemical laboratory? Is it possible to have the latter without the former, to produce machinery of arbitrary complexity?

John von Neumann developed the mathematical notion of a “universal constructor”, a system in which a fixed set of operators, applied to a suitable set of building blocks, is able to produce any desirable object by means of completely transparent, computational steps. Objects of the kind considered by von Neumann were mathematical configurations in a class of abstract systems called “cellular automata”. However, his original idea was significantly different. He had planned to focus on a fluid system, with floating mechanical components that could freely combine and split in a fashion typical of molecules. Technical difficulties soon directed von Neumann’s attention towards the more tractable, purely abstract systems, where he succeeded in developing the required set of constructing operators.

In Nature, no universal set of building operators is known to exist that would directly produce patterns as diverse as proteins, cars, or spaceships. Nature solves this task indirectly, through evolution. It may be the complexity of the problem or some other factor that makes a direct “total synthesis” impossible. Or perhaps such a mechanism would have no advantage over Nature’s chosen one, natural selection, after all.

Is the fantasy of a universal technology just the product of exaggerated imagination, a kind of Promethean dream? Maybe so. Still, the degrees of its possibility can be analysed. The above consideration suggests that there are two different roads to the problem. At present, neither is available to us. The limitations of abstract “universal constructors”, when applied to real-world situations, are not yet understood. We don’t know whether total synthesis is possible without evolution. By contrast, we are beginning to see the limitations of abstract evolutionary procedures. What went wrong when von Neumann changed his design? Where is Nature superior to the abstract evolutionary models? What is the “value-added” element of reality, and what can we do about the difference? These are the questions we are going to discuss in this article.

 

Abstract Evolutionary Systems Deplete Their Resources

It seems easy to formulate the fundamental principles of natural evolution. The basic idea is that of selection, or differential survival based on heritable variation. This principle has been put to innumerable tests since it was first formulated by Darwin more than 150 years ago. Natural selection has been proven to work both in the wild and under the supervision of the breeder. The principle of selection has been confirmed so many times over the years that it is probably the single most tested scientific theory ever. However, we have been much less successful when it comes to completely man-made systems of evolution. Many attempts to put the evolutionary principle to work in abstract systems have largely failed, at least in the sense of not producing anything even remotely comparable to the outcomes of the natural evolutionary process. To understand why, let us review some recent developments.

One of the best-known examples of “evolutionary programs” had its origin as a humble illustrative toy developed by Richard Dawkins at Oxford University. Despite its simplicity, it demonstrates how the combination of some simple operations such as replication and mutation (that is, random error) can lead to complicated and often unpredictable outcomes when guided by selection. Selection is implemented in the Dawkins model through a kind of search-and-elimination process which looks for variants that are the best matches for some desirable properties, then discards the variants that are relatively inferior, and re-allocates room for reproduction to favor the successful. As a result, within a few steps of the reiterated process the system undergoes a dramatic improvement. In a series of popular writings, Dawkins provides an easily accessible analysis of the details of the procedure. The model could not be any simpler, yet it still succeeds in capturing several important properties of the natural evolutionary process. At the same time, from this model we can diagnose striking limitations that turn out to handicap more advanced systems as well.
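To make the mechanics of such a model concrete, here is a toy cumulative-selection loop in Python, written in the spirit of Dawkins’ demonstrations rather than as his actual program; the target string, alphabet, mutation rate and brood size are illustrative assumptions of mine.

```python
import random
import string

# A toy cumulative-selection loop in the spirit of Dawkins' demonstrations
# (not his actual program). TARGET, ALPHABET, the mutation rate and the
# brood size are illustrative assumptions.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    """How well a variant matches the fixed selection criterion."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.04):
    """Replication with occasional random copying errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while fitness(parent) < len(TARGET):
    brood = [parent] + [mutate(parent) for _ in range(100)]  # replication with variation
    parent = max(brood, key=fitness)                         # selection: keep the best match
    generation += 1

print(f"Best fit reached after {generation} generations; evolution now stops.")
```

Once the loop finds the best match it has literally nothing further to do – which is exactly the limitation discussed next.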

The simulated evolutionary process essentially terminates when a best fit is found. Afterwards, nothing “interesting” happens any more – only random fluctuations occur. Further evolution is impossible unless the system is used interactively, by some trick which introduces new selection criteria every time the user interferes with the system. But even in that case, the fundamental type of the obtained evolutionary products remains unchanged forever. This is another frustrating feature. The system just produces more and more of the same, as before. If it started out transforming trees, it will be transforming trees forever. It never comes up with something as completely different as a wristwatch – to allude to the famous “watchmaker” metaphor that expresses the productivity of natural evolution, a metaphor efficiently propagated by Dawkins himself.

Why abstract evolution fails to produce a sustained and progressive process that could bootstrap anything like the production of the complex items in the Rama catalogue is best seen through another example, the Tierra system designed by Tom Ray of the University of Delaware. Tierra is conceived as a population of virtual machines that “live” in the computer memory and compete for “room” allocated in the form of memory addresses. The Tierra simulation encapsulates a complex ecosystem which emerges spontaneously under the single constraint of memory space utilization. Tierra’s self-replicating “digital organisms” contain the program code for their own behavior, which makes it possible to directly observe both their structural and their functional evolution. Starting with a random seed, many simultaneous developments take place in the system, and spontaneous improvements lead in several directions. As a result, several different kinds of specialized programs tend to evolve, and “learn” to coexist. Faster, smaller programs are counterbalanced by slower, more complicated and more flexible ones. Co-operation and parasitism emerge, with programs that utilize other competitors’ code (as well as their share of processor time) for their own replication process. And so on; Tierra is a truly powerful system of artificial evolution well deserving of further experimentation.

The Tierra system is quite widely known: it is the first example of a computational model sufficiently rich to explore the complexities of real evolution. Tierra is potentially open-ended; if Dawkins’ system was a minimalist one, then Tierra is about as sophisticated as you can get. The specialized computer language allows the “digital organisms” to be universal computers as well. In principle, Tierra offers the possibility for every computer program to evolve, and to perform every task doable by any computer program – to compose music or write whimsical poems. It looks like a true candidate for Rama’s technology. However, Tierra’s long-term behavior does not fulfill these promises. Degeneration and extinction await in the long run. After several hours of CPU time, the process tends to get stuck at the simplest and fastest replicators that outperform all their competitors. Imagine, as a parallel, that bacteria have taken over the whole Earth. Whereas it is possible to see the persistence of some of the simplest known living creatures on Earth as an evolutionary success achieved at the expense of higher organisms, in the natural evolutionary process this never goes so far as to eliminate all the mammals. We are here, and the Tierrans are not even on the horizon. Tierra’s process appears to lead to an unavoidable dead-end of relatively uninteresting behavior – typically, just reproduction speed optimization in the end. Often the entire population dies out.

The immediate reason for this failure may have to do with technical problems such as the accumulation of “debris”, or unused fragments of code, which occupy valuable space in the available “habitat” (the memory field). Or the failure may stem from a lack of true “producers” in the ecological sense. Unlike in a real ecosystem, resources in Tierra are not developed, only consumed. There is just one basic resource, room – of which there will be less and less. As a consequence, new “organisms” can only appear at the cost of the ones already present. On Earth, by contrast, even the most basic of all resources, raw energy input, can be biologically increased. Most other goods, such as nutrients, shelter, and options for locomotion, arise or are dramatically multiplied in the course of the evolutionary process, and as a result, the system can support more and more organisms, of more and more varied kinds. The incorporation of such factors is so far missing in Tierra and other evolutionary systems, which is a significant drawback. Still, in what follows I will suggest that the ultimate reason for the fiasco of Tierra and related systems lies elsewhere.

It is not sufficient for the combinatorial operations that generate new structures (the means) to be open-ended if the system remains closed with respect to achievable goals (the ends). Achievable goals are defined by the ultimate resource, the exerted selection pressure, which is not renewed within these systems. It appears that selection alone cannot produce sustained evolutionary growth: the system exhausts this force as selection progresses. New resources are needed to make it possible for new selection forces to arise.
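A toy calculation can make this concrete. The sketch below is an illustrative assumption rather than Tierra’s actual code: it models the single resource as a fixed population size (“room”), gives each organism one heritable trait (replication speed, with a physical ceiling), and applies fitness-proportional selection with small copying errors.

```python
import random
import statistics

# Toy illustration (not Tierra's actual code): one fixed resource ("room",
# modelled as a constant population size), one heritable trait (replication
# speed) with a physical ceiling, fitness-proportional selection and small
# copying errors. Once the ceiling is reached, selection has nothing left to act on.
ROOM = 200                                                     # the only resource
population = [random.uniform(0.1, 0.5) for _ in range(ROOM)]   # initial replication speeds

for generation in range(501):
    if generation % 100 == 0:
        print(f"gen {generation:3d}: mean speed {statistics.mean(population):.3f}, "
              f"variance {statistics.pvariance(population):.5f}")
    # Differential survival: faster replicators claim proportionally more room.
    parents = random.choices(population, weights=population, k=ROOM)
    # Heritable variation: each copy carries a small random error, capped at 1.0.
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.01))) for p in parents]
```

The printed trajectory typically shows the mean speed saturating while the variance, and with it the selection pressure, collapses by orders of magnitude; nothing inside such a system can regenerate the pressure once it is spent. That regeneration is what the following sections look for in causality.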

 

Causality Adds Depth

New resources might be supplied by causality. Causality is one of the oldest and intuitively most accessible concepts in science, and there has been considerable recent interest in it. Many theoreticians now believe that the hallmark of science lies in causality rather than in equations or elsewhere. Judea Pearl of UCLA has recently written: “Fortunately, very few physicists paid attention to Russell’s enigma [according to which science can do without causality]. They continued to write equations in the office and talk cause-effect in the cafeteria; with astonishing success they smashed the atom, invented the transistor and the laser.”

Despite its relevance and utter familiarity, the essence of causality is difficult to grasp. Of the many existing conceptions, of particular importance is that offered by Ian Hacking of the University of Toronto, one which captures causality in the role it plays in scientific experiments. In an experiment, the scientist uses certain entities in order to manipulate certain others. The causal element is simply where the indirect dependence occurs. Using such points of dependence, the scientist can test the “reality” of his hypothetical entities: something is as real as it is available for affecting other things. In the spirit of Hacking’s initiative, this puts causation in a unique position: experimental causality is seen by some scientists as a litmus test for scientific value. Experimental causality can, at the same time, also illustrate causal “depth”, a concept that might facilitate our evolutionary considerations.

To see how this concept arises in science, note that no experiment can be exactly repeated, nor would anybody be interested in an exact repetition. To repeat an experiment is to repeat it with some inevitable differences, yet with the same result. That such an alteration is possible suggests that causality, which plays a central role in any experiment, is somehow robust. This is a first indication that causal relations cannot be as fragile as a relationship between single events. Such a relationship would break irreparably if any of the events were absent. Causality is not just a relationship between a cause and an effect, but a more complex feature that involves essential aspects of matter.

At the heart of robustness we find the fact that a causal relationship always involves doing several things in tandem. That is why it does not matter whether the experimenter hits exactly the same nerve in a repeated experiment. Anything that pertains to the same causal relationship will do. Such parallelism is crucial in every causal interaction, and it is expressed in the concept of “depth”. Causal depth is a term for the various subprocesses that together constitute a causal process. Take biochemistry, for example (which is known to be relevant for real-world evolution). A biochemical reaction consists of a composite of electromagnetic, mechanical, atomic, subatomic, and numerous other processes. The reaction splits and combines the biomolecules in a fashion that transfers these several features simultaneously and inseparably. This is a significant fact for both experimental science and evolutionary theory. In experimenting, it is this inseparable composition of causal subprocesses that the experimenter utilizes when preparing or detecting the outcomes of an experiment. Under some conditions, every causal subprocess can be used as an indicator of every other. In biochemical monitoring, it is most frequently mechanical or geometric properties alone – such as the arrangement (or, in the simplest case, the sequence) of amino acids, nucleotides, and other building blocks – that are used for detection, tested by physical binding. It might be possible to monitor or manipulate electrochemical or quantum properties instead. One can stand for all: it is the whole reaction, and not just some selected aspect thereof, which is the target of the experiment.

When one feature is altered, several features change. Things are bound to happen together, and this is a chance for still more things to happen, in a chain. Here is another link to science fiction: Stanislaw Lem, the Polish writer, famously contrasted human technology with “the methodological precision of Genesis”, and expressed the wish that “by saying ‘let there be light’, we could obtain, as a final product, the very light, without any unwanted ingredients”. Obviously, such purity is impossible, exactly because of what is expressed in causal depth. In the natural world it is not possible to do just one thing and nothing else, without invoking causal depth and further causal consequences.

Compare this clear-cut situation in experiments with the current style of evolutionary modeling. No experimental scientist would be satisfied with a view of experimentation that neglected the elbow-room of causality, and narrowed it down to some single, isolated feature. But this is exactly what happens in the models we discussed. The notion of causal depth suggests that the effect of natural selection cannot be confined to such an abstract process, any more than an experiment can be reduced to a few selected variables.

 

Causal Depth as a New Resource for the Evolutionary Process

Our evaluation of evolutionary models has suggested that these models tend to “run out of steam”. If this is true, what can be done about it? What factors should be added? This is where suggestions from causality may help us cope with the resource problem.

Imagine a set of interacting digital organisms that can reproduce and feed on one another. This defines a usual simulated evolutionary process if we introduce the typical combination of heritable variation and differential survival. But now consider the following. Each time a mutation occurs, it is taken to influence not only the parameters of survival, but also much else that at the moment has no effect on survival. As evolution proceeds further along the lines determined by the actual selection forces, changes in these “silent” parameters accumulate uncontrollably. We now allow these uncontrolled parameters to serve as additional cues to survival, once they have reached a critical threshold, and provided that they match the corresponding parameters of other organisms. In this way we can arrive at a new, more complicated selection process, with enough room for organisms to develop in various directions other than the initial one.
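A minimal sketch of this scheme might look as follows; the number of silent parameters, the threshold, the matching tolerance and the mutation sizes are guesses of mine for illustration, not the settings of the model currently under test.

```python
import random

# Minimal sketch of the scheme described above, under illustrative assumptions
# (not the model under test at JAIST). Every mutation perturbs the visible trait
# (the one currently under selection) and several "silent" parameters. A silent
# parameter becomes an extra survival cue only after it has drifted past a
# threshold AND roughly matches the corresponding parameter of the population.
N_SILENT, THRESHOLD, MATCH_TOL, POP_SIZE = 5, 1.0, 0.2, 100

def new_organism():
    return {"visible": random.random(), "silent": [0.0] * N_SILENT}

def mutate(org):
    return {"visible": org["visible"] + random.gauss(0, 0.05),
            "silent": [s + random.gauss(0, 0.05) for s in org["silent"]]}

def fitness(org, silent_means):
    f = max(org["visible"], 0.0)                   # the initial selection criterion
    for s, mean_s in zip(org["silent"], silent_means):
        if s > THRESHOLD and abs(s - mean_s) < MATCH_TOL:
            f += s                                 # an accumulated silent parameter kicks in
    return f + 1e-6                                # keep selection weights strictly positive

def silent_means(pop):
    return [sum(o["silent"][i] for o in pop) / len(pop) for i in range(N_SILENT)]

population = [new_organism() for _ in range(POP_SIZE)]
for generation in range(2000):
    means = silent_means(population)
    weights = [fitness(o, means) for o in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]

means = silent_means(population)
active = sum(1 for o in population
             for s, m in zip(o["silent"], means)
             if s > THRESHOLD and abs(s - m) < MATCH_TOL)
print(f"silent parameters currently acting as additional survival cues: {active}")
```

The point of the sketch is only the mechanism: selection initially sees the visible trait alone, while the silent parameters drift; whenever one of them crosses the threshold and happens to match the rest of the population, it opens a new direction for selection to act in.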

Perhaps this model already contains all the necessary ingredients to trigger an endless evolutionary process, however simple. The model is currently being tested at the Fujitsu Chair of Complex Systems at the Japan Advanced Institute of Science and Technology (in cooperation with Harvard University). This simplified model is an experimental, abstract version of what may prove to be a causal story of evolution.

Natural selection is usually assumed to operate only on those traits to which it is applied. It is clear, however, that natural evolution is a causal process, and evolutionary effects are always “deep”, in the precise sense characterized earlier. As a consequence, every step of evolutionary interaction produces a multiplicity of events, most of them probably unrelated to selection. That is, besides producing adaptations in specific traits, selection transforms entire organisms. When a given domain of selection-based development is depleted, another can take over, using new selection pressure that accumulates from the other altered traits, engendered by the depth of the interactions. Natural selection yields not only adaptations but also new causal opportunities, and as the process continues, these may become sources of new obstacles to survival. Our abstract model tries to grasp this in a crude preliminary form. We can also encapsulate it in a causal evolutionary hypothesis:

Sustained evolution is an iterative process consisting of selection steps linked by causal steps. Selection steps are depicted in the familiar evolutionary algorithms. Causal steps involve implicit components, expressed as the “depth” of causation.

Using this formulation, we can immediately add more details. In Nature, causality exerts its action through the flesh, the body of the organism, which is called the “interactor” in evolutionary theory (to distinguish it from the heritable part, the genome, which is called the “replicator”). In terms of these well-established concepts, the hypothesis says that causal depth uses the interactors to set new rules for the replicators. However, selection can act on replicators only, and it is therefore no wonder that the preponderance of research has gone into how replicators respond to selection. During the past twenty years the importance of the interactors has also often been stressed, for example in work on ontogeny and extra-genomic inheritance. The causal perspective adds a further feature: the interactors may be responsible for the perpetuation of the entire evolutionary process.

In addition, the new causal perspective might help to integrate diverse developments of considerable current and historical interest in evolutionary theory. These go back as far as The Origin of Species, which is still today a rich source of evolutionary ideas. Darwin repeatedly stressed the existence of further evolutionary mechanisms that supplement selection and adaptation. Among these, function change may be the most important. Darwin’s observation was that a typical “adaptive run” (as we would call it today) never starts from scratch – it usually redefines existing parts instead, and assigns to them new functions demanded by the new selection forces. This idea has enjoyed a recent revival in François Jacob’s celebrated “tinkering” concept. Jacob stressed the sub-optimality (from the naive engineering or design point of view) of the “evolutionary recycling” of organs and other materials in new evolutionary contexts. The concept of causal depth may in the future contribute to an explanation both of new selection forces and of why they affect the old body parts in new, unexpected ways. Self-organization, exaptation and other recent concepts of “organismic” evolution might also be reconsidered from a unifying perspective of causality, without having to refer to any alternative to Darwinian evolution. They may be best seen as partial mechanisms or epiphenomena of a more fundamental causal process.

 

A Causality-based Research Programme

Theoretical science is dominated by the study of purely abstract systems. Evolution may now prove to be a domain where causality and other “wet” factors of science cannot be ignored. The idea that evolutionary theory and evolutionary technology are not purely theoretical has been tested by several scientists quite recently, with mixed results.

The GOLEM project directed by Jordan Pollack of Brandeis University was one of the first attempts to cross the barrier between simulation and realization. In this pioneering effort, the purpose was to grow robotic devices by means of selective breeding based on the locomotive performance of individual robots. The system was based on an evolutionary algorithm run on a computer that created newer and newer variants of blueprints of robotic bodies. The procedure started with a random population and was fed with fitness values derived from the subsequent trials. Fitness was simply defined as speed of locomotion, and the repeated production and elimination of the successive variants – which were generated to resemble the then prevailing best, but also to differ from them slightly – soon created a whole menagerie of truly remarkable gadgets, with often startling locomotive movements. Some of the artificial creatures crawled, others pushed themselves forward, while still others attempted to leap. An important part of Pollack’s experiment was a revolutionary “rapid prototyping” system, an automated method of synthesis for creating real-world bodies from the digital plans evolved in the computer. As MIT’s Rodney Brooks has remarked, in his commentary with the telling title, From Robot Dreams to Reality: “this is a long awaited and necessary step towards the ultimate dream of self-evolving machines”. Having a real body has proved crucial in a number of other robotic experiments as well, including Brooks’s own, and in the entire field of research on “autonomous robots”, with SONY’s Luc Steels and Aibo as the best known examples.
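Schematically, such a breeding loop can be sketched as below. This is a hedged illustration only: the body encoding, the variation operator and the physics trial (random_blueprint, vary, simulate_locomotion) are hypothetical placeholders rather than Pollack’s actual system, though the fitness measure (speed of locomotion from a simulated trial) follows the description above.

```python
import random

# Schematic of a GOLEM-style breeding loop (an illustrative sketch, not Pollack's code).
# random_blueprint, vary and simulate_locomotion are hypothetical stand-ins for the
# real body encoding, variation operator and physics simulation.

def random_blueprint():
    return [random.random() for _ in range(20)]          # placeholder body parameters

def vary(blueprint):
    # New variants resemble the prevailing best but differ from it slightly.
    return [g + random.gauss(0, 0.05) for g in blueprint]

def simulate_locomotion(blueprint):
    # Stand-in for the physics trial; in GOLEM, fitness was simply speed of locomotion.
    return sum(blueprint) / len(blueprint)               # dummy "speed"

population = [random_blueprint() for _ in range(30)]
for generation in range(100):
    ranked = sorted(population, key=simulate_locomotion, reverse=True)
    survivors = ranked[:15]                               # eliminate the inferior variants
    population = survivors + [vary(random.choice(survivors)) for _ in range(15)]

best = max(population, key=simulate_locomotion)
print(f"best simulated speed after breeding: {simulate_locomotion(best):.3f}")
```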

Perhaps surprisingly, the GOLEM robots fell into the same pitfall as other evolutionary systems. Their evolution ceased after a spectacular initial success, and the project in its original form has been abandoned. Pollack concluded that “merely more CPU [time] is not sufficient to evolve complexity”. But, as in all the simulations we have seen before, evolution in this system went on entirely within the computer. Brooks noted that “[t]his means there can be no feedback from the physical world into the evolutionary process”.

A causality-based research programme can offer a different approach. We may not currently understand what exactly is required to apply feedback from the full-fledged causal interactions taking place in the real world to the factors that constitute the boundary conditions for the selection process. Nonetheless, at least one new requirement emerges from the causal hypothesis: let the organisms act back on their environment (and thus possibly on each other). This defines a simple causal ecology which is a prerequisite for a spreading evolutionary interaction as foreseen in the new model.

There are some other immediate, short-term suggestions to consider when implementing the causal programme. Even a minimalist ecology has several alternative goals that the organisms can follow, and in a causal ecology more and more such goals are expected to emerge. This allows for a more life-like pattern of evolutionary development, where the subgoals do not constitute a linear hierarchy. If development gets stuck in one direction, it can resume in another, without loss. As the old joke says, “there is no road from here to there” – this may actually be true for the brute-force evolutionary experiments. But if that is so, why don’t we go somewhere else first (as we often do when heading from city A to city B)? The idea is that if there is no direct route, we should take a map, and follow other roads. In evolution, we should let the definition of the dynamics change, as often as necessary, to go from C to D, from D up to perhaps Z, and from there, B may also be reachable. Along the way, we can hope to produce a number of fascinating byproducts, and to tick several checkboxes on the Rama shopping list.

In other words, one immediate lesson of causal thinking with respect to evolution might be found in an application of the principle of multi-stage spaceflight. What is not doable in one step, even in principle (such as putting a rocket into orbit), may be doable in several. The causality-based research programme has a number of other, more profound and more important implications as well, which go beyond the immediate problem at hand.

 

From Evolutionary Technology to Causal Science

Evolutionary technology need not be robot technology. This is a useful dramatization, but not essential. Evolutionary technology probably needs to be causal technology, however, and this necessitates new studies and new understandings of causation and the role it plays in various domains of science. Since causality (as characterized here) is essentially a form of complexity, it is important to see its relation to other kinds of complex systems, especially to systems with a changing and flexible dynamical structure. Such systems may be viewed as proximate metaphors of causal systems, adhering to comparable rules of logic and structure.

The molecular domain is another area where ideas from evolutionary technology may gain importance. Experiments on real-bodied artificial creatures are likely to have intrinsic limitations, as there are too many restrictions on these devices that could stop causality from prevailing. For instance, there is no known method for the assembly and disassembly of such macroscopic systems, except in special cases like Pollack’s. Development of machines from “machine germs” or other simple parts is still in the early phases of consideration. Lacking these, how are we to transform a body without a disruption of function? It seems necessary for evolutionary causes and effects to lie in the same domain; for the evolving machines to be able to transform other machines through the feedback of differential survival. To be able to switch roles afterwards, as required in the causal hypothesis, we also need a guaranteed “closure” capacity – an assurance that the same process can be repeated indefinitely, without physically breaking or destroying the system. Unlike in present-day machines, in molecules we find an example (perhaps the only known example) of an environment where these kinds of things are doable. This puts molecules, with respect to both theory and technology (in the broadest sense of “molecular computing”), on a par with evolution, making it possible to interchange ideas and problems between these fields. There are some easy generalizations of this alliance, such as a new treatment of evolvable and reconfigurable hardware, and real-world intelligence. The exploitation of these connections is a possible focus of future research.

Where do we stand on evolution? There is no denying that evolutionary theory in biology has exerted a tremendous impact on human thinking, yet the theory could never quite live up to its own promises. It cannot be disputed that the theory is correct in its present outline; nevertheless, there is a long list of fundamental problems which tend to be downplayed if not altogether ignored. The theory cannot yet explain many of the most important features of evolution on Earth: how do species arise, where does evolutionary complexity come from, why is there progression, why do organisms look so much like machines, and so on. How can these conundrums be reconciled with the fact that the theory which does not explain them proves to be correct wherever tested? Rather than handling these as “dirty family secrets”, evolutionary technology may in the near future be in a position to say something positive about them, and about the underlying Darwinian framework at the same time.

When seeking the scope and limits of the causal approach, it is easy to find several other realms of knowledge affected. The same structure of problems can be found whenever evolution, growth and flexibility play a role – as in the human mind, for instance. In the study of the mind, the use of evolutionary learning methods and other evolutionary ideas has already been well established. “Evolutionary psychology” is a recent term to express the concept in a broad framework inspired by cognitive science. It is possible that a new causal perspective on the mind, in particular when coupled to evolution, will induce a change in some of our oldest concepts regarding mental activity, such as intentionality and agency. (Here is one suggestion: perhaps “agent” is just a synonym for “causal evolutionary component”; perhaps the concept was such a mystery for a long time because the causal element was neglected or its nature misunderstood.)

These remarks may also allow for some general conclusions. We might suspect that the failure of evolutionary simulations is a symptom of a more widespread phenomenon. In science we tend to forget that our abstract systems are just shorthands for, or indicators of, actual processes. It has become a hallmark of theoretical science to be purely theoretical and abstract, and to ignore the rough causal origins. Whereas the first development can be seen as an advance, the second is almost certainly a loss. The prevailing attitude of science might have generated many, for the most part yet unseen, problems. The causal or natural component of science invites amplification, and not just in science but, paradoxically it seems, also in technology. Technology should be a field of causality and manipulation per se, but the leading edge of technology – as seen in the example of evolutionary technology – appears to be too deeply influenced by concepts of theoretical rather than experimental science. This could be partially changed by putting causality back into the position it deserves.

Recent developments in cognition, artificial intelligence and robotics, logic and linguistics have led to a renaissance of notions like “embodiment”, “situatedness” and other anti-Cartesian concepts. The common pattern behind this growing concern with physical bodies and the current interest in causality may be considered an example of the acknowledged significance of matter on its own account, with more to come.

 

 

In the broadest sense, evolutionary technology is about creating things. We cannot expect ever to be able to imitate Genesis. Lem’s example shows that we can never do things “purely”, never operate on just one thing at a time. Causality expresses this feature in a form adequate for scientific treatment. A better understanding of this kind of complexity – which, if the suppositions of the causal evolutionary model are correct, might be the source of all evolutionary complexity – is crucial for a great variety of studies in science.