Major Directions of Inquiry

How Understanding Self-Organization & Pattern Formation Can Be Used to Form Engineered Systems

Self-organization is the process by which elements interact to create spatio-temporal patterns of behavior that are not directly imposed by external forces. To be concrete, consider the patterns of spontaneous traffic jams or heart beats. For engineering applications, the promise of understanding such pattern formation is the opportunity to use the natural dynamics of the system to create structures and impose functions rather than to construct them element by element. The robustness of self-organized systems is also a desired quality in conventional engineered systems, and one that is difficult to obtain. For biomedical applications, the promise is to understand developmental processes such as the development of a fertilized egg into a complex physiological organism like a human being. In the context of the formation of complex systems through development or through evolution, elementary patterns are the building blocks of complex systems. This is diametrically opposed to considering parts as the building blocks of such systems.

Spontaneous (self-organizing) patterns arise through symmetry breaking in a system when there are multiple inequivalent static or dynamic attractors. In general, in such systems, a particular element of a system is affected by forces from more than one other element, and this gives rise to "frustration" as elements respond to aggregate forces that are not the same as each force separately. Frustration contributes to the existence of multiple attractors and therefore of pattern formation.

Pattern formation can be understood using simple rules of local interaction, and there are identifiable classes of rules (universality) that give rise to classes of patterns. These models can be refined for more detailed studies. Useful illustrative examples of pattern-forming processes are local-activation, long-range inhibition models that can describe patterns on animal skins, magnets, the dynamics of air flows in clouds, wind-driven ocean waves, and swarm behaviors of insects and animals. Studies of spontaneous and persistent spatial pattern formation were initiated a half century ago by Turing (1952), and pattern-forming models have attracted increasing interest in recent years because of their wide applicability (Bar-Yam 1997; Meinhardt 1994; Murray 1989; Nijhout 1992; Segel 1984; Ball 1999).
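A minimal sketch of the local-activation, long-range inhibition idea is a one-dimensional cellular model in which each cell compares activation from nearby cells against inhibition from a longer-range neighborhood. The radii and inhibition weight below are hypothetical illustration values, not any specific published parameterization; from random initial conditions such rules typically organize into bands of like-valued cells.

```python
import random

def lali_step(state, r_act=1, r_inh=4, w_inh=0.4):
    """One synchronous update of a 1D local-activation,
    long-range-inhibition rule on a periodic ring.
    A cell becomes +1 if nearby activation outweighs
    weighted longer-range inhibition, else -1."""
    n = len(state)
    new = []
    for i in range(n):
        act = sum(state[(i + d) % n] for d in range(-r_act, r_act + 1))
        far = sum(state[(i + d) % n] for d in range(-r_inh, r_inh + 1)) - act
        new.append(1 if act - w_inh * far > 0 else -1)
    return new

random.seed(0)
state = [random.choice([-1, 1]) for _ in range(60)]
for _ in range(30):
    state = lali_step(state)
# The line typically self-organizes into bands of like-valued cells
print("".join("#" if s > 0 else "." for s in state))
```

The pattern is not imposed from outside: the same local rule applied everywhere produces the spatial structure.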

The universality of patterns has been studied in statistical physics, where dynamic patterns arise in quenching to a first-order phase transition in cases of both conserved (spinodal decomposition, e.g., oil-water separation) and nonconserved (coarsening, e.g., freezing water) order parameters (Bray 1994) and also in growing systems (self-organized criticality, e.g., roughening). Generic types of patterns are relevant for such contexts and are distinguished by their spatio-temporal behaviors. Classic models have characteristic spatial scales (Turing patterns, coarsening, spinodal decomposition); others are scale invariant (self-organized criticality, roughening). Additional classes of complex patterns arise in networks with long-range interactions (rather than just spatially localized interactions) and are used for modeling spin glasses, neural networks (Anderson and Rosenfeld 1988; Bishop 1995; Kandel, Schwartz, and Jessell 2000), or genetic networks (Kauffman 1969).
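As a toy illustration of nonconserved coarsening, zero-temperature majority dynamics on a ring (system size and sweep count are illustrative choices) grows domains of like values; the number of domain walls can only decrease, which is the signature of coarsening dynamics.

```python
import random

def domain_walls(s):
    """Count boundaries between unlike neighbors on a periodic ring."""
    return sum(s[i] != s[(i + 1) % len(s)] for i in range(len(s)))

def glauber_sweep(s):
    """One sweep of zero-temperature majority dynamics:
    a randomly chosen cell adopts the value of its two neighbors
    when they agree. Each such update removes walls or leaves
    them unchanged, so domains of like values coarsen."""
    n = len(s)
    for _ in range(n):
        i = random.randrange(n)
        left, right = s[i - 1], s[(i + 1) % n]
        if left == right:
            s[i] = left
    return s

random.seed(1)
spins = [random.choice([-1, 1]) for _ in range(100)]
walls = [domain_walls(spins)]
for _ in range(50):
    glauber_sweep(spins)
    walls.append(domain_walls(spins))
assert all(a >= b for a, b in zip(walls, walls[1:]))  # monotone coarsening
print(walls[0], "->", walls[-1])
```

Conserved dynamics (spinodal decomposition) would instead exchange unlike neighbors, preserving the total of each value while still growing domains.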

Understanding Description and Representation

The study of how we describe complex systems is itself an essential part of the study of such systems. Since science is concerned with describing reproducible phenomena and engineering is concerned with the physical realization of described functions, description is essential to both. A description is some form of identified map of the actual system onto a mathematical or linguistic object. Shannon's information theory (Shannon 1963) has taught us that the notion of description is linked to the space of possibilities. Thus, while description appears to be very concrete, any description must reflect not only what is observed but also an understanding of what might be possible to see. An important practical objective is to capture information and create representations that allow human or computer-based inquiry into the properties of the system.
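The link between description and the space of possibilities can be made concrete with Shannon entropy: the same observed symbol requires a different description length depending on what the observer assumed was possible. The two distributions below are hypothetical examples.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: the average description length needed
    to specify one outcome drawn from this space of possibilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

# An 8-symbol alphabet needs 3 bits per symbol if all symbols are
# equally likely, but fewer once the observer knows some outcomes
# are more probable: the description reflects assumed possibilities,
# not just the observed symbol.
uniform = [1 / 8] * 8
skewed = [0.5, 0.25, 0.125] + [0.125 / 5] * 5
print(entropy(uniform))  # 3.0 bits
print(entropy(skewed))   # fewer bits on average
```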

Among the essential concepts relevant to the study of description is the role of universality and non-universality (Wilson 1983) as a key to the classification of systems and of their possible representations. In this context, effective studies are those that identify the class of models that can capture properties of a system, rather than those of a single model of a system. Related to this issue is the problem of testability of representations through validating the mapping of the system to the representation. Finally, the practical objective of achieving human-usable representations must contend with the finite complexity of a human being, as well as other human factors due to both "intrinsic" properties of complex human function and "extrinsic" properties that are due to the specific environment in which human beings have developed their sensory and information processing systems.

The issue of human factors can be understood more generally as part of the problem of identifying the observer's role in description. A key issue is identifying the scale of observation: the level of detail that can be seen by an observer, or the degree of distinction between possibilities (NIGMS 2002; Bar-Yam 1997). Effective descriptions have a consistent precision: irrelevant details are eliminated, but all relevant details are included. A multiscale approach (Bar-Yam 1997) relates the notion of scale to the properties of the system and relates descriptions at different scales.
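A simple way to see how the scale of observation selects what survives in a description is block averaging, the basic step of a multiscale (coarse-graining) analysis. The signal below is hypothetical: a coarse ramp with fine-scale alternating noise.

```python
def coarse_grain(values, block):
    """Reduce the scale of description by replacing each block of
    fine-scale values with its average: detail below the observation
    scale is discarded, larger-scale structure is retained."""
    return [sum(values[i:i + block]) / block
            for i in range(0, len(values) - block + 1, block)]

# Hypothetical data: a coarse ramp (i // 4) plus fine alternating noise
fine = [i // 4 + (0.5 if i % 2 else -0.5) for i in range(16)]
print(coarse_grain(fine, 4))   # noise averages out; the ramp remains
print(coarse_grain(fine, 16))  # at the coarsest scale only the mean is left
```

Relating the descriptions obtained at successive block sizes is what distinguishes a multiscale analysis from a single fixed-precision description.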

The key engineering challenge is to relate the characteristics of a description to function. This involves relating the space of possibilities of the system to the space of possibilities of the environment (variety, adaptive function). Complexity is a logarithmic measure of the number of possibilities of a system or, equivalently, the length of the description of a state. The Law of Requisite Variety (Ashby 1957) limits the possible functions of a system of a particular complexity.
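These two statements can be combined in a small numerical sketch. The state counts below are hypothetical; the point is only the logarithmic bookkeeping: a regulator with less variety than its environment leaves a calculable residue of unregulated outcome variety.

```python
from math import log2

def complexity_bits(num_states):
    """Complexity as a logarithmic measure: bits needed to specify
    one state out of num_states possibilities."""
    return log2(num_states)

# Ashby's Law of Requisite Variety: a regulator can hold an outcome
# steady only if it has at least as much variety (as many states) as
# the disturbances it must counter.
disturbance_states = 16   # environment variety (hypothetical count)
controller_states = 8     # regulator variety (hypothetical count)
# In the best case each controller state cancels one disturbance, so
# outcome variety is at least disturbance_states / controller_states.
deficit = complexity_bits(disturbance_states) - complexity_bits(controller_states)
print(deficit)  # -> 1.0 bit of outcome variety that cannot be regulated away
```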

Understanding Evolutionary Dynamics

The formation of complex systems and the structural/functional change of such systems are processes of adaptation. Evolution (Darwin 1859) is the adaptation of populations through intergenerational changes in the composition of the population (the individuals of which it is formed), and learning is a similar process of adaptation of a system through changes in its internal patterns, including (but not exclusively) changes in its component parts.

Characterizing the mechanism and process of adaptation, both evolution and learning, is a central part of complex systems research (Holland 1992; Kauffman 1993; Goodwin 1994; Kauffman 1995; Holland 1995). This research generalizes the problem of biological evolution by recognizing the relevance of processes of incremental change to the formation of all complex systems. It is diametrically opposed to the notion of creation in engineering, which typically assumes that new systems are invented without precursor. The reality of incremental change in processes of creativity and design reflects the general applicability of evolutionary concepts to all complex systems.

The conventional notion of evolution of a population based upon replication with variation and selection with competition continues to be central. However, additional concepts have become recognized as important and are the subject of ongoing research, including the concepts of co-evolution (Kauffman 1993), ecosystems (Kauffman 1993), multiple niches, hierarchical or multilevel selection (Brandon and Burian 1984; Bar-Yam 2000), and spatial populations (Sayama, Kauffman, and Bar-Yam 2000). Ongoing areas of research include the traditional philosophical paradoxes involving selfishness and altruism (Sober and Wilson 1999), competition and cooperation (Axelrod 1984), and nature and nurture (Lewontin 2000). Another key area of ongoing inquiry is the origin of organization, including the origins of life (Day 1984), which investigates the initial processes that give rise to the evolution of complex systems.

The engineering applications of evolutionary processes are most often associated with the concept of evolutionary programming or genetic algorithms (Holland 1992; Fogel, Owens, and Walsh 1966). In this context, evolution is embodied in a computer. Among the other examples of the incorporation of evolution into engineering are the use of artificial selection and replication in molecular drug design (Herschlag and Cech 1990; Beaudry and Joyce 1992; Szostak 1999), and the human-induced variation with electronic replication of computer viruses, worms, and Trojan horses in Internet attacks (Goldberg et al. 1998). The importance of a wider application of evolution in management and engineering is becoming apparent. The essential concept is that evolutionary processes may enable us to form systems that are more complex than we can understand but will still serve functions that we need. When high complexity is necessary for desired function, the system should be designed for evolvability: e.g., smaller components (subdivided modular systems) evolve faster (Simon 1998). We note, however, that in addition to the usual concept of modularity, evolution should be understood to use patterns, not elements, as building blocks. The reason for this is that patterns are more directly related to collective system function and are therefore testable in a system context.
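A minimal genetic algorithm makes the replication-variation-selection loop concrete. The fitness function here is a deliberately trivial stand-in (count of 1-bits), and all parameters are illustrative choices, not recommendations.

```python
import random

def evolve(pop_size=30, genome_len=20, generations=60, mut_rate=0.05):
    """Minimal genetic algorithm: replication with variation
    (crossover and mutation) plus selection with competition.
    The toy fitness counts 1-bits, standing in for any testable
    system-level function."""
    random.seed(2)  # fixed seed for reproducibility
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    fitness = lambda g: sum(g)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # selection with competition
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]            # replication with crossover
            child = [1 - v if random.random() < mut_rate else v
                     for v in child]             # variation by mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "/", 20)
```

Because the best individuals are retained each generation, fitness is non-decreasing; the population climbs toward the all-ones genome without any element-by-element construction.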

Understanding Choices and Anticipated Effects: Games and Agents

Game theory (von Neumann and Morgenstern 1944; Smith 1982; Fudenberg and Tirole 1991; Aumann and Hart 1992) explores the relationship between individual and collective action using models where there is a clear statement of consequences (individual payoffs), that depend on the actions of more than one individual. A paradigmatic game is the "prisoner's dilemma." Traditionally, game theory is based upon logical agents that make optimal decisions with full knowledge of the possible outcomes, though these assumptions can be usefully relaxed. Underlying game theory is the study of the role of anticipated effects on actions and the paradoxes that arise because of contingent anticipation by multiple anticipating agents, leading to choices that are undetermined within the narrow definition of the game and thus are sensitive to additional properties of the system. Game theory is relevant to fundamental studies of various aspects of collective behavior: altruism and selfishness, and cooperation and competition. It is relevant to our understanding of biological evolution, socioeconomic systems, and societies of electronic agents. At some point in increasing complexity of games and agents, the models become agent-based models directed at understanding specific systems.
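The prisoner's dilemma can be stated in a few lines. The payoff values below are the conventional textbook choices (T=5 > R=3 > P=1 > S=0), used here only as an illustration.

```python
# Prisoner's dilemma payoffs: (row player, column player) for
# actions C(ooperate) / D(efect), with conventional values
# T=5 > R=3 > P=1 > S=0.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def best_response(opponent_action):
    """A fully rational agent's best reply to a given opponent action."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)][0])

# Defection dominates: it is the best reply to either action...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection pays each player less than mutual
# cooperation: the clash between individual and collective rationality.
assert PAYOFF[("D", "D")][0] < PAYOFF[("C", "C")][0]
```

Relaxing the full-rationality assumptions (repeated play, limited knowledge, populations of strategies) is where such games shade into agent-based models of specific systems.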

Understanding Generic Architectures

The concept of a network as capturing aspects of the connectivity, accessibility, or relatedness of components in a complex system is widely recognized as important in understanding aspects of these systems — so much so that many names of complex systems include the term "network." Among the systems that have been identified thus are artificial and natural transportation networks (roads, railroads, waterways, airways) (Maritan et al. 1996; Banavar, Maritan, and Rinaldo 1999; Dodds and Rothman 2000), social networks (Wasserman and Faust 1994), military forces (INSS 1997), the Internet (Cheswick and Burch n.d.; Zegura, Calvert, and Donahoo 1997), the World Wide Web (Lawrence and Giles 1999; Huberman et al. 1998; Huberman and Lukose 1997), biochemical networks (Service 1999; Normile 1999; Weng, Bhalla, and Iyengar 1999), neural networks (Anderson and Rosenfeld 1988; Bishop 1995; Kandel, Schwartz, and Jessell 2000), and food webs (Williams and Martinez 2000). Networks are anchored by topological information about nodes and links, with additional information that can include nodal locations and state variables, link distances, capacities, and state variables, and possibly detailed local functional relationships involved in network behaviors.

In recent years, there has been significant interest in understanding the role played by the abstract topological structure of networks represented solely by nodes and links (Milgram 1967; Milgram 1992; Watts and Strogatz 1998; Barthélémy and Amaral 1999; Watts 1999; Latora and Marchiori 2001; Barabási and Albert 1999; Albert, Jeong, and Barabási 1999; Huberman and Adamic 1999; Albert, Jeong, and Barabási 2000; Jeong et al. 2001). This work has focused on understanding the possible relationships between classes of topological networks and their functional capacities. Among the classes of networks contrasted recently are locally connected, random, small-world (Milgram 1967, 1992; Watts and Strogatz 1998; Barthélémy and Amaral 1999; Watts 1999), and scale-free networks (Latora and Marchiori 2001; Barabási and Albert 1999; Albert, Jeong, and Barabási 1999; Huberman and Adamic 1999; Albert, Jeong, and Barabási 2000; Jeong et al. 2001). Other network architectures include regular lattices, trees, and hierarchically decomposable networks (Simon 1998). Among the issues of functional capacity are which networks are optimal by some measure, e.g., their efficiency in inducing connectivity, and the robustness or sensitivity of their properties to local or random failure or directed attack. The significance of these studies from an engineering perspective is in answering questions such as, What kind of organizational structure is needed to perform what function with what level of reliability? and What are the tradeoffs that are made in different network architectures? Determining the organizational structures and their tradeoffs is relevant to all scales and areas of the converging technologies: nanotechnology, biomedical, information, cognition, and social networks.
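The contrast between a locally connected network and a small-world network can be demonstrated with a short computation: start from a ring lattice and add a few random long-range shortcuts, then compare average shortest-path lengths via breadth-first search. The network size and shortcut count below are arbitrary illustration values.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs,
    by breadth-first search from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Locally connected ring: each node linked to k neighbors per side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d} for i in range(n)}

random.seed(3)
n = 60
local = ring_lattice(n)
small_world = {i: set(nbrs) for i, nbrs in local.items()}
for _ in range(10):                       # a few random long-range shortcuts
    a, b = random.sample(range(n), 2)
    small_world[a].add(b)
    small_world[b].add(a)
print(avg_path_length(local), ">", avg_path_length(small_world))
```

Adding edges can never lengthen a shortest path, and a handful of shortcuts typically collapses the average distance, which is the small-world effect; scale-free structure would additionally require a heavy-tailed degree distribution.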

Understanding (Recognizing) the Paradoxes of Complex Systems

The study of complex systems often reveals difficulties with concepts that are used in the study of simpler systems. Among these are conceptual paradoxes. Many of these paradoxes take the form of the coexistence of properties that, in simpler contexts, appear to be incompatible. In some cases it has been argued that there is a specific balance of properties, for example the "edge-of-chaos" concept suggests a specific balance of order and chaos. However, in complex systems, order and chaos often coexist, and this is only one example of the wealth of paradoxes that are present. A more complete list would include paired properties such as the following:

• Stable and adaptable
• Reliable and controllable
• Persistent and dynamic
• Deterministic and chaotic
• Random and predictable
• Cooperative and competitive
• Selfish and altruistic
• Logical and paradoxical
• Averaging and non-averaging
• Universal and unique
• Ordered and disordered

While these pairs describe paradoxes of properties, the most direct paradox in complex systems is the recognition that more than one "cause" can exist: "A causes B" and "C causes B" are not incompatible statements. The key to understanding paradox in complex systems is to broaden our ability to conceive of the diversity of possibilities, both for our understanding of science and for our ability to design engineered systems that serve specific functions and have distinct design tradeoffs that do not fit within conventional perspectives.

Developing Systematic Methodologies for the Study of Complex Systems

While there exists a conventional "scientific method," study of complex systems suggests that many more detailed aspects of scientific inquiry can be formalized. The existence of a unified understanding of patterns, description, and evolution as relevant to the study of complex systems suggests that we adopt a more systematic approach to scientific inquiry. Components of such a systematic approach would include experimental, theoretical, modeling, simulation, and analysis strategies. Among the aspects of a systematic strategy are the capture of quantitative descriptions of structure and dynamics, network analysis, dynamic response, information flow, multiscale decomposition, identification of modeling universality class, and refinement of modeling and simulations.
