Connectionism

The Mind's Eye may be the apex of a collective hierarchy of parallel systems in which the cytoskeleton and related structures are the ground floor. Parallel systems in both computers and biological systems rely on lateral connections and networks to provide the richness and complexity required for sophisticated information processing. Computer simulations of parallel connected networks of relatively simple switches ("neural nets") develop "cognitive-like functions" at sufficient levels of connectedness and complexity, a "collective phenomenon" (Huberman and Hogg, 1985). Philosopher John Searle (Pagels, 1984), who has an understandable bias against the notion that computer systems can attain the equivalent of human consciousness, points out that computers can perform enormously complex tasks without appreciating the essence of their situation. Searle likens this to a person who speaks no Chinese sorting Chinese characters into specific categories without understanding their meaning; the computer, similarly, sorts information without comprehending its essence.

It would be difficult to prove that human beings comprehend the essence of anything. Nevertheless, even the simulation of cognitive-like events is interesting. Neural net models and connectionist networks (described further in Chapter 4) have been characterized mathematically by Caltech's John Hopfield (1982) and others. His work suggests that solutions to a problem can be understood in terms of minimizing an associated energy function, and that isolated errors or incomplete data can, within limits, be tolerated. Hopfield describes neural net energy functions as having contours like hills and valleys in a landscape. By minimizing energy functions, information (metaphorically) flows like rain falling on the landscape, forming streams and rivers until stable states ("lakes") occur. A new concept in connectionist neural net theory has emerged with the use of multilevel networks. Geoffrey Hinton (1985) of Carnegie-Mellon University and Terry Sejnowski of Johns Hopkins University have worked on allowing neural nets to find optimal solutions, like finding the lowest lake in an entire landscape. According to Sejnowski (Allman, 1986; Hinton, Sejnowski and Ackley, 1984), the trick is to avoid getting stuck in a tiny depression between two mountains:

Imagine you have a model of a landscape in a big box and you want to find a lowest point on the terrain. If you drop a marble into the box, it will roll around for a while and come to a stop. But it may not be the lowest point, so you shake the box. After enough shaking you usually find it.
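The marble rolling to rest in Sejnowski's landscape is the energy minimization Hopfield describes: each update of the network can only lower (or leave unchanged) an associated energy function until a stable state, a "lake," is reached. The following is a minimal sketch of that idea in a toy Hopfield-style network; the pattern, network size, and update schedule are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Sketch of a Hopfield-style network: states are +/-1 vectors, and each
# asynchronous update can only lower (or keep) the energy
# E(s) = -1/2 * s^T W s, so the state rolls "downhill" to a stable
# pattern -- a "lake" in the landscape metaphor.

rng = np.random.default_rng(0)

def store_pattern(patterns):
    """Hebbian outer-product weights for an array of +/-1 patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def energy(W, s):
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=20):
    """Asynchronous updates until the state settles."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern, then recover it from a corrupted copy
# ("isolated errors or incomplete data can, within limits, be tolerated").
pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
W = store_pattern(pattern)
noisy = pattern[0].copy()
noisy[:2] *= -1                        # flip two bits
print(energy(W, noisy), energy(W, recall(W, noisy)))  # energy decreases
```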


Figure 1.6: Ten-dimensional hypercube with 1,024 nodes and 10 connections per node. Computer generation by Conrad Schneiker.
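The hypercube of Figure 1.6 can be described by labeling each node with a 10-bit string and connecting any two strings that differ in exactly one bit. The short sketch below is purely illustrative; it simply confirms the node and connection counts stated in the caption.

```python
from itertools import product

# A 10-dimensional hypercube has 2**10 = 1,024 nodes (the binary strings
# of length 10); two nodes are connected when they differ in one bit,
# so every node has exactly 10 neighbors.

dim = 10
nodes = list(product((0, 1), repeat=dim))

def neighbors(node):
    return [node[:i] + (1 - node[i],) + node[i + 1:] for i in range(dim)]

print(len(nodes))                                   # 1024 nodes
print(len(neighbors(nodes[0])))                     # 10 connections per node
print(sum(len(neighbors(n)) for n in nodes) // 2)   # 5120 edges in total
```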

Hinton and Sejnowski have used this concept of mathematically shaking their neural net simulations to find optimal solutions. It requires a multilevel hierarchy of parallel systems so that one level can "shake," or tune, a lower level. Such an arrangement can perhaps explain the relationship between hierarchical layers of parallel systems within the brain. For example, neural networks based on synaptic connections may regulate (and be regulated by) smaller, faster, more comprehensive networks in the intracellular cytoskeleton.
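The "shaking" corresponds to what is now called simulated annealing: occasionally accepting uphill moves with a probability set by a temperature that is gradually lowered, so the system can escape shallow depressions before settling into a deep valley. A rough sketch on a one-dimensional toy landscape follows; the particular energy function and cooling schedule are illustrative assumptions, not Hinton and Sejnowski's.

```python
import math
import random

# Simulated annealing sketch on a "landscape" with a shallow local minimum
# near x = 2 and a deeper global minimum near x = -1. High temperature
# means vigorous shaking (uphill moves often accepted); cooling lets the
# marble settle into the deepest valley it has found.

def landscape(x):
    return 0.5 * (x + 1) ** 2 * (x - 2) ** 2 + 0.3 * x   # toy energy function

random.seed(1)
x = 2.0                                   # start in the shallow valley
temperature = 5.0
while temperature > 1e-3:
    candidate = x + random.uniform(-0.5, 0.5)
    delta = landscape(candidate) - landscape(x)
    # Always accept downhill moves; accept uphill moves with
    # probability exp(-delta / T) -- the "shake".
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999                  # gradual cooling schedule

print(round(x, 2), round(landscape(x), 2))  # typically settles near x = -1
```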

Extensive comparisons between information processing in the brain and artificial intelligence have been reviewed by A. M. DeCallatay (1986), who feels the laws of thought described in philosophy have been rediscovered by AI: "The mental world of Plato is reproduced in the physical symbols of Newell and Simon." DeCallatay observes that AI represents data by virtual pointers which connect symbols. In computers these virtual relations are actual wires with potential gate connections; in the brain they appear to be neuronal synaptic connections. Within neurons they may be cross-bridge filaments connecting cytoskeletal microtubules. As a computer expert evaluating the brain, DeCallatay states that the brain learns by opening gates to build new connections between simultaneously activated elements. He sees the presence or absence of dendritic spines playing the role of an "all or none" switch at the neural level.

Dendritic spines are knobby projections of membrane-covered cytoplasm on neuronal dendrites which are generated and maintained by the cytoskeleton and which form synapses with other neurons. The most widely accepted theory of learning and memory in the brain is the strengthening of specific synapses within neural circuits, an idea originated by Donald Hebb (1949). As will be described in Chapters 4 and 5, dynamic structural activities of the cytoskeleton are responsible for all cytoplasmic rearrangements, including the formation and regulation of dendritic spines and synapses. The spines are branchings of dendrites, which are themselves branchings of neurons; adding a further dimension of complexity, these cytoskeletal appendages are prime candidates for "synaptic plasticity," the cornerstone of prevalent models of brain learning and memory.
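Hebb's idea is often condensed into a simple rule: the connection between two units is strengthened in proportion to how often they are active together. A minimal sketch of that update follows; the learning rate, network size, and activity patterns are arbitrary illustrations, not drawn from Hebb's work.

```python
import numpy as np

# Hebbian learning sketch: "cells that fire together wire together."
# Each weight grows in proportion to the product of its two units'
# activities, strengthening synapses within co-active circuits.

learning_rate = 0.1
weights = np.zeros((4, 4))              # synaptic strengths among 4 units

activity_patterns = [
    np.array([1, 1, 0, 0]),             # units 0 and 1 co-active
    np.array([1, 1, 0, 0]),
    np.array([0, 0, 1, 1]),             # units 2 and 3 co-active
]

for a in activity_patterns:
    weights += learning_rate * np.outer(a, a)   # Hebbian update
    np.fill_diagonal(weights, 0.0)              # ignore self-connections

print(weights)   # strong 0-1 and 2-3 connections, none across the groups
```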

