Parallelism

Previous generations of computer architecture have been based on the von Neumann concept of sequential, serial processing. In serial processing, computing steps are performed consecutively, which is time consuming, and one false bit of information can cascade to chaotic output. The brain, with its highly parallel nerve tracts, shines as a possible alternative. In parallel computing, information enters a large number of computer pathways which process the data simultaneously. In parallel computers, information processors may be independent of each other and proceed at individual tempos; separate processors, or groups of processors, can address different aspects of a given problem asynchronously. As an example, Reeke and Edelman (1984) have described a computer model of a parallel pair of recognition automata which use complementary features (Chapter 4). Parallel processing requires reconciliation of multiple outputs, which may differ because individual processors are biased differently than their counterparts, perform different functions, or commit random errors. Voting or reconciliation must occur by lateral connection, which may also function as associative memory. Output from a parallel array is a collective effect of the input and processing: generally a consensus which depends on multiple features of the original data input and on how it is processed. Parallel and laterally connected tracts of nerve fibers inspired AI researchers to appreciate and embrace parallelism. Cytoskeletal networks within nerve cells are highly parallel and interconnected, a thousand times smaller in scale, and comprise millions to billions of cytoskeletal subunits per nerve cell!
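To make the contrast concrete, the following minimal Python sketch (with illustrative stage functions, not drawn from the text) contrasts a serial pipeline, in which one flipped bit corrupts the final output, with a parallel array whose majority vote absorbs a single faulty processor:

    def serial_pipeline(x: int) -> int:
        """Each stage feeds the next; an error in any stage cascades to the end."""
        for stage in (lambda v: v + 1, lambda v: v * 2, lambda v: v - 3):
            x = stage(x)
        return x

    def parallel_vote(x: int, processors) -> int:
        """Independent processors compute the same result; the majority wins."""
        results = [p(x) for p in processors]
        return max(set(results), key=results.count)

    correct = lambda v: (v + 1) * 2 - 3
    faulty = lambda v: ((v + 1) ^ 1) * 2 - 3      # one flipped bit mid-computation

    print(serial_pipeline(5))                             # 9, but only if no stage fails
    print(parallel_vote(5, [correct, faulty, correct]))   # 9: the faulty track is outvoted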

Present day evolution of computers toward parallelism has engendered the "Connection Machine" (Thinking Machines Corporation), a parallel assembly of 65,536 microprocessors. Early computer scientists would have been impressed by an assembly of 65,536 switches without realizing that each one was a microprocessor. Similarly, present day cognitive scientists are impressed by the billions of neurons within each human brain without considering that each neuron is itself complex.

Another stage of computer evolution appears as multidimensional network parallelism, or "hypercubes." Hypercubes are processor networks whose interconnection topology forms an "n-dimensional" cube: the "vertices" or "nodes" are the processors and the "edges" are the interconnections, so an n-dimensional hypercube links 2ⁿ processors with n connections each. Parallelism in "n dimensions" leads to hypercubes which can maximize available computing potential and, with optimal programming, lead to collective effects. The complex interconnectedness observed among brain neurons and among cytoskeletal structures may be more accurately described as hypercube architecture than as simple parallelism. Hypercubes are exemplified in Figures 1.4, 1.5, and 1.6.
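For illustration, a short Python sketch of hypercube topology: labeling each node with an n-bit binary number, its neighbors are exactly the n nodes whose labels differ by one bit, which reproduces the 64 nodes and 6 connections per node of the six-dimensional cube in Figure 1.4:

    def hypercube_neighbors(node: int, n: int) -> list[int]:
        """Return the n neighbors of `node` in an n-dimensional hypercube."""
        return [node ^ (1 << bit) for bit in range(n)]

    n = 6                                   # the six-dimensional cube of Figure 1.4
    print(f"{n}-cube: {2**n} nodes, {n} connections per node")
    print("neighbors of node 0:", hypercube_neighbors(0, n))
    # 6-cube: 64 nodes, 6 connections per node
    # neighbors of node 0: [1, 2, 4, 8, 16, 32]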

AI/Roboticist Hans Moravec (1986) of Carnegie-Mellon University has attempted to calculate the "computing power" of a computer, and of the human brain. Considering the number of "next states" available per unit time in binary digits, or bits, Moravec arrives at the following conclusions. A microcomputer has a capacity of about 10⁶ bits per second. Moravec calculates the brain's "computing" power by assuming 40 billion neurons which can change states hundreds of times per second, resulting in 40 × 10¹¹ bits per second. Including the cytoskeleton increases the potential capacity for information processing immensely. Microtubules are the most visible cytoskeletal structures. Making some rough assumptions about cytoskeletal density (i.e. microtubules spaced about 1000 nanometers apart) and the volume of brain which is neuronal cytoplasm leads to about 10¹⁴ microtubule subunits in a human brain (ignoring neurofilaments and other cytoskeletal elements). As described in Chapters 5 and 6, the frequency of cytoskeletal subunit state changes may be greater than billions per second! The cytoskeleton is capable not only of immense information capacity, but appears to be designed such that interacting conformational state patterns may perform computing functions. Several theories proposing such mechanisms will be described in Chapter 8.
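The arithmetic behind these estimates is easy to check; the sketch below simply multiplies out the order-of-magnitude assumptions quoted above, which are rough figures rather than measurements:

    # Order-of-magnitude capacity estimates, multiplying out the
    # assumptions quoted in the text; none of these are measurements.
    NEURONS = 40e9            # assumed neurons per human brain
    NEURON_HZ = 100           # state changes per neuron per second ("hundreds")
    print(f"neuronal:     {NEURONS * NEURON_HZ:.0e} bits/s")    # 4e+12 = 40 x 10^11

    MT_SUBUNITS = 1e14        # assumed microtubule subunits per brain
    MT_HZ = 1e9               # conformational changes per subunit per second
    print(f"cytoskeletal: {MT_SUBUNITS * MT_HZ:.0e} bits/s")    # 1e+23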

Figure 1.4: Six dimensional hypercube with 64 nodes, and 6 connections per node. Computer generation by Conrad Schneiker.

The brain is a continuous system. Classical computers have operated on recursive, repetitive functions, processing information in batches and delivering output as a final product. Similarly, most parallel processing designs have discrete input and output points. Carl Hewitt (1985) has described open systems within computers in which processing may never halt, which can provide output while computation is still in progress, and which can accept input from sources not anticipated when the computation began. Like the human brain/mind, open continuous systems can interact with the environment and adapt to new situations. Hewitt describes an asynchronous parallel computer system with continuous multiple inputs and outputs, whose parallel elements are connected by "arbiters" which "weigh" and reconcile differing content. Among brain neurons, "arbiters" would appear to be synaptic connections among laterally connected parallel neurons. Within the cytoskeleton, laterally connecting filaments and microtubule associated proteins ("MAPs") could serve as logical arbiters.
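A minimal sketch of this arrangement, assuming three hypothetical worker processors with differing biases and a simple majority vote standing in for Hewitt's arbiters (his Actor formalism is far richer than this):

    import collections
    import queue
    import threading

    def worker(bias, inbox, outbox):
        """Process inputs at an individual tempo against a processor-specific bias."""
        while True:
            x = inbox.get()
            if x is None:                    # sentinel: this worker's input has ended
                break
            outbox.put("high" if x > bias else "low")

    biases = [0.4, 0.5, 0.6]                 # each processor is biased differently
    inboxes = [queue.Queue() for _ in biases]
    outbox = queue.Queue()
    threads = [threading.Thread(target=worker, args=(b, q, outbox))
               for b, q in zip(biases, inboxes)]
    for t in threads:
        t.start()

    # Input keeps arriving and consensus output keeps emerging while the
    # workers run; the "arbiter" here is a simple majority vote.
    for x in [0.45, 0.55, 0.2, 0.9]:
        for q in inboxes:
            q.put(x)
        votes = collections.Counter(outbox.get() for _ in biases)
        print(f"input {x}: consensus '{votes.most_common(1)[0][0]}' from {dict(votes)}")

    for q in inboxes:                        # shut the workers down
        q.put(None)
    for t in threads:
        t.join()

Because each worker drains its own queue at its own pace, input and output remain continuous rather than batched, loosely mirroring the open-system behavior described above.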

Figure 1.5: Eight dimensional hypercube with 256 nodes, and 8 connections per node. Computer generation by Conrad Schneiker.

Hewitt argues that parallel, open systems are "non-hierarchical" because input and output are continuously processed throughout the system. Early views of brain/mind organization assumed a hierarchical arrangement of processing units: sensory input was thought to be processed and relayed to higher and higher levels of cognition until reaching a single "Grandfather neuron" or "Mind's Eye" which comprehended the input's "essence." Classical brain research by Lashley (1929, 1950) and others (Chapter 4) strongly suggests that memory and information are distributed throughout the brain and that specific anatomical hierarchies leading to "Grandfather neurons" do not exist. The "Mind's Eye" is not localized to a given site but is mobile over wide volumes of brain. Assuming that humans actually do comprehend the essence of at least some things, who or what is doing the comprehending? The site and nature of attention, "self," consciousness, or the Mind's Eye remains a philosophical issue and a barrier to Mind/Tech merger. Neuroanatomical structure and the distributed storage of brain information point toward highly parallel, open brain/mind computing systems which may operate both at the neural level and within neurons in the cytoskeleton. The perception component of consciousness, the "Mind's Eye," may be a mobile hierarchy determined by collective dynamics.
