NBIC and Improving Learning

What will NBIC allow us to achieve in the learning domain that we cannot achieve now? The effects of NBIC may include

• improved knowledge of brain functioning and capabilities

• new learning domains such as immersive virtual environments

• more widespread use of nonvisual experiences for solving spatial problems

• examining sensory substitution as a way to enhance learning

Let us briefly examine how these might occur.

Improving Knowledge of Brain Functioning and Capabilities: Place Cell Analysis

Advances in Magnetic Resonance Imaging (MRI) have given some promise for tracking which parts of the brain are used for which functions. Opinions differ regarding the value of this technology, but much of the negative criticism is directed towards identifying which parts of the brain appear to be used for emotions such as love or hate, or for aesthetic reactions to concepts of beauty, danger, and fear. Somewhat less controversy is present in the spatial domain, where the 25-year-old hypothesis of O'Keefe and Nadel (1978) that the hippocampus is one's "cognitive map" (or place where spatial information is stored) is being actively investigated. Neurobiologists may be able to determine which neurons "fire" (or are excited) when spatial information relating to objects and their locations is sensed and stored. If NBIC can develop reliable place cell analysis, the process of mapping the human brain could be transformed into examining the geography of the brain. To do this in a thorough manner, we need to know more about spatial cognition, including understanding spatial concepts, spatial relations, spatial thinking, and spatial reasoning.
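To make the idea of place cell analysis concrete, the sketch below estimates a "place field" from wholly synthetic data: an animal's recorded positions and a neuron's spike locations are binned into a spatial grid, and spike counts are normalized by occupancy. This is an illustrative simplification, not the procedure of any particular study; the trajectory, tuning curve, and grid size are all invented for the example.

```python
# Minimal sketch of place cell analysis: estimate a "place field" by
# binning positions and spike locations into a grid and normalizing
# spike counts by occupancy. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trajectory: 10,000 position samples in a 1 m x 1 m arena.
positions = rng.uniform(0.0, 1.0, size=(10_000, 2))

# Hypothetical place cell that fires preferentially near (0.3, 0.7).
field_center = np.array([0.3, 0.7])
dist = np.linalg.norm(positions - field_center, axis=1)
spike_prob = np.exp(-(dist / 0.1) ** 2)           # Gaussian tuning curve
spikes = rng.random(len(positions)) < spike_prob  # True where the cell fired

bins = 20
occupancy, xe, ye = np.histogram2d(positions[:, 0], positions[:, 1], bins=bins)
spike_counts, _, _ = np.histogram2d(
    positions[spikes, 0], positions[spikes, 1], bins=(xe, ye)
)

# Firing rate map: spikes per visit; bins never visited stay undefined.
rate_map = np.divide(
    spike_counts, occupancy,
    out=np.full_like(spike_counts, np.nan),
    where=occupancy > 0,
)

peak = np.unravel_index(np.nanargmax(rate_map), rate_map.shape)
print(f"Estimated place field center: bin {peak} (true center near (0.3, 0.7))")
```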

Within the domains of spatial thinking and reasoning, domains that span all scales of science and technology from the nano scale to the scale of the universe, there is enormous potential for improving our understanding of all facets of the spatial domain. Spatial thinking and reasoning are dominated by perceptualizations, which are the multisensory expansion of visualization. The major stages of information processing include the encoding of sensed experiences, the internal manipulation of sensed information in working memory, the decoding of manipulated information, and the use of the results in the decision-making and choice processes involved in problem-solving and spatial behavior. According to Golledge (2002), thinking and reasoning spatially involve

• Understanding the effects of scale

• Competently transforming perceptions and representations mentally among different geometric dimensions (e.g., mentally expanding 1-D traverses or profiles into 2-D or 3-D configurations, as in geological mapping, or reducing 3-D or 4-D static or dynamic observations to 2-D formats for purposes of simplification or generalization, as when creating graphs, maps, or images)

• Comprehending different frames of reference used for locating, estimating distance, determining density gradients, calculating direction and orientation, and for other referencing purposes (e.g., defining coordinates, vectors, rasters, grids, and topologies)

• Being capable of distinguishing spatial associations among point, line, area, and surface distributions or configurations

• Exercising the ability to perform spatial classification (e.g., regionalization; a minimal computational sketch follows this list)

• Discerning patterns in processes of change or spread (e.g., recognizing patterns in observations of the spatial spread of AIDS or city growth over time)

• Revealing the presence of spatial and nonspatial hierarchies
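As promised above, here is a minimal sketch of spatial classification (regionalization): synthetic point locations are grouped with a plain k-means procedure. Real regionalization methods typically also enforce contiguity and use attribute data, not just location; the coordinates and cluster count here are invented purely for illustration.

```python
# Illustrative sketch of spatial classification ("regionalization"):
# grouping point locations into regions with a plain k-means procedure.
import numpy as np

rng = np.random.default_rng(1)

# Three synthetic clusters of settlements on a plane.
centers = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])
points = np.vstack([c + rng.normal(0, 4, size=(50, 2)) for c in centers])

def kmeans(pts, k, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its members (keep it if empty).
        new = []
        for j in range(k):
            members = pts[labels == j]
            new.append(members.mean(axis=0) if len(members) else centroids[j])
        centroids = np.array(new)
    return labels, centroids

labels, centroids = kmeans(points, k=3)
for j, c in enumerate(centroids):
    print(f"Region {j}: {np.sum(labels == j)} points, centroid {c.round(1)}")
```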

Each of the above involves sensing of phenomena and cognitive processing to unpack embedded detail. It should also be obvious that these perceptual and cognitive processes have their equivalents in information technology (IT), particularly with respect to creating, managing, and analyzing datasets. While we are creating multiple terabytes of data each day from satellites, from Light Detection and Ranging (LIDAR), from cameras, and from visualizations, our technology for dealing with this data, particularly for dynamic updating and real-time analysis, lags somewhat, even in the most advanced systems currently available. Even in the case of the most efficient data collector and analyzer ever developed, the human mind, there is still a need to simplify, summarize, generalize, and represent information to make it legible. The activities required to undertake this knowledge acquisition process are called education, and the knowledge accumulation resulting from this exposure is called learning. Thus, if NBIC can empower spatial thinking and reasoning, it will promote learning and knowledge accumulation among individuals and societies, and the results will have an impact on the entire spatial domain. (Note: there is a National Research Council committee on spatial thinking whose report is due at the end of 2002.)
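The need to simplify and generalize spatial data has well-known computational counterparts. As one illustration (a standard algorithm, not one named in this chapter), the Douglas-Peucker procedure below simplifies a polyline, such as a digitized coastline or GPS track, by discarding points that lie within a tolerance of a straight-line approximation. The sample track is invented for the example.

```python
# Illustrative sketch of cartographic generalization: the classic
# Douglas-Peucker algorithm drops polyline points that deviate less
# than a tolerance from a chord between retained endpoints.
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Recursively keep only points farther than `tolerance` from the chord."""
    if len(points) < 3:
        return list(points)
    # Find the interior point farthest from the chord joining the endpoints.
    dists = [perpendicular_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tolerance:
        return [points[0], points[-1]]     # whole span is "straight enough"
    left = douglas_peucker(points[: i + 1], tolerance)
    right = douglas_peucker(points[i:], tolerance)
    return left[:-1] + right               # avoid duplicating the split point

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, tolerance=0.5))
```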

To summarize, spatial thinking is an important part of the process of acquiring knowledge. In particular, spatial knowledge, the product of spatial thinking and reasoning (i.e., of cognitive processes), can be characterized as follows:

• Spatial thinking and reasoning do not require perfect information because of the closure power of cognitive processes such as imaging, imagining, interpolating, generalizing, perceptual closure, gestalt integration, and learning (a simple computational analogue of interpolation is sketched after this list).

• Spatial metaphors are being used, particularly in IT-related database development and operation, but it is uncertain whether they are congruent with equivalent cognitive functioning.

• Spatial thinking has become an important component of IT. IT has focused on visualization as a dominant theme in information representation but has paid less attention to other sensory modalities for its input and output architectures; more emphasis needs to be given to sound, touch, smell, gaze, gesture, emotion, etc. (i.e., changing emphasis from visualizations to perceptualizations).
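As promised in the first item above, here is a crude computational analogue of the "interpolating" closure process: inverse-distance weighting (IDW) estimates a value at an unsampled location from nearby observations. The stations and values below are invented, and IDW is offered only as one simple stand-in for the cognitive process described, not as a model of it.

```python
# Illustrative sketch of spatial interpolation: inverse-distance
# weighting (IDW) fills a gap in imperfect spatial information by
# weighting nearby observations by 1 / distance**power.
import math

# Hypothetical observations: (x, y, value), e.g., rainfall at stations.
stations = [(0.0, 0.0, 12.0), (10.0, 0.0, 20.0), (5.0, 8.0, 16.0)]

def idw(x, y, samples, power=2.0):
    """Weighted average of samples; an exact hit returns its value directly."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v
        w = d ** -power
        num += w * v
        den += w
    return num / den

print(f"Estimated value at (4, 3): {idw(4.0, 3.0, stations):.2f}")
```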

New Learning Domains

One specific way that NBIC developments may promote learning is by enhancement of virtual systems. In geography and other spatial sciences, learning about places other than one's immediate environment is achieved by accessing secondary information, as in books, maps, images, and tables. In the future, one can conceive that all place knowledge could be learned by primary experience in immersive virtual environments. In fact, within 20 years, much geospatial knowledge could be taught in immersive virtual environment (VE) labs. This will require

• solution of the space sickness or motion sickness problems sometimes associated with immersion in VE

• quick and immediate access to huge volumes of data — as in terabytes of data on a chip — so that suitably real environments can be created

• adoption of the educational practice of "learning by doing"

• major new development of hardware and virtual reality language (VRL) software

• conviction of teachers that use of VE labs would be a natural consequence of the educational premise that humans learn to think and reason best in the spatial domain by directly experiencing environments

• investigation of which types of learning experiences are best facilitated by use of VE

Using More Nonvisual Methods

Because of the absence of geography in many school curricula in the United States, many people have severely restricted access to (and understanding of) representations of the environment (for example, maps and images) and more abstract concepts (including spatial concepts of hierarchy and association or adjacency displayed by maps, or data represented only in tables and graphs) that are fundamental in education and daily life. Representations of the geographic world (maps, charts, models, graphs, images, tables, and pictures) have the potential to provide a rich array of information about the modern world. Learning from spatialized representations provides insights into layout, association, adjacency, and other characteristics that are not provided by other learning modes. But electronic spatial representations (maps and images) are not accessible to many groups who lack sight, training, or experience with computerized visualizations, thus contributing to an ever-widening digital divide. With new technological developments, such as the evolution from textual interfaces to graphically based Windows environments, and the increasing tendency for website information to be restricted to those who can access visualizations and images, many people are being frustrated in their attempts to access necessary information, even information relevant to daily life, such as weather forecasts.

When viewing representations of the geographic world, such as a map on a computer screen, sight provides a gestalt-like view of information, allowing perception of the synoptic whole and, almost simultaneously, recognition and integration of its constituent parts. However, interacting with a natural environment is in fact a multimodal experience. Humans engage nearly all of their sensory modalities when traversing space. Jacobson, Rice, Golledge, and Hegarty (2002) summarize recent literature relating to nonvisual interfaces. They suggest that, in order to attend to some of this multisensory experience and to provide access to information for individuals with restricted senses, several research threads can be identified for exploring the presentation of information multimodally. For example, information in science and mathematics (such as formulae, equations, and graphs) has been presented through auditory display (e.g., hearing a sine wave) and through audio-guided keyboard input (Gardner et al. 1998; Stevens et al. 1997). Mynatt (1997) has developed a tonal interface that allows users without vision to access Windows-style graphical user interfaces. Multimodal interfaces are usually developed for specialist situations where external vision is not necessarily available, such as piloting and operating military aircraft (Cohen and Wenzel 1995; Cohen and Oviatt 1995; Rhyne and Wolf 1993).

Jacobson et al. also point out that abstract sound variables have been used successfully for the presentation of complex multivariate data. Parkes and Dear (1990) incorporated "sound painting" into their tactual-auditory information system (NOMAD) to identify gradients in slope, temperature, and rainfall. Yeung (1980) showed that seven chemistry variables could be presented through abstract sound, and reported a 90% correct classification rate prior to training and a 98% correct response rate after training. Lunney and Morrison (1981) have shown that sound graphs can convey scientific data to visually impaired students. Sound graphs have also been compared to equivalent tactual graphs; for example, Mansur et al. (1985) found comparable information communication capabilities between the two media, with the auditory displays having the added benefit of being easier to create and quicker to read. Recent research has represented graphs by combining sound and brailled images, with the mathematical formula for each graph presented verbally while a user reads the brailled shape. Researchers have investigated navigating the World Wide Web through audio (Albers 1996; Metois and Back 1996) and as a tool to access the structure of a document (Portigal and Carey 1994). Data sonification has been used to investigate the structure of multivariate and geometric data (Axen and Choi 1994; Axen and Choi 1996; Flowers et al. 1996), and auditory interfaces have been used in aircraft cockpits and to aid satellite ground control stations (Albers 1994; Ballas and Kieras 1996; Begault and Wenzel 1996). But while hardware and software developments have shown "proof of concept," there appear to be few successful implementations of the results for general use (except for some gaming contexts) and no conclusive behavioral experiments to evaluate the ability of the general public or specialty groups (e.g., the vision-impaired) to use these innovations to interpret on-screen maps, graphics, and images.
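To illustrate the sound graph idea in the simplest possible terms, the sketch below maps a data series to pitch, so that rising values are heard as rising tones, and writes the result to a WAV file using only the Python standard library. The frequency range, note duration, and data are arbitrary choices for illustration and are not drawn from the studies cited above.

```python
# Illustrative sketch of a "sound graph": each data value becomes a short
# tone whose pitch is proportional to the value, written as 16-bit mono WAV.
import math
import struct
import wave

SAMPLE_RATE = 22050
NOTE_SECONDS = 0.25

def sound_graph(values, filename="sound_graph.wav",
                low_hz=220.0, high_hz=880.0):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0                # guard against a constant series
    frames = bytearray()
    for v in values:
        # Linearly map the data value onto the chosen frequency range.
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)                  # mono
        f.setsampwidth(2)                  # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(bytes(frames))

# A rising-then-falling series becomes an audible up-down contour.
sound_graph([1, 2, 4, 7, 9, 8, 5, 3, 2])
```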

Thus, while Jacobson et al. (2002) have illustrated that multimodal interfaces have been explored within computer science and related disciplines (e.g., Delclos and Hartman 1993; Haga and Nishino 1995; Ladewski 1996; Mayer and Anderson 1992; Merlet et al. 1993; Morozov 1996; Phillips 1994; Stemler 1997; Hui et al. 1995; and others), and a number of researchers have looked at innovative interface media such as gesture, speech, sketching, and eye tracking (e.g., Ballas and Kieras 1996; Briffault and Denis 1996; Dufresne et al. 1995; Schomaker et al. 1995; Taylor et al. 1991), they also claim that only recently have such findings begun to have an impact upon technology for general education, a view shared by Hardwick et al. (1996; 1997).

In summary, extrapolating from this example, one can assume that developments in NBIC will impact the learning activities of many disciplines by providing new environments for experience, by providing dynamic real-time data to explore with innovative teaching methods, and (if biotechnology continues to unpack the secrets of the brain and how it stores information, as in place cell theory) by opening the possibility of direct human-computer interaction for learning purposes. Such developments could

• enhance the process of spatial learning by earlier development of the ability to reason abstractly or to more readily comprehend metric and nonmetric relations in simple and complex environments

• assist learning by discovering the biotechnological signatures of phenomena and the place cells where different kinds of information are stored, and in this way enhance the encoding and storage of sensed information

• where functional loss in the brain occurs (e.g., if loss of sight leaves parts of the brain relatively inactive), find ways to reallocate the cells normally devoted to sight to other sensory modalities, thus improving their functioning capabilities

Given the dominance of computer platforms for representing information and the overwhelming use of flat screens to display such information, there is reason to believe that multimodal representations may not be possible until alternatives to 2-D screen surfaces have been developed for everyday use. The reasons for moving beyond visualization on flat screens are compelling and are elaborated on later in this chapter.
