Enhancing Sensory and Cognitive Capabilities in the Spatial Domain

How can we exploit developments in NBIC to enhance perceptual and cognitive capabilities across the life span, and what will be the types of developments needed to achieve this goal?

To enhance sensory and cognitive capabilities, a functional change in the way we encode, store, decode, represent, and use information may be needed. Much of the effort in Information Technology has been directed towards developing ever-larger databases that can be used on ever-smaller computers. From satellites we receive terabytes of data (digitized records of the occurrence of phenomena), a volume that has perhaps outgrown our ability to examine it. As nanotechnology and IT come into congruence, the terabytes of data now stored in boxes will be stored on chips and made accessible in real time via wearable and mobile computers, and may even be fed into smart fabrics woven into the clothes we wear. But just how well can we absorb, access, or use this data? How much of it do we need to access? And how best can we access and use it? The question arises as to how we can exploit human perception and cognition to best help in this process; the answer is to learn more about these processes so that they can be enhanced. Examples of questions to be pursued include the following:

• How can we enhance the sensory and cognitive aspects of human wayfinding for use in navigating in cyberspace?

• What particular sensory and cognitive capabilities are used in the field, and how do we enhance them for more effective fieldwork with wearable and mobile computers (e.g., for disaster responses)?

• How do we solve problems of filtering information for purposes of representation and analysis (e.g., enhance visualizations)?

• How do we solve the problem of resolution, particularly on the tiny screens typical of wearable and field computers?

• What alternatives to visualization may be needed to promote ease of access, representation, and use of information?

• What is the best mode for data retrieval in field settings (e.g., how do we get the information we need now)?

• How can we build technology to handle real-time dynamic input from several sources, as is done by human sensory organs and the human brain?

• Will we need a totally new approach to computer design and interface architecture (e.g., abandon keyboards and mice) that will allow use of the full range of sensory and cognitive capabilities, such as audition, touch, gaze, and gesture (e.g., the use of Talking Signs® and Internet connections to access websites tied to specific locations)?

Visualization is the dominant form of human-IT interaction, partly because the visual sense is itself so dominant, particularly in the spatial domain; it is also the dominant mode for representing analyzed data (on-screen). But visualization is only a subset of spatialization, which goes beyond the visual domain by using everyday multimodal situations (from desktops and file cabinets to overlays and digital worlds) to organize and facilitate access to stored information. These situations establish links, by analogy and metaphor, between an information domain and familiar elements of everyday experience. Spatial (and specifically geographic) metaphors have been used as database organizing systems, as sketched below. But even everyday geospatial experiences are biased, and to enhance our sensory and cognitive abilities we need to recognize those biases and mediate them if everyday knowledge and experience (including natural language) are to be used successfully to increase human-IT interaction.
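To make the idea of a spatial metaphor as a database organizing system concrete, consider the following minimal Python sketch. All names, coordinates, and the neighbors helper are hypothetical illustrations, not an existing system: documents are pinned to points in a two-dimensional information "landscape" and retrieved by spatial proximity rather than by explicit query.

    from math import dist

    # Hypothetical information landscape: each document is pinned to a 2-D
    # point, with nearby points standing for semantically related material.
    landscape = {
        "field_report_2001": (0.2, 0.8),
        "satellite_imagery": (0.3, 0.7),
        "census_tables": (0.9, 0.1),
        "street_network": (0.8, 0.2),
    }

    def neighbors(query_point, k=2):
        """Return the k documents closest to a point in the landscape."""
        ranked = sorted(landscape,
                        key=lambda name: dist(landscape[name], query_point))
        return ranked[:k]

    # "Browsing" near the imagery cluster surfaces related field material.
    print(neighbors((0.28, 0.72)))  # ['satellite_imagery', 'field_report_2001']

Browsing near one cluster surfaces related material, much as nearby papers on a desktop suggest related work; this is the organizing role that the spatial metaphor plays.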

The main problem arising from these usages is simply that an assumption of general geospatial awareness is false. Basic geographic knowledge (at least in the United States) is minimal, and knowledge of even rudimentary spatial concepts like distance, orientation, adjacency, and hierarchy is flawed. Recent research in spatial cognition has revealed a series of biases that permeate naive spatial thinking. These biases arise partly from cognitive filtering of sensed information and partly from inevitable technical errors in data capture and representation. Golledge (2002) has suggested that they include the following:

• conceptual bias due to improper thinking and reasoning (e.g., applying metric principles to nonmetric situations)

• perceptual biases, including misunderstandings and misconceptions of notions of symmetry, alignment, clustering, classification, closure, and so on (e.g., assuming Miami, Florida, MUST be east of Santiago, Chile, because Miami is on the east coast of North America and Santiago is on the west coast of South America, when in fact Santiago lies east of Miami) (Fig. B.1)

• violating topological features of inclusion and exclusion when grouping (spatial) data

• assuming distance asymmetry when distance symmetry actually exists, and vice versa (e.g., different perceptions of trips to and from work)

• inappropriate use of cognitive concepts of rotation and alignment (e.g., misreading map orientation)

• cognitively overestimating shorter distances and underestimating longer distances (Stevens' Law, a regression towards the mean; see the numerical sketch after this list)

• distortions in externalized spatial products (e.g., distorted cognitive maps) (Liben 1982; Fig. B.2)

• bias that results from using imprecise natural language (e.g., fuzzy spatial prepositions like "near" and "behind" that are perspective dependent) (Landau and Jackendoff 1993).
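The distance-estimation bias noted above can be illustrated numerically with Stevens' power law, perceived = k × actual^n with n < 1. The exponent and reference distance in this Python sketch are illustrative choices, not empirical values:

    def perceived_distance(actual_km, n=0.8, reference_km=10.0):
        """Power-law estimate, calibrated to be exact at the reference distance."""
        return reference_km * (actual_km / reference_km) ** n

    for actual in (1.0, 5.0, 10.0, 50.0, 100.0):
        est = perceived_distance(actual)
        print(f"actual {actual:6.1f} km -> perceived {est:6.1f} km")

With these illustrative parameters, a 1 km distance reads as roughly 1.6 km while a 100 km distance reads as roughly 63 km; estimates regress toward the reference distance, which is the pattern the bullet above describes.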
