NBIC and Improving Human-Computer Interfaces and Interactions

A key question is why existing interface architecture will not be appropriate for human-computer interaction in the future.

Existing interface architecture is still modeled on dated technology, the typewriter keyboard and the cursor-driven mouse, rather than on ease of human-computer interaction. The interface is the most pressing problem in HCI and its most critical component. It is the medium through which information is accessed, questions are posed, and solution paths are laid out and monitored. It is the tool with which the user manipulates and interacts with data. Interface architectures such as the desktop, filing cabinet, and digital world are still implemented via keyboards and mice. Today's interfaces are cursor dependent and contribute significantly to a digital divide that impedes 8 million sight-impaired and 82 million low-vision (potential) users from freely interacting with the dominant IT of this age.

Communicating involves transferring information; to do so requires compatibility between sender and receiver. The interface architecture that controls human-computer information exchange, according to Norman (1988), must

• facilitate the exchange of knowledge in the environment and knowledge in the head

• keep the interaction task simple

• ensure that operations are easy to do

• ensure correct transfer among information domains

• understand real and artificial constraints on interaction

• acknowledge existence of error and bias due to modal difficulties

• eventually standardize procedures

Thus, the interface must meet the needs of both the human user and the computer.

These needs raise the question of what cutting-edge hardware (e.g., rendering engines, motion tracking by head-mounted display units, gaze tracking, holographic images, avatars complete with gestures, and auditory, tactual, and kinesthetic interface devices) adds to information processing. Besides the emphasis on historic input devices (keyboard and mouse), there is a similar emphasis on a dated output device, the limited domain of the flat computer screen (inherited from the TV screen of the 1930s), which is suited primarily to visualization procedures for output representation. While there is little doubt that the visual senses are the most versatile mode for displaying geospatial data and data analysis (e.g., in graph, table, map, and image form), it can also be argued that multiple-modality interfaces could enrich the type, scale, and immediacy of displayed information. One of the most critical interface problems relates to the size and resolution of data displays. This will be of increasing importance as micro-scale mobile and wearable computers have to find alternatives to 2-inch-square LED displays for output presentation. The reasons for moving beyond visualization on flat screens are compelling. Examples include

• multimodal access to data and representations provides a cognitively and perceptually rich form of interaction

• multimodal input and output interfaces allow HC interaction when sight is not available (e.g., for blind or sight-impaired users) or when sight is an inappropriate medium (e.g., accessing onscreen computer information when driving a vehicle at high speeds)

• when the absence of light or low lighting precludes the use of sight

• when visual information needs to be augmented

• when a sense other than vision may be necessary (e.g., for recording and identifying bird calls in the field)

Nonvisual technology allows people with little or no sight to interact with computers (e.g., using sound, touch, and force feedback). Not only is there a need for text-to-speech conversion, but there is also a need to investigate the potential use of nonvisual modalities for accessing cursor-driven information displays, icons, graphs, tables, maps, images, photos, windows, menus, and other common data representations. Without such access, sight-impaired and low-vision populations are at an immense disadvantage, particularly when trying to access spatial data. This need is paramount today as home pages on the World Wide Web encapsulate so much important information in graphic format, and as digital libraries (including the Alexandria Digital Map and Image Library at the University of California, Santa Barbara) become the major repositories for multidimensional representations of spatial information.
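
As a small illustration of the text-to-speech need noted above, the sketch below uses the open-source pyttsx3 speech-synthesis library (an assumed choice; any speech engine would serve) to speak the description of an on-screen object aloud. The function name and message are hypothetical.

```python
import pyttsx3  # offline text-to-speech engine (assumed to be installed)

def speak(text, rate=150):
    """Read a screen element's description aloud for a nonvisual user."""
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # speaking rate in words per minute
    engine.say(text)
    engine.runAndWait()

# Example: announcing a map feature that a sighted user would see as an icon.
speak("Library entrance, 40 meters north of your current position.")
```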

In the near future, one can imagine a variety of new interfaces, some of which exist in part now but which need significant experimentation to evaluate human usability in different circumstances before being widely adopted. Examples of underutilized and underinvestigated technologies include the following:

• a force-feedback mouse that relies on building virtual walls around on-screen features, including windows, icons, objects, maps, diagrams, charts, and graphs. This pressure-sensitive mouse allows users to trace the shape of objects or features and uses the concept of a gravity well to slip inside a virtual wall (e.g., a building entrance) to explore the information contained therein (Jacobson et al. 2002); a minimal sketch of the gravity-well idea follows this list

• vibrotactile devices (mice) that allow sensing of different surfaces (dots, lines, gratings, and hachures) to explore flat, on-screen features (e.g., density-shaded maps and meteorological or isoline temperature maps) (O'Modhrain and Gillespie 1995; Jacobson et al. 2002)

• use of real, digitized, or virtual sounds including speech to identify on-screen phenomena (e.g., Loomis, Golledge, and Klatzky 2001)

• avatars to express emotions or give directions by gesturing or gazing

• smart clothing that can process nearby spatial information and provide information on nearby objects or give details of ambient temperature, humidity, pollution levels, UV levels, etc.
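
As promised above, here is a minimal sketch of the gravity-well idea from the first bullet. It is not the implementation described by Jacobson et al.; the function name, units, and simple spring model are illustrative assumptions about how a force-feedback mouse might pull the cursor toward a feature once it enters a capture radius.

```python
import math

def gravity_well_force(cursor, feature_center, capture_radius, stiffness=0.8):
    """Return an (fx, fy) force pulling the cursor toward an on-screen feature.

    Outside the capture radius no force is applied; inside it, a simple
    spring pull toward the feature's center draws the cursor "into" the
    well so the underlying information can be explored.
    """
    dx = feature_center[0] - cursor[0]
    dy = feature_center[1] - cursor[1]
    if math.hypot(dx, dy) > capture_radius:
        return (0.0, 0.0)
    return (stiffness * dx, stiffness * dy)

# Example: a cursor about 5 pixels from an icon centered at (100, 100).
print(gravity_well_force((95.0, 98.0), (100.0, 100.0), capture_radius=10.0))
```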

Currently, the use of abstract sound appears to have significant potential, although problems in spatially localizing sound remain a barrier to its immediate wider use. Combinations of sound and touch (NOMAD) and of sound and Braille lettering (GPS Talk) are examples of useful multimodal interfaces (e.g., Parkes and Dear 1990; Brabyn and Brabyn 1983; Sendero Group 2002). Some maps (e.g., isotherm and density-shaded maps) have proven amenable to sound painting, and researchers in several countries have been trying to equate sound and color. At present, much of the experimentation with multimodal interfaces is concentrated in the areas of video games and cartoon-like movies. Researchers such as Krygier (1994) and Golledge, Loomis, and Klatzky (1994) have argued that auditory maps may be more useful than tactual maps and may, in circumstances such as navigating in vision-obstructed environments, even prove more useful than visual maps because they do not require map-reading ability but rely on normal sensory experience to convey spatial information such as direction.
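
As a rough, hypothetical sketch of such sound painting, the fragment below maps a map attribute (e.g., temperature on an isotherm map) linearly onto pitch between two reference frequencies. The frequency range and the linear scale are illustrative assumptions, not values taken from the cited studies.

```python
def value_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Map a map attribute value onto an audio frequency in hertz.

    Values are scaled linearly between two reference pitches (here A3 to A5);
    clamping keeps out-of-range data audible at the ends of the scale.
    """
    t = (value - vmin) / (vmax - vmin)
    t = max(0.0, min(1.0, t))  # clamp to the legend's range
    return f_low + t * (f_high - f_low)

# Example: sweeping a cursor across three cells of a temperature map.
for temp in (5.0, 17.5, 30.0):
    print(f"{temp:5.1f} degC -> {value_to_pitch(temp, 0.0, 30.0):6.1f} Hz")
```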
