The Outside-Inside Framework and Future Imaginings

The Outside-Inside framework consists of four categories of human performance-enhancing technologies:

• Outside the body and environmental

• Outside the body and personal

• Inside the body and temporary

• Inside the body and permanent

In this section, while briefly describing the categories and subcategories, some extremely speculative visions of the future will be discussed to help stretch our imaginations before "coming back to earth" in the last section to discuss more practical and near-term possibilities. Readers are encouraged to view this section as a series of imagination challenges and to create their own answers to questions such as what new materials, agents, places, mediators, ingestibles, senses, and species might come to be in the next few decades. In the true spirit of brainstorming, anything goes in this section. It is also worth noting that while futurists may be overestimating the desirability and feasibility of many of their visions, and how quickly, if ever, we can achieve them, we are probably collectively underestimating the impact of many of the smaller technological steps along the way. Finally, as an example of improving human performance, the task of learning will be considered, focusing on the ways existing and imagined technologies may improve our ability to learn and/or perform more intelligently.

Outside the Body and Environmental

People perform tasks in a variety of environmental contexts or places, such as homes, offices, farms, factories, hotels, banks, schools, churches, restaurants, amusement parks, cars, submarines, aircraft, space stations, and a host of other environments that have been augmented by what are termed here environmental technologies. From the materials used to construct the buildings and artifacts at these locations, to the agents (people, domesticated animals) that provide services there, to the very nature of the places themselves, environmental technologies account for most of the advances in human performance over the past five hundred generations of recorded history (most of us overlap with, and therefore experience, only about five generations of perspectives, from grandparents to grandchildren). For the task of learning, consider the important roles that three innovations, paper (a material), teachers (agents), and schools (places), have played in education. NBICS convergence will surely lead to new materials, new agents, and new places.

Outside the body and environmental: Materials. We expect that the progression from rocks and wood through bricks, cloth, ceramics, glass, bronze, iron, cement, paper, steel, rubber, and plastic to semiconductors will be augmented with new smart materials, such as the chromatically active (color-changing) and polymorphic (shape-changing) materials NASA is already experimenting with. For a thought-provoking vision of where new materials could lead, the reader is directed to the currently infeasible but intriguing notion of "utility fog" developed by Rutgers computer science professor J. Storrs Hall in the early 1990s. Smaller than dust, "foglets" are speculative tiny interlocking machines that can run "programs" enabling collections of billions of them to work together to assume any shape, color, and texture, from flowing, clear water to fancy seat-belt body suits that appear only when an accident occurs. If utility fog were a reality, most artifacts could be made invisible until needed, making them quite portable. There would be no need to carry luggage on trips; one could simply create clothes out of utility fog. Materializing objects out of thin air (or fog), while wildly infeasible today, nevertheless provides an interesting springboard for imagining some of the ultimate human-computer interfaces (such as a second skin covering human bodies, eyes, ears, mouth, nose, and skin) that may someday exist. Perhaps these ultimate interfaces might connect us to telerobotic versions of ourselves assembled out of utility fog in distant places.

There are many reasons to be skeptical about utility fog (the energy budget, for one), but notions like utility fog help us understand the potential of NBICS. For example, multicellular organisms provide a vast library of examples of the ways cells can be interlinked and grouped to produce shapes, textures, and macroscopic mechanical structures. Social insects like ants have been observed to interlink their bodies to solve problems in their environments. And while I am unaware of any airborne bacteria that can spontaneously cluster into large groups, I suspect that the mechanisms bacteria and slime molds use for connecting in various arrangements may one day allow us to create new kinds of smart materials. Hopefully the notion of utility fog has served its brainstorming purpose of imagination stretching, and there are a number of related but nearer-term investigations underway. For example, U.C. Berkeley professor and microroboticist Kris Pister's Smart Dust and Micromechanical Flying Insect projects are good examples of the state of the art in building microrobots, and as these microrobots get smaller, they may well pave the way to many exciting new materials.

Outside the body and environmental: Agents. Interacting with intelligent agents, such as other people and other species (e.g., guide dogs), has clear advantages for augmenting human performance. Some of the most important agents we interact with daily are role-specialized people and businesses (the organization as agent). The legal process of incorporating a business or nonprofit organization is essentially equivalent to setting up a fictitious person with specialized rights, responsibilities, and capabilities. The notion of new agents was an active area of discussion among the workshop participants, from the implications of digital personae (assumed identities online) to artificial intelligence and robotics, as well as the evolution of new types of organizations. The successful entrepreneur and futurist Ray Kurzweil maintains a website, kurzweilai.net (see Top KurzweilAI News of 2001), that explores these and other futures and, interestingly, includes Kurzweil's alter ego, Ramona, who has been interviewed by the press to obtain Kurzweil's views on a variety of subjects. Undoubtedly, as technology evolves, more of this digital cloning of aspects of human interactions will occur. An army of trusted agents that can interact on our behalf has the potential to be very empowering, but also quite difficult to update and keep in sync with the real you. What happens when a learning agent that is an extension of you becomes more knowledgeable about a subject than you are? This is the kind of dilemma many parents and professors have already faced.

Outside the body and environmental: Places. New places create new opportunities for people. The exploration of the physical world (trade routes connecting ancient civilizations, the New World, the Wild West, Antarctica, the oceans, the moon, etc.) and the discovery of new places allow new types of human activities, and some previously constrained activities, to flourish. For example, the New World enhanced the Puritans' ability to create the kind of communities they wanted for themselves and their children. Moving beyond the physical world, science fiction writer William Gibson coined the term cyberspace. The free-thinking artist and futurist Jaron Lanier, who coined the term virtual reality, and many other people have worked to transform the science fiction notion of cyberspace into working virtual reality technologies. Undoubtedly, the digital world will be a place of many possibilities and affordances that can enhance human performance on a wide variety of tasks, including both old, constrained activities and new ones. The increasing demand for home game machines, and for the combinatorial design tools engineers use to explore design possibilities, is driving rapid advances in the state of the art for creating simulated worlds and places. Furthermore, in the context of learning, inventor and researcher Warren Robinett, who was one of the workshop participants, co-created a project that allows learners to "feel" interactions with simulated molecules and other nanostructures via virtual realities with haptic interfaces. In addition, Brandeis University professor Jordan Pollack, who was also one of the workshop participants, described his team's work in the area of combinatorial design for robot evolution, using new places (simulated worlds) to evolve new agents (robots) and then semi-automatically manifest them as real robots in the real world. Also, it is worth noting that in simulated worlds, new materials, such as utility fog, become much easier to implement, or, more accurately, at least to emulate.
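The evolutionary loop behind this kind of combinatorial design can be sketched in miniature. The toy below is not Pollack's actual system: here an "agent" is just a list of three numbers, the "simulated world" is a single fitness function, and the target design values are invented purely for illustration.

```python
import random

# Toy sketch of evolutionary design in a simulated "place".
# An agent is a list of three design parameters; TARGET is an invented
# ideal design used only for this example.

random.seed(0)
TARGET = [0.2, 0.8, 0.5]

def fitness(agent):
    # Higher is better: negative squared distance to the target design.
    return -sum((a - t) ** 2 for a, t in zip(agent, TARGET))

def mutate(agent, rate=0.1):
    # Variation: jitter each design parameter slightly.
    return [a + random.uniform(-rate, rate) for a in agent]

# Start with 20 random designs, then repeat selection and variation.
population = [[random.random() for _ in range(3)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(15)]
    population = survivors + offspring               # elitism + variation

best = max(population, key=fitness)
print("best design:", [round(p, 2) for p in best])
```

Real evolutionary robotics replaces the fitness function with a physics simulation and the parameter vectors with encodings of robot bodies and controllers, but the select-mutate-repeat loop is the same.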

Outside the Body and Personal

The second major category, personal technologies, comprises technologies that are outside the body but, unlike environmental technologies, are typically carried or worn by a person so as to be constantly available. Two of the earliest examples of personal technologies were, of course, clothing and jewelry, which both arose thousands of generations ago. For hunter-gatherers, as well as cowboys in the American West, weapons were another form of early personal technology. Also included in this category are money, credit cards, eyeglasses, watches, pens, cell phones, handheld game machines, and PDAs (personal digital assistants). For learners, a number of portable computing and communication devices are available, such as LeapFrog devices, which allow students to prepare for quizzes on chapters from their school textbooks, and graphing calculators from Texas Instruments. Recently, a number of wearable biometric devices have also appeared on the market.

Outside the body and personal: Mediators. Mediators are personal technologies, including cell phones, PDAs, and handheld game machines, that connect their users to people, information, and organizations and support a wide range of interactions that enhance human performance. WorldBoard is a vision of an information infrastructure and companion mediator devices for associating information with places. WorldBoard, as originally conceived in my papers in the mid-1990s, can be thought of either as a planetary augmented reality system or as a sensory augment that would allow people to perceive information objects associated with locations (e.g., virtual signs and billboards). For example, on a nature walk in a national park, a person could use either heads-up display glasses or a cell phone equipped with a display, camera, and GPS (Global Positioning System) to see the names of mountains, trees, and buildings virtually spliced into the scenes displayed on the glasses or cell phone. WorldBoard mediators might even provide a pseudo X-ray vision, allowing construction equipment operators to see below the surface to locate buried pipes and cables, rather than consulting blueprints that might not be available or might be cumbersome to properly orient and align with reality. The slogan of WorldBoard is "putting information in its place," a first step toward contextualizing, and making useful, the mountains of data being created by the modern-day information explosion.

Human-made tools and artifacts are termed mediators in this paper because they help externalize knowledge in the environment and mediate the communication of information between people. Two final points are worth making before moving inside the body. First, the author and cognitive scientist Don Norman, in his book Things That Make Us Smart, provides an excellent, in-depth discussion of the way human-made tools and artifacts augment human performance and intelligence. Furthermore, Norman's website includes a useful article on the seeming inevitability of implants, and indeed cyborgs, in our future, and on why implants will become increasingly accepted over time for a wider and wider range of uses. Second, in the context of mediators, human performance could be significantly enhanced if people had more willpower to achieve the goals they set for themselves. Willpower enforcers can be achieved in many ways, ranging from the help of other people (e.g., mothers for children) to mediator devices that remove intentionality from the equation and allow multitasking (e.g., FastAbs electric stimulation workout devices).

Inside the Body and Temporary

The third major category, inside the body temporary technologies, includes most medicines (pills) as well as newer devices such as a camera that can be swallowed to transmit pictures of its journey through a person's intestines. A number of basic natural human processes align with this category, including inhaling and exhaling air; ingesting food and excreting waste; infections that are eventually overcome by the body's immune system; and altered states of awareness such as sleep, reproduction, pregnancy, and childbirth.

Inside the body and temporary: Ingestibles. Researchers at Lawrence Livermore National Laboratory have used mass spectrometry equipment to study how the metabolisms of different people vary in their uptake of certain chemical components in various parts of the body. Eventually, this line of investigation may allow precise calibration of the amount of a drug an individual should take to achieve optimal benefit from ingesting it. For example, a number of studies show positive effects of mild stimulants, such as coffee, used by subjects who were studying material to be learned, as well as positive effects from being in the appropriate mental and physical states when performing particular tasks. Equally clear from the data in these studies, however, are indications that too much or too little of a good thing can result in no enhancement, or in detrimental side effects, rather than enhanced performance.

With the exception of an Air Force 2025 study done by the Air University, I have not yet found a reference (besides jokes, science fiction plots, and graduate school quiz questions) to what I suspect is someone's ultimate vision of this ingestible enhancements subcategory, namely a learning pill or knowledge pill. Imagine that some day we are able to decode how different brains store information, so that one can simply take a custom-designed learning pill before going to sleep at night to induce specific learning dreams, and when morning arrives the person's wetware will have been conditioned, or primed, with memories of the new information. Staggeringly improbable, I know.

Nevertheless, what if someone could take a pill before falling asleep at night and awaken in the morning knowing, or being conditioned to more rapidly learn, how to play, for example, a game like chess? If learning could be accelerated in this manner, every night before going to bed people would have a "learning nightcap." Imagine an industry developing around this new learning pill technology. At first the process might require someone to actually learn something new while the specific neurological changes resulting from the learning experience were monitored and measured; that information would then be re-encoded in molecular machines, custom-designed for an individual, that would attach themselves to locations in the brain and interact with it to create dream-like patterns of activation that induce time-released learning. Businesses might then assign learning pills to their employees, schools might assign learning pills to their students, soldiers might take learning pills before being sent out on missions (per the Air Force 2025 study, which mentioned a "selective knowledge pill"), and families might all take learning pills before heading out on vacations. However, perhaps like steroids, unanticipated side effects could cause more than the intended changes.

What makes the learning pill scenario seem so far-fetched and improbable? Well, first of all, we do not understand much about the way specific bits of information are encoded in the brain. For example, what changes occur in my brain (in short-term and then long-term memory) when I learn that there is a kind of oak tree, called a live oak, that does not lose its leaves in the winter? Second, we do not know how to monitor the process of encoding information in the brain. Third, different people probably have idiosyncratic variations in the ways their brains encode information, so that one person's encoding of an event or skill is probably considerably different from another person's. How, then, would the sharing work, even if we did know how the information was encoded in one person's brain? Fourth, how do we design so many different molecular machines, and what is the process of interaction for time-released learning? Fifth, exactly how do the molecular machines attach to the right parts of the brain? And how are they powered? We could go on and on, convincing ourselves that this fantasy is about as improbable as any that could possibly be conceived. Nevertheless, imagination-stretching warmups like these are useful for identifying subproblems that may have nearer-term partial solutions with significant impacts of their own.

Inside the Body and Permanent

The fourth major category, inside the body permanent technologies, raises the human dignity flag for many people, as negative images of cyborgs from various science fiction fare leap immediately to mind. The science fact and e-life writer Chris O'Malley recently wrote a short overview of this area. Excerpts follow:

Largely lost in the effort to downsize our digital hardware is the fact that every step forward brings us closer to an era in which computers will routinely reside within us. Fantasy? Hardly. We already implant electronics into the human body. But today's pacemakers, cochlear implants, and the like will seem crude — not to mention huge — in the coming years. And these few instances of electronic intervention will multiply dramatically... The most pervasive, if least exciting, use of inner-body computing is likely to be for monitoring our vital stats (heart rate, blood pressure, and so on) and communicating the same, wirelessly, to a home healthcare station, physician's office, or hospital. But with its ability to warn of imminent heart attacks or maybe even detect early-stage cancers, onboard monitoring will make up in saved lives what it lacks in sex appeal... More sensational will be the use of internal computers to remedy deficiencies of the senses. Blindness will, it seems reasonable to speculate, be cured through the use of electronic sensors — a technology that's already been developed. So, too, will deafness. Someday, computers may be able to mimic precisely the signal that our muscles send to our brain and vice versa, giving new mobility to paralysis victims. Indeed, tiny computers near or inside our central processing unit, the human brain, could prove a cure for conditions such as Alzheimer's, depression, schizophrenia, and mental retardation... Ethical dilemmas will follow, as always...

Inside the body and permanent: New organs (senses and effectors). This subcategory includes replacement organs, such as cochlear implants, retinal implants, and pacemakers, as well as entirely new senses. People come equipped with at least five basic senses: sight, hearing, touch, taste, and smell. Imagine if we were all blind but had the other four senses. We would design a world optimized for our sightless species, and probably do quite well. If we asked members of that species to design a new sense, what might they suggest? How would they even begin to describe vision and sight? Perhaps they might describe a new sense in terms of echolocation, like that of some species of bats: a sense providing a real-time, multipoint model of surrounding space in the individual's brain, which could plausibly help prevent tripping over things in hostile environments.

In our own case, because of the information explosion our species has created, I suggest that the most valuable sixth sense would be one that allowed us to quickly understand, in one big sensory gulp, vast quantities of written information (or even better, information encoded in other people's neural nets). The author Robert Lucky has estimated that all our senses together give us only about 50 bits per second of information, in the Shannon sense. A new high-bandwidth sense might be called a Giant UpLoad Process, or GULP, sense. Imagine a sixth sense that would allow us to take a book and gulp it down, so that the information in the book was suddenly part of our wetware, ready for inferencing, reference, etc., with some residual sense of the whole as part of the sensory gulp experience. Just as some AI programs load ontologies and rules, the GULP sense would allow for rapid knowledge uptake. A GULP sense would have a result not unlike that of the imaginary learning pill above. What makes the information-gulping sixth sense and the learning pill seem so fantastic has to do in part with how difficult it is to transform information encoded in one format for one set of processes into information encoded in another format for a different set of processes, especially when one of those formats is an idiosyncratic human encoding of information in our brains. Perhaps the closest analogy today to the complexity of such transformations is the ongoing transformation of businesses into e-businesses, which requires linking idiosyncratic legacy systems in one company to state-of-the-art information systems in another.
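Lucky's 50-bits-per-second figure makes it easy to see why a GULP sense would be such a leap. A back-of-envelope calculation follows; the book length and per-character entropy figures are illustrative assumptions, not from the text.

```python
# Back-of-envelope: time to absorb one book at ~50 bits/s, the estimated
# rate (per Lucky, cited above) at which our senses deliver information.
# Book length and per-character entropy are assumed for illustration.

WORDS_PER_BOOK = 100_000      # assumed typical book length
CHARS_PER_WORD = 6            # roughly five letters plus a space
BITS_PER_CHAR = 1.0           # Shannon's classic ~1 bit/char estimate for English
UPTAKE_BITS_PER_SEC = 50      # Lucky's estimated sensory information rate

book_bits = WORDS_PER_BOOK * CHARS_PER_WORD * BITS_PER_CHAR
seconds = book_bits / UPTAKE_BITS_PER_SEC
print(f"{book_bits:.0f} bits, about {seconds / 3600:.1f} hours at 50 bits/s")
```

Even under these generous assumptions, a single book costs hours of serial sensory bandwidth, which is why a GULP sense would have to bypass the existing sensory channels entirely rather than merely speed them up.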

The process of creating new sensory organs that work in tandem with our own brains is truly in a nascent state, though the cochlear implant and retinal implant directions seem promising. University of Texas researcher Larry Cauller, who was one of the workshop participants, took the bull by the horns and discussed ways to attack the problem of building an artificial brain, as well as recent technology improvements in the area of direct neural interfaces. As neural interface chips get smaller, with finer and more numerous pins, and as they leverage advances in RF ID tag technology, the day is rapidly approaching when such implants can be performed with minimal damage to the recipient's brain. Improved neural interface chips are apparently already paying dividends in deepening our understanding of the so-called mirror neurons that underlie the "monkey see, monkey do" behaviors familiar in higher primates. One final point on this somewhat uncomfortable topic: MIT researcher and author Sherry Turkle, who was also a workshop participant, presented a wealth of information on the topic of sociable technologies, as well as empirical data concerning people's attitudes about different technologies. While much of the discussion centered on human acceptance of new agents such as household entertainment robots (e.g., Sony's AIBO dog), there was unanimous agreement among the participants that as certain NBICS technologies find their way into more universally available products, attitudes will be shaped, positively as well as negatively, and will evolve rapidly, often in unexpected ways for unexpected reasons.

University of Tokyo professor Isao Shimoyama has created a robo-roach, or cyborg roach, that can be controlled with the same kind of remote children use for radio-controlled cars. Neural interfaces to insects are still crude, as can be seen by going to Google and searching for images of "robo-roach." Nevertheless, projecting the miniaturization of devices that will be possible over the next decade, one can imagine tools that will help us understand the behaviors of other species at a fine level of detail. Ultimately, as our ability to rapidly map genes improves, neural interface tools may even prove valuable for studying the relationship between genes and behaviors in various species. NBICS convergence will accelerate as the linkages between genes, cellular development, nervous systems, and behavior are mapped.

Inside the body and permanent: New skills (new uses of old sensors and effectors). Senses allow us to extract information from the world, exchange information between individuals, and encode and remember relevant aspects of that information in our brains (neural networks, wetware). Sometimes the physical, cognitive, and social evolution of a species allows an old sense to be used in a new way. Take, for example, verbal language communication, or speech. Long before our ancestors could effectively listen to and understand spoken language, they could hear. A lion crashing through the jungle at them registered a sound pattern in their prehuman brains and caused action. Over time, however, there emerged a set of sound associations with meanings and abstractions, an ability to create sounds, and an increased brain capacity for forming associations with symbols and stringing them together via grammars, yielding complex spoken languages. Over time, large groups of people shared and evolved language to include more sounds and more symbolic, abstract representations of the things, events, and feelings in their world. An important point about acquiring new skills, such as the sounds of a language, is that infants and young children have certain advantages. Evidence indicates that brains come prewired at the neural level for many more possibilities than actually get used, and if those connections are not needed, they go away. Once the connections are gone, learning can still occur, but the infant brain advantage is no longer available. Essentially, the infant brain comes prewired to facilitate the development of new uses of old sensors and effectors.

Entrepreneur and author Bob Horn, who was also a participant at the workshop, argues that visual languages have already evolved and can be evolved further, perhaps dramatically so for certain important categories of complex information, thus progressing toward the information-gulping sense alluded to above. In addition, researchers at IBM's Knowledge Management Institute and elsewhere offer stories and story languages as a highly evolved, yet mostly untapped except for entertainment purposes, way to rapidly convey large volumes of information. For example, when I mention the names of two television shows, The Honeymooners and The Flintstones, many TV-literate Americans in their forties and fifties will understand that these are in fact the same basic story formula, and will immediately draw on a wealth of abstractions and experience to interpret new data in terms of these stories. They may even be reminded of a Honeymooners episode when watching a Flintstones cartoon — this is powerful stuff for conveying information. The generation of television and videogame enthusiasts has a wealth of new cognitive constructs that can be leveraged in the evolution of a new sense for rapid, high-volume information communication. Certainly, new notations and languages (e.g., musical notation, programming languages, and mathematics) offer many opportunities for empowering people and enhancing their performance on particular tasks. All of these approaches to new uses of old senses are limited primarily by our learning abilities, both individual and collective. As with the evolution of speech, perhaps new portions of the brain with particular capabilities could accelerate our ability to learn to use old senses in new ways. An ability to assimilate large amounts of information more rapidly could be an important next step in human evolution, potentially as important as the evolution of the first language spoken between our ancestors.

Inside the body and permanent: New genes. If the notion of "computers inside," or cyborgs, raises certain ethical dilemmas, then tinkering with our own genetic code is certain to raise eyebrows as well. After all, this is shocking and frightening stuff to contemplate, especially in light of our inability to fully foresee the consequences of our actions. Nevertheless, for several reasons, including completeness in describing the Outside-Inside Framework, this area is worth mentioning. While selective breeding of crops, animals, and people (as in ancient Sparta) is many hundreds of generations old, only recently have gene therapies become possible, as the inner workings of the billion-year-old molecular tools that bacteria use for slicing and splicing DNA have been harnessed by the medical and research communities. Just as a better understanding of the inner workings of rodent memory and its genetic underpinnings has allowed researchers to boost the IQs of rodents on certain maze-running tasks, we can soon expect other researchers, building on these results, to suggest ways of increasing the IQs of humans.

University of Washington researcher and medical doctor Jeffrey Bonadio (Bonadio 2002), who was a workshop participant, discussed emerging technologies in the area of gene therapy: the use of recombinant DNA as a biologic substance for therapeutic purposes, employing viruses and other means to modify cellular DNA and proteins toward a desired end.

In sum, the Outside-Inside Framework includes four main categories and a few subcategories for the ways that technology might be used to enhance human performance:

• Outside the body and environmental

- new materials

- new agents

- new places

- new mediators (tools and artifacts)

• Outside the body, personal

- new mediators (tools and artifacts)

• Inside the body, temporary

- new ingestibles

• Inside the body, permanent

- new organs (new sensors and effectors)

- new skills (new uses of old sensors and effectors)

- new genes

The four categories progress from external to internal changes, and span a range from widely accepted to more ethically questionable changes. In the next section, we will consider these categories from the perspective of information encoding and exchange processes in complex dynamic systems.
