
Cartoon created by Fred Voorhorst

Last week, in the middle of the pandemic, the highly publicized police killings of black men, and the resulting protests and demonstrations, I learned of the death of Professor Anders Ericsson. Professor Ericsson was a preeminent psychologist who studied the development of expertise. He was interested in the development of the high levels of skill that allow 'experts' to do things that are beyond the capacity of most humans. In particular, his work was instrumental in illustrating that deliberate practice is critical in developing the heuristics that allow experts to become both faster and more accurate in processing information. In essence, because of these tricks of the trade, experts are able to avoid the information limits that bound the performance of most humans. This is the positive, good aspect of heuristics. These heuristics allow experts to focus on the patterns (or chunks) that specify significant aspects of situations and to coordinate their actions to respond automatically - quickly and accurately.

The term heuristic is also used to describe biases in decision making. As the extensive research of Kahneman and Tversky demonstrates, heuristics can lead people to make choices that violate the conventions of traditional logic. Heuristics are a form of bounded rationality that apply within certain domains of activity. Thus, the automatic responses that work in one set of situations can seem illogical or mindless when they are applied in situations outside that domain. This is the negative, bad side of heuristics - they can lead to performance that Jim Reason referred to as 'strong, but wrong.' 

The ugly side emerges when we put people in domains where the circumstances lead them to develop heuristics or implicit biases that not only violate logic, but that violate our cultural values!  

My father entered the Marines when he was 17 at the end of WW II. He only served for two years and never left the country. But in his forties, when someone tried to mug him on the street one night, he automatically responded as he had been trained - slash, kick, gouge! It thwarted the attack and may have saved his life.

There are many parallels in the training of police and soldiers - to prepare people to respond automatically to defend themselves and to survive potentially deadly situations. It is very tempting to attribute these mindless responses to evil intent of individuals, but we might consider that these implicit biases are a product of training and socialization. These biases are the result of years of deliberate practice!

In some cases, the violent acts that we see on the cell phone videos are the result of many years of deliberate practice, that result in mindless responses to situations.  The implication is that the problem of police violence is not simply a function of a few bad apples, but the product of a system that deliberately trains mindless responses to threatening situations. 

In hindsight, it is easy to attribute the clearly mindless violence to evil intent. But, at least in some cases, consider that these mindless responses are the product of a system of training designed for soldiers, not for peace officers. We want to blame the officer, but this will not ultimately solve the problem of police violence. Ultimately, we must change the system and reconsider what type of skill training is most important for creating expert peace keepers, rather than expert soldiers. 

It's the same temperature, as measured by the thermometer, but one person experiences miserable cold, while another experiences a refreshing chill. Which experience is 'true' or 'real'? Or are both experiences illusions (i.e., purely subjective)?

Most introductory psychology texts describe an experiment in which the participant leaves one hand in a bucket of ice water and the other hand in a bucket of hot water for a few minutes, then simultaneously puts both hands into a bucket of water at room temperature. What happens? One hand experiences the water as warm, while the other hand experiences it as cold.

Does this demonstrate that perception is illusory? Or does it suggest that the experience of temperature is dynamic? That it reflects change, or relations over time (as opposed to isolated events in time)?

Imagine two intersecting lines on a graph, one sloping down and the other sloping up. At the intersection, both point values are identical - but the slopes are different. Classically, we have tended to treat human experience as if it were a collection of isolated points in time. Thus, we have failed to consider that the slopes may be different.

If experience is dynamic, then it is essential that we consider the points as integrated components of the line. Rather than isolating behavior in time, we need to examine behavior over time (we need to consider the lines). 

From the perspective of dynamics, the experiences of both people can be real. Both experiences are partial functions of the current physical temperature (as measured by a thermometer), but they are also functions of different past histories. Though the temperature 'points' may be the same for both, the slopes may be very different. The experiences may be situated on different trajectories. 
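The bucket experiment can be sketched as two simple linear trajectories; the specific temperatures and the linear interpolation below are illustrative assumptions, not data from the text. Both hands arrive at the same 'point' (room temperature), but the slopes of their trajectories differ:

```python
# Two linear temperature trajectories that cross at the same point.
# Hand A is warming up from ice water; hand B is cooling down from hot water.
def trajectory(start, end, steps):
    """Linear interpolation from start to end over `steps` samples."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

hand_a = trajectory(5.0, 22.0, 11)   # warming toward room temperature (deg C)
hand_b = trajectory(45.0, 22.0, 11)  # cooling toward room temperature

# At the final sample, both hands are at the same "point" (22 C) ...
assert hand_a[-1] == hand_b[-1] == 22.0

# ... but the slopes (rates of change) differ in sign and magnitude.
slope_a = hand_a[-1] - hand_a[-2]   # positive: experienced as warming
slope_b = hand_b[-1] - hand_b[-2]   # negative: experienced as cooling
print(slope_a > 0, slope_b < 0)     # True True
```

The point of the sketch is simply that a state description (the final value) discards exactly the information (the slope) that distinguishes the two experiences.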

When the different experiences are dismissed as 'subjective,' there is an implication that all experience is illusory - that experiences are groundless with respect to the objective physical situation (e.g., the objective temperature), that experiences are 'in the head.' However, if you view experience as a dynamic property over time (e.g., with both a position and a velocity or slope), then you can see that the differences may be due to the fact that both are grounded, but in different ways. In this context, both experiences can be considered 'real' (in the sense that they are grounded, but with respect to objectively different situations).


Ever since the Cybernetic Hypothesis was introduced to Psychology, there has been greater appreciation of the "intentional" nature of cognitive systems. Yet, despite this awareness, causal (or stimulus-response) forms of explanation continue to dominate the way many people think about how humans (and other animals) process information. For example, most cognition texts begin with sensation and then follow 'stimulation' through successively deeper levels of processing (perception, decision-making ...).

A result of this framing is an at least implicit suggestion that sensations cause action. And there is a danger that people fail to appreciate many of the significant aspects of the circular coupling of perception and action (e.g., self-organization to skillfully and creatively adapt to the ecology) that differentiate animals from plants. 

While it is true that in a circular coupling, there is no sense in which any element in the circle must be given priority as "the cause," I wonder if a simple reorganization of how we depict the dynamic would help people to break away from conventional notions of causality and to better appreciate the intentional dynamic of cognitive systems.

Perhaps, the most important implication of the Cybernetic Hypothesis is that 'action' becomes the prime mover of the dynamic. In a framing that gives priority to action, looking becomes the prerequisite for seeing, and the function of our senses is to serve, rather than to cause action.  

Ideally, science is motivated by the curiosity of individuals, and success depends on their ability to formulate well-structured questions about important phenomena. But in practice, science requires resources, and these resources depend on the ability of individuals to convince those with the resources that they have an answer to some important contemporary problem.

The process of convincing the people with the resources to fund your curiosity often hinges on the ability to provide a simple, easy-to-understand answer to a complex problem. This is where having the right buzz word can make all the difference. For example, in seeking funds to explore the teaming of humans with autonomous systems, one might frame the problem in terms of self-organizing dynamics or trust.

I think a strong case can be made that both words are valid descriptions of important aspects of the natural phenomenon. And conversely, a case can be made that both are 'buzz' words. That is, they are fashionable terms or jargon that tend to be open to a broad range of interpretations and uses. 

As buzz words, both terms tend to suggest ways to reduce the complex problem into simpler terms. For example, the term self-organizing systems can suggest reducing the phenomenon to a particular model (e.g., coupled pendulums) or to a particular methodology (e.g., 1/f).  Similarly, trust can suggest reducing the problem of human-technology interactions to simple analogs of human-human interactions. In both cases, the buzz words tend to reduce the problem and to narrow attention to specific dimensions that are familiar and potentially manageable. Somehow the problem becomes less mysterious and there appear to be obvious solutions. 

While the reductions and suggestions of solutions are extremely useful for marketing work to funders and gaining resources, there are also obvious dangers. Too often, the reductions associated with buzz words tend to hide the natural complexity - trivializing the natural phenomenon. Thus, if researchers get caught up in the 'buzz' of marketing, then research programs can end up being framed around the trivializations, rather than the real phenomenon. In the worst case, the buzz words become the answers, rather than the questions; experiments become demonstrations of trivial relations, rather than tests of interesting hypotheses; and the results have little practical value relative to solving the actual problem (e.g., how to improve the performance of human-autonomy teams).

For me, self-organization and trust suggest important questions about the nature of human-autonomy teaming. However, I get worried when I see them being marketed as 'answers.'



In discussions about the nature of cognition, a central question focuses on how meaning emerges from interactions between agents and their environments. It seems clear that the 'meaning' of any object depends in part on properties of the object, in part on the observer, and in part on the situation. For example, consider the following observations from Rasmussen (1986):

The way in which the functional properties of a system are perceived by a decision maker very much depends upon the goals and intentions of the person. In general, objects in the environment in fact only exist isolated from the background in the mind of a human, and the properties they are allocated depend on the actual intentions. A stone may disappear unrecognized into the general scenery; it may be recognized as a stone, maybe even a geologic specimen; it may be considered an item suitable to scare away a threatening dog; or it may be a useful weight that prevents manuscript sheets from being carried away by the wind - all depending on the needs or interests of a human subject. Each person has his own world, depending on his immediate needs.

(p. 13)

There are two subtly different ways to think about the dynamics of experience that underlie the emergence of meaning. Conventionally, constructivist approaches to cognition talk about making meaning. This makes a lot of sense in the context of language, where arbitrary signs such as a sequence of marks on a page (e.g., C - A - T) are interpreted relative to prior learning about alphabets and word definitions. The suggestion is that the meaning is the result of adding prior knowledge to the arbitrary sign to make (or construct) meaning. The implication is that the symbols are meaningless until they are interpreted.

An alternative way to think about the dynamic of experience, that reflects ecological or situated perspectives on experience, is that meaning is discovered. This perspective makes a lot of sense in terms of perceptual-motor skills. For example, we discover affordances like graspable and reachable by interacting with the objects in the environment. The underlying relations that determine whether an object will fit comfortably in the hand are not arbitrary (though the affordances of a specific object like a basketball may vary from individual to individual as a function of hand sizes). Affordances reflect meaning-full properties of the ecology - that exist independent from perception or interpretation. The intention will not be realized if the affordance is not detected, but the affordance exists and can be specified objectively, whether or not it is ever realized in action. Further, the meaning can be mis-perceived, but will be corrected through the feedback that results from acting on the misperception.

The framework of meaning making makes sense if you think about the stimuli of experience as punctate instances in time (e.g., isolated frames in a movie reel). In this case, experiencing a melody requires that the significance of a particular note be constructed  by retrieving the prior notes from memory and mentally adding them together to re-construct the melody.

In contrast, the framework of meaning discovery suggests that perceptions are not punctate, but that they are extended over time so that the pattern of notes is experienced as a whole (as a chunk). This extension may go beyond the notes heard to include prior experiences with a particular melody that allow prediction or anticipation of the entire melody. The metaphor does not have to invoke memory in terms of adding the prior notes. Rather, the metaphor is one of attuning or resonating to a pattern - and recognizing a melody.

Note that the meaning discovery framework does suggest the existence of mental structures (schema or frames) - but these structures function more like filters - that resonate to some properties or patterns, as a function of prior experience. In this framework, the function of experience or learning is not about storing past instances (that can be added to new instances to construct meaning), rather it is about tuning attention to those properties of experience that have functional significance (e.g., tuning the weights in a neural net).
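The attunement metaphor can be sketched with a toy linear unit; the Hebbian-style update rule, the pattern, and all the numbers below are illustrative assumptions, not a model proposed in the text. The point is that repeated exposure tunes the weights so the unit 'resonates' to a significant pattern, rather than storing past instances:

```python
import random

# A single linear unit acting as a "filter" that is tuned, through
# repeated experience, to resonate to a significant pattern.
random.seed(0)

target = [1.0, -1.0, 1.0, -1.0]          # the pattern with functional significance
weights = [random.uniform(-0.1, 0.1) for _ in target]

def response(w, x):
    """Resonance of the filter to an input pattern (dot product)."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Hebbian-style tuning: each exposure nudges the weights toward the pattern.
# (Real networks use error-driven rules; this is deliberately minimal.)
for _ in range(50):
    for i in range(len(weights)):
        weights[i] += 0.1 * target[i]

# After tuning, the unit responds strongly to the familiar pattern
# and weakly to an unrelated one - attunement, not storage.
other = [1.0, 1.0, 1.0, 1.0]
print(response(weights, target) > response(weights, other))  # True
```

Nothing about the past is 'retrieved' here; the history of exposure lives entirely in the tuning of the weights, which is the sense of 'filter' suggested above.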

Back to the processing of (C-A-T). These symbols may be arbitrary in that there is no obvious physical or analogical relation to the animal that they represent. But they are NOT arbitrary in a cultural sense. If we assume that the meaning of C-A-T is created by a culture - not by a mind - then the meaning discovery framework can be sensibly applied to language as well as to perceptual-motor skills. In this sense, learning language is not about creating meaning from arbitrary signs - but about discovering the cultural significance of the signs (in the same way that discovering affordances is about discovering the significant action properties of an object).

The danger of the constructivist framework where minds make meaning is the implication that everything is meaningless until it comes in contact with a mind. There is a subtle implication that we live in a meaningless world. I can't accept that implication - and thus prefer to think of the dynamic of learning and experience as one of discovering meaning. There is a subjective dimension to meaning, but I can't accept that meaning is purely subjective.

Pirsig "Zen and the Art of Motorcycle Maintenance"

The development and standardization of metrics was critical to the development of science. The standard metrics provided "objective" standards for describing events and experiments to ensure that they could be replicated and generalized appropriately. Without objective standards of measurement there could be no science.

Development of objective, observer independent standards of measurement was essential to the success of the physical sciences.

However, the great error in Western Science was to take the description of the world in terms of these metrics as an objective reality - in opposition to a subjective reality! The implication is that the objective distance in terms of meters is true, but the functional relations such as graspable, reachable, near or far are 'subjective.' This implies that the variability associated with individual differences along such dimensions is "noise" with regard to the "true" reality. And there is an implication that this "noise" has to be somehow filtered and added to in order to construct a mental model of the objective truth - in relation to the standard metrics (e.g., the size in meters).

One implication is that since people and animals are not well calibrated to the standard metrics, then their perceptions of the world must be 'indirect' and therefore it is necessary for them to reconstruct the true world (recover the correct standard) in order to act appropriately.

Another implication is that many of the relations that directly impact how people make judgements about graspability (e.g., their own hand size), reachability (their arm length or height), or closeness (e.g., available modes of transportation) are less real - less basic - or that they are derivative. But of course, these relations are every bit as 'real' and every bit as specifiable as the elements comprising these relations.

These relations are part of a "whole" that cannot be discovered in the components. These relations are 'emergent properties' of the whole. A central premise of ecological psychology is that these emergent properties are 'essential and fundamental' elements for a science that hopes to describe how people adapt to their ecologies. Ecological Psychology argues that the size of an object relative to a hand or the distance to a cliff relative to your height is every bit as objective as the size relative to a meter stick.

Further, ecological psychology argues that these functional relations exist in the world to be discovered and perceived directly. And that there is information (e.g., structure in optical arrays) that specifies these emergent properties. Thus, there is no need for internal processing to construct or reconstruct these relations. These are NOT mental constructions - they are functional properties of the coupling of an animal with its ecology - they are properties of the umwelt. They are affordances that can be directly experienced.
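The body-scaled relation at the heart of this argument can be sketched as a simple ratio; the specific sizes and the critical threshold below are illustrative assumptions, not empirical values. The point is that the affordance is a perfectly objective, specifiable relation, even though it differs from perceiver to perceiver:

```python
# An affordance as a body-scaled ratio rather than an absolute metric.
# Whether an object is "graspable" depends on the relation between object
# size and hand span - not on meters alone. The 1.0 threshold is a
# hypothetical placeholder, not an empirical critical ratio.
def graspable(object_diameter_cm, hand_span_cm, threshold=1.0):
    """An object affords one-handed grasping when its diameter, scaled
    by the perceiver's hand span, falls below a critical ratio."""
    return (object_diameter_cm / hand_span_cm) < threshold

# The same ball yields different affordances for different perceivers,
# yet each relation is objectively specifiable - neither is "noise."
print(graspable(24, 28))  # larger hand: True
print(graspable(24, 19))  # smaller hand: False
```

Note that both answers are 'correct': the ratio is fully determined by measurable properties of the object-perceiver system, which is the sense in which affordances are objective.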

[Figure: 'Too close' as dependent on height, specified as a visual angle.]

The mistake that Western Science has made is that it has taken the arbitrary metrics created to aid formal scientific enterprises as 'fundamental' and it has taken the relations that emerge from the functional interactions of people with their ecology to be 'derivative.' However, I think there is little doubt that the experiences of graspable, reachable, near or far are fundamental primitives of the human-ecology system. These pragmatic/functional relations are the raw primitives of experience. They are REAL! The metrics of objective science are also real - but they are the wrong level of description for exploring how people adapt to the functional demands of everyday living.

As Protagoras claimed: Man is the measure of all things.

In our everyday lives we directly experience the ecology in terms of the REAL properties that emerge as a function of the perception-action coupling with our ecology! We will never construct a satisfying understanding of human performance if we start by denying the reality of these essential emergent properties. Thus, the claim is that a science of human performance must be built using different bricks than those used to construct an 'objective' physical science. These bricks, these essential elements are different from those used by physicists, but they are no less real.

The essential elements for building a science of human experience are different than those that have been used successfully in building a science of an observer independent physical world. However, these elements are no less real.

The irony of using different bricks or working at different levels of description is that this may be the path that might allow us to escape from a collection of little sciences to a single, unified science, that spans the field of possibilities reflecting the joint constraints of mind and matter.

See What Matters for an exploration of the implications of these ideas for cognitive science and experience design.

Although I have used the term "wicked problems" in my writing, I only recently read Rittel & Webber's (1973) original description of this concept along with an editorial by Churchman (1967) commenting on his hearing Rittel talk about this construct.

I have little to add to the original formulation and encourage others to access and read both papers.

Rittel, H.W.J. & Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Churchman, C.W. (1967). Wicked problems. Management Science, 14(4), B141-B142.

Rittel and Webber list ten attributes of wicked problems, which I will list here, but I encourage readers to go to the original source for further explication.

  1. There is no definitive formulation of a wicked problem.
  2. Wicked problems have no stopping rule.
  3. Solutions to wicked problems are not true-or-false, but good-or-bad.
  4. There is no immediate and no ultimate test of a solution to a wicked problem.
  5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
  7. Every wicked problem is essentially unique.
  8. Every wicked problem can be considered to be a symptom of another problem.
  9. The existence of a discrepancy representing a wicked problem can be explained numerous ways. The choice of explanation determines the nature of the problem's solution.
  10. The planner has no right to be wrong.

From the Churchman article:

... the term "wicked problem" refers to that class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing. The adjective "wicked" is supposed to describe the mischievous and even evil quality of these problems, where proposed "solutions" often turn out to be worse than the symptoms.

p. B141

Churchman raises some ethical issues, in the context of operations research (OR), associated with approaching wicked problems piecemeal, that I think apply far more broadly than to just OR:

A better way of describing the OR solution might be to say that it tames the growl of the wicked problem: the wicked problem no longer shows its teeth before it bites.

Such a remark naturally hints at deception: the taming of the growl may deceive the innocent into believing that the wicked problem is completely tamed. Deception, in turn, suggests morality: the morality of deceiving people into thinking something is so when it is not. Deception becomes an especially strong moral issue when one deceives people into thinking that something is safe when it is highly dangerous.

The moral principle is this: whoever attempts to tame a part of a wicked problem, but not the whole, is morally wrong.

p. B141 - B142

A consequence of an increasingly networked world is that our problems are getting increasingly more wicked. These two papers should be required reading for anyone who is involved in management or design.

Fred Voorhorst has created a poster to help us organize our thoughts with respect to the design of representations that help smart people to skillfully muddle through wicked problems.  In the case of wicked problems - there is no recipe that will guarantee success - but there are things that we can do to improve our muddling skill and to shape our thinking in more productive directions.


“Successful innovation demands more than a good strategic plan; it requires creative improvisation. Much of the “serious play” that leads to breakthrough innovations is increasingly linked to experiments with models, prototypes, and simulations. As digital technology makes prototyping more cost-effective, serious play will soon lie at the heart of all innovation strategies, influencing how businesses define themselves and their markets.”

“Serious play turns out to be not an ideal but a core competence. Boosting the bandwidth for improvisation turns out to be an invitation to innovation. Treating prototypes as conversation pieces turns out to be the best way to discover what you yourself are really trying to accomplish.

Michael Schrage (1999)

“… generative design research [is] an approach to bring the people we serve through design directly into the design process in order to ensure that we can meet their needs and dreams for the future. Generative design research gives people a language with which they can imagine and express their ideas and dreams for future experience. The ideas and dreams can, in turn, inform and inspire stakeholders in the design and development process.”

(Sanders & Stappers, 2012, p. 8)

The concept of "Design Thinking" is very much in vogue these days, and I share the associated optimism that there is much that everyone can learn from engaging with the experiences of designers. However, my own experiences with design suggest that the label 'thinking' is misleading. For me, the ultimate lesson from design experiences is the importance of coupling perception (analysis and evaluation) with action (creation of various artifacts). For me, Schrage's concept of "Serious Play" and Sanders and Stappers' concept of "Co-Creation" provide more accurate descriptions of the power of design experiences for helping people to be more productive and innovative in solving complex problems. The key idea is that thinking does not happen in a disembodied head or brain, but rather, through physically and socially engaging with the world.

A number of years ago, I was part of a brief chat with Arthur Iberall (who designed one of the first suits for astronauts), and he was asked how he approached design. His response was: "I just build it. Then I throw it against the wall and build it again. Till eventually I can't see the wall. At that point I am beginning to get a good understanding of the problem."

The experience of building artifacts is where designers have an advantage over most of us. Building artifacts and interacting with the artifacts is an essential part of the learning and discovery process. Literally grasping and interacting with concrete objects and situations is a prerequisite for mentally grasping them. Trying to build and use something provides an essential test of assumptions and design hypotheses. In fact, I would argue that the process of creating artifacts can be a stronger test of an idea than more classical experiments. The reason is that the same wrong assumptions that led to the idea being tested often also inform the design of the experiment to test the idea.

Thus, an important step in assessing an idea is to get it out of your head and into some kind of physical or social artifact (e.g., a storyboard, a persona, a wireframe, a scenario, an MVP, a simulation).

As a Cognitive Systems Engineer, I am strongly convinced of the value of Cognitive Work Analysis (CWA) as described by Kim Vicente (1999) and others (Naikar, 2013; Stanton et al., 2018). However, although not necessarily intended by Kim or others, people often treat CWA as a prerequisite for design. That is, there is an implication that a thorough CWA must be completed prior to building anything. However, I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, as illustrated in the Figure below.

The figure illustrates my experiences with the development of the Cardiac Consultant App that is designed to help Family Practice Physicians to assess cardiovascular health. The first phase of this development was Tim McEwen's dissertation work at Wright State University. Tim and I did an extensive evaluation of the literature on cardiovascular health as part of our CWA. I particularly remember us trying to decompose the Framingham Risk equations. Discovering a graphical representation for the Cox Hazard function underlying this model was a key to our early representation concept. With Randy Green's help, we were able to code a Minimum Viable Product (MVP) that Tim could evaluate as part of his dissertation work. Note that MVP does not mean minimal functionality. Rather, the goal is to get sufficient functionality for empirical evaluation in a realistic context with MINIMAL developmental effort. The point of the MVP is to create an artifact for testing design assumptions and for learning about the problem.
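Framingham-style risk scores share a Cox proportional-hazards form, which is roughly what we had to decompose. The sketch below shows that general form only; the coefficients, baseline survival, and mean linear term are hypothetical placeholders, NOT the published Framingham values:

```python
import math

# General Cox proportional-hazards form underlying Framingham-style
# risk scores: 10-year risk = 1 - S0 ** exp(linear_predictor - mean).
# All numbers below are HYPOTHETICAL placeholders for illustration.
BETAS = {"ln_age": 3.06, "ln_total_chol": 1.12, "ln_hdl": -0.93}
BASELINE_SURVIVAL = 0.88   # S0(10 years), hypothetical
MEAN_LINEAR_TERM = 23.99   # population mean of the linear predictor, hypothetical

def ten_year_risk(age, total_chol, hdl):
    """Risk from a weighted sum of log-transformed clinical values,
    mapped through the baseline survival function."""
    linear = (BETAS["ln_age"] * math.log(age)
              + BETAS["ln_total_chol"] * math.log(total_chol)
              + BETAS["ln_hdl"] * math.log(hdl))
    return 1.0 - BASELINE_SURVIVAL ** math.exp(linear - MEAN_LINEAR_TERM)

# Risk is bounded and increases with age (positive coefficient).
r = ten_year_risk(55, 200, 50)
print(0.0 <= r <= 1.0, ten_year_risk(65, 200, 50) > ten_year_risk(45, 200, 50))
```

Seeing the model in this compact functional form makes clear why a graphical representation of the hazard function was so useful: the entire multidimensional clinical picture collapses onto a single linear predictor that can serve as a background for display design.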

I was able to take the MVP that we generated in connection with Tim's dissertation to Mile Two, where the UX designers and developers were able to refine the MVP into a fully functioning web App (CVDi). I have to admit that I was quite surprised by how much the concept was improved through working with the UX designers at Mile Two. This involved completely abandoning the central graphic of the MVP, that had provided us with important insights into the Framingham model. We lost the graphic, but carried the insights forward and the new design allowed us to incorporate additional risk models into the representation (e.g., Reynolds Risk Model).

Despite the improvements, there was a major obstacle to implementing the design in a healthcare setting. The stand-alone App required a physician to manually enter data from the patient's record (EHR system) into the App. This is where Asymmetric came into the picture. Asymmetric had extensive experience with the FHIR API, and they offered to collaborate with Mile Two to link our interface directly to the EHR system - eliminating the need for physicians to manually enter data. In the course of building the FHIR backend, the UX group at Asymmetric offered suggestions for additional improvements to the interface representation, leading to the Cardiac Consultant. Again, I was pleasantly surprised by the value added by these changes.

So, the ultimate point of this story is to illustrate a Serious Play process that involves iteratively creating artifacts and then using the artifacts to elicit feedback in the analysis and discovery process. The artifacts are critical for pragmatically grounding assumptions and hypotheses. Further, the artifacts provide a concrete context for engaging a wide range of participants (e.g., domain experts and technologists) in the discovery process (participatory design).

I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, rather than a prerequisite.

At the end of the day, Design Thinking is more about doing (creating artifacts), than about 'thinking' in the conventional sense.

Works Cited

Naikar, N. 2013. Work Domain Analysis. Boca Raton, FL: CRC Press.

Sanders, E.B. -N, Stappers, P.J. (2012). Convivial Toolbox. Amsterdam, BIS Publishers.

Schrage, M. (1999). Serious Play: How the World’s Best Companies Simulate to Innovate. Cambridge, MA: Harvard Business School Press.

Stanton, N.A., Salmon, P.M., Walker, G.H. & Jenkins, D.P. (2018). Cognitive Work Analysis. Boca Raton, FL: CRC Press.

Vicente, K.J. (1999). Cognitive Work Analysis. Mahwah, NJ: Erlbaum.


What does an Ecological Interface Design (or an EID) look like?

As one of the people who has contributed to the development of the EID approach to interface design, I often get a variation of this question (Bennett & Flach, 2011). Unfortunately, the question is impossible to answer because it is based on a misconception of what EID is all about. EID does not refer to a particular form of interface or representation; rather, it refers to a process for exploring work domains in the hope of discovering representations to support productive thinking about complex problems.

Consider the four interfaces displayed below.  Do you see a common form? All four of these interfaces were developed using an EID approach. Yet, the forms of representation appear to be very different.

What makes these interfaces "ecological?"

The most important aspect of the EID approach is a commitment to doing a thorough Cognitive Work Analysis (CWA) with the goal of uncovering the deep structures of the work domain (i.e., the significant ecological constraints) and to designing representations in which these constraints provide a background context for evaluating complex situations.

  • In the DURESS interface, Vicente (e.g., 1999) organized the information to reflect fundamental properties of thermodynamic processes related to mass and energy balances.
  • The TERP interface, designed by Amelink and colleagues (2005), was inspired by innovations in the design of autopilots based on energy parameters (potential and kinetic energy). The addition of energy parameters helped to disambiguate the relative roles of the throttle and stick for regulating the landing path.
  • In the CVD interface (McEwen et al., 2014), published models of cardiovascular risk (e.g., the Framingham and Reynolds risk models) became the background for evaluating combinations of clinical values (e.g., cholesterol levels, blood pressure) and for making treatment recommendations.
  • In the RAPTOR interface, Bennett and colleagues (2008) included a Force Ratio graphic to provide a high-level view of the overall state of a conflict (e.g., who is winning).
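To make the first bullet concrete, the "deep structure" behind a DURESS-style display reduces to conservation equations: reservoir volume changes with net inflow (mass balance), and temperature follows from the energy carried by each stream (energy balance). A toy sketch, with invented values and simplified physics rather than the actual DURESS simulation:

```python
# Toy mass/energy balance of the kind the DURESS interface makes visible.
# Volume follows the mass balance; temperature follows the energy balance
# when the incoming stream mixes with the reservoir contents.
# All numbers are illustrative only.

def step(volume, temp, inflow, in_temp, outflow, dt=1.0):
    """Advance reservoir volume (L) and temperature (C) by one time step."""
    new_volume = volume + (inflow - outflow) * dt            # mass balance
    # energy balance: reservoir energy + energy in - energy out
    energy = volume * temp + inflow * dt * in_temp - outflow * dt * temp
    new_temp = energy / new_volume if new_volume > 0 else 0.0
    return new_volume, new_temp

v, t = step(volume=100.0, temp=20.0, inflow=10.0, in_temp=80.0, outflow=10.0)
print(f"volume={v:.1f} L, temperature={t:.1f} C")
```

An interface that makes these balances visually explicit lets an operator see at a glance whether the configuration of valves and heaters can possibly satisfy the goal demands, rather than inferring it from raw sensor values.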

Although interviews with operators can be a valuable part of any CWA, they are typically not sufficient. With EID the goal is not to match the operators' mental models, but rather to shape them. For example, the Energy Path in the TERP interface was not inspired by interviews with pilots. In fact, most pilots were very skeptical about whether the TERP would help. The TERP was inspired by insights from aeronautical engineers who discovered that control systems using energy states as feedback produced more stable automatic control solutions.

With EID the goal is not to match the operators' mental models, but rather to shape the mental models toward more productive ways of thinking.

A second common aspect of representations designed from an EID perspective is a configural organization. Earlier research on interface design was often framed in terms of an either/or contrast between integral versus separable representations. This suggested that you could EITHER support high-level relational perspectives (integral representations) OR provide low-level details (separable representations), but not both. The EID process is committed to a BOTH/AND perspective, where it is assumed that it is desirable (even necessary) to provide BOTH access to detailed data AND to higher order relations among the variables. In a configural representation, the detailed data are organized in a way that makes BOTH the detailed data AND the more global, relational constraints salient.

For example, in the CVD interface, all of the clinical values that contribute to the cardiovascular risk models are displayed, and in addition to presenting a risk estimate (an integral function of multiple variables), the relative contribution of each variable is also shown. This allows physicians not only to see the total level of risk, but also to see how much each of the different values is contributing to that risk.
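The configural idea can be sketched in a few lines: a risk estimate that is an integral (logistic) function of several clinical values, decomposed so the contribution of each variable stays visible alongside the total. The coefficients and intercept below are invented for illustration; real models such as Framingham and Reynolds publish their own:

```python
import math

# Sketch of a configural risk decomposition: one integral risk estimate
# plus the per-variable contributions that produce it.
# Coefficients and intercept are INVENTED, not from any published model.

COEFFS = {"age": 0.04, "systolic_bp": 0.02, "total_chol": 0.01, "smoker": 0.7}
BASELINE = -9.0  # invented intercept

def risk_and_contributions(patient):
    """Return (overall risk, per-variable contributions to the log-odds)."""
    contributions = {k: COEFFS[k] * patient[k] for k in COEFFS}
    logit = BASELINE + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))  # logistic risk estimate
    return risk, contributions

patient = {"age": 55, "systolic_bp": 140, "total_chol": 220, "smoker": 1}
risk, parts = risk_and_contributions(patient)
print(f"overall risk: {risk:.0%}")
for name, part in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<12} {part:+.2f}")
```

Displaying both the total and the sorted contributions is what lets a physician see not just that risk is high, but which value to target first.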

In configural representations a goal is to leverage the powerful ability of humans to recognize patterns that reflect high-order relations while simultaneously allowing access to specific data as salient details nested within the larger patterns.

The EID process is committed to a both/and perspective, where it is assumed that it is desirable (even necessary) to provide both access to detailed data (the particular) and to higher order relations among the variables (the general).

A third feature of the EID process is the emphasis on supporting adaptive problem solving. The EID approach is based on the belief that there is no single, best way or universal procedure that will lead to a satisfying solution in all cases. Thus, rather than designing for procedural compliance, EID representations are designed to help people to explore a full range of options so that it is possible for them to creatively adapt to situations that (in some cases) could not have been anticipated in advance. Thus, representations designed from an EID perspective typically function as analog simulations that support direct manipulation. By visualizing global constraints (e.g., thermodynamic models or medical risk models) the representations help people to anticipate the consequences of actions. These representations typically allow people to test and evaluate hypotheses by manipulating features of the representation before committing to a particular course of action or, at least, before going too far down an unproductive or dangerous path of action.

Rather than designing for procedural compliance, EID representations are designed to help people to explore a full range of options so that it is possible for them to creatively adapt to situations that (in some cases) could not have been anticipated in advance.
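The "test before committing" pattern amounts to running candidate actions against the model rather than against the world. A minimal sketch, using an invented tank process (not any real plant model): project the consequence of each candidate valve setting over a short horizon and check which settings keep the level within limits before any action is taken.

```python
# Sketch of what-if evaluation against a model before acting.
# The tank dynamics and numbers are invented for illustration.

def simulate_level(level, inflow, valve, steps=10, out_rate=2.0):
    """Project tank level forward; outflow scales with valve opening."""
    trajectory = [level]
    for _ in range(steps):
        level = level + inflow - valve * out_rate
        trajectory.append(level)
    return trajectory

def safe(trajectory, low=0.0, high=100.0):
    """True if the projected level stays within limits the whole way."""
    return all(low <= x <= high for x in trajectory)

# Evaluate candidate actions on the model, not on the plant.
for valve in (0.0, 2.0, 4.0):
    traj = simulate_level(level=50.0, inflow=6.0, valve=valve)
    print(f"valve={valve}: ends at {traj[-1]:.1f}, safe={safe(traj)}")
```

In an EID representation the same logic is carried by direct manipulation: dragging a control and watching the projected trajectory against the visualized constraints, instead of reading numbers off a loop like this one.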

It should not be too surprising that the forms of representations designed from the EID perspective may be very different. This is because the domains that they are representing can be very different.  The EID approach does not reflect a commitment to any particular form of representation. Rather it is a commitment to providing representations that reflect the deep structure of work domains, including both detailed data and more global, relational constraints. The goal is to provide the kind of insights (i.e., situation awareness) that will allow people to creatively respond to the surprising situations that inevitably arise in complex work domains.

Works Cited

Amelink, H.J.M., Mulder, M., van Paassen, M.M., Flach, J.M. (2005). Theoretical foundations for total energy-based perspective flight-path displays for aircraft guidance. International Journal of Aviation Psychology, 15, 205 – 231.

Bennett, K.B., and Flach, J.M. (2011). Display and Interface Design: Subtle Science, Exact Art. London: Taylor & Francis.

Bennett, K.B., Posey, S.M. & Shattuck, L.G. (2008). Ecological interface design for military command and control. Journal of Cognitive Engineering and Decision Making, 2(4), 349-385.

McEwen, T., Flach, J.M. & Elder, N. (2014). Interfaces to medical information systems: Supporting evidence-based practice. IEEE: Systems, Man, & Cybernetics Annual Meeting, 341-346. San Diego, CA. (Oct 5-8).

Vicente, K.J. (1999). Cognitive Work Analysis. Mahwah, NJ: Erlbaum.