

Ever since the Cybernetic Hypothesis was introduced to Psychology, there has been greater appreciation of the "intentional" nature of cognitive systems. Yet, despite this awareness, causal (or stimulus-response) forms of explanation continue to dominate the way many people think about how humans (and other animals) process information. For example, most cognition texts begin with sensation and then follow 'stimulation' through successively deeper levels of processing (perception, decision-making ...).

A result of this framing is the (at least implicit) suggestion that sensations cause action. And there is a danger that people fail to appreciate many of the significant aspects of the circular coupling of perception and action (e.g., self-organization to skillfully and creatively adapt to the ecology) that differentiate animals from plants.

While it is true that in a circular coupling, there is no sense in which any element in the circle must be given priority as "the cause," I wonder if a simple reorganization of how we depict the dynamic would help people to break away from conventional notions of causality and to better appreciate the intentional dynamic of cognitive systems.

Perhaps the most important implication of the Cybernetic Hypothesis is that 'action' becomes the prime mover of the dynamic. In a framing that gives priority to action, looking becomes the prerequisite for seeing, and the function of our senses is to serve, rather than to cause, action.

Ideally, science is motivated by the curiosity of individuals, and success depends on their ability to formulate well-structured questions about important phenomena. But in practice, science requires resources, and these resources depend on the ability of individuals to convince those who control the resources that they have an answer to some important contemporary problem.

The process of convincing the people with the resources to fund your curiosity often hinges on the ability to provide a simple, easy-to-understand answer to a complex problem. This is where having the right buzz word can make all the difference. For example, in seeking funds to explore the teaming of humans with autonomous systems, one might frame the problem in terms of self-organizing dynamics or trust.

I think a strong case can be made that both terms are valid descriptions of important aspects of the natural phenomenon. And conversely, a case can be made that both are 'buzz' words. That is, they are fashionable terms or jargon that tend to be open to a broad range of interpretations and uses.

As buzz words, both terms tend to suggest ways to reduce the complex problem into simpler terms. For example, the term self-organizing systems can suggest reducing the phenomenon to a particular model (e.g., coupled pendulums) or to a particular methodology (e.g., 1/f scaling analysis). Similarly, trust can suggest reducing the problem of human-technology interactions to simple analogs of human-human interactions. In both cases, the buzz words tend to reduce the problem and to narrow attention to specific dimensions that are familiar and potentially manageable. Somehow the problem becomes less mysterious and there appear to be obvious solutions.

While the reductions and suggested solutions are extremely useful for marketing work to funders and gaining resources, there are also obvious dangers. Too often, the reductions associated with buzz words tend to hide the natural complexity - trivializing the natural phenomenon. Thus, if researchers get caught up in the 'buzz' of marketing, then research programs can end up being framed around the trivializations, rather than the real phenomenon. In the worst case, the buzz words become the answers, rather than the questions; experiments become demonstrations of trivial relations, rather than tests of interesting hypotheses; and the results tend to have little practical value for solving the actual problem (e.g., how to improve performance of human-autonomy teams).

For me, self-organization and trust suggest important questions about the nature of human-autonomy teaming. However, I get worried when I see them being marketed as 'answers.'


In discussions about the nature of cognition, a central question focuses on how meaning emerges from interactions between agents and their environments. It seems clear that the 'meaning' of any object depends in part on properties of the object, in part on the observer, and in part on the situation. For example, consider the following observations from Rasmussen (1986):

The way in which the functional properties of a system are perceived by a decision maker very much depends upon the goals and intentions of the person. In general, objects in the environment in fact only exist isolated from the background in the mind of a human, and the properties they are allocated depend on the actual intentions. A stone may disappear unrecognized into the general scenery; it may be recognized as a stone, maybe even a geologic specimen; it may be considered an item suitable to scare away a threatening dog; or it may be a useful weight that prevents manuscript sheets from being carried away by the wind - all depending on the needs or interests of a human subject. Each person has his own world, depending on his immediate needs.

(p. 13)

There are two subtly different ways to think about the dynamics of experience that underlie the emergence of meaning. Conventionally, constructivist approaches to cognition talk about making meaning. This makes a lot of sense in the context of language, where arbitrary signs such as a sequence of marks on a page (e.g., C - A - T) are interpreted relative to prior learning about alphabets and word definitions. The suggestion is that meaning results from adding prior knowledge to the arbitrary sign to make (or construct) meaning. The implication is that the symbols are meaningless until they are interpreted.

An alternative way to think about the dynamic of experience, one that reflects ecological or situated perspectives, is that meaning is discovered. This perspective makes a lot of sense in terms of perceptual-motor skills. For example, we discover affordances like graspable and reachable by interacting with the objects in the environment. The underlying relations that determine whether an object will fit comfortably in the hand are not arbitrary (though the affordances of a specific object like a basketball may vary from individual to individual as a function of hand size). Affordances reflect meaning-full properties of the ecology that exist independently of perception or interpretation. An intention will not be realized if the affordance is not detected, but the affordance exists and can be specified objectively, whether or not it is ever realized in action. Further, the meaning can be mis-perceived, but it will be corrected through the feedback that results from acting on the misperception.

The framework of meaning making makes sense if you think about the stimuli of experience as punctate instances in time (e.g., isolated frames in a movie reel). In this case, experiencing a melody requires that the significance of a particular note be constructed by retrieving the prior notes from memory and mentally adding them together to re-construct the melody.

In contrast, the framework of meaning discovery suggests that perceptions are not punctate, but are extended over time so that the pattern of notes is experienced as a whole (as a chunk). This extension may go beyond the notes heard to include prior experiences with a particular melody that allow prediction or anticipation of the entire piece. The metaphor does not have to invoke memory in terms of adding up the prior notes. Rather, the metaphor is one of attuning or resonating to a pattern - and recognizing a melody.

Note that the meaning discovery framework does suggest the existence of mental structures (schema or frames) - but these structures function more like filters - that resonate to some properties or patterns, as a function of prior experience. In this framework, the function of experience or learning is not about storing past instances (that can be added to new instances to construct meaning), rather it is about tuning attention to those properties of experience that have functional significance (e.g., tuning the weights in a neural net).
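This tuning metaphor is easy to make concrete. The sketch below is a toy illustration only (not a model of any particular neural architecture): a simple linear 'filter' whose weights are nudged toward a recurring pattern, so that after experience it resonates to that pattern without ever storing a past instance.

```python
def resonance(weights, pattern):
    """Response of the 'filter' to a pattern (a simple dot product).
    A large response means the filter resonates to this pattern."""
    return sum(w * x for w, x in zip(weights, pattern))

def attune(weights, pattern, rate=0.1):
    """Nudge the weights toward a functionally significant pattern
    (a Hebbian-style update) -- tuning, not storing, the instance."""
    return [w + rate * x for w, x in zip(weights, pattern)]

# A recurring, functionally significant pattern vs. a novel one.
significant = [1.0, 0.0, 1.0, 1.0, 0.0]
novel = [0.0, 1.0, 0.0, 0.0, 1.0]

weights = [0.0] * 5
for _ in range(20):  # repeated experience tunes the filter
    weights = attune(weights, significant)

# The tuned filter now responds more strongly to the familiar pattern.
print(resonance(weights, significant) > resonance(weights, novel))  # True
```

Nothing in the tuned weights is a stored copy of any single encounter; the history of experience is present only as a disposition to respond.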

Back to the processing of (C-A-T). These symbols may be arbitrary in that there is no obvious physical or analogical relation to the animal that they represent. But they are NOT arbitrary in a cultural sense. If we assume that the meaning of C-A-T is created by a culture, not by an individual mind, then the meaning discovery framework can be sensibly applied to language as well as to perceptual-motor skills. In this sense, learning language is not about creating meaning from arbitrary signs, but about discovering the cultural significance of the signs (in the same way that discovering affordances is about discovering the significant action properties of an object).

The danger of the constructivist framework where minds make meaning is the implication that everything is meaningless until it comes in contact with a mind. There is a subtle implication that we live in a meaningless world. I can't accept that implication - and thus prefer to think of the dynamic of learning and experience as one of discovering meaning. There is a subjective dimension to meaning, but I can't accept that meaning is purely subjective.

Cf. Robert Pirsig, Zen and the Art of Motorcycle Maintenance.

The development and standardization of metrics was critical to the development of science. Standard metrics provided "objective" standards for describing events and experiments to ensure that they could be replicated and generalized appropriately. Without objective standards of measurement there could be no science.

Development of objective, observer independent standards of measurement was essential to the success of the physical sciences.

However, the great error of Western Science was to take the description of the world in terms of these metrics as an objective reality - in opposition to a subjective reality! The implication is that the objective distance in terms of meters is true, but that functional relations such as graspable, reachable, near or far are 'subjective.' This implies that the variability associated with individual differences along such dimensions is "noise" with regard to the "true" reality. And there is an implication that this "noise" has to be somehow filtered and supplemented in order to construct a mental model of the objective truth - in relation to the standard metrics (e.g., the size in meters).

One implication is that since people and animals are not well calibrated to the standard metrics, then their perceptions of the world must be 'indirect' and therefore it is necessary for them to reconstruct the true world (recover the correct standard) in order to act appropriately.

Another implication is that many of the relations that directly impact how people make judgements about graspability (e.g., their own hand size), reachability (their arm length or height), or closeness (e.g., available modes of transportation) are less real - less basic - or that they are derivative. But of course, these relations are every bit as 'real' and every bit as specifiable as the elements comprising these relations.

These relations are part of a "whole" that cannot be discovered in the components. These relations are 'emergent properties' of the whole. A central premise of ecological psychology is that these emergent properties are 'essential and fundamental' elements for a science that hopes to describe how people adapt to their ecologies. Ecological Psychology argues that the size of an object relative to a hand or the distance to a cliff relative to your height is every bit as objective as the size relative to a meter stick.

Further, ecological psychology argues that these functional relations exist in the world to be discovered and perceived directly. And that there is information (e.g., structure in optical arrays) that specifies these emergent properties. Thus, there is no need for internal processing to construct or reconstruct these relations. These are NOT mental constructions - they are functional properties of the coupling of an animal with its ecology - they are properties of the umwelt. They are affordances that can be directly experienced.

(Figure: 'too close' depends on the observer's height and is specified optically as a visual angle.)
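The geometry behind this observation is simple to sketch. For a point on flat ground, the optical angle below the horizon specifies distance in units of the observer's own eye height - a body-scaled distance available directly in the optic array, with no meter stick required. The numbers below are purely illustrative:

```python
import math

def declination(eye_height, distance):
    """Optical angle (radians) below the horizon to a point on the
    ground, for an observer whose eyes are eye_height above it."""
    return math.atan2(eye_height, distance)

def distance_in_eye_heights(alpha):
    """Body-scaled distance recovered from the optical angle alone:
    tan(alpha) = eye_height / distance,
    so distance / eye_height = 1 / tan(alpha)."""
    return 1.0 / math.tan(alpha)

# Two observers of different heights, each viewing a point five of
# their own eye heights away, receive the SAME optical angle:
tall = declination(1.8, 9.0)
short = declination(1.2, 6.0)
print(math.isclose(tall, short))  # True
print(math.isclose(distance_in_eye_heights(tall), 5.0))  # True
```

The point of the sketch is that the body-scaled relation is fully specified by optical structure; no translation into standard metric units is required for it to guide action.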

The mistake that Western Science has made is that it has taken the arbitrary metrics created to aid formal scientific enterprises as 'fundamental' and it has taken the relations that emerge from the functional interactions of people with their ecology to be 'derivative.' However, I think there is little doubt that the experiences of graspable, reachable, near or far are fundamental primitives of the human-ecology system. These pragmatic/functional relations are the raw primitives of experience. They are REAL! The metrics of objective science are also real - but they are the wrong level of description for exploring how people adapt to the functional demands of everyday living.

As Protagoras claimed: Man is the measure of all things.

In our everyday lives we directly experience the ecology in terms of the REAL properties that emerge as a function of the perception-action coupling with our ecology! We will never construct a satisfying understanding of human performance if we start by denying the reality of these essential emergent properties. Thus, the claim is that a science of human performance must be built using different bricks than those used to construct an 'objective' physical science. These bricks, these essential elements are different from those used by physicists, but they are no less real.

The essential elements for building a science of human experience are different than those that have been used successfully in building a science of an observer independent physical world. However, these elements are no less real.

The irony of using different bricks or working at different levels of description is that this may be the path that might allow us to escape from a collection of little sciences to a single, unified science, that spans the field of possibilities reflecting the joint constraints of mind and matter.

See What Matters for an exploration of the implications of these ideas for cognitive science and experience design.

Although I have used the term "wicked problems" in my writing, I only recently read Rittel & Webber's (1973) original description of this concept along with an editorial by Churchman (1967) commenting on his hearing Rittel talk about this construct.

I have little to add to the original formulation and encourage others to access and read both papers.

Rittel, H.W.J. & Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Churchman, C.W. (1967). Wicked problems. Management Science, 14(4), B141-B142.

Rittel and Webber list 10 attributes of wicked problems, which I will list here, but I encourage readers to go to the original source for further explication.

  1. There is no definitive formulation of a wicked problem.
  2. Wicked problems have no stopping rule.
  3. Solutions to wicked problems are not true-or-false, but good-or-bad.
  4. There is no immediate and no ultimate test of a solution to a wicked problem.
  5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
  7. Every wicked problem is essentially unique.
  8. Every wicked problem can be considered to be a symptom of another problem.
  9. The existence of a discrepancy representing a wicked problem can be explained numerous ways. The choice of explanation determines the nature of the problem's solution.
  10. The planner has no right to be wrong.

From the Churchman article:

... the term "wicked problem" refers to that class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing. The adjective "wicked" is supposed to describe the mischievous and even evil quality of these problems, where proposed "solutions" often turn out to be worse than the symptoms.

p. B141

Churchman raises some ethical issues in the context of OR associated with approaching wicked problems piecemeal, that I think applies far more broadly than to just OR:

A better way of describing the OR solution might be to say that it tames the growl of the wicked problem: the wicked problem no longer shows its teeth before it bites.

Such a remark naturally hints at deception: the taming of the growl may deceive the innocent into believing that the wicked problem is completely tamed. Deception, in turn, suggests morality: the morality of deceiving people into thinking something is so when it is not. Deception becomes an especially strong moral issue when one deceives people into thinking that something is safe when it is highly dangerous.

The moral principle is this: whoever attempts to tame a part of a wicked problem, but not the whole, is morally wrong.

pp. B141-B142

A consequence of an increasingly networked world is that our problems are becoming increasingly wicked. These two papers should be required reading for anyone who is involved in management or design.

Fred Voorhorst has created a poster to help us organize our thoughts with respect to the design of representations that help smart people to skillfully muddle through wicked problems.  In the case of wicked problems - there is no recipe that will guarantee success - but there are things that we can do to improve our muddling skill and to shape our thinking in more productive directions.

“Successful innovation demands more than a good strategic plan; it requires creative improvisation. Much of the “serious play” that leads to breakthrough innovations is increasingly linked to experiments with models, prototypes, and simulations. As digital technology makes prototyping more cost-effective, serious play will soon lie at the heart of all innovation strategies, influencing how businesses define themselves and their markets.”

“Serious play turns out to be not an ideal but a core competence. Boosting the bandwidth for improvisation turns out to be an invitation to innovation. Treating prototypes as conversation pieces turns out to be the best way to discover what you yourself are really trying to accomplish.”

Michael Schrage (1999)

“… generative design research [is] an approach to bring the people we serve through design directly into the design process in order to ensure that we can meet their needs and dreams for the future. Generative design research gives people a language with which they can imagine and express their ideas and dreams for future experience. The ideas and dreams can, in turn, inform and inspire stakeholders in the design and development process.”

(Sanders & Stappers, 2012, p. 8)

The concept of "Design Thinking" is very much in vogue these days, and I share the associated optimism that there is much that everyone can learn from engaging with the experiences of designers. However, my own experiences with design suggest that the label 'thinking' is misleading. For me, the ultimate lesson from design experiences is the importance of coupling perception (analysis and evaluation) with action (creation of various artifacts). For me, Schrage's concept of "Serious Play" and Sanders and Stappers' concept of "Co-Creation" provide more accurate descriptions of the power of design experiences for helping people to be more productive and innovative in solving complex problems. The key idea is that thinking does not happen in a disembodied head or brain, but rather through physically and socially engaging with the world.

A number of years ago, I was part of a brief chat with Arthur Iberall (who designed one of the first suits for astronauts), and he was asked how he approached design. His response was: "I just build it. Then I throw it against the wall and build it again. Till eventually I can't see the wall. At that point I am beginning to get a good understanding of the problem."

The experience of building artifacts is where designers have an advantage over most of us. Building artifacts and interacting with them is an essential part of the learning and discovery process. Literally grasping and interacting with concrete objects and situations is a prerequisite for mentally grasping them. Trying to build and use something provides an essential test of assumptions and design hypotheses. In fact, I would argue that the process of creating artifacts can be a stronger test of an idea than more classical experiments. The reason is that the same wrong assumptions that led to the idea being tested are often also informing the design of the experiment meant to test it.

Thus, an important step in assessing an idea is to get it out of your head and into some kind of physical or social artifact (e.g., a storyboard, a persona, a wireframe, a scenario, an MVP, a simulation).

As a Cognitive Systems Engineer, I am strongly convinced of the value of Cognitive Work Analysis (CWA) as described by Kim Vicente (1999) and others (Naikar, 2013; Stanton et al., 2018). However, although not necessarily intended by Kim or others, people often treat CWA as a prerequisite for design. That is, there is an implication that a thorough CWA must be completed prior to building anything. However, I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, as illustrated in the Figure below.

The figure illustrates my experiences with the development of the Cardiac Consultant App, which is designed to help Family Practice Physicians assess cardiovascular health. The first phase of this development was Tim McEwen's dissertation work at Wright State University. Tim and I did an extensive evaluation of the literature on cardiovascular health as part of our CWA. I particularly remember us trying to decompose the Framingham Risk equations. Discovering a graphical representation for the Cox hazard function underlying this model was a key to our early representation concept. With Randy Green's help, we were able to code a Minimum Viable Product (MVP) that Tim could evaluate as part of his dissertation work. Note that MVP does not mean minimal functionality. Rather, the goal is to get sufficient functionality for empirical evaluation in a realistic context with MINIMAL developmental effort. The point of the MVP is to create an artifact for testing design assumptions and for learning about the problem.
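For readers curious about the structure we were decomposing: risk models in the Framingham family take a Cox proportional-hazards form, in which risk is a function of a weighted sum of (log-transformed) clinical values. The sketch below shows only the general shape of the computation - the coefficients, mean score, and baseline survival are illustrative placeholders, not the published values:

```python
import math

def ten_year_risk(values, coefficients, mean_score, baseline_survival):
    """Cox proportional-hazards form used by Framingham-style models:
    risk = 1 - S0 ** exp(L - L_mean), where L is the weighted sum of
    the patient's clinical values and S0 is baseline 10-year survival."""
    score = sum(coefficients[k] * v for k, v in values.items())
    return 1.0 - baseline_survival ** math.exp(score - mean_score)

# Illustrative placeholder numbers -- NOT the published coefficients.
coeffs = {"log_age": 3.0, "log_chol": 1.1, "log_hdl": -0.9}
patient = {"log_age": math.log(55),
           "log_chol": math.log(213),
           "log_hdl": math.log(50)}

risk = ten_year_risk(patient, coeffs, mean_score=24.0, baseline_survival=0.88)
print(0.0 < risk < 1.0)  # True: the model yields a proportion
```

One reason this form is graphically convenient: all of the clinical values collapse into a single score before entering the risk curve, so the whole model can be displayed as a position along one function.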

I was able to take the MVP that we generated in connection with Tim's dissertation to Mile Two, where the UX designers and developers were able to refine the MVP into a fully functioning web App (CVDi). I have to admit that I was quite surprised by how much the concept was improved through working with the UX designers at Mile Two. This involved completely abandoning the central graphic of the MVP, which had provided us with important insights into the Framingham model. We lost the graphic, but carried the insights forward, and the new design allowed us to incorporate additional risk models into the representation (e.g., the Reynolds Risk Model).

Despite the improvements, there was a major obstacle to implementing the design in a healthcare setting. The stand-alone App required a physician to manually enter data from the patient's record (EHR system) into the App. This is where Asymmetric came into the picture. Asymmetric had extensive experience with the FHIR API, and they offered to collaborate with Mile Two to link our interface directly to the EHR system - eliminating the need for physicians to manually enter data. In the course of building the FHIR backend, the UX group at Asymmetric offered suggestions for additional improvements to the interface representation, leading to the Cardiac Consultant. Again, I was pleasantly surprised by the value added by these changes.
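To give a flavor of what the FHIR linkage buys: instead of a physician re-typing values, clinical data arrive as structured Observation resources that the app can read directly. The snippet below parses a trimmed, made-up resource in the FHIR R4 Observation shape (the LOINC code shown is, to my knowledge, the one used for total cholesterol, but treat the details as illustrative):

```python
import json

# A trimmed, made-up FHIR-style Observation, as an EHR's FHIR API
# might return it for a patient's total cholesterol.
payload = json.loads("""
{
  "resourceType": "Observation",
  "code": {"coding": [{"system": "http://loinc.org", "code": "2093-3",
                       "display": "Total cholesterol"}]},
  "valueQuantity": {"value": 213, "unit": "mg/dL"}
}
""")

def clinical_value(observation):
    """Pull the numeric value and unit out of an Observation resource,
    sparing the physician from manual data entry."""
    quantity = observation["valueQuantity"]
    return quantity["value"], quantity["unit"]

print(clinical_value(payload))  # (213, 'mg/dL')
```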

So, the ultimate point of this story is to illustrate a Serious Play process that involves iteratively creating artifacts and then using the artifacts to elicit feedback in the analysis and discovery process. The artifacts are critical for pragmatically grounding assumptions and hypotheses. Further, the artifacts provide a concrete context for engaging a wide range of participants (e.g., domain experts and technologists) in the discovery process (participatory design).

I have found that it is impossible to do a thorough CWA without building things along the way. In my experience, it is best to think of CWA as a co-requisite with design in an iterative process, rather than a prerequisite.

At the end of the day, Design Thinking is more about doing (creating artifacts), than about 'thinking' in the conventional sense.

Works Cited

Naikar, N. (2013). Work Domain Analysis. Boca Raton, FL: CRC Press.

Sanders, E.B.-N. & Stappers, P.J. (2012). Convivial Toolbox. Amsterdam: BIS Publishers.

Schrage, M. (1999). Serious Play: How the World’s Best Companies Simulate to Innovate. Cambridge, MA: Harvard Business School Press.

Stanton, N.A., Salmon, P.M., Walker, G.H. & Jenkins, D.P. (2018). Cognitive Work Analysis. Boca Raton, FL: CRC Press.

Vicente, K.J. (1999). Cognitive Work Analysis. Mahwah, NJ: Erlbaum.


What does an Ecological Interface Design (or an EID) look like?

As one of the people who has contributed to the development of the EID approach to interface design, I often get a variation of this question (Bennett & Flach, 2011). Unfortunately, the question is impossible to answer because it is based on a misconception of what EID is all about. EID does not refer to a particular form of interface or representation; rather, it refers to a process for exploring work domains in the hope of discovering representations to support productive thinking about complex problems.

Consider the four interfaces displayed below.  Do you see a common form? All four of these interfaces were developed using an EID approach. Yet, the forms of representation appear to be very different.

What makes these interfaces "ecological"?

The most important aspect of the EID approach is a commitment to doing a thorough Cognitive Work Analysis (CWA) with the goal of uncovering the deep structures of the work domain (i.e., the significant ecological constraints) and to designing representations in which these constraints provide a background context for evaluating complex situations.

  • In the DURESS interface, Vicente (e.g., 1999) organized the information to reflect fundamental properties of thermodynamic processes related to mass and energy balances.
  • The TERP interface, designed by Amelink and colleagues (2005), was inspired by innovations in the design of autopilots based on energy parameters (potential and kinetic energy). The addition of energy parameters helped to disambiguate the relative roles of the throttle and stick for regulating the landing path.
  • In the CVD interface (McEwen et al., 2014) published models of cardiovascular risk (e.g. Framingham and Reynolds Risk Models) became the background for evaluating combinations of clinical values (e.g., cholesterol levels, blood pressure) and for making treatment recommendations.
  • In the RAPTOR interface, Bennett and colleagues (2008) included a Force Ratio graphic to provide a high-level view of the overall state of a conflict (e.g., who is winning).

Although interviews with operators can be a valuable part of any CWA, they are typically not sufficient. With EID the goal is not to match the operators' mental models, but rather to shape the mental models. For example, the Energy Path in the TERP interface was not inspired by interviews with pilots. In fact, most pilots were very skeptical about whether the TERP would help. The TERP was inspired by insights from aeronautical engineers who discovered that control systems that used energy states as feedback resulted in more stable automatic control solutions.

With EID the goal is not to match the operators' mental models, but rather to shape the mental models toward more productive ways of thinking.

A second common aspect of representations designed from an EID perspective is a configural organization. Earlier research on interface design was often framed in terms of an either/or contrast between integral versus separable representations. This suggested that you could EITHER support high-level relational perspectives (integral representations) OR provide low-level details (separable representations), but not both.  The EID process is committed to a BOTH/AND perspective, where it is assumed that it is desirable (even necessary) to provide BOTH access to detailed data AND to higher order relations among the variables. In a configural representation the detailed data is organized in a way to make BOTH the detailed data AND more global, relational constraints salient.

For example, in the CVD interface, all of the clinical values that contribute to the cardiovascular risk models are displayed, and in addition to presenting a risk estimate (an integral function of multiple variables), the relative contribution of each variable is also shown. This allows physicians not only to see the total level of risk, but also to see how much each of the different values contributes to that risk.
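The arithmetic behind such a configural display is straightforward: for a weighted risk score, each variable's contribution is simply its weighted term, so the representation can make both the total and its parts salient. A minimal sketch with made-up weights and standardized values:

```python
def risk_contributions(values, coefficients):
    """Decompose a weighted risk score into per-variable contributions,
    so a display can show BOTH the total AND what is driving it."""
    parts = {k: coefficients[k] * v for k, v in values.items()}
    return parts, sum(parts.values())

# Made-up weights and standardized clinical values, for illustration.
coeffs = {"cholesterol": 0.5, "blood_pressure": 0.3, "smoking": 0.8}
patient = {"cholesterol": 1.2, "blood_pressure": 0.5, "smoking": 1.0}

parts, total = risk_contributions(patient, coeffs)
print(max(parts, key=parts.get))  # smoking: the largest contributor here
```

A display built on this decomposition lets the global pattern (the total) remain salient while each nested detail stays individually inspectable.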

In configural representations a goal is to leverage the powerful ability of humans to recognize patterns that reflect high-order relations while simultaneously allowing access to specific data as salient details nested within the larger patterns.

The EID process is committed to a both/and perspective, where it is assumed that it is desirable (even necessary) to provide both access to detailed data (the particular) and to higher order relations among the variables (the general).

A third feature of the EID process is the emphasis on supporting adaptive problem solving. The EID approach is based on the belief that there is no single, best way or universal procedure that will lead to a satisfying solution in all cases. Thus, rather than designing for procedural compliance, EID representations are designed to help people to explore a full range of options so that it is possible for them to creatively adapt to situations that (in some cases) could not have been anticipated in advance. Thus, representations designed from an EID perspective typically function as analog simulations that support direct manipulation. By visualizing global constraints (e.g., thermodynamic models or medical risk models) the representations help people to anticipate the consequences of actions. These representations typically allow people to test and evaluate hypotheses by manipulating features of the representation before committing to a particular course of action or, at least, before going too far down an unproductive or dangerous path of action.
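The 'test before committing' idea can be sketched in the same terms: manipulate a value in the representation, recompute, and compare the outcome before acting on the world. The weights and values below are made up for illustration:

```python
def what_if(values, change, coefficients):
    """Preview the effect of a hypothetical action on a weighted risk
    score -- the computational analog of 'manipulate, then commit'."""
    trial = {**values, **change}
    return sum(coefficients[k] * v for k, v in trial.items())

# Made-up weights and standardized values.
coeffs = {"cholesterol": 0.5, "blood_pressure": 0.3}
patient = {"cholesterol": 1.4, "blood_pressure": 0.9}

baseline = what_if(patient, {}, coeffs)
treated = what_if(patient, {"cholesterol": 0.9}, coeffs)  # hypothetical treatment
print(treated < baseline)  # True: the manipulation lowers the score
```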

Rather than designing for procedural compliance, EID representations are designed to help people to explore a full range of options so that it is possible for them to creatively adapt to situations that (in some cases) could not have been anticipated in advance.

It should not be too surprising that the forms of representations designed from the EID perspective may be very different. This is because the domains that they are representing can be very different.  The EID approach does not reflect a commitment to any particular form of representation. Rather it is a commitment to providing representations that reflect the deep structure of work domains, including both detailed data and more global, relational constraints. The goal is to provide the kind of insights (i.e., situation awareness) that will allow people to creatively respond to the surprising situations that inevitably arise in complex work domains.

Human Error (HE) has long been offered as an explanation for why accidents happen in complex systems. In fact, it is often suggested to be the leading cause of accidents in domains such as aviation and healthcare. As Jens Rasmussen has noted, the human is in a very unfortunate position with respect to common explanations for system failures. This is because when you trace backwards in time along any trajectory of events leading to an accident, you will almost always find some act that someone did that contributed to the accident, or an act that they failed to do that might have prevented it. This act or failure to act is typically labeled a human error, and it is typically credited as the CAUSE of the accident.

Note that the behavior is only noticed for consideration in hindsight (if there is an accident), otherwise it is typically just unremarkable work behavior.

However, many people (e.g., Dekker, Hollnagel, Rasmussen, Woods) now understand that this explanation trivializes the complexities of work and that blaming humans rarely leads to safety improvements. In a previous blog I noted the parallels between stamping out forest fires and stamping out human error. Stamping out forest fires does not necessarily lead to healthy forests; and stamping out human error does not necessarily lead to safer systems. And in fact, such approaches may actually set the conditions for catastrophic accidents (due to fuel building up in forests, and due to failure to disclose near misses and to learn from experience in complex systems).

While I fully appreciate this intellectually, I had a recent experience on a trip to France that reminded me how powerful the illusion of Human Error can be.

Shortly after my arrival at Charles de Gaulle Airport, as I navigated the trains into Paris, my wallet was stolen. It held all my cash and all my credit cards. I was penniless in a country where I didn't know the language. It was quite an experience. The important thing relative to this post was my powerful feeling that I was at fault. Why wasn't I more careful? Why didn't I move my wallet from my back pocket, where I normally carry it (and where it is most comfortable), to my front pocket (as I normally do when I am in dangerous areas)? Why did I have all my money and credit cards in the same place? What a fool I am! It's all my fault!

The illusion is powerful! I guess this reflects a need to believe that I am in control. I know intellectually that this is an illusion. I know that life is a delicate balancing act where a small perturbation can knock us off our feet. I know that when things work, it is not a simple function of my control actions, but the result of an extensive network of social and cultural supports. And I should know that when things don't work, it is typically the result of a cascade of small perturbations in this network of support (e.g., the loss of a nail).

The human error illusion is the flip side of the illusion that we are in control. It is an illusion that trivializes complexity - minimizing the risks of failure and exaggerating the power of control.

Fortunately, I got by with a lot of help from my friends, and my trip to France was not ruined by this event. It turned out to be a great trip and a valuable learning experience.

This is the sixth and final post in a series of blogs examining some of the implications of a CSE perspective on sociotechnical systems for design. The table below summarizes some of the ways that CSE has expanded our vision of humans. In this blog the focus will be on design implications.

One of the well known mantras of the Human Factors profession has been:

Know thy user.

This has typically meant that a fundamental role for human factors has been to make sure that system designers are aware of computational limitations (e.g., perceptual thresholds, working memory capacity, potential violations of classical logic due to reliance on heuristics) and expectations (e.g., population stereotypes, mental models) that bound human performance.

It is important to note that these limitations have generally been validated with a wealth of scientific research. Thus, it is important that these limitations be considered by designers. It is important to design information systems so that relevant information is perceptually salient, so that working memory is not over-taxed, and so that expectations and population stereotypes are not violated.

The emphasis on the bounds of human rationality, however, tends to put human factors at the back of the innovation parade. While others are touting the opportunities of emerging technologies, HF is apologizing for the weaknesses of the humans. This feeds into a narrative in which automation becomes the 'hero' and humans are pushed into the background as the weakest link - a source of error and an obstacle to innovation. From the perspective of the technologists - the world would be so much better if we could simply engineer the humans out of the system (e.g., get human drivers off the road in order to increase highway safety).

But of course, we know that this is a false narrative. Bounded rationality is not unique to humans - all technical systems are bounded (e.g., by the assumptions of their designers or in the case of neural nets by the bounds of their training/experience). It is important to understand that the bounds of rationality are a function of the complexity or requisite variety of nature. It is the high dimensionality and interconnectedness of the natural world that creates the bounds on any information processing system (human or robot/automaton) that is challenged to cope in this natural world. In nature there are always potential sources of information that will be beyond the limits of any computational system.

The implication for designing sociotechnical systems is that designers need to take advantage of whatever resources are available to cope with this requisite variety of nature. For CSE, the creative problem-solving abilities of humans and human social systems are considered to be among the resources that designers should be leveraging. Thus, the muddling of humans (i.e., incrementalism) described by Lindblom is NOT considered to be a weakness, but rather a strength of humans.

Most critics of incrementalism believe that doing better usually means turning away from incrementalism. Incrementalists believe that for complex problem solving it usually means practicing incrementalism more skillfully and turning away from it only rarely. (C.E. Lindblom, 1979, p. 517)

Thus, when designing systems for coping with complex natural problems (e.g., healthcare, economics, security), it is important to appreciate the information limitations of all systems involved (human and otherwise). However, this is not enough. It is also important to consider the capabilities of all systems involved. One of these capabilities is the creative problem-solving capacity of smart humans and human social systems. A goal for design needs to be to support this creative capacity by helping humans to tune into the deep structure of natural problems so that they can skillfully muddle through, with the potential of discovering smart solutions to problems that even the designers could not have anticipated.

In order to 'know thy user' it is not sufficient simply to catalog all the limitations. Knowing thy user also entails appreciating the capabilities that users offer with respect to coping with the complexities of nature.

This often involves constructing interface representations that shape human mental models or expectations toward smarter, more productive ways of thinking. In other words, the goal of interface design is to provide insight into the deep structure or meaningful dimensions of a problem, so that humans can learn from mistakes and eventually discover clever strategies for coping with the unavoidable complexities of the natural world.

Lindblom, C.E. (1979). Still muddling, not yet through. Public Administration Review, 39(6), 517-526.