
Human Error (HE) has long been offered as an explanation for why accidents happen in complex systems. In fact, it is often suggested to be the leading cause of accidents in domains such as aviation and healthcare. As Jens Rasmussen has noted, the human is in a very unfortunate position with respect to common explanations for system failures. This is because when you trace backwards in time along any trajectory of events leading to an accident, you will almost always find some act that contributed to the accident, or some act that, had it been taken, might have prevented it. This act or failure to act is typically labeled as a human error, and it is typically credited as the CAUSE of the accident.

Note that the behavior is only noticed for consideration in hindsight (if there is an accident), otherwise it is typically just unremarkable work behavior.

However, many people (e.g., Dekker, Hollnagel, Rasmussen, Woods) now understand that this explanation trivializes the complexities of work and that blaming humans rarely leads to safety improvements. In a previous blog I noted the parallels between stamping out forest fires and stamping out human error. Stamping out forest fires does not necessarily lead to healthy forests; and stamping out human error does not necessarily lead to safer systems. And in fact, such approaches may actually set the conditions for catastrophic accidents (due to fuel building up in forests, and due to failure to disclose near misses and to learn from experience in complex systems).

While I fully appreciate this intellectually, I had a recent experience on a trip to France that reminded me how powerful the illusion of Human Error can be.

Shortly after my arrival at Charles de Gaulle Airport, as I navigated the trains into Paris, my wallet was stolen. It had all my cash and all my credit cards. I was penniless in a country where I didn't know the language. It was quite an experience. The important thing relative to this post was my powerful feeling that I was at fault. Why wasn't I more careful? Why didn't I move my wallet from my back pocket, where I normally carry it (and where it is most comfortable), to my front pocket (as I normally do when I am in dangerous areas)? Why did I have all my money and credit cards in the same place? What a fool I am! It's all my fault!

The illusion is powerful! I guess this reflects a need to believe that I am in control. I know intellectually that this is an illusion. I know that life is a delicate balancing act where a small perturbation can knock us off our feet. I know that when things work, it is not a simple function of my control actions, but the result of an extensive network of social and cultural supports. And I should know that when things don't work, it is typically the result of a cascade of small perturbations in this network of support (e.g., the loss of a nail).

The human error illusion is the flip side of the illusion that we are in control. It is an illusion that trivializes complexity - minimizing the risks of failure and exaggerating the power of control.

Fortunately, I got by with a lot of help from my friends, and my trip to France was not ruined by this event. It turned out to be a great trip and a valuable learning experience.


This is the sixth and final post in a series examining a CSE perspective on sociotechnical systems and its implications for design. The table below summarizes some of the ways that CSE has expanded our vision of humans. In this post the focus will be on design implications.

One of the well known mantras of the Human Factors profession has been:

Know thy user.

This has typically meant that a fundamental role for human factors has been to make sure that system designers are aware of computational limitations (e.g., perceptual thresholds, working memory capacity, potential violations of classical logic due to reliance on heuristics) and expectations (e.g., population stereotypes, mental models) that bound human performance.

It is important to note that these limitations have generally been validated with a wealth of scientific research. Thus, it is important that these limitations be considered by designers. It is important to design information systems so that relevant information is perceptually salient, so that working memory is not over-taxed, and so that expectations and population stereotypes are not violated.

The emphasis on the bounds of human rationality, however, tends to put human factors at the back of the innovation parade. While others are touting the opportunities of emerging technologies, HF is apologizing for the weaknesses of the humans. This feeds into a narrative in which automation becomes the 'hero' and humans are pushed into the background as the weakest link - a source of error and an obstacle to innovation. From the perspective of the technologists - the world would be so much better if we could simply engineer the humans out of the system (e.g., get human drivers off the road in order to increase highway safety).

But of course, we know that this is a false narrative. Bounded rationality is not unique to humans - all technical systems are bounded (e.g., by the assumptions of their designers or in the case of neural nets by the bounds of their training/experience). It is important to understand that the bounds of rationality are a function of the complexity or requisite variety of nature. It is the high dimensionality and interconnectedness of the natural world that creates the bounds on any information processing system (human or robot/automaton) that is challenged to cope in this natural world. In nature there are always potential sources of information that will be beyond the limits of any computational system.

The implication for designing sociotechnical systems is that designers need to take advantage of whatever resources are available to cope with this requisite variety of nature. For CSE, the creative problem solving abilities of humans and human social systems are considered to be among the resources that designers should be leveraging. Thus, the muddling of humans (i.e., incrementalism) described by Lindblom is NOT considered to be a weakness, but rather a strength of humans.

Most critics of incrementalism believe that doing better usually means turning away from incrementalism. Incrementalists believe that for complex problem solving it usually means practicing incrementalism more skillfully and turning away from it only rarely. (C.E. Lindblom, 1979, p. 517)

Thus, for designing systems for coping with complex natural problems (e.g., healthcare, economics, security) it is important to appreciate the information limitations of all systems involved (human and otherwise). However, this is not enough. It is also important to consider the capabilities of all systems involved. One of these capabilities is the creative, problem solving capacity of smart humans and human social systems. A goal for design needs to be to support this creative capacity by helping humans to tune into the deep structure of natural problems so that they can skillfully muddle through with the potential of discovering smart solutions to problems that even the designers could not have anticipated.

In order to 'know thy user' it is not sufficient to simply catalog all the limitations. Knowing thy users also entails appreciating the capabilities that users offer with respect to coping with the complexities of nature.

This often involves constructing interface representations that shape human mental models or expectations toward smarter more productive ways of thinking. In other words, the goal of interface design is to provide insights into the deep structure or meaningful dimensions of a problem, so that humans can learn from mistakes and eventually discover clever strategies for coping with the unavoidable complexities of the natural world.

Lindblom, C.E. (1979). Still muddling, not yet through. Public Administration Review, 39(6), 517-526.

“An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself.” — Herbert Simon

In some respects, it is debatable whether psychology completely escaped from the strictures of a Behaviorist perspective with the dawning of an information processing age. Although the computer and information processing metaphors legitimized the study of mental phenomena, these phenomena tended to be visualized as activities or mental behaviors (e.g., encoding, remembering, planning, deciding, etc.). Thus, for human factors, cognitive task analysis tended to focus on specifying the mental activities associated with work.

However, as Simon's parable of the ant suggests, this might lead to the appearance or inference that the cognition itself is very complex, when in reality the ultimate source of that complexity may lie not in the cognitive system but in the work situations that provide the context for the activity (i.e., the beach). Thus, Simon's parable is the motivation for work analysis as a necessary complement to task analysis. The focus of work analysis is to describe the functional constraints (e.g., goals, affordances, regulatory constraints, social constraints, etc.) that are shaping the physical and mental activities of workers. The focus of work analysis is on describing work situations, as a necessary context for evaluating both awareness (rationality) and behavior.

While classical task analysis describes what people do, it provides little insight into why they do something. This is the primary value of work analysis, to provide insight into the functional context shaping work behaviors.

A common experience upon watching activities in an unfamiliar work domain is puzzlement at activities that seem somewhat irrational. Why did they do that? However, as one becomes more familiar with the domain, one often discovers that these puzzling activities are actually smart responses to constraints that the experienced workers were attuned to - but that were invisible to an outsider.

Thus, for those of us who see humans as incredibly adaptive systems - it is natural to look to the ecology that the humans are adapting to as the first source for hypotheses to explain that behavior. And for those of us who hope to design information technologies that enhance that adaptive capacity - it is critical that the tools not simply be tuned to human capabilities, but that these tools are also tuned to the demands of the work ecology. For example, a shovel must not only fit the human hands, but it must also fit well with the materials that are to be manipulated.

Thus, work analysis is essential for Cognitive Systems Engineering. It reflects the belief that understanding situations is a prerequisite for understanding awareness.

This is the fourth in a series of entries to explore the differences between an Information Processing Approach to Human Factors and a Meaning Processing Approach to Cognitive Systems Engineering. The table below lists some contrasts between these two perspectives. This entry will focus on the third contrast in the table - the shift from a focus on 'workload' to a focus on 'situation awareness.'

The concept of information developed in this theory at first seems disappointing and bizarre - disappointing because it has nothing to do with meaning, and bizarre because it deals not with a single message but rather with the statistical character of a whole ensemble of messages, bizarre also because in these statistical terms the two words information and uncertainty find themselves to be partners. Warren Weaver (1963, p. 27)

The construct of 'workload' is a natural focus for an approach that emphasizes describing and quantifying internal constraints of the human and that assumes that these constraints are independent of the particulars of any specific situation or work context. This fits well with the engineering perspective on quantifying information and for specifying the capacity of fixed information channels as developed by Shannon.  However, the downside of this perspective is that in making the construct of workload independent of 'context,' it thus becomes independent of 'meaning' as suggested in the above quote from Warren Weaver.
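To make Weaver's point concrete, here is a minimal sketch (the message ensembles are invented for illustration) showing that Shannon's measure depends only on the statistical character of an ensemble of messages, not on what the messages mean:

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a message ensemble, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Two hypothetical ensembles with identical statistics but very different 'meanings':
# four equally likely status messages vs. four equally likely weather reports.
status_msgs  = {"OK": 0.25, "WARN": 0.25, "FAIL": 0.25, "HALT": 0.25}
weather_msgs = {"sun": 0.25, "rain": 0.25, "snow": 0.25, "fog": 0.25}

print(entropy(status_msgs.values()))   # 2.0 bits
print(entropy(weather_msgs.values()))  # 2.0 bits - same information, regardless of meaning
```

Two ensembles that mean entirely different things carry exactly the same number of bits - which is precisely why a workload construct built on this measure ends up independent of meaning.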

Those interested in the impact of context on human cognition became dissatisfied with a framework that focused only on internal constraints (e.g., bandwidth, resources, modality) without consideration for how those constraints interacted with situations. Thus, the construct of Situation Awareness (SA) evolved as an alternative to workload. Unfortunately, many who have been steeped in the information processing tradition have framed SA in terms of internal constraints (e.g., treating levels of SA as components internal to the processing system).

However, others have taken the construct of SA as an opportunity to consider the dynamic couplings of humans and work domains (or situations).  For them, the construct of SA reflects a need to 'situate' cognition within a work ecology and to consider how constraints in that ecology create demands and opportunities for cognitive systems. In this framework, it is assumed that cognitive systems can intelligently adapt to the constraints of situations - utilizing structure in situations to 'chunk' information and as the basis for smart heuristics that reduce the computational burden, allowing people to deal effectively with situations that would overwhelm the channel capacity of a system not tuned to these structural constraints (see aiming off example).

There is no question that humans have limited working memory capacity as suggested by the workload construct. However, CSE recognizes the ability of people to discover and use situated constraints (e.g., patterns) in ways that allow them to do complex work (e.g., play chess, pilot aircraft, drive in dense traffic, play a musical instrument) despite these internal constraints. It is this capacity to attune to structure associated with specific work domains that leads to expert performance.

The design implication of an approach that focuses on workload is to protect the system against human limitations (e.g., bottlenecks) by either distributing the work among multiple people or by replacing humans with automated systems with higher bandwidth. The key is to make sure that people are not overwhelmed by too much data!

The design implication of an approach that focuses on SA is to make the meaningful work domain constraints salient in order to facilitate attunement processes. This can be done through the design of interfaces or through training. The result is to heighten human engagement with the domain structure to facilitate skill and expertise. The key is to make sure that people are well-tuned to the meaningful aspects of work (e.g., constraints and patterns) that allow them to 'see' what needs to be done.


First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.

Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. George Miller (1956) p. 96-97, (emphasis added).

This continues the discussion about the differences between CSE (meaning processing) and more classical HF (information processing) approaches to human performance summarized in the table below. This post will focus on the second line in the table - shifting emphasis from human limitations to include greater consideration of human capabilities.

Anybody who has taken an intro class in Psychology or Human Factors is familiar with Miller's famous number 7 plus or minus 2 chunks that specifies the capacity of working memory. This is one of the few numbers that we can confidently provide to system designers as a spec of the human operator that needs to be respected. However, this number has little practical value for design unless you can also specify what constitutes a 'chunk.'

Although people know Miller's number, few appreciate the important points that Miller makes in the second half of the paper about the power of 'recoding' and the implications for the functional capacity of working memory. As noted in the opening quote to this blog - people have the ability to 'stretch' memory capacity through chunking. The intro texts emphasize the "limitation," but much less attention has been paid to the recoding "capability" that allows experts to extend their functional memory capacity to deal with large amounts of information (e.g., that allows an expert chess player to recall all the pieces on a chess board based on a very short glance).
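Miller's own illustration of recoding was to regroup strings of binary digits into octal chunks. A rough sketch of that idea (the particular string and the group size of three are just for illustration):

```python
def recode_binary_to_octal(bits, group=3):
    """Recode a binary string into octal 'chunks' (Miller's recoding example).
    Eighteen binary digits exceed a ~7-item span; six octal chunks do not."""
    # pad on the left so the string divides evenly into groups
    padded = bits.zfill(-(-len(bits) // group) * group)
    return [str(int(padded[i:i + group], 2)) for i in range(0, len(padded), group)]

bits = "101000100111001110"          # 18 items to hold in mind
print(recode_binary_to_octal(bits))  # ['5', '0', '4', '7', '1', '6'] - only 6 chunks
```

The stimulus input is unchanged, but a person who knows the octal recoding only has to hold six chunks rather than eighteen items - the 'stretch' in capacity comes from the recoding scheme, not from a bigger memory.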

Cataloguing visual illusions has been a major thrust of research in perception. Similarly, cataloguing biases has been a major thrust of research in decision making. However, people such as James Gibson have argued that these collections of illusions do not add up to a satisfying theory of how perception works to guide action (e.g., in the control of locomotion). In a similar vein, people such as Gerd Gigerenzer have made the same case for decision making - that collections of biases (the dark side of heuristics) do not add up to a satisfying theory of decision making in everyday life and work. One reason is that everyday life often presents missing and ambiguous data and incommensurate variables that make it difficult or impossible to apply more normative algorithmic approaches.

One result of the work of Tversky and Kahneman in focusing on decision errors is that the term 'heuristic' is often treated as if it is synonymous with 'bias.' Thus, heuristics illustrate the 'weakness' of human cognition - the bounds of rationality. However, Herbert Simon's early work in artificial intelligence treated heuristics as the signature of human intelligence. A computer program was only considered intelligent if it took advantage of heuristics that creatively used problem constraints to find short cuts to solutions - as opposed to applying mathematical algorithms that solved problems by mechanical brute force.

This emphasis on illusions and biases tends to support the MYTH that humans are the weak link in any sociotechnical system and it leads many to seek ways to replace the human with supposedly more reliable 'automated' systems. For example, recent initiatives to introduce autonomous cars often cite 'reducing human errors and making the system safer' as a motivation for pursuing automatic control solutions.

Thus, the general concept of rationality tends to be idealized around the things that computers do well (use precise data to solve complex algorithms and logical puzzles) and it tends to underplay those aspects of rationality where humans excel (detecting patterns and deviations and creatively adapting to surprise/novelty/ambiguity).

CSE recognizes the importance of respecting the bounds of rationality for humans when designing systems - but also appreciates the limitations of automation (and the fact that when the automation fails - it will typically fall to the human to fill the gap). Further, CSE starts with the premise that 'human errors' are best understood in relation to human abilities. On one hand, a catalogue of human errors will never add up to a satisfying understanding of human thinking. On the other hand, a deeper understanding may be possible if the 'errors' are seen as 'bounds' on abilities.  In other words, CSE assumes that a theory of human rationality must start with a consideration of how thinking 'works,' in order to give coherence to the collection of errors that emerge at the margins.

The implication is that CSE tends to be framed in terms of designing to maximize human engagement and to fully leverage human capabilities, as opposed to more classical human factors that tends to emphasize the need to protect systems against human errors and limitations. This does not need to be framed as human versus machine, rather it should be framed in terms of human-machine collaboration. The ultimate design goal is to leverage the strengths of both humans and technologies to create sociotechnical systems that extend the bounds of rationality beyond the range of either of the components.

A rather detailed account of the nineteenth-century history of the steam engine with governor may help the reader to understand both the circuits and the blindness of the inventors. Some sort of governor was added to the early steam engine, but the engineers ran into difficulties. They came to Clerk Maxwell with the complaint that they could not draw a blueprint for an engine with a governor. They had no theoretical base from which to predict how the machine that they had drawn would behave when built and running.

There were several possible sorts of behavior: Some machines went into runaway, exponentially maximizing their speed until they broke or slowing down until they stopped. Others oscillated and seemed unable to settle to any mean. Others - still worse - embarked on sequences of behavior in which the amplitude of their oscillation would itself oscillate or would become greater and greater.

Maxwell examined the problem. He wrote out formal equations for relations between the variables at each successive step around the circuit. He found, as the engineers had found, that combining this set of equations would not solve the problem. Finally, he found that the engineers were at fault in not considering time. Every given system embodied relations to time, that is, it was characterized by time constants determined by the given whole. These constants were not determined by the equations of relationship between successive parts but were emergent properties of the system.

... a subtle change has occurred in the subject of discourse .... It is a difference between talking in a language which a physicist might use to describe how one variable acts upon another and talking in another language about the circuit as a whole which reduces or increases difference. When we say that the system exhibits "steady state" (i.e., that in spite of variation, it retains a median value), we are talking about the circuit as a whole, not about the variations within it. Similarly the question which the engineers brought up to Clerk Maxwell was about the circuit as a whole: How can we plan it to achieve a steady state? They expected the answer to be in terms of relations between the individual variables. What was needed and supplied by Maxwell was an answer in terms of time constants of the total circuit. This was the bridge between the two levels of discourse. Gregory Bateson (2002) p. 99-101.
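A minimal numerical sketch of the point Bateson attributes to Maxwell - the same parts wired into the same loop can settle, oscillate, or run away depending on the timing of the whole circuit (the gain and delay values below are chosen purely for illustration):

```python
def simulate_governor(gain, delay, steps=60, target=1.0):
    """Toy closed loop: speed is corrected using feedback observed 'delay' steps ago.
    The qualitative behavior is a property of the whole circuit (gain x delay),
    not of any individual part."""
    speed, history = 0.0, [0.0] * delay
    trace = []
    for _ in range(steps):
        error = target - history[0]      # feedback based on a delayed observation
        speed += gain * error            # corrective action
        history = history[1:] + [speed]  # the observation lags the action
        trace.append(speed)
    return trace

# Roughly: settles, oscillates without settling, and runs away, respectively.
for gain, delay in [(0.3, 1), (1.0, 2), (1.2, 3)]:
    tail = simulate_governor(gain, delay)[-3:]
    print(f"gain={gain}, delay={delay}: last speeds {[round(v, 2) for v in tail]}")
```

Nothing about any individual component predicts these outcomes; they are emergent properties of the circuit as a whole.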

This is a continuation of the previous posting and the discussion of what is unique to a CSE approach relative to more traditional human factors. This post will address the first line in the table below that is repeated from the prior post.

Norbert Wiener's (1948) classic, introducing the Cybernetic Hypothesis, was subtitled "Control and Communication in the Animal and the Machine." The ideas in this book about control systems (machines that were designed to achieve and maintain a goal state) and communication (the idea that information can be quantified) had a significant impact on the framing of research in psychology. It helped shift the focus from behavior to cognition (e.g., Miller, Galanter, & Pribram, 1960).

However, though psychologists began to include feedback in their images of cognitive systems, the early program of research tended to be dominated by the image of an open-loop communication system and the research program tended to focus on identifying stimulus-response associations or transfer functions (e.g., bandwidth) for each component in a series of discrete information processing stages. A major thrust of this research program was to identify the limitations of each subsystem in terms of storage capacity (7 + or - 2 chunk capacity of working memory) and information processing rates (e.g., Hick-Hyman Law, Fitts' Law).
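For readers who have not seen them, those two laws express processing rate limits as simple logarithmic relations; a small sketch with placeholder constants (the intercepts and slopes below are illustrative, not empirical values):

```python
from math import log2

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman Law: choice reaction time (s) grows with the information
    conveyed by the stimulus, RT = a + b * log2(N) for N equally likely choices."""
    return a + b * log2(n_alternatives)

def fitts_mt(distance, width, a=0.1, b=0.1):
    """Fitts' Law: movement time (s) grows with the index of difficulty,
    MT = a + b * log2(2D / W)."""
    return a + b * log2(2 * distance / width)

print(hick_hyman_rt(8))   # 8 equally likely alternatives = 3 bits of choice
print(fitts_mt(200, 20))  # a 200 mm reach to a 20 mm target
```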

Thus, the Cybernetic Hypothesis inspired psychology to consider "intentions" and other internal aspects associated with thinking; however, it did not free psychology from using the language of the physicist, describing the causal interaction between one variable and another rather than thinking in terms of properties of the circuit as a whole (i.e., appreciating the emergent properties that arise from the coupling of perception and action). The language of information processing psychology was framed in terms of a dyadic semiotic system for processing symbols, as illustrated below.

In contrast, CSE was framed from the start in the language of control theory. This reflected an interest in the role of humans in closing the loop as pilots of aircraft and supervisors of energy production processes. From a control theoretic perspective it was natural to frame the problems of meaning processing as a triadic semiotic system, where the function of cognition was to achieve stable equilibrium with a problem ecology. Note that the triadic semiotic model emerged as a result of the work of functional psychologists (e.g., James & Dewey) and pragmatic philosophers (Peirce), who were most interested in 'mind' as a means for adapting to the pragmatic demands of everyday living.  Dewey's (1896) classic paper on the reflex arc examines the implications of Maxwell's insights (described in the opening quote from Bateson) for psychology:

The discussion up to this point may be summarized by saying that the reflex arc idea, as commonly employed, is defective in that it assumes sensory stimulus and motor response as distinct psychical existences, while in reality they are always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination; and (secondly) in assuming that the quale of experience which precedes the 'motor' phase and that which succeeds it are two different states, instead of the last being always the first reconstituted, the motor phase coming in only for the sake of such mediation. The result is that the reflex arc idea leaves us with a disjointed psychology, whether viewed from the standpoint of development in the individual or in the race, or from that of the analysis of the mature consciousness. As to the former, in its failure to see that the arc of which it talks is virtually a circuit, a continual reconstitution, it breaks continuity and leaves us nothing but a series of jerks, the origin of each jerk to be sought outside the process of experience itself, in either an external pressure of 'environment,' or else in an unaccountable spontaneous variation from within the 'soul' or the 'organism.'  As to the latter, failing to see unity of activity, no matter how much it may prate of unity, it still leaves us with sensation or peripheral stimulus; idea, or central process (the equivalent of attention); and motor response, or act, as three disconnected existences, having to be somehow adjusted to each other, whether through the intervention of an extra experimental soul, or by mechanical push and pull.

Many cognitive scientists and many human factors engineers continue to speak in the language associated with causal, stimulus-response interactions (i.e., jerks) without an appreciation for the larger system in which perception and action are coupled. They are still hoping to concatenate these isolated pieces into a more complete picture of cognition. In contrast, CSE starts with a view of the whole - of the coupling of perception and action through an ecology - as a necessary context from which to appreciate variations at more elemental levels.

In the early 1960s, we realized from analyses of industrial accidents the need for an integrated approach to the design of human-machine systems. However, we very rapidly encountered great difficulties in our efforts to bridge the gap between the methodology and concepts of control engineering and those from various branches of psychology. Because of its kinship to classical experimental psychology and its behavioristic claim for exclusive use of objective data representing overt activity, the traditional human factors field had very little to offer (Rasmussen, p. ix, 1986).


Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. (Norman, p. 31, 1986).


The growth of computer applications has radically changed the nature of the man-machine interface. First, through increased automation, the nature of the human’s task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities, e.g. problem solving and decision making…. Second, through the increasing sophistication of computer applications, the man-machine interface is gradually becoming the interaction of two cognitive systems. (Hollnagel & Woods, p. 340, 1999).

As reflected in the above quotes, through the 1980s and 90s there was a growing sense that the nature of human-machine systems was changing, and that this change was creating the demand for a new approach to analysis and design. This new approach was given the label of Cognitive Engineering (CE) or Cognitive Systems Engineering (CSE). Table 1 illustrates one way to characterize the changing role of the human factor in sociotechnical systems. As technologies became more powerful with respect to information processing capabilities, the role of the humans increasingly involved supervision and fault detection. Thus, there was increasing demand for humans to relate the activities of the automated processes to functional goals and values (i.e., pragmatic meaning) and to intervene when circumstances arose (e.g., faults or unexpected situations) such that the automated processes were no longer serving the design goals for the system. Under these unexpected circumstances, it was often necessary for the humans to create or invent new procedures on the fly, in order to avoid potential hazards or to take advantage of potential opportunities. This demand to relate activities to functional goals and values and to improvise new procedures required meaning processing.

In a following series of posts I will elaborate the differences between information processing and meaning processing as outlined in Table 1. However, before I get into the specific contrasts, it is important to emphasize that relative to early human factors, CSE is an evolutionary change, NOT a revolutionary change. That is, the concerns about information processing that motivated earlier human factors efforts were not wrong and they continue to be important with regards to design. The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.

The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.

The point of CSE is that design should not simply be about avoiding overloading the limited information capacities of humans, but it should also seek to leverage the unique meaning processing capabilities of humans. These meaning processing capabilities reflect people's ability to make sense of the world relative to their values and intentions, to adapt to surprises, and to improvise in order to take advantage of new opportunities or to avoid new threats. In following posts I will make the case that the overall vision of a meaning processing approach is more expansive than an information processing approach. This broader context will sometimes change the significance of specific information limitations. It will also provide a wider range of options for by-passing these limitations and for innovating to improve the range and quality of performance of sociotechnical systems.

In other words, I will argue for a systems perspective - where the information processing limitations must be interpreted in light of the larger meaning processing context. I will argue that constructs such as 'expertise' can only be fully understood in terms of qualities that emerge at the level of meaning processing.

Hollnagel, E. & Woods, D.D. (1999). Cognitive systems engineering: New wine in new bottles. International Journal of Human-Computer Studies, 51, 339-356.

Norman, D.A. (1986). Cognitive Engineering. In D.A. Norman & S.W. Draper (Eds.), User-Centered System Design (pp. 31-61). Hillsdale, NJ: Erlbaum.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North Holland.

Another giant in the field of Cognitive Systems Engineering has been lost. Jens Rasmussen created one of the most comprehensive and integrated foundations for understanding the dynamics of sociotechnical systems available today. Drawing from the fields of semiotics (Eco), control engineering, and human performance, his framework was based on a triadic semiotic dynamic, which he parsed in terms of three overlapping perspectives. The Abstraction Hierarchy (AH) provided a way to characterize the system relative to a problem ecology and the consequences and possibilities for action. He introduced the constructs of Skills, Rules, and Knowledge (SRK) as a way to emphasize the constraints on the system from the perspective of the observers-actors. Finally, he introduced the construct of Ecological Interface Design (EID) as a way to emphasize the constraints on the system from the perspective of representations.

Jens had a comprehensive vision of the sociotechnical system, and we have only begun to plumb the depths of this framework and to fully appreciate its value as both a basic theory of systems and as a pragmatic guide for the engineering and design of safer, more efficient systems.

Jens' death is a very personal loss for me. He was a valued mentor who saw potential in a very naive, young researcher long before it was evident or deserved. He opened doors for me and created opportunities that proved to be essential steps in my education and professional development. Although I may never realize the potential that Jens envisioned, he set me on a path that has proved to be both challenging and satisfying. I am a better man for having had him as a friend and mentor.

A website has been created to allow future researchers to benefit from Jens' work.


The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape the fields that we know today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville.  From his examples, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website.


Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power. (Sinek, 2009, p. 160)

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans - meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) has emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) are different than 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent save. However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not 'pick-up' the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer, for a sign the meaning has been established by a priori rules (social or cultural conventions).

for a sign the meaning has been established by a priori rules (social or cultural conventions)

Signals (as used by Rasmussen) are different than both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the information bases for braking your automobile to avoid a potential collision, or for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by the invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g. avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.
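As one concrete instance, the tau variable mentioned above is just the optical angle an object subtends divided by its rate of expansion; under an (idealized) constant closing speed it specifies time-to-contact without any knowledge of the object's size, distance, or approach speed. A rough sketch with illustrative numbers:

```python
def tau(optical_angle, angle_rate):
    """Lee's tau: time-to-contact specified by optical expansion alone,
    tau = theta / (d theta / dt), for a small angle and constant closing velocity."""
    return optical_angle / angle_rate

# Illustrative values: an object subtending 0.05 rad, expanding at 0.02 rad/s.
# The observer needs no estimate of its size, distance, or speed.
print(tau(0.05, 0.02))  # ~2.5 seconds to contact
```

The meaningful quantity - time until contact - is available directly in the optical structure; it does not have to be constructed from an internal model of the world.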

for a signal the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked-up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.

Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.