
“An ant, viewed as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself.” — Herbert Simon

In some respects, it is debatable whether psychology completely escaped the strictures of a Behaviorist perspective with the dawning of the information processing age. Although the computer and information processing metaphors legitimized the study of mental phenomena, these phenomena tended to be visualized as activities or mental behaviors (e.g., encoding, remembering, planning, deciding). Thus, for human factors, cognitive task analysis tended to focus on specifying the mental activities associated with work.

However, as Simon's parable of the ant suggests, this can create the appearance that cognition itself is very complex, when in reality the ultimate source of that complexity may lie not in the cognitive system, but in the work situations that provide the context for the activity (i.e., the beach). Thus, Simon's parable is the motivation for work analysis as a necessary complement to task analysis. Work analysis describes the functional constraints (e.g., goals, affordances, regulatory constraints, social constraints) that shape the physical and mental activities of workers; it describes work situations as the necessary context for evaluating both awareness (rationality) and behavior.

While classical task analysis describes what people do, it provides little insight into why they do it. This is the primary value of work analysis: to provide insight into the functional context shaping work behaviors.

A common experience upon watching activities in an unfamiliar work domain is puzzlement at activities that seem somewhat irrational. Why did they do that? However, as one becomes more familiar with the domain, one often discovers that these puzzling activities are actually smart responses to constraints that the experienced workers were attuned to - but that were invisible to an outsider.

Thus, for those of us who see humans as incredibly adaptive systems - it is natural to look to the ecology that the humans are adapting to as the first source for hypotheses to explain that behavior. And for those of us who hope to design information technologies that enhance that adaptive capacity - it is critical that the tools not simply be tuned to human capabilities, but that these tools are also tuned to the demands of the work ecology. For example, a shovel must not only fit the human hands, but it must also fit well with the materials that are to be manipulated.

Thus, work analysis is essential for Cognitive Systems Engineering. It reflects the belief that understanding situations is a prerequisite for understanding awareness.

This is the fourth in a series of entries to explore the differences between an Information Processing Approach to Human Factors and a Meaning Processing Approach to Cognitive Systems Engineering. The table below lists some contrasts between these two perspectives. This entry will focus on the third contrast in the table - the shift from a focus on 'workload' to a focus on 'situation awareness.'

The concept of information developed in this theory at first seems disappointing and bizarre - disappointing because it has nothing to do with meaning, and bizarre because it deals not with a single message but rather with the statistical character of a whole ensemble of messages, bizarre also because in these statistical terms the two words information and uncertainty find themselves to be partners. Warren Weaver (1963, p. 27)

The construct of 'workload' is a natural focus for an approach that emphasizes describing and quantifying the internal constraints of the human, and that assumes these constraints are independent of the particulars of any specific situation or work context. This fits well with the engineering perspective on quantifying information and specifying the capacity of fixed information channels, as developed by Shannon. However, the downside of this perspective is that in making the construct of workload independent of 'context,' it also becomes independent of 'meaning,' as suggested in the above quote from Warren Weaver.
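Weaver's point can be made concrete with a few lines of Python. The sketch below computes Shannon's entropy for two message ensembles that are invented for illustration; because they share the same probability distribution, the measure assigns them identical information content no matter how different their meanings are.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a message ensemble."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two ensembles with identical statistics but very different meanings:
alarms = {"all clear": 0.5, "fire": 0.25, "flood": 0.125, "meltdown": 0.125}
weather = {"sunny": 0.5, "cloudy": 0.25, "rain": 0.125, "snow": 0.125}

print(entropy_bits(alarms.values()))   # 1.75 bits
print(entropy_bits(weather.values()))  # 1.75 bits -- the measure is blind to meaning
```

The measure characterizes only the statistical structure of the ensemble, which is exactly why it travels so easily into context-free constructs like channel capacity and workload.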

Those interested in the impact of context on human cognition became dissatisfied with a framework that focused only on internal constraints (e.g., bandwidth, resources, modality) without consideration for how those constraints interacted with situations. Thus, the construct of Situation Awareness (SA) evolved as an alternative to workload. Unfortunately, many who have been steeped in the information processing tradition have framed SA in terms of internal constraints (e.g., treating levels of SA as components internal to the processing system).

However, others have taken the construct of SA as an opportunity to consider the dynamic couplings of humans and work domains (or situations).  For them, the construct of SA reflects a need to 'situate' cognition within a work ecology and to consider how constraints in that ecology create demands and opportunities for cognitive systems. In this framework, it is assumed that cognitive systems can intelligently adapt to the constraints of situations - utilizing structure in situations to 'chunk' information and as the basis for smart heuristics that reduce the computational burden, allowing people to deal effectively with situations that would overwhelm the channel capacity of a system not tuned to these structural constraints (see aiming off example).

There is no question that humans have limited working memory capacity as suggested by the workload construct. However, CSE recognizes the ability of people to discover and use situated constraints (e.g., patterns) in ways that allow them to do complex work (e.g., play chess, pilot aircraft, drive in dense traffic, play a musical instrument) despite these internal constraints. It is this capacity to attune to structure associated with specific work domains that leads to expert performance.

The design implication of an approach that focuses on workload is to protect the system against human limitations (e.g., bottlenecks) by either distributing the work among multiple people or by replacing humans with automated systems with higher bandwidth. The key is to make sure that people are not overwhelmed by too much data!

The design implication of an approach that focuses on SA is to make the meaningful work domain constraints salient in order to facilitate attunement processes. This can be done through the design of interfaces or through training. The result is to heighten human engagement with the domain structure to facilitate skill and expertise. The key is to make sure that people are well-tuned to the meaningful aspects of work (e.g., constraints and patterns) that allow them to 'see' what needs to be done.


First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.

Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences. George Miller (1956) p. 96-97, (emphasis added).

This continues the discussion about the differences between CSE (meaning processing) and more classical HF (information processing) approaches to human performance summarized in the table below. This post will focus on the second line in the table - shifting emphasis from human limitations to include greater consideration of human capabilities.

Anybody who has taken an intro class in Psychology or Human Factors is familiar with Miller's famous number 7 plus or minus 2 chunks that specifies the capacity of working memory. This is one of the few numbers that we can confidently provide to system designers as a spec of the human operator that needs to be respected. However, this number has little practical value for design unless you can also specify what constitutes a 'chunk.'

Although people know Miller's number, few appreciate the important points that Miller makes in the second half of the paper about the power of 'recoding' and the implications for the functional capacity of working memory. As noted in the opening quote to this blog - people have the ability to 'stretch' memory capacity through chunking. The intro texts emphasize the "limitation," but much less attention has been paid to the recoding "capability" that allows experts to extend their functional memory capacity to deal with large amounts of information (e.g., that allows an expert chess player to recall all the pieces on a chess board based on a very short glance).
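Miller's own illustration of recoding involved regrouping binary digits into higher-order chunks. A minimal sketch of that idea (the bit string below is arbitrary, chosen only for illustration):

```python
def recode_binary_to_octal(bits):
    """Recode a binary string into octal 'chunks': every 3 bits become 1 symbol."""
    assert len(bits) % 3 == 0, "pad to a multiple of 3 bits before recoding"
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

bits = "101000100111001110"            # 18 items -- well beyond 7 +/- 2
chunks = recode_binary_to_octal(bits)  # '504716' -- 6 chunks, within the span
```

The number of items held in memory drops by a factor of three, yet the original sequence is fully recoverable; the 'capacity' that matters is measured in chunks, and what counts as a chunk depends on the code the person has learned.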

Cataloguing visual illusions has been a major thrust of research in perception. Similarly, cataloguing biases has been a major thrust of research in decision making. However, people such as James Gibson have argued that these collections of illusions do not add up to a satisfying theory of how perception works to guide action (e.g., in the control of locomotion). In a similar vein, people such as Gerd Gigerenzer have argued that collections of biases (the dark side of heuristics) do not add up to a satisfying theory of decision making in everyday life and work. One reason is that everyday life often presents missing and ambiguous data and incommensurate variables that make it difficult or impossible to apply more normative algorithmic approaches.

One result of the work of Tversky and Kahneman in focusing on decision errors is that the term 'heuristic' is often treated as if it is synonymous with 'bias.' Thus, heuristics illustrate the 'weakness' of human cognition - the bounds of rationality. However, Herbert Simon's early work in artificial intelligence treated heuristics as the signature of human intelligence. A computer program was only considered intelligent if it took advantage of heuristics that creatively used problem constraints to find short cuts to solutions - as opposed to applying mathematical algorithms that solved problems by mechanical brute force.

This emphasis on illusions and biases tends to support the MYTH that humans are the weak link in any sociotechnical system and it leads many to seek ways to replace the human with supposedly more reliable 'automated' systems. For example, recent initiatives to introduce autonomous cars often cite 'reducing human errors and making the system safer' as a motivation for pursuing automatic control solutions.

Thus, the general concept of rationality tends to be idealized around the things that computers do well (use precise data to solve complex algorithms and logical puzzles) and it tends to underplay those aspects of rationality where humans excel (detecting patterns and deviations and creatively adapting to surprise/novelty/ambiguity).

CSE recognizes the importance of respecting the bounds of rationality for humans when designing systems - but also appreciates the limitations of automation (and the fact that when the automation fails - it will typically fall to the human to fill the gap). Further, CSE starts with the premise that 'human errors' are best understood in relation to human abilities. On one hand, a catalogue of human errors will never add up to a satisfying understanding of human thinking. On the other hand, a deeper understanding may be possible if the 'errors' are seen as 'bounds' on abilities.  In other words, CSE assumes that a theory of human rationality must start with a consideration of how thinking 'works,' in order to give coherence to the collection of errors that emerge at the margins.

The implication is that CSE tends to be framed in terms of designing to maximize human engagement and to fully leverage human capabilities, as opposed to more classical human factors that tends to emphasize the need to protect systems against human errors and limitations. This does not need to be framed as human versus machine, rather it should be framed in terms of human-machine collaboration. The ultimate design goal is to leverage the strengths of both humans and technologies to create sociotechnical systems that extend the bounds of rationality beyond the range of either of the components.

A rather detailed account of the nineteenth-century history of the steam engine with governor may help the reader to understand both the circuits and the blindness of the inventors. Some sort of governor was added to the early steam engine, but the engineers ran into difficulties. They came to Clerk Maxwell with the complaint that they could not draw a blueprint for an engine with a governor. They had no theoretical base from which to predict how the machine that they had drawn would behave when built and running.

There were several possible sorts of behavior: Some machines went into runaway, exponentially maximizing their speed until they broke or slowing down until they stopped. Others oscillated and seemed unable to settle to any mean. Others - still worse - embarked on sequences of behavior in which the amplitude of their oscillation would itself oscillate or would become greater and greater.

Maxwell examined the problem. He wrote out formal equations for relations between the variables at each successive step around the circuit. He found, as the engineers had found, that combining this set of equations would not solve the problem. Finally, he found that the engineers were at fault in not considering time. Every given system embodied relations to time, that is, was characterized by time constants determined by the given whole. These constants were not determined by the equations of relationship between successive parts but were emergent properties of the system.

... a subtle change has occurred in the subject of discourse .... It is a difference between talking in a language which a physicist might use to describe how one variable acts upon another and talking in another language about the circuit as a whole which reduces or increases difference. When we say that the system exhibits "steady state" (i.e., that in spite of variation, it retains a median value), we are talking about the circuit as a whole, not about the variations within it. Similarly the question which the engineers brought up to Clerk Maxwell was about the circuit as a whole: How can we plan it to achieve a steady state? They expected the answer to be in terms of relations between the individual variables. What was needed and supplied by Maxwell was an answer in terms of time constants of the total circuit. This was the bridge between the two levels of discourse. Gregory Bateson (2002), pp. 99-101.
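The qualitative point, that settling, oscillating, or running away is a property of the loop as a whole, can be sketched with a toy discrete feedback loop. This is an invented illustration, not Maxwell's equations: the single parameter here plays the role his time constants played.

```python
def simulate(gain, steps=40, setpoint=1.0):
    """Toy discrete feedback loop: x <- x + gain * (setpoint - x).

    The error (setpoint - x) is multiplied by (1 - gain) each step, so the
    loop's fate is a property of the whole circuit, not of any one part."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += gain * (setpoint - x)
        trace.append(x)
    return trace

simulate(0.5)   # settles smoothly toward the setpoint
simulate(1.8)   # overshoots and oscillates, but the oscillation dies out
simulate(2.2)   # runaway: each swing is larger than the last
```

No inspection of the update rule for a single step distinguishes the three cases; the distinction only appears at the level of the circuit over time, which is Bateson's point about the two levels of discourse.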

This is a continuation of the previous posting and the discussion of what is unique to a CSE approach relative to more traditional human factors. This post will address the first line in the table below that is repeated from the prior post.

Norbert Wiener's (1948) classic, introducing the Cybernetic Hypothesis, was subtitled "On Control and Communication in the Animal and Machine." The ideas in this book about control systems (machines that were designed to achieve and maintain a goal state) and communication (the idea that information can be quantified) had a significant impact on the framing of research in psychology. It helped shift the focus from behavior to cognition (e.g., Miller, Galanter, & Pribram, 1960).

However, though psychologists began to include feedback in their images of cognitive systems, the early program of research tended to be dominated by the image of an open-loop communication system and the research program tended to focus on identifying stimulus-response associations or transfer functions (e.g., bandwidth) for each component in a series of discrete information processing stages. A major thrust of this research program was to identify the limitations of each subsystem in terms of storage capacity (7 + or - 2 chunk capacity of working memory) and information processing rates (e.g., Hick-Hyman Law, Fitts' Law).
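The flavor of these 'spec'-style laws is easy to show in code. The sketch below states both laws in their textbook form; the coefficients are illustrative placeholders, not empirical values, since in practice a and b are fit to a particular task and population.

```python
import math

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman law: choice reaction time grows with the information
    in the choice, RT = a + b * log2(N)."""
    return a + b * math.log2(n_alternatives)

def fitts_mt(distance, width, a=0.1, b=0.1):
    """Fitts' law: movement time grows with the index of difficulty,
    MT = a + b * log2(2D / W)."""
    return a + b * math.log2(2 * distance / width)

hick_hyman_rt(8)     # 3 bits of choice uncertainty
fitts_mt(16.0, 2.0)  # index of difficulty = log2(16) = 4 bits
```

Both laws quantify the human as a fixed-capacity channel: performance cost scales with bits, regardless of what the alternatives or targets mean.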

Thus, the Cybernetic Hypothesis inspired psychology to consider "intentions" and other internal aspects associated with thinking; however, it did not free psychology from the language of the physicist, describing the causal interaction between one variable and another rather than thinking in terms of properties of the circuit as a whole (i.e., appreciating the emergent properties that arise from the coupling of perception and action). The language of information processing psychology was framed in terms of a dyadic semiotic system for processing symbols, as illustrated below.

In contrast, CSE was framed from the start in the language of control theory. This reflected an interest in the role of humans in closing the loop as pilots of aircraft and supervisors of energy production processes. From a control theoretic perspective it was natural to frame the problems of meaning processing as a triadic semiotic system, where the function of cognition was to achieve stable equilibrium with a problem ecology. Note that the triadic semiotic model emerged as a result of the work of functional psychologists (e.g., James & Dewey) and pragmatic philosophers (Peirce), who were most interested in 'mind' as a means for adapting to the pragmatic demands of everyday living.  Dewey's (1896) classic paper on the reflex arc examines the implications of Maxwell's insights (described in the opening quote from Bateson) for psychology:

The discussion up to this point may be summarized by saying that the reflex arc idea, as commonly employed, is defective in that it assumes sensory stimulus and motor response as distinct psychical existences, while in reality they are always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination; and (secondly) in assuming that the quale of experience which precedes the 'motor' phase and that which succeeds it are two different states, instead of the last being always the first reconstituted, the motor phase coming in only for the sake of such mediation. The result is that the reflex arc idea leaves us with a disjointed psychology, whether viewed from the standpoint of development in the individual or in the race, or from that of the analysis of the mature consciousness. As to the former, in its failure to see that the arc of which it talks is virtually a circuit, a continual reconstitution, it breaks continuity and leaves us nothing but a series of jerks, the origin of each jerk to be sought outside the process of experience itself, in either an external pressure of 'environment,' or else in an unaccountable spontaneous variation from within the 'soul' or the 'organism.'  As to the latter, failing to see unity of activity, no matter how much it may prate of unity, it still leaves us with sensation or peripheral stimulus; idea, or central process (the equivalent of attention); and motor response, or act, as three disconnected existences, having to be somehow adjusted to each other, whether through the intervention of an extra experimental soul, or by mechanical push and pull.

Many cognitive scientists and many human factors engineers continue to speak in the language associated with causal, stimulus-response interactions (i.e., jerks) without an appreciation for the larger system in which perception and action are coupled. They are still hoping to concatenate these isolated pieces into a more complete picture of cognition. In contrast, CSE starts with a view of the whole - of the coupling of perception and action through an ecology - as the necessary context from which to appreciate variations at more elemental levels.

In the early 1960s, we realized from analyses of industrial accidents the need for an integrated approach to the design of human-machine systems. However, we very rapidly encountered great difficulties in our efforts to bridge the gap between the methodology and concepts of control engineering and those from various branches of psychology. Because of its kinship to classical experimental psychology and its behavioristic claim for exclusive use of objective data representing overt activity, the traditional human factors field had very little to offer (Rasmussen, p. ix, 1986).


Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. (Norman, p. 31, 1986).


The growth of computer applications has radically changed the nature of the man-machine interface. First, through increased automation, the nature of the human’s task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities, e.g. problem solving and decision making…. Second, through the increasing sophistication of computer applications, the man-machine interface is gradually becoming the interaction of two cognitive systems. (Hollnagel & Woods, p. 340, 1999).

As reflected in the above quotes, through the 1980s and 90s there was a growing sense that the nature of human-machine systems was changing, and that this change was creating the demand for a new approach to analysis and design. This new approach was given the label of Cognitive Engineering (CE) or Cognitive Systems Engineering (CSE). Table 1 illustrates one way to characterize the changing role of the human factor in sociotechnical systems. As technologies became more powerful with respect to information processing capabilities, the role of the humans increasingly involved supervision and fault detection. Thus, there was increasing demand for humans to relate the activities of the automated processes to functional goals and values (i.e., pragmatic meaning) and to intervene when circumstances arose (e.g., faults or unexpected situations) such that the automated processes were no longer serving the design goals for the system. Under these unexpected circumstances, it was often necessary for the humans to create or invent new procedures on the fly, in order to avoid potential hazards or to take advantage of potential opportunities. This demand to relate activities to functional goals and values and to improvise new procedures required meaning processing.

In a following series of posts I will elaborate the differences between information processing and meaning processing as outlined in Table 1. However, before I get into the specific contrasts, it is important to emphasize that relative to early human factors, CSE is an evolutionary change, NOT a revolutionary change. That is, the concerns about information processing that motivated earlier human factors efforts were not wrong and they continue to be important with regards to design. The point of CSE is not that an information processing approach is wrong, but rather that it is insufficient.


The point of CSE is that design should not simply be about avoiding overloading the limited information capacities of humans, but should also seek to leverage the unique meaning processing capabilities of humans. These meaning processing capabilities reflect people's ability to make sense of the world relative to their values and intentions, to adapt to surprises, and to improvise in order to take advantage of new opportunities or to avoid new threats. In following posts I will make the case that the overall vision of a meaning processing approach is more expansive than an information processing approach. This broader context will sometimes change the significance of specific information limitations. It will also provide a wider range of options for bypassing these limitations and for innovating to improve the range and quality of performance of sociotechnical systems.

In other words, I will argue for a systems perspective - where the information processing limitations must be interpreted in light of the larger meaning processing context. I will argue that constructs such as 'expertise' can only be fully understood in terms of qualities that emerge at the level of meaning processing.

Hollnagel, E. & Woods, D.D. (1999). Cognitive Systems Engineering: New wine in new bottles. International Journal of Human-Computer Studies, 51, 339-356.

Norman, D.A. (1986). Cognitive Engineering. In D.A. Norman & S.W. Draper (Eds.), User-Centered System Design (pp. 31-61). Hillsdale, NJ: Erlbaum.

Rasmussen, J. (1986). Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North Holland.

Another giant in the field of Cognitive Systems Engineering has been lost. Jens Rasmussen created one of the most comprehensive and integrated foundations for understanding the dynamics of sociotechnical systems available today. Drawing from the fields of semiotics (Eco), control engineering, and human performance, his framework was based on a triadic semiotic dynamic, which he parsed in terms of three overlapping perspectives. The Abstraction Hierarchy (AH) provided a way to characterize the system relative to a problem ecology and the consequences and possibilities for action. He introduced the Skills, Rules, and Knowledge (SRK) framework as a way to emphasize the constraints on the system from the perspective of the observers-actors. Finally, he introduced the construct of Ecological Interface Design (EID) as a way to emphasize the constraints on the system from the perspective of representations.

Jens had a comprehensive vision of the sociotechnical system, and we have only begun to plumb the depths of this framework and to fully appreciate its value as both a basic theory of systems and as a pragmatic guide for the engineering and design of safer, more efficient systems.

Jens' death is a very personal loss for me. He was a valued mentor who saw potential in a very naive, young researcher long before it was evident or deserved. He opened doors for me and created opportunities that proved to be essential steps in my education and professional development. Although I may never realize the potential that Jens envisioned, he set me on a path that has proved to be both challenging and satisfying. I am a better man for having had him as a friend and mentor.

A website has been created to allow future researchers to benefit from Jens' work.


The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape the fields that we know today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville.  From his examples, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website.


Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power. (Sinek, 2009, p. 160)

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans - meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) differ from 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent save. However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not 'pick up' the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols,' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer; for a sign, the meaning has been established by a priori rules (social or cultural conventions).

for a sign the meaning has been established by a priori rules (social or cultural conventions)

Signals (as used by Rasmussen) differ from both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the informational bases for braking your automobile to avoid a potential collision, for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by its invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g., avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.

for a signal the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.
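The 'tau' variable mentioned above (commonly associated with David Lee's work on time-to-contact) makes the point concrete: time-to-contact is specified directly by the ratio of an object's optical angle to its rate of expansion, with no need to know distance or speed. A small sketch, with numbers chosen only for illustration:

```python
def tau(optical_angle, angular_rate):
    """Time-to-contact = theta / (d theta / dt), read directly from the
    optical flow; no estimate of distance or speed is required."""
    return optical_angle / angular_rate

# Geometry check: an object of size s at distance d, closing at speed v,
# subtends theta ~ s / d (small angles) and expands at theta_dot = s * v / d**2.
s, d, v = 2.0, 50.0, 10.0
theta, theta_dot = s / d, s * v / d ** 2
tau(theta, theta_dot)  # 5.0 seconds, exactly d / v
```

The meaning (imminence of contact) is carried by a relation in the optical array itself, which is why an optical sensor with no 'mind' at all can exploit the same quantity.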

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of the mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.


I just finished Simon Sinek's (2009) "Start with Why." I was struck by the similarities between Sinek's 'Golden Circle' and Jens Rasmussen's 'Abstraction Hierarchy' (see figure). Both parse systems in terms of a hierarchical association between why, what, and how.

For Rasmussen, 'what' represented the level of the system being attended; 'why' represented a higher level of abstraction that reflected the significance of the 'what' relative to the whole system, with the highest level of abstraction reflecting the ultimate WHY of a system - its purpose. In Rasmussen's system, 'how' represented a more concrete description of significant components of the level being attended (e.g., the component processes serving the 'what' level above).

Sinek's 'why' corresponds with the pinnacle of Rasmussen's Abstraction Hierarchy. It represents the ultimate purpose of the system. However, Sinek reverses Rasmussen's 'what' and 'how.' For Sinek, 'how' represents the processes serving the 'why,' and the 'what' represents the products of these processes.

Although I have been teaching Rasmussen's approach to Cognitive Systems Engineering (CSE) for over 30 years, I think that Sinek's WHY-HOW-WHAT partitioning conforms more naturally with common usage of the terms 'how' and 'what.' So, I think this is a pedagogical improvement on Rasmussen's framework.

However, I found that the overall gist of Sinek's "Start with Why" reinforced many of the central themes of CSE. That is, for a 'cognitive system' the purpose (i.e., the WHY) sets the ultimate context for parsing the system (e.g., processes and objects) into meaningful components. This is an important contrast to classical (objective) scientific approaches to physical systems. Classical scientific approaches have dismissed the 'why' as subjective! The 'why' reflected the 'biases' of observers. But for a cognitive science, the observers are the objects of study!

Thus, cognitive science and cognitive engineering must always start with WHY!

Early American Functionalist psychologists, such as William James and John Dewey, viewed cognition through a Pragmatic lens. Thus, for them cognition involved making sense of the world in terms of its functional significance: What can be done? What will the consequences be? More recently, James Gibson (1979) introduced the word “affordance” to reflect this functionalist perspective, where the term describes an object relative to the possible actions that can be performed on or with the object and the possible consequences of those actions. Don Norman (1988) introduced the concept of affordance to designers, who have found it to be a useful concept for thinking about how a design is experienced by people.

Formalizing Functional Structure

This Functionalist view of the world has been formalized by the philosopher William Wimsatt (1972), in terms of seven dimensions or attributes for characterizing any object, and by the cognitive systems engineer Jens Rasmussen (1986), in terms of an Abstraction-Decomposition Space. Figure 1 illustrates some of the parallels between these two methods for characterizing the functional properties of an object. The vertical dimension of the Abstraction-Decomposition Space reflects five levels of abstraction that are coupled in terms of a nesting of means-ends constraints. The top level, Functional Purpose, specifies the value constraints on the functional system – what is the ultimate value that is achievable, or what is the intended goal or purpose? As you move to lower levels in this hierarchy, the focus is successively narrowed to the specific physical properties of objects at the lowest level of abstraction, Physical Form.

Figure 1. An illustration of how Wimsatt’s functional attributes map into Rasmussen’s Abstraction-Decomposition Space.

An important inspiration for creating the Abstraction-Decomposition Space was Rasmussen’s observations of the reasoning processes of people doing troubleshooting or fault diagnosis. He observed that the reasoning tended to move along the diagonal in this space. People tended to consider holistic properties of a system at high levels of abstraction (e.g., the primary function of an electronic device) in order to make sense of relations at lower levels of abstraction (e.g., the arrangements of parts). In essence, higher levels of abstraction tended to provide the context for understanding WHY the parts were configured in a certain way. People tended to consider lower levels of abstraction to understand how the arrangements of parts served the higher-level purposes. In essence, lower levels of abstraction provided clues to HOW a particular function would be achieved.

Rasmussen found that in the process of troubleshooting an electronic system, the reasoning tended to move up and down the diagonal of the Abstraction-Decomposition Space. Moving up in abstraction tended to broaden the perspective and to suggest dimensions for selecting properties at lower levels. In essence, the higher level was a kind of filter that determined significance at the lower levels. This filter effectively guided attention and determined how to chunk information and what attributes should be salient at the lower levels. Thus, in the process of diagnosing a fault, experts tended to shift attention across different levels of abstraction until eventually zeroing in on the specific location of a fault (e.g., finding the break in the circuit or the failed part).
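The diagonal traversal described above can be sketched as movement over an ordered set of levels, where the adjacent level above supplies the WHY of the current level and the adjacent level below supplies the HOW. A minimal sketch, assuming Rasmussen's standard level names (the helper functions are illustrative, not part of his formalism):

```python
from typing import Optional

# Rasmussen's five levels of abstraction, ordered from value constraints
# at the top down to specific physical detail at the bottom.
ABSTRACTION_LEVELS = [
    "Functional Purpose",  # goals and value constraints
    "Abstract Function",   # lawful constraints (e.g., physical laws)
    "General Function",    # organizational roles and processes
    "Physical Function",   # capabilities of components
    "Physical Form",       # specific physical properties and locations
]

def why(level: str) -> Optional[str]:
    """The adjacent higher level provides the WHY for the current level."""
    i = ABSTRACTION_LEVELS.index(level)
    return ABSTRACTION_LEVELS[i - 1] if i > 0 else None

def how(level: str) -> Optional[str]:
    """The adjacent lower level provides the HOW for the current level."""
    i = ABSTRACTION_LEVELS.index(level)
    return ABSTRACTION_LEVELS[i + 1] if i + 1 < len(ABSTRACTION_LEVELS) else None
```

In these terms, fault diagnosis is a walk along this list: moving up to filter what is significant, moving down to localize where a function fails.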

Wimsatt’s formalism for characterizing an object or item in functional terms is summarized in the following statement:

According to theory T, a function of item i, in producing behaviour B, in system S in environment E relative to purpose P is to bring about consequence C.
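This schema can be rendered as a simple record with one field per attribute. The sketch below is illustrative: the field names and the wing example are mine, not Wimsatt's own notation.

```python
from dataclasses import dataclass

@dataclass
class FunctionStatement:
    """One instance of Wimsatt's (1972) schema for functional ascription.
    Field names map to his seven attributes; they are not his notation."""
    theory: str       # T
    item: str         # i
    behavior: str     # B
    system: str       # S
    environment: str  # E
    purpose: str      # P
    consequence: str  # C

    def render(self) -> str:
        """Fill the attributes into Wimsatt's template sentence."""
        return (f"According to theory {self.theory}, a function of item "
                f"{self.item}, in producing behaviour {self.behavior}, "
                f"in system {self.system} in environment {self.environment} "
                f"relative to purpose {self.purpose} is to bring about "
                f"consequence {self.consequence}.")

# A hypothetical instance, using the wing example discussed in the text.
wing = FunctionStatement(
    theory="aerodynamics", item="the wing", behavior="generating lift",
    system="the aircraft", environment="flight", purpose="safe travel",
    consequence="sufficient lift for controlled flight")
```

Writing the schema this way makes the point of Figure 1 concrete: each attribute is a separate slot, and each slot lands at a different location in the Abstraction-Decomposition Space.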

Figure 1 suggests how Wimsatt’s seven functional attributes of an object might fit within Rasmussen’s Abstraction-Decomposition Space. The object or item (i) as a physical entity corresponds with the lowest level of abstraction and the most specific level of decomposition. The purpose (P) corresponds with the highest level of abstraction at a more global level of decomposition. The Theory (T) and System (S) attributes introduce additional constraints for making sense of the relation between the object and the purpose. Theory (T) provides the kind of holonomic constraints (e.g., physical laws) that Rasmussen considered at the Abstract Function level. These constraints set limits on how a purpose might be achieved (e.g., the laws of aerodynamics set constraints on how airplanes or wings can serve the purpose of safe travel). The System (S) attributes provide the kind of organizational constraints that Rasmussen considered at the General Function level. These constraints describe the object’s role in relation to other parts of a system in order to serve the higher-level Purpose (P) (e.g., a general function of the wing is to generate lift). The Behavior (B) attribute fits with Rasmussen’s Physical Function level, which describes the physical constraints relative to the object’s role as a part of the organization (e.g., here the distinction between fixed and rotary wings comes into play). The Environment (E) attribute crosses levels of abstraction as a way of providing the ‘context of use’ for the object. Finally, the Consequence (C) attribute provides the specific effect that the object produces relative to achieving the purpose (e.g., the specific lift coefficient for a wing of a certain size and shape).

While the details of the mapping in Figure 1 might be debated, there seems to be little doubt that the formalisms suggested by Wimsatt and Rasmussen reflect very similar intuitions: the process of making sense of the world is rooted in a functionalist perspective in which ‘meaning’ is grounded in a network of means-ends relations that associates objects with the higher-level purposes and values they might serve. This connection between ‘meaning’ and higher levels of abstraction has also been recognized by S.I. Hayakawa in his formalism of the Abstraction Ladder.

Hayakawa used the case of Bessie the Cow to illustrate how higher levels of abstraction provide a broader context for understanding the meaning of a specific object in relation to progressively broader systems of associations (See Figure 2).

Figure 2. An illustration of how Hayakawa’s Abstraction Ladder maps into Rasmussen’s Abstraction-Decomposition Space.

Figure 2 illustrates how the distinctions that Hayakawa introduced with his Abstraction Ladder might map to Rasmussen’s Abstraction-Decomposition Space. It has been noted by Hayakawa and others that building engaging narratives involves moving up and down the Abstraction Ladder (or equivalently moving along the diagonal in the Abstraction Decomposition Space). This is consistent with Rasmussen’s observations about trouble-shooting. Thus, the common intuition is that the process of sensemaking is intimately associated with unpacking the different layers of relations between an object and the larger functional networks or contexts in which it is nested.

The Nature of Expertise

The parallels between Rasmussen’s observations of expert behavior in troubleshooting and fault diagnosis and the implications of Hayakawa’s Abstraction Ladder for constructing interesting narratives might help to explain why case-based learning (Bransford, Brown & Cocking, 2000) is particularly effective for communicating expertise, and why narrative approaches to knowledge elicitation (e.g., Klein, 2003; Kurtz & Snowden, 2003) are so effective for uncovering expertise. Even more significantly, the ‘intuitions’ or ‘gut feel’ of experts may reflect a higher degree of attunement with constraints at higher levels of abstraction. That is, while journeymen may know what to do and how to do it, they may not have the deeper understanding of why one way is better than another (e.g., Sinek, 2009) that differentiates the true experts in a field. In other words, the ‘gut feel’ might reflect the ability of experts to appreciate the coupling of objects and actions with ultimate values and higher-level purposes. Further, this link to value and purpose may have an important emotional component (e.g., Damasio, 1999). This suggests that expertise is not simply a function of knowing more; it may also require caring more.


As Weick (1995) noted, an important aspect of sensemaking is what Schön (1983) called problem setting. Weick wrote:

When we set the problem, we select what we will treat as “things” of the situation, we set the boundaries of our attention to it, and we impose upon it a coherence which allows us to say what is wrong and in what directions the situation needs to be changed. Problem setting is a process in which, interactively, we name the things to which we will attend and frame the context in which we will attend to them (Weick, 1995, p. 9).

The fundamental point is that the construct of function as reflected in the formalisms of Wimsatt, Rasmussen, and Hayakawa may provide important clues into the nature of how people set the problem as part of a sensemaking process. In particular, the diagonal in Rasmussen’s Abstraction-Decomposition space may provide clues for how people parse the details of complex situations using filters at different layers of abstraction to ultimately make sense relative to higher functional values and purposes.

Thus, here are some important implications:

  • A functionalist perspective provides important insights into the sensemaking process.
  • This is the common intuition underlying the formalisms introduced by Gibson, Wimsatt, Rasmussen, and Hayakawa.
  • Sensemaking involves navigating across levels of abstraction and levels of detail to identify functional or means-ends relations within a larger network of associations between objects (parts) and contexts (wholes).
  • Links between higher levels of abstraction (values, purposes) and lower levels of abstraction (general functions, components and behaviors) may reflect the significance of couplings between emotions, knowledge, and skill.
  • The various formalisms described here provide important frameworks for understanding any sensemaking process (e.g., fault diagnosis, storytelling, or intel analysis) and have important implications for both eliciting knowledge from experts and for representing information to facilitate the development of expertise through training and interface design. 

Key Sources

  1. Bransford, J. D., Brown, A. L., and Cocking, R. (2000). How People Learn, National Academy Press, Washington, DC.
  2. Damasio, A. (1999). The Feeling of What Happens: Body and emotion in the making of consciousness. Orlando, FL: Harcourt.
  3. Flach, J.M. & Voorhorst, F.A. (2016). What Matters: Putting common sense to work. Dayton, OH: Wright State Library.
  4. Gibson, J.J. (1979). The Ecological Approach to Visual Perception. New York: Houghton Mifflin.
  5. Hayakawa, S.I. (1990). Language in Thought and Action. 5th ed. New York: Houghton Mifflin Harcourt.
  6. Klein, G. (2003). Intuition at Work. New York: Doubleday.
  7. Kurtz, C.F. & Snowden, D.J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42, 462-483.
  8. Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.
  9. Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. New York: North-Holland.
  10. Schön, D.A. (1983). The Reflective Practitioner. New York: Basic Books.
  11. Sinek, S. (2009). Start with Why: How great leaders inspire everyone to take action. New York: Penguin.
  12. Weick, K.E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage.
  13. Wimsatt, W.C. (1972). Teleology and the logical structure of function statements. Studies in History and Philosophy of Science, 3(1), 1-80.