Another giant in the field of Cognitive Systems Engineering has been lost. Jens Rasmussen created one of the most comprehensive and integrated foundations for understanding the dynamics of sociotechnical systems available today. Drawing from the fields of semiotics (Eco), control engineering, and human performance, his framework was based on a triadic semiotic dynamic, which he parsed in terms of three overlapping perspectives. The Abstraction Hierarchy (AH) provided a way to characterize the system relative to a problem ecology and the consequences and possibilities for action. He introduced the constructs of Skills, Rules, and Knowledge (SRK) as a way to emphasize the constraints on the system from the perspective of the observers-actors. Finally, he introduced the construct of Ecological Interface Design (EID) as a way to emphasize the constraints on the system from the perspective of representations.

Jens had a comprehensive vision of the sociotechnical system and we have only begun to plumb the depths of this framework and to fully appreciate its value as both a basic theory of systems and as a pragmatic guide for the engineering and design of safer more efficient systems.

Jens' death is a very personal loss for me. He was a valued mentor who saw potential in a very naive, young researcher long before it was evident or deserved. He opened doors for me and created opportunities that proved to be essential steps in my education and professional development. Although I may never realize the potential that Jens envisioned, he set me on a path that has proved to be both challenging and satisfying. I am a better man for having had him as a friend and mentor.

A website has been created to allow future researchers to benefit from Jens' work.

http://www.jensrasmussen.org/

The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape the fields that we know today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville.  From his examples, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website

 

Symbols help us make tangible that which is intangible. And the only reason symbols have meaning is because we infuse them with meaning. That meaning lives in our minds, not in the item itself. Only when the purpose, cause or belief is clear can the symbol command great power. (Sinek, 2009, p. 160)

As this quote from Sinek suggests, symbols (e.g., alphabets, flags, icons) are created by humans. Thus, the 'meaning' of the symbols will typically reflect the intentions or purposes motivating their creation. For example, as a symbol, a country's flag might represent the abstract principles on which the country is founded (e.g., liberty and freedom for all). However, it would be a mistake to conclude from this (as many cognitive scientists have) that all 'meaning' lives in our minds. While symbols may be a creation of humans - meaning is NOT.

Let me state this again for emphasis:

Meaning is NOT a product of mind!

As the triadic model of a semiotic system illustrated in the figure below emphasizes, meaning emerges from the functional coupling between agents and situations. Further, as Rasmussen (1986) emphasized, this coupling involves not only symbols, but also signs and signals.

Signs (as used by Rasmussen) are different from 'symbols' in that they are grounded in social conventions. So, the choice of a color to represent safe or dangerous, or of an icon to represent 'save' or 'delete,' has its origins in the head of a designer. At some point, someone chose 'red' to represent 'danger,' or chose a 'floppy disk' image to represent save. However, over time this 'choice' of the designer can become established as a social convention. At that point, the meaning of the color or the icon is no longer arbitrary. It is no longer in the head of the individual observer. It has a grounding in the social world - it is established as a social convention or as a cultural expectation. People outside the culture may not 'pick up' the correct meaning, but the meaning is not arbitrary.

Rasmussen used the term sign to differentiate this role in a semiotic system from that of 'symbols,' whose meaning is open to interpretation by an observer. The meaning of a sign is not in the head of an observer; for a sign the meaning has been established by a priori rules (social or cultural conventions).

Signals (as used by Rasmussen) are different from both 'symbols' and 'signs' in that they are directly grounded in the perception-action coupling with the world. So, the information bases for braking your automobile to avoid a potential collision, or for catching a fly ball, or for piloting an aircraft to a safe touchdown on a runway are NOT in our minds! For example, structures in optical flow fields (e.g., angle, angular rate, tau, horizon ratio) provide the state information that allows people to skillfully move through the environment. The optical flow field and the objects and events specified by its invariant structures are NOT in the mind of the observer. These relations are available to all animals with eyes and can be leveraged in automatic control systems with optical sensors. These signals are every bit as meaningful as any symbol or sign, yet they are not human inventions. Humans and other animals can discover the meanings of these relations through interaction with the world, and they can utilize these meanings to achieve satisfying interactions with the world (e.g., avoiding collisions, catching balls, landing aircraft), but the human does not 'create' the meaning in these cases.

for a signal the meaning emerges naturally from the coupling of perception and action in a triadic semiotic system. It is not an invention of the mind, but it can be discovered by a mind.
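The 'signal' idea can be made concrete with a small numerical sketch. Tau, mentioned above, is the ratio of an object's optical angle to its rate of optical expansion, and it approximates time-to-contact using only information available at the eye. The Python code below is a minimal illustration with made-up values; it is not drawn from Rasmussen's own analyses.

```python
# Sketch of an optical 'signal': tau = theta / (d theta / dt) approximates
# time-to-contact without any symbolic mediation. Values are illustrative.
import math

def optical_angle(size: float, distance: float) -> float:
    """Visual angle (radians) subtended by an object of a given physical size."""
    return 2.0 * math.atan(size / (2.0 * distance))

def tau(size: float, distance: float, speed: float, dt: float = 1e-4) -> float:
    """Estimate time-to-contact purely from optical information."""
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    theta_dot = (theta_next - theta_now) / dt  # rate of optical expansion
    return theta_now / theta_dot

# A 1.8 m object 50 m away, closing at 25 m/s: tau approximates
# distance/speed = 2 s, yet the estimate uses only angles at the eye.
print(round(tau(size=1.8, distance=50.0, speed=25.0), 2))
```

The point of the sketch is that the same optical relation is available to any observer (or optical sensor) in that situation; the meaning is discovered, not constructed.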

In the field of cognitive science, debates have often been cast in terms of whether humans are 'symbol processors,' such that meaning is constructed through mental computations, or whether humans are capable of 'direct perception,' such that meaning is 'picked up' through interaction with the ecology. One side places meaning exclusively in the mind, ignoring or at least minimizing the role of structure in the ecology. The other side places meaning in the ecology, minimizing the creative computational powers of mind.

This framing of the question in either/or terms has proven to be an obstacle to progress in cognitive science. Recognizing that the perception-action loop can be closed through symbols, signs, and signals opens the path to a both/and approach with the promise of a deeper understanding of human cognition.

I just finished Simon Sinek's (2009) "Start with Why." I was struck by the similarities between Sinek's 'Golden Circle' and Jens Rasmussen's 'Abstraction Hierarchy' (see figure). Both parse systems in terms of  a hierarchical association between why, what, and how.

For Rasmussen - 'what' represented the level of the system being attended; 'why' represented a higher-level of abstraction that reflected the significance of the 'what' relative to the whole system; with the highest level of abstraction reflecting the ultimate WHY of a system - its purpose.  In Rasmussen's system,  'how' represented a more concrete description of significant components of the level being attended (e.g., the component processes serving the 'what' level above).

Sinek's 'why' corresponds with the pinnacle of Rasmussen's Abstraction Hierarchy. It represents the ultimate purpose of the system. However, Sinek reverses Rasmussen's 'what' and 'how.' For Sinek, 'how' represents the processes serving the 'why,' and the 'what' represents the products of these processes.

Although I have been teaching Rasmussen's approach to Cognitive Systems Engineering (CSE) for over 30 years, I think that Sinek's WHY-HOW-WHAT partitioning conforms more naturally with common usage of the terms 'how' and 'what.' So, I think this is a pedagogical improvement on Rasmussen's framework.

However, I found that the overall gist of Sinek's "Start with Why" reinforced many of the central themes of CSE. That is, for a 'cognitive system' the purpose (i.e., the WHY) sets the ultimate context for parsing the system (e.g., processes and objects) into meaningful components. This is an important contrast to classical (objective) scientific approaches to physical systems. Classical scientific approaches have dismissed the 'why' as subjective! The 'why' reflected the 'biases' of observers. But for a cognitive science, the observers are the objects of study!

Thus, cognitive science and cognitive engineering must always start with WHY!

Early American Functionalist Psychologists, such as William James and John Dewey, viewed cognition through a Pragmatic lens. Thus, for them cognition involved making sense of the world in terms of its functional significance: What can be done? What will the consequences be? More recently, James Gibson (1979) introduced the word “Affordance” to reflect this functionalist perspective; where the term affordance is used to describe an object relative to the possible actions that can be performed on or with the object and the possible consequences of those actions. Don Norman (1988) has introduced the concept of affordance to designers who have found it to be a useful concept for thinking about how a design is experienced by people.

Formalizing Functional Structure

This Functionalist view of the world has been formalized by the philosopher, William Wimsatt (1972), in terms of seven dimensions or attributes for characterizing any object; and by the cognitive systems engineer, Jens Rasmussen (1986), in terms of an Abstraction-Decomposition Space. Figure 1 illustrates some of the parallels between these two methods for characterizing functional properties of an object. The vertical dimension of the Abstraction-Decomposition Space reflects five levels of abstraction that are coupled in terms of a nesting of means-ends constraints. The top level, Functional Purpose, specifies the value constraints on the functional system – what is the ultimate value that is achievable or what is the intended goal or purpose? As you move to lower levels in this hierarchy the focus is successively narrowed down to the specific, physical properties of objects at the lowest Physical Form level of abstraction.

Figure 1. An illustration of how Wimsatt’s functional attributes map into Rasmussen’s Abstraction-Decomposition Space.

An important inspiration for creating the Abstraction-Decomposition Space was Rasmussen’s observations of the reasoning processes of people doing trouble-shooting or fault diagnosis. He observed that the reasoning tended to move along the diagonal in this space. People tended to consider holistic properties of a system at high levels of abstraction (e.g., the primary function of an electronic device) in order to make sense of relations at lower levels of abstraction (e.g., the arrangements of parts). In essence, higher levels of abstraction tended to provide the context for understanding WHY the parts were configured in a certain way. People tended to consider lower levels of abstraction to understand how the arrangements of parts served the higher-level purposes. In essence, lower levels of abstraction provided clues to HOW a particular function would be achieved.

Rasmussen found that in the process of trouble shooting an electronic system, the reasoning tended to move up and down the diagonal of the Abstraction-Decomposition Space. Moving up in abstraction tended to broaden the perspective and to suggest dimensions for selecting properties at lower levels. In essence, the higher level was a kind of filter that determined significance at the lower levels. This filter effectively guided attention and determined how to chunk information and what attributes should be salient at the lower levels. Thus, in the process of diagnosing a fault, experts tended to shift attention across different levels of abstraction until eventually zeroing-in on the specific location of a fault (e.g., finding the break in the circuit or the failed part).
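The filtering role of higher levels of abstraction can be sketched as a toy algorithm: at each level, attention is restricted to the elements that fail, and only those are expanded at the next, more concrete level. The device model, element names, and traversal rule below are hypothetical illustrations for exposition, not Rasmussen's own formalism.

```python
# Toy sketch of diagonal movement through a means-ends hierarchy during
# fault diagnosis. Each abstract element points to the more concrete
# elements that realize it, flagged True if behaving normally.
# All names and the failure pattern are invented for illustration.
model = {
    "deliver regulated power": {"convert AC to DC": False, "regulate voltage": True},
    "convert AC to DC":        {"rectifier stage": False},
    "regulate voltage":        {"feedback loop": True},
    "rectifier stage":         {"diode D2": False, "diode D1": True},
}

def diagnose(symptom: str) -> list:
    """Follow failed elements down the hierarchy to a faulty component."""
    path = [symptom]
    current = symptom
    while current in model:
        # The higher level acts as a filter: inspect only elements that fail.
        failed = [part for part, ok in model[current].items() if not ok]
        if not failed:
            break
        current = failed[0]
        path.append(current)
    return path

print(diagnose("deliver regulated power"))
# Narrows from the system purpose down to a single faulty part.
```

The sketch captures the key intuition: the purpose level determines which lower-level details are significant, so the search space shrinks at every step rather than being exhaustively enumerated.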

Wimsatt’s formalism for characterizing an object or item in functional terms is summarized in the following statement:

According to theory T, a function of item i, in producing behaviour B, in system S in environment E relative to purpose P is to bring about consequence C.

Figure 1 suggests how Wimsatt’s seven functional attributes of an object might fit within Rasmussen’s Abstraction-Decomposition Space. The object or item (i) as a physical entity corresponds with the lowest level of abstraction and the most specific level of decomposition. The purpose (P) corresponds with the highest level of abstraction at a more global level of decomposition. The Theory (T) and System (S) attributes introduce additional constraints for making sense of the relation between the object and the purpose. Theory (T) provides the kind of holonomic constraints (e.g., physical laws) that Rasmussen considered at the Abstract Function level. These constraints set limits on how a purpose might be achieved (e.g., the laws of aerodynamics set constraints on how airplanes or wings can serve the purpose of safe travel). The System (S) attributes provide the kind of organizational constraints that Rasmussen considered at the General Function level. These constraints describe the object’s role in relation to other parts of a system in order to serve the higher-level Purpose (P) (e.g., a general function of the wing is to generate lift). The Behavior (B) attribute fits with Rasmussen’s Physical Function level that describes the physical constraints relative to the object’s role as a part of the organization (e.g., here the distinction between fixed and rotary wings comes into play). The Environment (E) attribute crosses levels of abstraction as a way of providing the ‘context of use’ for the object. Finally, the Consequence (C) attribute provides the specific effect that the object produces relative to achieving the purpose (e.g., the specific lift coefficient for a wing of a certain size and shape).
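Wimsatt's schema lends itself to a simple data-structure sketch. The Python fragment below encodes the seven attributes as fields, annotated with the Rasmussen levels suggested in Figure 1; the wing example paraphrases the text, and all field contents are illustrative wordings, not Wimsatt's own.

```python
# Sketch of Wimsatt's functional statement -- "According to theory T, a
# function of item i, in producing behaviour B, in system S in environment E
# relative to purpose P is to bring about consequence C" -- as a record,
# with comments mapping each field to a Rasmussen level as in Figure 1.
from dataclasses import dataclass

@dataclass
class FunctionStatement:
    theory: str       # T: lawful constraints (Abstract Function level)
    item: str         # i: the physical object (Physical Form level)
    behaviour: str    # B: what the item does (Physical Function level)
    system: str       # S: organizational role (General Function level)
    environment: str  # E: context of use (crosses levels)
    purpose: str      # P: ultimate value (Functional Purpose level)
    consequence: str  # C: specific effect serving the purpose

    def render(self) -> str:
        return (f"According to {self.theory}, a function of {self.item}, "
                f"in producing {self.behaviour}, in {self.system} "
                f"in {self.environment} relative to {self.purpose} "
                f"is to bring about {self.consequence}.")

# Illustrative instance based on the wing example in the text.
wing = FunctionStatement(
    theory="the laws of aerodynamics",
    item="a fixed wing",
    behaviour="airflow deflection",
    system="the aircraft",
    environment="normal flight conditions",
    purpose="safe travel",
    consequence="sufficient lift",
)
print(wing.render())
```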

While the details of the mapping in Figure 1 might be debated, there seems to be little doubt that the formalisms suggested by Wimsatt and Rasmussen reflect very similar intuitions: the process of making sense of the world is rooted in a functionalist perspective in which ‘meaning’ is grounded in a network of means-ends relations that associates objects with the higher-level purposes and values that they might serve. This connection between ‘meaning’ and higher levels of abstraction has also been recognized by S.I. Hayakawa with his formalism of the Abstraction Ladder.

Hayakawa used the case of Bessie the Cow to illustrate how higher levels of abstraction provide a broader context for understanding the meaning of a specific object in relation to progressively broader systems of associations (See Figure 2).

Figure 2. An illustration of how Hayakawa’s Abstraction Ladder maps into Rasmussen’s Abstraction-Decomposition Space.

Figure 2 illustrates how the distinctions that Hayakawa introduced with his Abstraction Ladder might map to Rasmussen’s Abstraction-Decomposition Space. It has been noted by Hayakawa and others that building engaging narratives involves moving up and down the Abstraction Ladder (or equivalently moving along the diagonal in the Abstraction Decomposition Space). This is consistent with Rasmussen’s observations about trouble-shooting. Thus, the common intuition is that the process of sensemaking is intimately associated with unpacking the different layers of relations between an object and the larger functional networks or contexts in which it is nested.

The Nature of Expertise

The parallels between expert behaviors in trouble shooting and fault diagnosis by Rasmussen and observations about the implications of Hayakawa’s Abstraction Ladder for constructing interesting narratives might help to explain why case-based learning (Bransford, Brown & Cocking, 2000) is particularly effective for communicating expertise and why narrative approaches for knowledge elicitation (e.g., Klein, 2003; Kurtz & Snowden, 2003) are so effective for uncovering expertise. Even more significantly, perhaps the ‘intuitions’ or ‘gut feel’ of experts may reflect a higher degree of attunement with constraints at higher levels of abstraction. That is, while journeymen may know what to do and how to do it, they may not have the deeper understanding of why one way is better than another (e.g., Sinek, 2009) that differentiates the true experts in a field. In other words, the ‘gut feel’ might reflect the ability of experts to appreciate the coupling between objects and actions with ultimate values and higher-level purposes. Further, this link to value and purpose may have an important emotional component (e.g., Damasio, 1999). This suggests that expertise is not simply a function of knowing more, it may also require caring more. 

Conclusions

As Weick (1995) noted, an important aspect of sensemaking is what Schön (1983) called problem setting. Weick wrote:

When we set the problem, we select what we will treat as “things” of the situation, we set the boundaries of our attention to it, and we impose upon it a coherence which allows us to say what is wrong and in what directions the situation needs to be changed. Problem setting is a process in which, interactively, we name the things to which we will attend and frame the context in which we will attend to them (Weick, 1995, p. 9).

The fundamental point is that the construct of function as reflected in the formalisms of Wimsatt, Rasmussen, and Hayakawa may provide important clues into the nature of how people set the problem as part of a sensemaking process. In particular, the diagonal in Rasmussen’s Abstraction-Decomposition space may provide clues for how people parse the details of complex situations using filters at different layers of abstraction to ultimately make sense relative to higher functional values and purposes.

Thus, here are some important implications:

  • A functionalist perspective provides important insights into the sensemaking process.
  • This is the common intuition underlying the formalisms introduced by Gibson, Wimsatt, Rasmussen, and Hayakawa.
  • Sensemaking involves navigating across levels of abstraction and levels of detail to identify functional or means-ends relations within a larger network of associations between objects (parts) and contexts (wholes).
  • Links between higher levels of abstraction (values, purposes) and lower levels of abstraction (general functions, components and behaviors) may reflect the significance of couplings between emotions, knowledge, and skill.
  • The various formalisms described here provide important frameworks for understanding any sensemaking process (e.g., fault diagnosis, storytelling, or intel analysis) and have important implications for both eliciting knowledge from experts and for representing information to facilitate the development of expertise through training and interface design. 

Key Sources

  1. Bransford, J. D., Brown, A. L., and Cocking, R. (2000). How People Learn, National Academy Press, Washington, DC.
  2. Damasio, A. (1999). The Feeling of What Happens: Body and emotion in the making of consciousness. Orlando, FL: Harcourt.
  3. Flach, J.M. & Voorhorst, F.A. (2016). What Matters: Putting common sense to work. Dayton, OH: Wright State Library.
  4. Gibson, J.J. (1979). The Ecological Approach to Visual Perception. New York: Houghton Mifflin.
  5. Hayakawa, S.I. (1990). Language in Thought and Action (5th ed.). New York: Houghton Mifflin Harcourt.
  6. Klein, G. (2003). Intuition at Work. New York: Doubleday.
  7. Kurtz, C.F. & Snowden, D.J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42, 462-483.
  8. Norman, D.A. (1988). The Psychology of Everyday Things. New York: Basic Books.
  9. Rasmussen, J. (1986). Information Processing and Human-Machine Interaction. New York: North-Holland.
  10. Schön, D.A. (1983). The Reflective Practitioner. New York: Basic Books.
  11. Sinek, S. (2009). Start with Why: How great leaders inspire everyone to take action. New York: Penguin.
  12. Weick, K.E. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage.
  13. Wimsatt, W.C. (1972). Teleology and the logical structure of function statements. Studies in History and Philosophy of Science, 3(1), 1-80.

 

Cognitive Systems Engineering (CSE) emerged from Human Factors as researchers began to realize that in order to fully understand human-computer interaction it was necessary to understand the 'work to be done' on the other side of the computer. They began to realize that for an interface to be effective, it had to map both onto a 'mind' and onto a 'problem domain.' They began to realize that a representation only leads to productive thinking if it makes the 'deep structure' of the work domain salient. Thus, the design of the representation had to be motivated by a deep understanding of the domain (as well as a deep understanding of the mind).

User-eXperience Design (UXD) emerged from Product Design as designers began to realize that they were not simply creating 'objects.' They were creating experiences. They began to realize that products were embedded in a larger context, and that the ultimate measure of the quality of their design was the impact on this larger context - on the user experience. They began to realize that the quality of their designs did not simply lie in the object, but rather in the impact that the object had on the larger experience that it engendered. Designers began to realize that they were not simply shaping objects, but they were shaping experiences. Thus, the design of the object had to be motivated by a deep understanding of the context of use (as well as a deep understanding of the materials or technologies).

The common ground is the user-experience.  CSE and UXD are both about designing experiences. They both require that designers deal with minds, objects, and contexts or ecologies. The motivating contexts have been different, with CSE emerging largely from experiences in safety critical systems (e.g., aviation, nuclear power); and UXD emerging largely from experiences with consumer products (e.g., tooth brushes, doors). However, the common realization is that 'context matters.' The common realization is that the constraints of the 'mind' and the constraints of the 'object' can only be fully understood in relation to a 'context of use.'  The common realization is that 'functions matter.' And that 'functions' are relations between agents, tools, and ecologies.

The CSE and UXD communities have both come to realize that the qualities that matter are not in either the mind or the object, but rather in the experience. They have discovered that the proof of the pudding is in the eating.

Over the last 20 years or so, the vision of how to help organizations improve safety has been changing from a focus on 'stamping out errors' to a focus on 'managing the quality of work.'

This change reflects a similar evolution in how the Forest Service manages fire safety. There was a period when the focus was on 'stamping out forest fires,' and the poster child for these efforts was Smokey the Bear (Only you can prevent forest fires). However, the Forest Service has learned that a side-effect of an approach that focuses exclusively on preventing fires is the buildup of fuel on the forest floor. Because of this buildup, when a fire inevitably occurs it can burn at levels that are catastrophic for forest health. The forest will not naturally recover from the burn.

Smokey the Bear Effect

The Forest Service now understands that low-intensity fires can be integral to the long-term health of a forest. These low-intensity fires help to prevent the buildup of fuel and also can promote germination of seeds and new growth.

The alternative to 'stamping out fires' is to manage forest health. This sometimes involves introducing controlled burns or letting low-intensity fires burn themselves out.

The general implication is that safety programs should be guided by a vision of health or quality, rather than being simply a reaction to errors. With respect to improving safety, programs focused on health/quality will have greater impacts than programs designed to 'stamp out errors.' Programs designed to stamp out errors tend to also end up stamping out the information (e.g., feedback) that is essential for systems to learn from mistakes and to tune to complex, dynamic situations. Like low-intensity fires, learning from mistakes and near misses actually contributes to the overall health of a high-reliability organization.

This new perspective is beautifully illustrated in Sidney Dekker's new movie that can be viewed on YouTube:

Safety Differently

The CVDi display for evaluating heart health has been updated. The new version includes an option for SI units.  Also, some of the interaction dynamics have been updated. This is still a work in progress, so we welcome feedback and suggestions for how to improve and expand this interface.

https://mile-two.gitlab.io/CVDI/

 

 

The Big Data Problem and Visualization

The digitization of healthcare data using Electronic Healthcare Record (EHR) systems is a great boon to medical researchers. Prior to EHR systems, researchers were responsible for collecting and archiving the patient data necessary to build models for guiding healthcare decisions (e.g., the Framingham Study of Cardiovascular Health). However, with EHR systems, the job of collecting and archiving patient data is off-loaded from the researchers, freeing them to focus on the BIG DATA PROBLEM. Thus, there is a lot of excitement in the healthcare community about the coming BIG DATA REVOLUTION and computer scientists are enthusiastically embracing the challenge of providing tools for BIG DATA VISUALIZATION.

It is very likely that the availability of data and the application of advanced visualization tools will stimulate significant advances in the science of healthcare. However, will these advances translate into better patient care? Recent experiences with EHR systems suggest that the answer is "NO! Not unless we also solve the LITTLE DATA PROBLEM."

The Little Data Problem in Healthcare

Compared to the excitement about embracing the BIG DATA PROBLEM, healthcare technologists and in particular EHR developers have paid relatively little attention to visualization problems on the front end of EHR systems. The EHR interfaces to the frontline healthcare workers consist almost exclusively of text, dialog boxes, and pull-down menus. These interfaces are designed for ‘data input-output.’ They do very little to help physicians to make sense of the data relative to judging risk and making treatment decisions. For example, the current EHR interfaces do little to help physicians to ‘see’ what the data ‘mean’ relative to the risk of a cardiac event; or to ‘see’ the recommended treatment options for a specific patient.

The LITTLE DATA PROBLEM for healthcare involves creative design of interfaces to help physicians to visualize the data for a specific patient in light of the current medical research. The goal is for the interface representations to support the physician in making well-informed treatment decisions and for communicating those decisions to patients. For example, the interface representations should allow a physician to ‘see’ patient data relative to risk models (e.g., Framingham model) and relative to published standards of care (e.g., Adult Treatment Panel IV), so that the decisions made are informed by the evidence-base. In addition, the representation should facilitate discussions with patients to explain and recommend treatment options, to engender trust, and ultimately to increase the likelihood of compliance.
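As a toy illustration of the contrast between 'data input-output' and a representation that lets a physician 'see' what a value means, the sketch below renders a single lab value as a marker against reference bands rather than as a bare number in a dialog box. The band boundaries and labels are hypothetical placeholders, not clinical guidance, and this is not the CVDi implementation.

```python
# Sketch: render a patient value against reference bands so its meaning
# (where the patient stands) is visible at a glance. Boundaries are
# invented placeholders for illustration only.
BANDS = [  # (upper bound, label) for a hypothetical lab value
    (100, "optimal"),
    (130, "near optimal"),
    (160, "borderline high"),
    (190, "high"),
    (float("inf"), "very high"),
]

def render_scale(value: float, width: int = 40, vmax: float = 250.0) -> str:
    """Draw the patient's value as a marker on a text scale with a band label."""
    pos = min(width - 1, int(value / vmax * width))
    scale = ["-"] * width
    scale[pos] = "^"  # marker at the patient's value
    label = next(name for bound, name in BANDS if value < bound)
    return f"|{''.join(scale)}|  {value:.0f} ({label})"

print(render_scale(142.0))
```

Even this crude text rendering shifts the burden from the physician (mentally comparing a number against remembered thresholds) to the representation, which is the core of the 'little data' argument.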

Thus, while EHRs are making things better for medical research, they are making the everyday work of healthcare more difficult. The benefits with respect to the ‘Big Data Problem’ are coming at the expense of increased burden on frontline healthcare workers who have to enter the data and access it through clumsy interfaces. In many cases, the technology is becoming a barrier to communication with patients, because time spent interacting with the technology is reducing the time available for interacting directly with patients (Arndt et al., 2017).

At Mile Two, we are bringing Cognitive Systems Engineering (CSE), UX Design, and Agile Development processes together to tackle the LITTLE DATA PROBLEM. Follow this link to see an example of a direct manipulation interface that illustrates how interfaces to EHR systems might better serve the needs of both frontline healthcare workers and patients: CVDi.

Conclusion

The major point is that advances resulting from the BIG DATA REVOLUTION will have little impact on the quality of everyday healthcare if we don't also solve the LITTLE DATA PROBLEM associated with EHR systems.

New study finds that physicians spend twice as much time interacting with EHR systems as interacting directly with patients.

http://www.beckershospitalreview.com/ehrs/primary-care-physicians-spend-close-to-6-hours-performing-ehr-tasks-study-finds.html

http://www.annfammed.org/content/15/5/419.full

This is a classic example of clumsy automation. That is, automation that disrupts the normal flow of work, rather than facilitating it. It is unfortunate that healthcare is far behind other industries when it comes to understanding how to use IT to enhance the quality of everyday work. While the healthcare industry promotes the potential wonders of "big data," the needs of everyday clinical physicians have been largely overlooked.

EHR systems have been designed around the problem of 'data management' and the problems of 'healthcare management' have been largely unrecognized or unappreciated by the designers of EHR systems.

In solving the 'data' problem, the healthcare IT industry has actually made the 'meaning' problem more difficult for clinical physicians.

This should be a great opportunity for Cognitive Systems Engineering innovations, IF anyone in the healthcare industry is willing to listen.