
The fields of psychology, human factors, and cognitive systems engineering have lost another leader who did much to shape them as we know them today. I was very fortunate to overlap with Neville Moray on the faculty at the University of Illinois. To a large extent, my ideals for what it means to be a professor were shaped by Neville. From his example, I learned that curiosity does not stop at the threshold of the experimental laboratory and that training graduate students requires engagement beyond the laboratory and classroom. Neville was able to bridge the gulfs between basic and applied psychology, between science and art, and between work and play - in a way that made us all question why these gulfs existed at all.

More commentary on Neville's life and the impact he had on many in our field can be found on the HFES website.

I just finished Simon Sinek's (2009) "Start with Why." I was struck by the similarities between Sinek's 'Golden Circle' and Jens Rasmussen's 'Abstraction Hierarchy' (see figure). Both parse systems in terms of a hierarchical association between why, what, and how.

For Rasmussen, 'what' represented the level of the system being attended; 'why' represented a higher level of abstraction that reflected the significance of the 'what' relative to the whole system, with the highest level of abstraction reflecting the ultimate WHY of a system: its purpose. In Rasmussen's scheme, 'how' represented a more concrete description of the significant components of the level being attended (e.g., the component processes serving the 'what' level above).

Sinek's 'why' corresponds with the pinnacle of Rasmussen's Abstraction Hierarchy: it represents the ultimate purpose of the system. However, Sinek reverses Rasmussen's 'what' and 'how.' For Sinek, 'how' represents the processes serving the 'why,' and the 'what' represents the products of these processes.
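
To make the contrast concrete, here is a minimal sketch (in Python; the labels and paraphrases are mine, not either author's notation) of the two orderings of the same three questions:

```python
# The same three questions, ordered as each framework nests them.
# Labels and paraphrases are mine, for illustration only.
rasmussen_abstraction_hierarchy = [
    ("WHY", "purpose of the whole system (highest level of abstraction)"),
    ("WHAT", "the level of the system currently being attended"),
    ("HOW", "component processes serving the level above"),
]

sinek_golden_circle = [
    ("WHY", "the ultimate purpose (center of the circle)"),
    ("HOW", "the processes that serve the why"),
    ("WHAT", "the products of those processes"),
]

for (r, _), (s, _) in zip(rasmussen_abstraction_hierarchy, sinek_golden_circle):
    print(f"Rasmussen: {r:<4}  Sinek: {s}")
# Both frameworks start with WHY; Sinek swaps the order of WHAT and HOW.
```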

Although I have been teaching Rasmussen's approach to Cognitive Systems Engineering (CSE) for over 30 years, I think that Sinek's WHY-HOW-WHAT partitioning conforms more naturally with common usage of the terms 'how' and 'what.' So, I think this is a pedagogical improvement on Rasmussen's framework.

However, I found that the overall gist of Sinek's "Start with Why" reinforced many of the central themes of CSE. That is, for a 'cognitive system' the purpose (i.e., the WHY) sets the ultimate context for parsing the system (e.g., processes and objects) into meaningful components. This is an important contrast to classical (objective) scientific approaches to physical systems. Classical scientific approaches have dismissed the 'why' as subjective! The 'why' reflected the 'biases' of observers. But for a cognitive science, the observers are the objects of study!

Thus, cognitive science and cognitive engineering must always start with WHY!

The Big Data Problem and Visualization

The digitization of healthcare data using Electronic Healthcare Record (EHR) systems is a great boon to medical researchers. Prior to EHR systems, researchers were responsible for collecting and archiving the patient data necessary to build models for guiding healthcare decisions (e.g., the Framingham Study of Cardiovascular Health). However, with EHR systems, the job of collecting and archiving patient data is off-loaded from the researchers, freeing them to focus on the BIG DATA PROBLEM. Thus, there is a lot of excitement in the healthcare community about the coming BIG DATA REVOLUTION and computer scientists are enthusiastically embracing the challenge of providing tools for BIG DATA VISUALIZATION.

It is very likely that the availability of data and the application of advanced visualization tools will stimulate significant advances in the science of healthcare. However, will these advances translate into better patient care? Recent experiences with EHR systems suggest that the answer is "NO! Not unless we also solve the LITTLE DATA PROBLEM."

The Little Data Problem in Healthcare

Compared to the excitement about embracing the BIG DATA PROBLEM, healthcare technologists, and EHR developers in particular, have paid relatively little attention to visualization problems on the front end of EHR systems. The EHR interfaces presented to frontline healthcare workers consist almost exclusively of text, dialog boxes, and pull-down menus. These interfaces are designed for ‘data input-output.’ They do very little to help physicians make sense of the data relative to judging risk and making treatment decisions. For example, current EHR interfaces do little to help physicians ‘see’ what the data ‘mean’ relative to the risk of a cardiac event, or to ‘see’ the recommended treatment options for a specific patient.

The LITTLE DATA PROBLEM for healthcare involves the creative design of interfaces that help physicians visualize the data for a specific patient in light of current medical research. The goal is for the interface representations to support the physician in making well-informed treatment decisions and in communicating those decisions to patients. For example, the interface representations should allow a physician to ‘see’ patient data relative to risk models (e.g., the Framingham model) and relative to published standards of care (e.g., Adult Treatment Panel IV), so that decisions are informed by the evidence base. In addition, the representation should facilitate discussions with patients to explain and recommend treatment options, to engender trust, and ultimately to increase the likelihood of compliance.
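
As a rough illustration of what 'seeing the data relative to a risk model' involves computationally, here is a minimal sketch of a Framingham-style calculation that an interface could render graphically. The coefficients are placeholders I invented for illustration, not the published Framingham values:

```python
import math

def framingham_style_risk(age, total_chol, hdl, systolic_bp, smoker):
    """Toy 10-year cardiovascular risk estimate in the spirit of the
    Framingham model. The coefficients are made-up placeholders,
    NOT the published values -- not for clinical use."""
    score = (0.06 * age
             + 0.010 * (total_chol - 200)   # higher total cholesterol raises risk
             - 0.020 * (hdl - 50)           # higher HDL lowers risk
             + 0.015 * (systolic_bp - 120)  # elevated blood pressure raises risk
             + (0.7 if smoker else 0.0))
    # A logistic transform maps the raw score onto a 0-1 probability that
    # an interface could display directly against treatment thresholds.
    return 1.0 / (1.0 + math.exp(6.0 - score))

print(f"{framingham_style_risk(55, 240, 38, 145, smoker=False):.1%}")
```

An interface built on such a model could display the resulting probability against published treatment thresholds, rather than leaving the physician to integrate the numbers mentally.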

Thus, while EHRs are making things better for medical research, they are making the everyday work of healthcare more difficult. The benefits with respect to the ‘Big Data Problem’ are coming at the expense of increased burden on frontline healthcare workers, who have to enter the data and access it through clumsy interfaces. In many cases, the technology is becoming a barrier to communication with patients, because time spent interacting with the technology reduces the time available for interacting directly with patients (Arndt et al., 2017).

At Mile Two, we are bringing Cognitive Systems Engineering (CSE), UX Design, and Agile Development processes together to tackle the LITTLE DATA PROBLEM. Follow this link to see an example of a direct manipulation interface that illustrates how interfaces to EHR systems might better serve the needs of both frontline healthcare workers and patients: CVDi.

Conclusion

The major point is that advances resulting from the BIG DATA REVOLUTION will have little impact on the quality of everyday healthcare if we don't also solve the LITTLE DATA PROBLEM associated with EHR systems.

A new study finds that physicians spend twice as much time interacting with EHR systems as they do interacting directly with patients.

http://www.beckershospitalreview.com/ehrs/primary-care-physicians-spend-close-to-6-hours-performing-ehr-tasks-study-finds.html

http://www.annfammed.org/content/15/5/419.full

This is a classic example of clumsy automation: automation that disrupts the normal flow of work rather than facilitating it. It is unfortunate that healthcare is far behind other industries when it comes to understanding how to use IT to enhance the quality of everyday work. While the healthcare industry promotes the potential wonders of "big data," the needs of everyday clinical physicians have been largely overlooked.

EHR systems have been designed around the problem of 'data management,' while the problems of 'healthcare management' have gone largely unrecognized or unappreciated by the designers of EHR systems.

In solving the 'data' problem, the healthcare IT industry has actually made the 'meaning' problem more difficult for clinical physicians.

This should be a great opportunity for Cognitive Systems Engineering innovations, IF anyone in the healthcare industry is willing to listen.

The team at Mile Two recently created an App (CVDi) to help people to make sense of clinical values associated with cardiovascular health. The App is a direct manipulation interface that allows people to enter and change clinical values and to get immediate feedback about the impact on overall health and treatment options.

The feedback about overall health is provided in the form of three Risk Models from published research on cardiovascular health. Each is based on longitudinal studies that have tracked the statistical relations between various clinical measures (e.g., age, total cholesterol, blood pressure) and incidents of cardiovascular disease (e.g., heart attacks or strokes). However, the three models use different subsets of that data to predict risk, and thus the risk estimates can be quite varied.

A number of people who have reviewed the CVDi App have suggested that this variation among the models might confuse users, or that it might lead people to cherry-pick the value that fits their preconceptions (e.g., someone who is skeptical about medicine might take the best value as justification for not going to the doctor, while a hypochondriac might take the worst value as justification for his fears). In essence, the suggestion is that the variability among the risk estimates is NOISE that will reduce the likelihood that people will make good decisions. These reviewers suggest that we pick one (e.g., the 'best') model and drop the other two.

We have an alternative hypothesis. We believe that the variation among the models is INFORMATION that provides the potential for deeper insight into the complex problem of cardiovascular health. Our hypothesis is that the variation will lead people to consider the basis for each model (e.g., whether it is based on lipids or BMI, or whether C-reactive proteins are included). Our interface is designed so that it is easy to SEE the contribution of each of these variables to each of the models. For example, a big difference in risk estimates between the lipid-based models and the BMI-based model might signify the degree to which weight or lipids are contributing to the risk. We believe this is useful information in selecting an appropriate treatment option (e.g., statins or diet).
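
Here is a minimal sketch of that idea, with toy models and made-up weights (the models in CVDi are the published ones, not these). Comparing a lipid-based and a BMI-based estimate for the same patient makes the source of a divergence visible:

```python
def lipid_based_risk(p):
    """Toy estimate driven by lipid measures; weights are illustrative."""
    return min(1.0, 0.004 * p["age"] + 0.0015 * (p["total_chol"] - p["hdl"]))

def bmi_based_risk(p):
    """Toy estimate that substitutes BMI for the lipid inputs."""
    return min(1.0, 0.004 * p["age"] + 0.012 * (p["bmi"] - 22))

patient = {"age": 55, "total_chol": 240, "hdl": 38, "bmi": 31.0}

estimates = {
    "lipid-based model": lipid_based_risk(patient),
    "BMI-based model": bmi_based_risk(patient),
}
for name, risk in estimates.items():
    print(f"{name}: {risk:.1%}")

spread = max(estimates.values()) - min(estimates.values())
print(f"spread between models: {spread:.1%}")
# A large spread flags which inputs (weight vs. lipids) are driving the
# risk -- information relevant to choosing between, say, statins and diet.
```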

The larger question here concerns the function of MODELS in cognitive systems or decision support systems. Should the function of models be to give people THE ANSWER, or should it be to provide insight into the complexity so that people are well-informed about the problem and better able to muddle through to discover a satisfying answer?

Although there is great awareness that human rationality is bounded, there is less appreciation of the fact that all computational models are bounded. While we tend to be skeptical about human judgment, there is a tendency to take the output of computational models as the answer or as the truth. I believe this tendency is dangerous! I believe it is unwise to think that there is a single correct answer to a complex problem!

As I have argued in previous posts, I believe that muddling through is the best approach to complex problems. And thus, the purpose of modeling should be to guide the muddling process, NOT to short-circuit the muddling process with THE ANSWER. The purpose of the model is to enhance situation awareness, helping people to muddle well and increasing the likelihood that they will make well-informed choices.

Long ago we made the case that for supporting complex decision making, models should be used to suggest a variety of alternatives - to provide deeper insight into possible solutions - rather than to provide answers:

Brill, E.D. Jr., Flach, J.M., Hopkins, L.D., & Ranjithan, S. (1990). MGA: A decision support system for complex, incompletely defined problems. IEEE Transactions on Systems, Man, and Cybernetics, 20(4), 745-757.

Link to the CVDi interface: CVDi

I just completed my first month of work outside the walls of the Ivory Tower of academia. After more than forty years as a student and academic, I left Wright State University and joined a small tech start-up in Dayton - Mile Two. I had some trepidation about life on the outside, but my first month has exceeded my highest expectations. It is quite exciting to be part of a team with talented UX designers and programmers who can translate my vague ideas into concrete products.

For the last six years, my students and I had been struggling to translate principles of Ecological Interface Design (EID) into healthcare solutions. Tim McEwen generated a first concept for such an application in his 2012 dissertation, and ever since then we had been trying to get support to extend this work. But our efforts were blocked at every turn (two failed proposals to NIH and two failures with NSF). We made countless pleas to various companies and healthcare organizations. We got lots of pats on the back and kind words (some outstanding reviews at NSF), but no follow-through with support for continuing the work.

Then, in two weeks at Mile Two, the team was able to translate our ideas into a web app. Here is a link: CVDi. We see this as one step forward in an iterative design process and are eager for feedback. Please play with the app and let us know about any problems, or give us suggestions for improvements.

I am glad that I am no longer a prisoner of an 'institution' and I am excited by all the new possibilities that being at Mile Two will afford. I am looking forward to this new phase of my education.

It's never crowded along the extra mile. (Wayne W. Dyer)

In "The Pleasure of Finding Things Out" Richard Feynman tells this story about things his father did that helped teach him to think like a scientist:

He had taught me to notice things and one day when I was playing with what we call an express wagon, which is a little wagon which has a railing around it for children to play with that they can pull around. It had a ball in it - I remember this - it had a ball in it, and I pulled the wagon and I noticed something about the way the ball moved, so I went to my father and I said, "Say, Pop, I noticed something: When I pull the wagon the ball rolls to the back of the wagon, and when I'm pulling it along and I suddenly stop, the ball rolls to the front of the wagon." and I says, "why is that?" And he said, "That nobody knows," he said. "The general principle is that things that are moving try to keep on moving and things that are standing still tend to stand still unless you push on them hard." And he says, "This tendency is called inertia but nobody knows why it's true." Now that's a deep understanding - he doesn't give me a name, he knew the difference between knowing the name of something and knowing something, which I learnt very early.

Unfortunately, there are many scientists, especially in the social sciences, who fail to appreciate the difference between naming something and knowing something. In the social sciences we have lots of names to describe the things that we observe - workload, situation awareness, trust, intuitive thinking, analytical thinking, etc. In most of these cases, the words are very good descriptions of the phenomena being observed.

However, it can be problematic when these descriptions are 'reified' as explanations (e.g., "the pilot crashed because he lost situation awareness") or objectified as internal mechanisms (e.g., System 1 and System 2). The words describe phenomena that are of central concern for applied cognitive psychology and design. They are aspects of human experience that we hope to understand. But they reflect the beginnings of scientific enquiry - not the end! They are not answers!

Scientists who use these words as explanations fail to appreciate the lessons Feynman's father taught him at a very early age about "the pleasure of finding things out."


Moshe Vardi (a computer science professor at Rice University in Texas) is reported to have made the following claim at a meeting of the American Association for the Advancement of Science: "We are approaching the time when machines will be able to outperform humans at almost any task." He continued, suggesting that this raises an important question for society: "If machines are capable of doing almost any work humans can do, what will humans do?"

These claims illustrate the common assumption that an ultimate impact of increasingly capable automation will be to replace humans in the workplace. It is generally assumed that machines will not simply replace humans, but that they will eventually supersede humans in terms of increased speed, increased reliability (e.g., eliminating human errors), and reduced cost - thus making humans superfluous.

However, anyone familiar with the history of automation in safety-critical systems such as aviation or process control will be very skeptical about the assumption that automation will replace humans. The history of automation in those domains shows that automation displaces humans, changing their roles and responsibilities. However, the value of having smart humans in these systems can't be overestimated. While automated systems increasingly know HOW to do things, they are less able to understand WHY to do things. Thus, automated systems are less able to know whether something ought to be done - especially when situations change in ways that were not anticipated in the design of the automation.

If you look at the history of personal computing (e.g., Dealers of Lightning by Hiltzik), you will see that the major breakthroughs were associated with innovations at the interface (e.g., the GUI), not in the power of the processors. The real power of computers is the ability to more fully engage humans with complex problems - to allow us to see things and think about things in ways that we never could before. For example, to see patterns in healthcare data that allow us to anticipate problems and to refine and target treatments to enhance the positive benefits.

Yes, automation will definitely change the nature of human work in the future. However, fears that humans will be replaced are ill-grounded. The ultimate impact of increasing automation will be to change which aspects of human capabilities are most valuable - muscular strength will be less valued; classical mathematical or logical ability will be less valued (e.g., the machines can do the Bayesian calculations much faster than we can); BUT creative thinking, empathy, and wisdom will be increasingly valued. The automation can compute what the risks of a choice are (e.g., the likelihood of a new medical procedure succeeding), but when it comes to complex questions associated with the quality of life, the automation cannot tell us when the risk is worth taking.
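
To make the division of labor concrete, here is the kind of Bayesian calculation a machine handles instantly; the numbers are illustrative ones I chose for the example:

```python
# P(disease | positive test) via Bayes' rule -- illustrative numbers only.
prevalence = 0.01       # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive
print(f"P(disease | positive) = {posterior:.1%}")  # about 16%, not 95%

# The machine computes this posterior instantly and reliably; whether a
# 16% risk justifies an invasive follow-up is a question of values that
# the calculation itself cannot answer.
```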

Automation will get work done more efficiently and reliably than humans could do without it, BUT we still need smart humans to decide what work is worth doing! There is little benefit in getting to the wrong place faster. The more powerful the automation, the more critical human discernment becomes in pointing it in the right direction. A system where the technologies are used to engage humans more deeply in complex problems will be much wiser than any fully automated system!

Dr. Robert Wears was an emergency room physician and a leading advocate and practitioner of Cognitive Systems Engineering in the healthcare domain. He was committed to applying state-of-the-art theory and technologies to improve patient safety. His death is a great loss to medicine and a great loss to the field of CSE.

I greatly valued my recent correspondence with Bob about decision-making in healthcare. I will miss his wisdom and guidance.

http://jacksonville.com/metro/health-and-fitness/2017-07-18/robert-wears-one-nation-s-leading-patient-safety-experts-dies-70


The term “Cognitive Engineering” was first suggested by Don Norman; it is the title of his chapter in User-Centered System Design (1986), a book he co-edited with Stephen Draper. Norman (1986) writes:

Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. It is a type of applied Cognitive Science, trying to apply what is known from science to the design and construction of machines. It is a surprising business. On the one hand, there actually is a lot known in Cognitive Science that can be applied. On the other hand, our lack of knowledge is appalling. On the one hand, computers are ridiculously difficult to use. On the other hand, many devices are difficult to use - the problem is not restricted to computers, there are fundamental difficulties in understanding and using most complex devices. So the goal of Cognitive Engineering is to come to understand the issues, to show how to make better choices when they exist, and to show what the tradeoffs are when, as is the usual case, an improvement in one domain leads to deficits in another (p. 31).

Norman (1986) goes on to specify two major goals that he had as a Cognitive Systems Engineer:

  1. To understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles in design.
  2. To devise systems that are pleasant to use - the goal is neither efficiency nor ease nor power, although these are all to be desired, but rather systems that are pleasant, even fun: to produce what Laurel calls “pleasurable engagement” (p. 32).

Rasmussen (1986) was less interested in pleasurable engagement and more interested in safety - noting the accidents at Three Mile Island and Bhopal as important motivations for different ways to think about human performance and work. As a controls engineer, Rasmussen was concerned that the increased utilization of centralized, automatic control systems in many industries (particularly nuclear power) was changing the role of humans in those systems. He noted that the increased use of automation was moving humans “from the immediate control of system operation to higher-level supervisory tasks and to long-term maintenance and planning tasks” (p. 1). Because of his background in controls engineering, Rasmussen understood the limitations of automated control systems, and he recognized that these systems would eventually face situations that their designers had not anticipated (i.e., situations for which the ‘rules’ or ‘programs’ embedded in these systems were inadequate). He knew that it would be up to the human supervisors to detect and diagnose the problems that would thus ensue and to intervene (e.g., creating new rules on the fly) to avert potential catastrophes.

The challenge that he saw for CSE was to improve the interfaces between the humans and the automated control systems in order to support supervisory control. He wrote:

Use of computer-based information technology to support decision making in supervisory systems control necessarily implies an attempt to match the information processes of the computer to the mental decision processes of an operator. This approach does not imply that computers should process information in the same way as humans would. On the contrary, the processes used by computers and humans will have to match different resource characteristics. However, to support human decision making and supervisory control, the results of computer processing must be communicated at appropriate steps of the decision sequence and in a form that is compatible with the human decision strategy. Therefore, the designer has to predict, in one way or another, which decision strategy an operator will choose. If the designer succeeds in this prediction, a very effective human-machine cooperation may result; if not, the operator may be worse off with the new support than he or she was in the traditional system … (p. 2).

Note that the information technologies that were just beginning to change the nature of work in the nuclear power industry in the 1980s, when Rasmussen made these observations, have now become significant parts of almost every aspect of modern life - from preparing a meal (e.g., Chef Watson), to maintaining personal social networks (e.g., Facebook and Instagram), to healthcare (e.g., electronic health record systems), to manufacturing (e.g., flexible, just-in-time systems), to shaping the political dialog (e.g., President Trump’s use of Twitter). Today, most of us have more computing power in our pockets (our smart phones) than was available to even the most modern nuclear power plants in the 1980s. In particular, the display technology of the 1980s was extremely primitive relative to the interactive graphics that are available today on smart phones and tablets.

A major source of confusion that has arisen in defining this relatively new field of CSE has been described in a blog post by Erik Hollnagel (2017):

The dilemma can be illustrated by considering two ways of parsing CSE. One parsing is as C(SE), meaning cognitive (systems engineering) or systems engineering from a cognitive point of view. The other is (CS)E, meaning the engineering of (cognitive systems), or the design and building of joint (cognitive) systems.

From the earliest beginnings of CSE, Hollnagel and Woods (1982; 1999) were very clear about what they thought was the appropriate parsing. Here is their description of a cognitive system:

A cognitive system produces “intelligent action,” that is, its behavior is goal oriented, based on symbol manipulation, and uses knowledge of the world (heuristic knowledge) for guidance. Furthermore, a cognitive system is adaptive and able to view a problem in more than one way. A cognitive system operates using knowledge about itself and the environment, in the sense that it is able to plan and modify its actions on the basis of that knowledge. It is thus not only data driven, but also concept driven. Man is obviously a cognitive system. Machines are potentially, if not actually, cognitive systems. An MMS [Man-Machine System] regarded as a whole is definitely a cognitive system (p. 345).

Unfortunately, there are still many who don’t fully appreciate the significance of treating the whole sociotechnical system as a unified system where the cognitive functions are emergent properties that depend on coordination among the components. Many have been so entrained in classical reductionistic approaches that they can’t resist the temptation to break the larger sociotechnical system into components. For these people, CSE is simply systems engineering techniques applied to cognitive components within the larger sociotechnical system. This approach fits with the classical disciplinary divisions in universities, where the social sciences and the physical sciences are separate domains of research and knowledge. People generally recognize the significant role of humans in many sociotechnical systems (most notably as a source of error), and they typically advocate that designers account for the ‘human factor’ in the design of technologies. However, they fail to appreciate the self-organizing dynamics that emerge when smart people and smart technologies work together. They fail to recognize that what matters with respect to the successful cognitive functioning of this system are emergent properties that cannot be discovered in any of the components. Just as a sports team is not simply a collection of people, a sociotechnical system is not simply a collection of things (e.g., humans and automation).

The ultimate challenge of CSE as formulated by Rasmussen, Norman, Hollnagel, Woods and others is to develop a new framework where the ‘cognitive system’ is a fundamental unit of analysis. It is not a collection of people and machines - rather it is an adapting organism with a life of its own (i.e., dynamic properties that arise from relations among the components). CSE reflects a desire to understand these dynamic properties and to use that understanding to design systems that are increasingly safe, efficient, and pleasant to use.