Models: To Limit or Inform Human Muddling

The team at Mile Two recently created an app (CVDi) to help people make sense of clinical values associated with cardiovascular health. The app is a direct manipulation interface that allows people to enter and change clinical values and to get immediate feedback about the impact on overall health and treatment options.

The feedback about overall health is provided in the form of three risk models from published research on cardiovascular health. Each model is based on longitudinal studies that have tracked the statistical relations between various clinical measures (e.g., age, total cholesterol, blood pressure) and incidents of cardiovascular disease (e.g., heart attacks or strokes). However, the three models each use different subsets of that data to predict risk, and thus the risk estimates can vary considerably.

A number of people who have reviewed the CVDi app have suggested that this variation among the models might be a source of confusion to users, or that it might lead people to cherry-pick the value that fits their preconceptions (e.g., someone who is skeptical about medicine might take the best value as justification for not going to the doctor, while a hypochondriac might take the worst value as justification for his fears). In essence, the suggestion is that the variability among the risk estimates is NOISE that will reduce the likelihood that people will make good decisions. These reviewers suggest that we pick one (e.g., the ‘best’) model and drop the other two.

We have an alternative hypothesis. We believe that the variation among the models is INFORMATION that provides the potential for deeper insight into the complex problem of cardiovascular health. Our hypothesis is that the variation will lead people to consider the basis for each model (e.g., whether it is based on lipids, or BMI, or whether C-reactive proteins are included).  Our interface is designed so that it is easy to SEE the contribution of each of these variables to each of the models. For example, a big difference in risk estimates between the lipid-based models and the BMI-based model might signify the degree to which weight or lipids is contributing to the risk.  We believe this is useful information in selecting an appropriate treatment option (e.g., statins or diet).
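To make the idea that inter-model variation carries information concrete, here is a minimal sketch in Python. Both scoring functions and all coefficients are hypothetical placeholders invented for illustration; they are NOT the published risk equations behind CVDi:

```python
# Illustrative only: toy risk models with made-up coefficients,
# NOT the published cardiovascular risk equations used in CVDi.

def lipid_based_risk(age, total_chol, hdl, systolic_bp):
    """Toy 10-year risk (%) driven mainly by lipid values."""
    score = 0.04 * age + 0.02 * (total_chol - hdl) + 0.01 * systolic_bp
    return min(max(score, 0.0), 30.0)  # clamp to a plausible range for the sketch

def bmi_based_risk(age, bmi, systolic_bp):
    """Toy 10-year risk (%) driven mainly by body mass index."""
    score = 0.04 * age + 0.3 * (bmi - 25.0) + 0.01 * systolic_bp
    return min(max(score, 0.0), 30.0)

def compare_models(age, total_chol, hdl, bmi, systolic_bp):
    """Report each model's estimate AND the spread between them.

    A large spread localizes which factor is driving the risk,
    rather than hiding the disagreement behind a single number.
    """
    lipid = lipid_based_risk(age, total_chol, hdl, systolic_bp)
    weight = bmi_based_risk(age, bmi, systolic_bp)
    return {"lipid_model": lipid, "bmi_model": weight,
            "spread": abs(lipid - weight)}

# High cholesterol but normal weight: the lipid-based model flags more
# risk than the BMI-based model, pointing the user toward lipids
# (e.g., statins) rather than weight loss as the lever to examine.
result = compare_models(age=55, total_chol=280, hdl=40, bmi=24, systolic_bp=130)
```

Showing all the estimates plus their spread, rather than one averaged number, is the design choice the post argues for: the disagreement itself is the signal.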

The larger question here is the function of MODELS in cognitive systems or decision support systems. Should the function of models be to give people THE ANSWER, or should it be to provide insight into the complexity so that people are well-informed about the problem – so that they are better able to muddle through to discover a satisfying answer?

Although there is great awareness that human rationality is bounded, there is less appreciation of the fact that all computational models are bounded. While we tend to be skeptical about human judgment, there is a tendency to take the output of computational models as the answer or as the truth. I believe this tendency is dangerous! I believe it is unwise to think that there is a single correct answer to a complex problem!

As I have argued in previous posts, I believe that muddling through is the best approach to complex problems. And thus, the purpose of modeling should be to guide the muddling process, NOT to short-circuit the muddling process with THE ANSWER. The purpose of the model is to enhance situation awareness, helping people to muddle well and increasing the likelihood that they will make well-informed choices.

Long ago we made the case that for supporting complex decision making, models should be used to suggest a variety of alternatives – to provide deeper insight into possible solutions – rather than to provide answers:

Brill, E.D. Jr., Flach, J.M., Hopkins, L.D., & Ranjithan, S. (1990). MGA: A decision support system for complex, incompletely defined problems. IEEE Transactions on Systems, Man, and Cybernetics, 20(4), 745-757.

Link to the CVDi interface: CVDi

Going the Second Mile

I just completed my first month of work outside the walls of the Ivory Tower of academia. After more than forty years as a student and academic, I left Wright State University and joined a small tech start-up in Dayton – Mile Two. I had some trepidations about life on the outside, but my first month has exceeded my highest expectations. It is quite exciting to be part of a team with talented UX designers and programmers, who can translate my vague ideas into concrete products.

For the last 6 years my students and I had been struggling to translate principles of Ecological Interface Design (EID) into healthcare solutions. Tim McEwen generated a first concept for such an application in his 2012 dissertation and ever since then we have been trying to get support to extend this work. But our efforts were blocked at every turn (two failed proposals to NIH and two failures with NSF). We made countless pleas to various companies and healthcare organizations. We got lots of pats on the back and kind words (some outstanding reviews at NSF), but no follow through with support for continuing the work.

Then in two weeks at Mile Two the team was able to translate our ideas into a web app. Here is a link: CVDi.  Please try it out.  We see this as one step forward in an iterative design process and are eager to get feedback. Please play with this app and let us know about any problems or give us suggestions for improvements.

I am glad that I am no longer a prisoner of an ‘institution’ and I am excited by all the new possibilities that being at Mile Two will afford. I am looking forward to this new phase of my education.

It’s never crowded along the extra mile. (Wayne W. Dyer)


The Reification Problem: The Difference Between Naming and Understanding

In “The Pleasure of Finding Things Out” Richard Feynman tells this story about things his father did that helped teach him to think like a scientist:

He had taught me to notice things and one day when I was playing with what we call an express wagon, which is a little wagon which has a railing around it for children to play with that they can pull around. It had a ball in it – I remember this – it had a ball in it, and I pulled the wagon and I noticed something about the way the ball moved, so I went to my father and I said, “Say, Pop, I noticed something: When I pull the wagon the ball rolls to the back of the wagon, and when I’m pulling it along and I suddenly stop, the ball rolls to the front of the wagon.” and I says, “why is that?” And he said, “That nobody knows,” he said. “The general principle is that things that are moving try to keep on moving and things that are standing still tend to stand still unless you push on them hard.” And he says, “This tendency is called inertia but nobody knows why it’s true.” Now that’s a deep understanding – he doesn’t give me a name, he knew the difference between knowing the name of something and knowing something, which I learnt very early.

Unfortunately, there are many scientists, especially in the social sciences, who fail to appreciate the difference between naming something and knowing something. In the social sciences we have lots of names to describe things that we observe – workload, situation awareness, trust, intuitive thinking, analytical thinking, etc. In most of these cases, the words are very good descriptions of the phenomena being observed.

However, it can be problematic when these descriptions are ‘reified’ as explanations (e.g., “the pilot crashed because he lost situation awareness”) or objectified as internal mechanisms (e.g., System 1 and System 2). The words describe phenomena that are of central concern for applied cognitive psychology and design. They are aspects of human experience that we hope to understand. But they reflect the beginnings of scientific inquiry – not the end! They are not answers!

Scientists who use these words as explanations fail to appreciate the lessons Feynman’s father taught him at a very early age about “the pleasure of finding things out.”


The Ultimate Promise of Automation: To Replace or To Engage Humans?

Moshe Vardi (a computer science professor at Rice University in Texas) is reported to have made the following claim at a meeting of the American Association for the Advancement of Science: “We are approaching the time when machines will be able to outperform humans at almost any task.” He continued, suggesting that this raises an important question for society: “If machines are capable of doing almost any work humans can do, what will humans do?”

These claims illustrate the common assumption that the ultimate impact of increasingly capable automation will be to replace humans in the workplace. It is generally assumed that machines will not simply replace humans, but that they will eventually supersede humans in terms of increased speed, increased reliability (e.g., eliminating human errors), and reduced cost – thus making the humans superfluous.

However, anyone familiar with the history of automation in safety-critical systems such as aviation or process control will be very skeptical about the assumption that automation will replace humans. The history of automation in those domains shows that automation displaces humans, changing their roles and responsibilities. However, the value of having smart humans in these systems can’t be overestimated. While automated systems increasingly know HOW to do things, they are less able to understand WHY to do things. Thus, automated systems are less able to know whether something ought to be done – especially when situations change in ways that were not anticipated in the design of the automation.

If you look at the history of personal computing (e.g., Dealers of Lightning by Hiltzik) you will see that the major breakthroughs were associated with innovations at the interface (e.g., the GUI), not in the power of the processors. The real power of computers is their ability to more fully engage humans with complex problems – to allow us to see things and think about things in ways that we never could before. For example, to see patterns in healthcare data that allow us to anticipate problems and to refine and target treatments to enhance the positive benefits.

Yes, automation will definitely change the nature of human work in the future. However, fears that humans will be replaced are ill-grounded. The ultimate impact of increasing automation will be to change which aspects of human capabilities will be most valuable – muscular strength will be less valued; classical mathematical or logical ability will be less valued (e.g., the machines can do the Bayesian calculations much faster than we can); BUT creative thinking, empathy, and wisdom will be increasingly valued. The automation can compute what the risks of a choice are (e.g., the likelihood of a new medical procedure succeeding), but when it comes to complex questions associated with the quality of life the automation cannot tell us when the risk is worth taking.

Automation will get work done more efficiently and reliably than humans could do without it, BUT we still need smart humans to decide what work is worth doing! There is little benefit in getting to the wrong place faster. The more powerful the automation, the more critical human discernment becomes in pointing it in the right direction. A system where the technologies are used to engage humans more deeply in complex problems will be much wiser than any fully automated system!


Robert Wears, leading expert on patient safety, passes unexpectedly

Dr. Robert Wears was an Emergency Room physician and a leading advocate/practitioner of Cognitive Systems Engineering in the healthcare domain. He was committed to applying state-of-the-art theory and technologies to improve patient safety. This is a great loss to Medicine and a great loss to the field of CSE.

I greatly valued my recent correspondence with Bob about decision-making in healthcare. I will miss his wisdom and guidance.

http://jacksonville.com/metro/health-and-fitness/2017-07-18/robert-wears-one-nation-s-leading-patient-safety-experts-dies-70

Engineering ‘Cognitive Systems’

The term “Cognitive Engineering” was first suggested by Don Norman; it is the title of a chapter in a book (User-Centered System Design, 1986) that he co-edited with Stephen Draper. Norman (1986) writes:

Cognitive Engineering, a term invented to reflect the enterprise I find myself engaged in: neither Cognitive Psychology, nor Cognitive Science, nor Human Factors. It is a type of applied Cognitive Science, trying to apply what is known from science to the design and construction of machines. It is a surprising business. On the one hand, there actually is a lot known in Cognitive Science that can be applied. On the other hand, our lack of knowledge is appalling. On the one hand, computers are ridiculously difficult to use. On the other hand, many devices are difficult to use – the problem is not restricted to computers, there are fundamental difficulties in understanding and using most complex devices. So the goal of Cognitive Engineering is to come to understand the issues, to show how to make better choices when they exist, and to show what the tradeoffs are when, as is the usual case, an improvement in one domain leads to deficits in another (p. 31).

Norman (1986) continues to specify two major goals that he had as a Cognitive Systems Engineer:

  1. To understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles in design.
  2. To devise systems that are pleasant to use – the goal is neither efficiency nor ease nor power, although these are all to be desired, but rather systems that are pleasant, even fun: to produce what Laurel calls “pleasurable engagement” (p. 32).

Rasmussen (1986) was less interested in pleasurable engagement and more interested in safety – noting the accidents at Three Mile Island and Bhopal as important motivations for different ways to think about human performance and work. As a controls engineer Rasmussen was concerned that the increased utilization of centralized, automatic control systems in many industries (particularly nuclear power) was changing the role of humans in those systems. He noted that the increased use of automation was moving humans “from the immediate control of system operation to higher-level supervisory tasks and to long-term maintenance and planning tasks” (p. 1). Because of his background in controls engineering, Rasmussen understood the limitations of the automated control systems and he recognized that these systems would eventually face situations that their designers had not anticipated (i.e., situations for which the ‘rules’ or ‘programs’ embedded in these systems were inadequate). He knew that it would be up to the human supervisors to detect and diagnose the problems that would thus ensue and to intervene (e.g., creating new rules on the fly) to avert potential catastrophes.

The challenge that he saw for CSE was to improve the interfaces between the humans and the automated control systems in order to support supervisory control. He wrote:

Use of computer-based information technology to support decision making in supervisory systems control necessarily implies an attempt to match the information processes of the computer to the mental decision processes of an operator. This approach does not imply that computers should process information in the same way as humans would. On the contrary, the processes used by computers and humans will have to match different resource characteristics. However, to support human decision making and supervisory control, the results of computer processing must be communicated at appropriate steps of the decision sequence and in a form that is compatible with the human decision strategy. Therefore, the designer has to predict, in one way or another, which decision strategy an operator will choose. If the designer succeeds in this prediction, a very effective human-machine cooperation may result; if not, the operator may be worse off with the new support than he or she was in the traditional system … (p. 2).

Note that the information technologies that were just beginning to change the nature of work in the nuclear power industry in the 1980s when Rasmussen made these observations have now become significant parts of almost every aspect of modern life – from preparing a meal (e.g., Chef Watson), to maintaining personal social networks (e.g., Facebook and Instagram), to healthcare (e.g., electronic health record systems), to manufacturing (e.g., flexible, just-in-time systems), to shaping the political dialog (e.g., President Trump’s use of Twitter). Today, most of us have more computing power in our pocket (our smart phones) than was available for even the most modern nuclear power plants in the 1980s. In particular, the display technology of the 1980s was extremely primitive relative to the interactive graphics that are available today on smart phones and tablets.

A major source of confusion that has arisen in defining this relatively new field of CSE has been described in a blog post by Erik Hollnagel (2017):

The dilemma can be illustrated by considering two ways of parsing CSE. One parsing is as C(SE), meaning cognitive (systems engineering) or systems engineering from a cognitive point of view. The other is (CS)E, meaning the engineering of (cognitive systems), or the design and building of joint (cognitive) systems.

From the earliest beginnings of CSE, Hollnagel and Woods (1982; 1999) were very clear about what they thought was the appropriate parsing. Here is their description of a cognitive system:

A cognitive system produces “intelligent action,” that is, its behavior is goal oriented, based on symbol manipulation and uses knowledge of the world (heuristic knowledge) for guidance. Furthermore, a cognitive system is adaptive and able to view a problem in more than one way. A cognitive system operates using knowledge about itself and the environment, in the sense that it is able to plan and modify its actions on the basis of that knowledge. It is thus not only data driven, but also concept driven. Man is obviously a cognitive system. Machines are potentially if not actually, cognitive systems. An MMS [Man-Machine System] regarded as a whole is definitely a cognitive system (p. 345).

Unfortunately, there are still many who don’t quite fully appreciate the significance of treating the whole sociotechnical system as a unified system where the cognitive functions are emergent properties that depend on coordination among the components. Many have been so entrained in classical reductionistic approaches that they can’t resist the temptation to break the larger sociotechnical system into components. For these people, CSE is simply systems engineering techniques applied to cognitive components within the larger sociotechnical system. This approach fits with the classical disciplinary divisions in universities where the social sciences and the physical sciences are separate domains of research and knowledge. People generally recognize the significant role of humans in many sociotechnical systems (most notably as a source of error); and they typically advocate that designers account for the ‘human factor’ in the design of technologies. However, they fail to appreciate the self-organizing dynamics that emerge when smart people and smart technologies work together. They fail to recognize that what matters with respect to successful cognitive functioning of this system are emergent properties that cannot be discovered in any of the components. Just as a sports team is not simply a collection of people, a sociotechnical system is not simply a collection of things (e.g., humans and automation).

The ultimate challenge of CSE as formulated by Rasmussen, Norman, Hollnagel, Woods and others is to develop a new framework where the ‘cognitive system’ is a fundamental unit of analysis. It is not a collection of people and machines – rather it is an adapting organism with a life of its own (i.e., dynamic properties that arise from relations among the components). CSE reflects a desire to understand these dynamic properties and to use that understanding to design systems that are increasingly safe, efficient, and pleasant to use.


Cognitive Systems Engineering: More than 25 years in

It is a bit of a jolt for me to realize that it has now been more than 25 years since the publication of Rasmussen, Pejtersen and Goodstein’s (1994) important book “Cognitive Systems Engineering.” It is partly a jolt because it is a reminder of how old I am getting. I used early, pre-publication drafts of that book in a graduate course on Engineering Psychology that I taught as a young assistant professor at the University of Illinois. And even before that, I used Rasmussen’s (1986) previous book “Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering” in the same course. Yes, I have been learning about and talking about Cognitive Systems Engineering (CSE) for a long time.

It is also a jolt when I realize that many people who are actively involved in the design of complex sociotechnical systems have very little understanding of what CSE is or why it is valuable. For example, they might ask how CSE is different from Human Factors. It seems that, despite my efforts and those of many others, the message has not reached many of the communities who are involved in shaping the future of sociotechnical systems (e.g., computer scientists, human factors engineers, designers, chief technology officers).

It is also a jolt when I consider how technology has changed over the past 25 years. A common theme in Rasmussen’s arguments for why CSE was necessary was the fast pace of change due to advances in modern information technologies. Boy did he get that right. Twenty-five years ago the Internet was in its infancy. Google was a research project, not formally incorporated as a company till 1998. There was no Facebook, which wasn’t launched until 2004. No YouTube – the first video was uploaded in 2005. No Twitter – the first tweet wasn’t sent until 2006. And perhaps most significantly, there were no smart phones. The first iPhone wasn’t released until 2007.

I’m not sure if Jens (or anyone else) fully anticipated the impact of the rapid advances of technology on work and life today. However, I do feel that most of his ideas about the implications of these advances for doing research on and for designing sociotechnical systems are still relevant. In fact, the relevance has only grown with the changes that information technologies have fostered. Despite the emergence of other terms for new ways to think about work, such as UX design and Resilience Engineering, I contend that an understanding of CSE remains essential to anyone who is interested in studying or designing sociotechnical systems.


Design: The Ultimate Test of Theory

Kurt Lewin’s quote that “nothing is more practical than a good theory” has been repeated so often that it has become trite. However, few appreciate the complementary implication of this truism: that “the strongest test of a theory is design.” In other words, the ultimate test of a theory is whether it can be put to practical use. In fact, Pragmatists such as William James, C.S. Peirce, and John Dewey might have argued that ‘practice’ is the ultimate test of ‘truth.’

William James was always skeptical about what he called “brass instrument” psychology (à la Wundt and others). In experimental science, the experiment is often ‘biased’ by the same assumptions that motivated the theory being tested. The result is that most experiments turn out to be demonstrations of the plausibility of a theory, NOT tests of the theory. That is, in deciding what variables to control, what variables to vary, and what variables to measure, the scientist has played a significant role in shaping the ultimate results. For example, in testing the hypothesis that humans are information processors, the experiments often put people into situations (e.g., choice reaction time) where successfully doing the task requires that the human behave like an information processing system. Thus, in experiments, hypotheses are tested against the reality as imagined by the scientist. The experiment rarely tests the limits of that imagination – because the scientist creates the experiment.

However, in design the hypothesis runs up against a reality that is beyond the imagination of the designer. A design works well, or it doesn’t. It changes things in a positive way, or it doesn’t. When a design is implemented in practice, the designer is often ‘surprised’ to discover that in framing her hypothesis she didn’t consider an important dimension of the problem. Sometimes these surprises result in failures (i.e., products that do not meet the functional goals of the designers). But sometimes these surprises result in innovations (i.e., products that turn out to be useful in ways that the designer hadn’t anticipated). Texting on smart phones is a classic example. Who would have imagined, before the smart phone, that people would prefer texting to speaking over the phone?

Experiments are typically designed to minimize the possibilities for surprise. Design tends to do the opposite. Design challenges tend to generate surprises. In fact, I would define ‘design innovation’ as simply a pleasant surprise!

So, I suggest a variation on Yogi Berra’s quote “If you don’t know where you’re going, you might not get there.”

If you don’t know where you’re going you might be headed for a pleasant surprise (design innovation). 

And if you don’t reach a pleasant surprise on this iteration, simply keep going (iterating) until you do!


Not My Last Lecture

As of May 1st I have retired from Wright State University. I accepted an early retirement incentive that was offered due to severe economic conditions at the university. I am not at all ready to retire, but I am eager for a change from WSU.  I hope I still have things to offer and I know there is still much for me to learn.

It was great to see many of my former students at a research celebration that the Department of Psychology hosted in my honor on May 7th. It is amazing to see the work that these former students are doing.  Clearly, I didn’t do too much damage!

I am looking forward to the next adventure!  Just waiting for the right door to open.


Toward an Ecological Theory of Rationality: Debunking the hot hand “illusion”

Taleri Hammack, Jehengar Cooper, John M. Flach & Joseph Houpt

ABSTRACT

This paper explores the ‘hot hand illusion’ from the perspective of ecological rationality. Monte Carlo simulations were used to test the sensitivity of typical tests for randomness (e.g., Wald-Wolfowitz) to plausible constraints on sequences of binary events (e.g., basketball shots). Most of the constraints were detected when sample sizes were large. However, when the range of improvement was limited to reflect natural performance bounds, these tests did not detect a success-dependent learning process. In addition, a series of experiments assessed people’s ability to discriminate between random and constrained sequences of binary events. The results showed that in all cases human performance was better than chance, even for the constraints that were missed by the standard tests. The case is made that, as with perception, it is important to ground research on human cognition in the demands of adaptively responding to ecological constraints. In this context, it is suggested that a ‘bias’ or ‘default’ that assumes that nature is ‘structured’ or ‘constrained’ is a very rational approach for an adaptive system whose survival depends on assembling smart mechanisms to solve complex problems.
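For readers unfamiliar with it, the Wald-Wolfowitz runs test referenced in the abstract can be sketched in a few lines of Python. This is a minimal illustration of the test statistic, not the simulation code used in the paper:

```python
import math

def runs_test_z(seq):
    """Wald-Wolfowitz runs test z-statistic for a binary (0/1) sequence.

    Under the null hypothesis that the ordering is random, z is
    approximately standard normal. A large negative z means fewer runs
    than expected (streaky); a large positive z means more runs
    (alternating).
    """
    n1 = sum(1 for x in seq if x == 1)
    n2 = len(seq) - n1
    # A run ends wherever adjacent symbols differ.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2) /
           ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return (runs - mu) / math.sqrt(var)

# A "streaky" shooter (long runs of hits and misses) produces a large
# negative z; a strictly alternating sequence a large positive z.
streaky = [1] * 10 + [0] * 10 + [1] * 10 + [0] * 10
alternating = [i % 2 for i in range(40)]
```

The paper's point can be seen in miniature here: the test keys only on the number of runs, so constraints that leave run counts near their chance expectation (e.g., a bounded learning process) slip past it even when structure is present.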

Download PDF
