
Yes - following on the previous post - I do believe that Cognitive Systems Engineering (CSE) generates juice that is well worth the squeeze. However, I think it is important to distinguish between CSE as an academic enterprise exploring basic issues about the nature of work and the nature of human cognition, and CSE as a component of a design process.

When implemented in a design process, a CSE work analysis is sometimes mistakenly treated as a prerequisite to other aspects of design (e.g., prototyping). The problem with this is that work analysis is never done. Work domains are not static - they are constantly changing due to new opportunities and new challenges associated with evolving technologies and operational contexts. Thus, there are always new depths to explore, and often one question leads to even more questions. If you delay other aspects of design until the work analysis is complete - nothing will ever get built.

Thus, work analysis should be implemented as a co-requisite to other aspects of design. For example, customers or operators often have a difficult time articulating why they make certain choices, how new technologies might be helpful, or what they need to work more effectively without some concrete context. One way to provide that context is to create concrete scenarios (e.g., critical incidents). Another way is to provide them with a concrete model or prototype that they can manipulate. Even crude models (e.g., back-of-the-napkin sketches or paper prototypes) can be very effective. In the process of reviewing a scenario or interacting with a prototype, customers will sometimes be able to recognize and articulate new insights about the utility of the prototype or potential problems with it. This is reflected in Michael Schrage's concept of 'Serious Play.' In essence, prototypes can help to engage operators and allow them to participate in the idea generation process. This can be a valuable source of knowledge about a work domain. Prototypes can greatly enhance knowledge elicitation and work analysis.

So, it is not a question of doing work analysis OR building design prototypes - success typically requires BOTH work analysis AND prototyping. And further, there is no fixed precedence. Ideally, the work analysis should be tightly coupled with more concrete aspects of design (e.g., wire framing, prototyping). In this coupling, work analysis can be both feedforward (generating hypotheses) and feedback (evaluating operator responses to concrete implementations). 

With the modern explosion of technologies for managing complex information, work domains are rapidly changing. This requires a CSE perspective to assess the changing opportunities and risks and to generate alternative hypotheses for how to leverage these technologies more effectively to reduce risks and to stay competitive. This ongoing work analysis can be a resource for designing new interfaces and decision tools, for designing alternative concepts of operation, and for developing more effective training processes. However, design decisions cannot wait for this work analysis to be complete, because it will never be complete.

In sum, CSE is both an academic enterprise and a field of practice. As an academic enterprise it focuses on understanding cognition situated in the context of the complexities of work environments. As such, it often challenges the conventional wisdom of a cognitive science based on reductive methods that utilize laboratory puzzles to decouple information processing stages from the dynamics of natural situations. As a field of practice, CSE has to function as a co-requisite of other components of design to probe the complexities of work domains. To be effective in practice, cognitive systems engineers have to learn to be team players and they must be able to coordinate and integrate the work analysis processes with the other design processes.

To be effective in practice, cognitive systems engineers have to function on interdisciplinary design teams as humble experts, rather than know-it-all academics who want to lead the parade.

Unfortunately, we run into a variation of this question with every customer and every project. And sometimes, we also hear it from members of other disciplines who participate with us on design teams. There seems to be a general assumption that any time that a design team is not laying out interfaces (wire framing) or writing code is wasted time. There seems to be a general assumption that time invested to gain a deeper understanding of the nature of the work and to generate multiple hypotheses about alternative representations or innovative ways to employ new technologies is wasted. There is an implicit assumption that the customer can specify exactly what they need – they know the answers – they just need to have someone else write the code for them. Or if they are looking for innovation – customers believe it is possible to get instant solutions without any upfront investment in what Michael Schrage calls “Serious Play.”

Smart people think otherwise:

Give me six hours to chop down a tree and I will spend the first four sharpening the axe. (attributed to Abraham Lincoln)

If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions. (attributed to Albert Einstein)

I am not sure that the world is more complex today than ever before, but it is clear that advanced technologies have opened up possibilities for dealing with complexity that have never before been available. And further, it seems clear that organizations that stick to old strategies (e.g., large inventory buffers, or hierarchical central command structures), and fail to take advantage of new possibilities and new strategies, will not succeed in competition with organizations that do take advantage of them.

Today, an auto manufacturer that offers customers “any color they want as long as it’s black” will not be competitive. Today, a military organization that fails to leverage advanced communication networks and advanced approaches to command and control (e.g., mission command) will not be successful.

Today, the value of taking some time to sharpen the axe, or to understand the problem, before hacking away at the tree with a dull axe or before building a solution to the wrong problem should be more apparent than ever. Today, the value of design teams that include a diverse collection of perspectives should be more apparent than ever. We need UI Designers, UX Designers, Programmers, Computer Scientists, Social Scientists, Engineers, and Systems Thinkers. We need multiple perspectives and we need to explore alternative ways of parsing problems and representing constraints. 

People trained in CSE typically have experience with multiple disciplinary perspectives (e.g., human factors, industrial engineering, systems thinking). They typically have specialized skills related to work analysis: knowledge elicitation, field research methods, ethnographic methods, problem decomposition and representation methods, decision analysis methods and systems modeling methods.

CSE doesn’t have all the skills or all the answers when it comes to the challenge of designing competitive solutions that leverage the opportunities of advanced information technologies. However, cognitive systems engineers do bring important skills to the table. A design team that utilizes some of the skills that CSE offers is less likely to waste time hacking away with a dull axe or designing the perfect solution to the wrong problem.

Today – design takes a diverse team. And a team that includes a CSE perspective is more likely to cut sharply and to solve the right problem. Yes, the juice produced from a CSE perspective is worth the upfront time to do a thorough work analysis, and to explore a variety of ways to represent information to decision makers.

It's always better to measure twice and cut once!

We often identify natural systems with specific functions. For example, a function shared by most life forms is to survive and propagate. And this global function typically depends on many sub-functions such as obtaining nourishment, regulating temperature, and avoiding potential predators. Similarly, the people who come together to create sociotechnical organizations are generally collaborating to achieve some purpose or intention that is beyond the capacity of any of the individuals working in isolation. In fact, an organization may have multiple purposes – to provide a service, produce a product, and/or to make a profit.

The concept of purpose and its role in shaping performance became a primary consideration for both physical and cognitive systems with the development of Norbert Wiener’s Cybernetic Hypothesis and the associated servo-mechanism metaphor. One of the critical attributes of servomechanisms is closely related to the second question mentioned in my previous essay:

Why do things stay the same?

In the context of servo-mechanisms this question is typically framed in terms of stability. A system that is stable tends to resist change. For example, healthy animals typically maintain relatively consistent body temperatures, despite changes in the surrounding environments.

Similarly, engineers have designed mechanisms to regulate the temperatures in buildings. These mechanisms allow people to specify a desired temperature, and the control mechanism then combines a sensor that measures the temperature in a room with an actuator that activates a heating or cooling mechanism whenever the temperature in the room deviates from the desired temperature. This is an elemental form of muddling – in which behavior is adjusted as a function of feedback provided by the sensors. In a similar way, pilots or drivers will adjust their behavior to keep their vehicles on an intended trajectory. For example, skilled pilots will consistently follow a similar safe path to landing at different airports and under different weather conditions. The similar results will be achieved over and over again – yet the behaviors will be different on every occasion due to the different disturbances (e.g., winds) experienced. A servomechanism has been described as a device that achieves the same results, but in a different way each time.
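The thermostat loop described above can be sketched in a few lines. This is a minimal illustration, not a model of any particular device; the setpoint, hysteresis band, heating rate, and heat-loss coefficient are all arbitrary assumptions.

```python
# A minimal sketch of the building-temperature example: an on/off controller
# couples a sensor (the current temperature) to an actuator (the heater) and
# holds the room near the setpoint despite a constant cold-outside disturbance.

def simulate_thermostat(setpoint=20.0, outside=5.0, steps=500):
    temp = outside          # room starts at the outside temperature
    heater_on = False
    history = []
    for _ in range(steps):
        # Sensing: switch the actuator based on deviation from the setpoint
        if temp < setpoint - 0.5:
            heater_on = True
        elif temp > setpoint + 0.5:
            heater_on = False
        heat_in = 1.0 if heater_on else 0.0     # actuator effect per step
        leakage = 0.05 * (temp - outside)       # disturbance: heat loss outward
        temp += heat_in - leakage
        history.append(temp)
    return history

temps = simulate_thermostat()
# After an initial transient, the temperature hovers near the setpoint
print(round(sum(temps[-100:]) / 100, 1))
```

The behavior on any given run depends on the disturbance, but the result is the same: the room ends up near the setpoint, which is the servomechanism point in the paragraph above.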

Note that it is impossible to pre-specify the behaviors that will lead to a safe landing for every situation, because each situation will be different. However, the pilots need to learn or tune their perceptual and motor skills to consistently land safely. The pilots need to learn to recognize what it looks like to be on a safe path, and they need to know what actions will counter specific disturbances from that path. Thus, landing an aircraft is much more complex than simply activating a heating or cooling mechanism. And of course, keeping large organizations on the appropriate paths to achieve their missions or goals is still more complex.

The critical thing to understand about muddling is that it depends on the coupling between perception and action.

Muddling requires both situation awareness (i.e., the ability to sense states relative to goals or intentions, and relative to the capacity for action) and the capacity to act (i.e., the ability to move toward more desirable states or to counter disturbances that would produce undesirable changes in state). Skill is an emergent property of the coupling that reflects relations between situation awareness and the potential for action. It can’t be discovered by studying either perceptual or motor components of an organism in isolation. Further, to the extent that the perceptual and motor systems evolved to serve certain functions (e.g., to control locomotion), it is unlikely that these components can be fully understood without considering the functions that they support. This also applies to large organizations – to fully understand the components and processes of the organization it is essential to consider the ultimate functions, purposes, and value systems that the organization serves.

Another important consideration with respect to organizations that have a closed-loop coupling of perception and action is associated with explaining ‘why’ the organization behaves as it does. To understand an open-loop organization, we typically look for causes, which are typically prior events (e.g., the preceding dominoes in the chain of actions and reactions). However, to explain why a closed-loop organization behaves the way it does, we have to ask “what is the function or purpose?” of the organization. In other words, we have to ask what the organization’s goals are, or what it is attempting to achieve. Whereas the behavior of open-loop systems tends to be the effect of prior causes or reactions to past events, closed-loop systems are typically attracted by goals or in pursuit of potentially satisfying future events.

Thus, the behavior of closed-loop systems does not conform to the cause-effect (or stimulus-response) narratives that have been used to describe many isolated physical systems.

Although change is pervasive, with skilled muddling many organisms and organizations actively counter many undesirable changes and often they are able to achieve some degree of stability around states that they find to be satisfying. These satisfying states are typically referred to as the function or purpose of the organizations. For example – achieving and maintaining a specified temperature is the purpose of a temperature regulator; keeping their vehicles in the field of safe travel is the function of pilots or drivers; achieving and maintaining a successful company is the function of corporate executives.

In simple terms, pursuing and maintaining satisfactory states are what organizations do!

However, as suggested in the previous essay, it would be naive to associate the purpose or goals of a complex organization with the ideas in an executive's head or even with the organization's mission statement. These may or may not align with the actual purpose or function of an organization. Unlike the heating control system that is designed by an engineer, complex organizations are self-designing or self-organizing. Thus, stability is an emergent property of organizations reflecting the muddling and mutual adjustments of multiple components.

Ultimately, the proof of the function or purpose is in the doing.

And the doing involves dynamic interactions across all the components, as well as the impacts from the environment. The CEO can't just flap her wings and determine the future trajectory of the organization. Ultimately, the purpose emerges from interactions involving all the components within the organization. 


Two questions that interest general systems thinkers are:

1) Why do things change? 2) Why do things stay the same?

Let’s consider the first question in this essay, and the second question will be explored in later essays. Change is pervasive – things are flowing, moving from place to place, growing, eroding, aging. During my career, my students and I have explored problems associated with the control of locomotion. How do pilots judge speed or changes in altitude? How are they able to guide their aircraft to a safe touchdown on a runway? How are pilots and drivers able to avoid collisions? Or how are baseball batters able to create collisions with a small ball? Inspired by James Gibson, this research explored the hypothesis that people utilize patterns or structures in optical flow fields to make judgments about and to actively control their own motion.

When Gibson introduced the concept of optical flow fields and the hypothesis that animals could utilize optical invariants to guide locomotion, it was a very difficult construct for many perceptual researchers to grasp. Where is optical flow? If you study eyes or if you study light or optics you won’t discover optical flow. Optical flow is not in the eye and it is not in the light. Optical flow is an example of an emergent property, it arises from relations between the components. Optical flow results when an eye moves relative to surfaces that reflect light. The flow that results from movement relative to reflective surfaces has structure that specifies important aspects of relations between surfaces and moving points of observation. For example, as you are driving down the highway – surfaces that are near to you will flow by relatively quickly, while surfaces that are far from you will flow by relatively slowly. The further a surface is away, the slower it will flow by – and something that is very far away like the moon may seem to not be moving relative to you at all. And when the surface in front of you begins to expand rapidly – you will immediately recognize that a collision is impending. Thus, the structure in flow fields specifies aspects of the 3-dimensional environment that support skilled movement through that environment.
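Two of the regularities described above can be expressed as simple ratios. This is an illustrative sketch, not Gibson's formalism; the function names and numbers are my own. The angular flow rate of a surface abeam of a moving observer falls off with distance (roadside posts stream by, the moon barely moves), and the ratio of distance to closing speed gives the time-to-contact that optical expansion specifies.

```python
# Angular velocity (rad/s) of a point directly abeam of an observer moving
# at `speed` (m/s) past a surface at lateral `distance` (m): flow ~ v / d.
def abeam_flow_rate(speed, distance):
    return speed / distance

# Time-to-contact (tau): for an object approached head-on, the ratio of its
# angular size to its rate of optical expansion works out to distance / speed.
def time_to_contact(distance, closing_speed):
    return distance / closing_speed

near = abeam_flow_rate(30.0, 10.0)      # a roadside post: rapid flow
far = abeam_flow_rate(30.0, 3.8e8)      # the moon: essentially no flow
print(near, far, time_to_contact(50.0, 25.0))
```

The point of the sketch is that both quantities are available in the flow itself, without the observer needing to know distance or speed separately.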

If you look out the window of a car, a train, or an aircraft you will see the world flowing by - optical flow fields are there right before our eyes. Yet, few perceptual researchers could see it! Or if they did, they didn't recognize it as an important phenomenon to explore. 

Few are aware that a key source that helped Gibson to discover the role of optical structure for controlling locomotion was Wolfgang Langewiesche, who described how the relation between the point of optical expansion and the horizon helped to specify the path to a safe landing for a pilot.

Interest in change is partly motivated by our interest in making changes – in steering an aircraft to a soft landing or in managing an organization to achieve some function. In the case of controlling vehicles or other mechanical or physical technologies there is typically a linear, proportional relation between action and reaction. Small inputs to the steering wheel typically result in small changes to the direction of travel. However, this is not the case for more complex systems such as weather systems or sociotechnical systems. In complex systems, small actions can be amplified in ways that have large impacts on performance of the whole. The proverbial “Butterfly Effect” illustrates this: the idea that the flapping of a butterfly’s wing can impact the nature of a storm at some distant time and place in the future. Or that the loss of a nail can result in the loss of a kingdom. Or that a smile can launch a thousand ships.
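The amplification described above can be demonstrated with the standard textbook example of a chaotic system, the logistic map (my stand-in here for the butterfly effect, not something from the original discussion): two trajectories that start a hair apart end up bearing no resemblance to each other.

```python
# The butterfly effect in miniature: the logistic map x' = r*x*(1-x) with r=4
# is chaotic, so a tiny difference in starting state is amplified exponentially.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)   # a "butterfly flap" of a difference
print(abs(a[1] - b[1]))                # still tiny after one step
print(abs(a[-1] - b[-1]))              # after 50 steps, no longer tiny
```

Note the contrast with the steering-wheel case: here the relation between the size of an input and the size of its eventual effect is not proportional at all.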

This is another reminder to be humble.

A sociotechnical system or complex natural system like weather cannot be controlled as simply as steering a car. And in fact, it is questionable whether such systems can be controlled at all. Yes, like the butterfly we can flap our wings, but we can’t anticipate or completely determine the consequences that will result. The performance of a complex organization depends on interactions among many people and no single person determines the outcome. While each component can have an influence, the ultimate outcome will depend on contributions from many people and it will also depend on outside forces and influences.

It is an illusion to think that we can control complex systems - to believe that we are in control or even that we could be in control. In reality, the best anyone can do is to muddle.

If we are observant and careful, we can dip our oars into the water and/or adjust our sails in ways that will influence the direction of our vessel. But the waves and currents have an important vote! And also, our colleagues and friends have an impact. It is a mistake to think that we are the lone captains of our fate - but with a little luck and a lot of help from our friends - we can keep the boat upright and muddle through. 


A System is a way of looking at the world.

What is a system? I like Gerald Weinberg’s (1975) answer to this question, that “a system is a way of looking at the world.” This emphasizes that a system is analogous to a piece of art – it is a representation or model that an observer creates. It emphasizes that the system is not an ‘objective’ thing that exists independent of the observer. Rather, a system is a representation. As a representation it will make some things about the phenomenon being represented salient, and it will hide other things. I’ve long understood this position intellectually – but it has taken me much longer to appreciate the deeper meaning and the practical implications of this definition.

Early in my career, as a graduate student working with Rich Jagacinski to model human tracking performance, I was exposed to control theory. Ever since, I have seen closed-loop systems everywhere I look. It seemed obvious to me that the language of control theory and the various representations (e.g., time histories, Bode plots of frequency response characteristics, state space diagrams) provided unique and valuable insights into nature. And I have bored countless students and colleagues to tears as I tried to explain the implications of gains and time delays for stability in closed-loop systems. The power of control theory led to an arrogant sense that I had a privileged view of nature! I sought out those who shared this perspective, and I expended a lot of energy to convince other social scientists that the language of control theory was essential for understanding human performance.
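The point about gains and time delays can be illustrated with a toy discrete closed loop (an illustrative sketch with made-up numbers, not any specific tracking model): the controller corrects its error using stale feedback, and the same delay that is harmless at a modest gain destabilizes the loop at a high gain.

```python
# A discrete closed loop that corrects error e using feedback that is
# `delay` steps old: e[n+1] = e[n] - gain * e[n - delay].
# Low gain + delay: the error decays. High gain + same delay: it diverges.

def closed_loop_error(gain, delay, steps=60, e0=1.0):
    errors = [e0] * (delay + 1)          # initial history for the delayed term
    for _ in range(steps):
        correction = gain * errors[-1 - delay]   # acting on stale information
        errors.append(errors[-1] - correction)
    return errors

stable = closed_loop_error(gain=0.3, delay=2)    # settles toward zero
unstable = closed_loop_error(gain=0.9, delay=2)  # growing oscillation
print(abs(stable[-1]), abs(unstable[-1]))
```

The delay is identical in both runs; only the gain changes. That interaction, rather than either parameter alone, is what determines stability.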

The mistake was not to believe that control theory is a valuable lens for exploring nature. The mistake was to think that it is the best or only path. My infatuation with control theory colored everything I looked at. Everything I observed, every paper I read, every debate or discussion with a colleague was filtered through the logic of control theory. I classified people with respect to whether they ‘got it’ or not! I tended to discount everything that I could not frame in the context of control theory. The problem was that I was so intent on preaching the ‘truth’ of control theory, that I stopped listening to other perspectives.

The deeper implication of Weinberg’s definition is captured in his principle that

“the things we see more frequently are more frequent: 1) because there is some physical reason to favor certain states; or 2) because there is some mental reason.”

While I still believe that control theory captures some important aspects of nature, I now realize that the reason I see it everywhere, and the reason that I dismiss other perspectives is in part due to my own mental fixation. I now realize that it is impossible to separate these two possibilities from within control theory. You simply can’t tell whether your observations reflect natural constraints of the phenomenon or whether they reflect constraints of your perspective – if you only stand in one place. This is nicely illustrated by the Ames room illusion.

This is important because I am not the only victim. Over my career, I have watched others get locked into specific perspectives and have observed vicious debates as people defend one perspective against another. In an Either/Or world, there is a sense that only one perspective can be ‘true.’ So, if my perspective is right, yours must be wrong. I’ve watched constructivists war with ecological psychologists. I saw the development of nonlinear perspectives, and suddenly everything in nature was nonlinear – and all the insights from linear control theory were dismissed.

Gradually, I have come to understand that an important implication of the first principle of General Systems Thinking is:

To be humble.

Nature is incredibly complex relative to our sensemaking capabilities. Any representation or model that makes sense to us will only capture part of that complexity. Thus, every representation will be biased in some way. But also, many different perspectives can be valid in different ways. The challenge of General Systems Thinking is to be a better, more generous listener. Don’t let your skill with a particular perspective or a particular set of analytical tools blind you to the potential value of other perspectives. This is not simply about listening to other scientific perspectives. This is not simply about a debate between constructivists and ecologists, or between linear and nonlinear analytical tools. This is about listening to other forms of experience. Listening to the poets and artists. Listening to domain practitioners, listening to people from all levels of an organization.

In some sense General Systems Thinking is an attempt to find a balance between openness and skepticism. On the one hand, we need to be skeptical about all perspectives or models - including our own. On the other hand, we should be open to the potential values of different models and we need to be capable of using multiple models as we seek to distinguish the constraints that are intrinsic to the phenomenon of interest from the constraints of specific perspectives on that phenomenon. Our enthusiasm for the perspectives that we find to be most useful should be tempered by an appreciation of the potential of other perspectives and an openness to the insights that they offer. We need to move beyond the Either/Or debates to embrace a Both/And attitude of collaboration.

I can’t stop seeing closed-loop systems everywhere I look, and I will continue to share my passion for control theory with those who are interested, but I am working to temper my enthusiasm; to be a better listener; and to be a more generous colleague.  

In sum, a system is a representation created by an observer; and General Systems Thinking is a warning against getting trapped by a narrow perspective on nature. It is a reminder that nature is far too complex to be fully captured within any single representation that we could create or that we could grasp. If you aren't boggled by the complexity of nature, you aren't looking carefully enough. 

Samuel Pierpont Langley was a preeminent scientist and engineer of his time. In 1903, the nation's eyes were on his latest efforts to solve the problem of manned powered flight. He had done the calculations, had tested his aerodrome models, and was finally ready to scale up the models and put his solution to the ultimate test. On October 7th his assistant climbed onto the aerodrome and was launched over the Potomac River - and immediately crashed into the river. A second attempt on December 8th had the same result. Langley's failure led many to conclude that the solution to manned flight would not be realized in their lifetimes. Yet, nine days later, two unknown brothers from Dayton, OH made the first manned powered flight at Kitty Hawk.

How was the Wright Brothers' approach different from Langley's? What was the key to their success? One of the first questions the Wrights asked was how an airplane would be controlled. And when they inquired of the Smithsonian about prior work on control, they were surprised to learn that nothing had been done. The key to their success was the discovery that the problem of steering an aircraft was different from steering a boat. A simple rudder wouldn't do. To turn an aircraft required banking it. Until the Wrights (who built bicycles and carefully observed birds), no one imagined that a pilot would need to be able to bank the aircraft to achieve a coordinated turn. So, the Wrights started with the question of how to put control into the hands of smart humans. They tested their control system using kites and gliders and spent hours becoming skilled at using the controls before adding an engine.

For Langley - the problem was to design a flying machine. But for the Wrights - the problem was to design a joint cognitive system - that included the pilot as a critical component of the system. 

Today, the pressing challenge is not how to pilot a single aircraft, but how to manage a complex, distributed, multi-layered network (e.g., an international business conglomerate, a complex civil air space, a distributed all-domain military organization). This has been described as a polycentric control problem (e.g., Ostrom, 1999; Woods & Branlat, 2010).

Many people are attempting to engineer solutions to this polycentric control problem utilizing the latest advancements in artificial intelligence, machine learning, and natural language processing. The power of these technologies is quite amazing and the engineering is advancing at a remarkable rate. However, I worry that some do not fully appreciate the lesson of Langley and the Wrights.

They are framing the problem around the technologies and forgetting that ultimate success depends on the quality of the joint cognitive system.

For example, there seems to be an implicit belief among some that the technology will eliminate the fog and friction associated with managing a complex, distributed organization. Others seem to believe that the technology will allow control to be centralized into the hands of a single person (i.e., pilot or commander).

However, the work of Ostrom and others illustrates that polycentric control problems demand that we let go of the illusion of an omnipotent, centralized controller. Polycentric control problems require harnessing the power of a network of people and technologies with diverse experiences and a variety of skills. Further, it involves creating the conditions for these diverse components to self-organize around common functional objectives, to make dynamic trade-offs among conflicting values, and to resiliently adapt to unanticipated disturbances.

Note that, within the joint cognitive systems framework, the Wrights did amazing research on the design of wings and propellers, discovering errors in previous models of lift. So, advancing the technology is an important component of the solution. But advancing the technology is NOT enough! Ultimately - success depends on the ability of people to engage with the technology and steer the technology in the right direction. The technology is only part of the system being engineered. Those who treat the people as an afterthought will find themselves designing systems that are unstable and that don't get far off the ground before crashing into the river of complexity. 

Clock time has been a critical dimension for representing human behavior. This has included chronometric analysis using reaction time as a cue for inferring the nature of internal information processing activities, and plots of time histories to visualize patterns of activity in time. However, there are other ways to visualize systems dynamics that are less dependent on clock time as an explicit dimension. The construct of field is one important way to visualize dynamical constraints that exist over time (and space), rather than in time. Feynman (1963) describes the field construct:

It would be trivial, just another way of writing the same thing, if the laws of force were simple, but the laws of force are so complicated that it turns out that fields have a reality that is almost independent of the objects which create them. One can do something like shake a charge and produce an effect, a field, at a distance; if one then stops moving the charge, the field keeps track of all the past, because the interaction between the particles is not instantaneous. It is desirable to have some way to remember what happened previously. If the force upon some charge depends upon where another charge was yesterday, which it does, then we need machinery to keep track of what went on yesterday, and that is the character of a field. So when the forces get more complicated, the field becomes more and more real, and this technique becomes less and less of an artificial separation.

Inspired by Gibson's construct of the Field of Safe Travel, we used what engineers refer to as a state space diagram to represent the constraints associated with a desktop driving simulation. In this representation we plot the constraints associated with maximal acceleration and maximal braking as critical landmarks for understanding driver performance. The points indicate where braking was initiated; the open symbols represent performance early in training and the solid symbols represent performance after practice. Many have described these types of results as if the driver had learned to perceive Time to Contact. However, we prefer to describe the results as evidence that the driver has learned the constraints of his vehicle and has discovered an optimal, bang-bang solution to the task of approaching and stopping before an obstacle as fast as possible. That is, full acceleration until reaching the point where full braking will stop the vehicle just before the obstacle. In essence, the driver has learned the dynamics of the simulation. Note that this was a desktop simulation, so there was no impact of g-forces. The point is that the state space represents the constraints over time (i.e., the field; the unique action constraints on Superman and Spiderman) in ways that time histories do not.
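The bang-bang policy described above can be sketched as a simple simulation. The acceleration and braking limits here are arbitrary assumptions, not the parameters of the original study; the switching rule is the essential part: brake the moment the minimum stopping distance (v²/2b) reaches the remaining distance to the obstacle.

```python
# Bang-bang approach-and-stop: full acceleration until the point where full
# braking will just stop the vehicle at the obstacle, then full braking.

def bang_bang_stop(distance, a_max=3.0, b_max=6.0, dt=0.001):
    x, v = 0.0, 0.0
    while v > 0 or x == 0.0:
        # Switch when stopping distance under max braking reaches what's left
        braking = v * v / (2 * b_max) >= (distance - x)
        a = -b_max if braking else a_max
        x += v * dt
        v = max(0.0, v + a * dt)
    return x

stop_point = bang_bang_stop(distance=100.0)
print(stop_point)   # the vehicle stops approximately at the obstacle
```

In state-space terms, the policy rides the maximal-braking boundary that the diagram treats as a critical landmark, which is why learning that boundary amounts to learning the vehicle's dynamics.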

Another way to represent patterns over time, rather than instances in time is to use Fourier Analysis. This is a means of representing events as a collection of sinusoidal patterns rather than a collection of points in time. The frequency domain representations allow dynamical systems to be described as observers or control systems that are tuned to certain patterns (e.g., frequencies). This is consistent with E.J. Gibson's theory of perceptual attunement. The basic idea is that with experience, people learn to detect patterns (or structure) in events and that they can use those patterns to anticipate the future and to synchronize their activities with the patterns in constructive ways. 
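The frequency-domain re-description mentioned above can be illustrated with a standard Fourier transform: a time history of samples is re-expressed as a collection of sinusoidal components, and the dominant pattern falls out directly. The signal below (2 Hz and 5 Hz components) is an arbitrary illustrative choice.

```python
import numpy as np

fs = 100.0                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of samples

# An "event" built from two sinusoidal patterns (amplitudes 1.0 and 0.4)
signal = 1.0 * np.sin(2 * np.pi * 2.0 * t) + 0.4 * np.sin(2 * np.pi * 5.0 * t)

spectrum = np.fft.rfft(signal)                 # time history -> frequency domain
freqs = np.fft.rfftfreq(len(signal), 1 / fs)   # frequency of each component
dominant = freqs[np.argmax(np.abs(spectrum))]  # the strongest pattern
print(f"dominant frequency: {dominant:.1f} Hz")  # -> 2.0 Hz
```

In the time domain this signal is just 200 points; in the frequency domain it is two spikes. An observer or controller "tuned" to 2 Hz is tuned to the dominant structure of the event, not to any instant in it.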


So, while time histories and chronometric analysis of human behavior can lead to important insights into human performance, it is important for social scientists to consider other ways to visualize the dynamics of behavior. The exclusive use of the clock tends to suggest a causal narrative, whereas other narratives (tuning to constraints or to patterns) are suggested by alternative representations.

Each perspective provides unique insights and suggests different metaphors, and no perspective captures the complete story. 


The construct of wicked problems reflects situations where intuitions based on conventional logic will often not be adequate. Wicked problems are chaotic. Thus, conventional ways to decompose problems based on the intuitions of linear analytic techniques or traditional causal reasoning will fail. It is here where ‘experience’ and ‘wisdom’ are the best guides. It is here where Captain Kirk (a bias toward action), Spock (logic), and Dr. McCoy (emotions) must work together to keep the boat stable amid the waves of epistemic uncertainty.

In dealing with wicked problems the head and heart must trust each other and work together to muddle through. For these situations pragmatics take priority – the right choice is the one that works! And often the only reason it works will be that you did what was required to make it work!

This involves more than classical logic. It often requires going forward and following your intuitions, even when conventional logic says to turn back. It involves passion, persistence, and discipline. For these situations, it is not about making the ‘right choice.’ It is about making your choices work! And typically, this will require the efforts of both head (mind) and heart (body).

Again, it is tempting to argue about whether the head or heart should lead. But this reflects a dualistic trap based on either/or reasoning. It is ultimately a matter of coordination between heart and head. It is not about abandoning analytic aspects of cognition, but rather recognizing that the heart is a necessary partner. Success depends on effective coordination between heart and head in order to make the choices work out right! And still there are no guarantees. Serendipity and luck also play a role. 

Without Captain Kirk, Spock and Dr. McCoy might get caught in infinite analytical loops and never pull the trigger, but without Spock and Dr. McCoy, Captain Kirk may not be able to make his choices work! And despite their combined efforts the waves of uncertainty may still get the final vote! 

Muddling literally involves a trial and error process of "feeling" our way through a complex ecology to satisfying outcomes. This involves applying our experiences and utilizing domain constraints (e.g., landmarks) in a smart way. However, muddling involves more than just 'knowledge' or 'intelligence.' It also involves feelings. It takes passion to persist in the face of inevitable errors. It takes a cool head to face the inevitable risks. It takes discipline to persist when progress is slow. And it takes a well-tuned value system to set appropriate goals and to gauge progress toward those goals.

Information processing models of cognition, based on a computer metaphor, have tended to consider emotions to be a primitive source of 'noise' that interferes with successful adaptation and performance. In this context, the difficulties that Phineas Gage had in adjusting to life after his accident have been attributed to damage to an internal executive that normally suppresses more primitive instincts associated with emotions. However, research by Damasio suggests that the difficulties that Gage experienced may instead be due to a lack of emotional constraints - that is, a kind of autism in which the emotions are cut off from the executive processing, leading to a cold logical process that is poorly tuned to the realities of everyday life and that lacks the common sense necessary for consistently achieving satisfaction.

Damasio's work suggests that rather than thinking about emotions as 'primitive' instincts that are a threat to satisfactory functioning, it might be better to think about emotions as evolutionarily tuned instincts that help to ground our newer cognitive capabilities in the realities of everyday situations. Perhaps the tuning of emotional and aesthetic sensitivities is fundamental to human expertise. This might be part of why Don Norman's claim that

Beautiful things work better!

is true. Perhaps expertise ultimately depends on our ability to coordinate our hearts and minds. Perhaps designing technologies to support human expertise requires developing representations that are well tuned both to the domain constraints and to the aesthetic sensibilities of people. Perhaps it is the intimate intermingling of the pragmatic with the aesthetic that makes Pirsig's construct of Quality so difficult to explain in conventional terms.

Any philosophic explanation of Quality is going to be both false and true precisely because it is a philosophic explanation. The process of philosophic explanation is an analytic process, a process of breaking something down into subjects and predicates. What I mean (and everybody else means) by the word ‘quality’ cannot be broken down into subjects and predicates. This is not because Quality is so mysterious but because Quality is so simple, immediate and direct.


Damasio, A. (1994). Descartes’ Error: Emotion, reason, and the human brain. New York: Penguin Books.

Norman, D.A. (2005). Emotional design. New York: Basic Books.

Pirsig, R.M. (1974). Zen and the art of motorcycle maintenance. New York: Perennial Classics. (p. 254).

In the previous post, the main point was that in complex situations analytic solutions (e.g., maps, classical logic, mathematical modeling) will generally fall short of addressing all the important factors and relations that must be considered to achieve a satisfying outcome. Thus, there will be a need for some degree of muddling to get to a satisfying outcome. By 'muddling' I mean a kind of trial and error process analogous to what C.S. Peirce described as Abductive Inference. That is, we generate hypotheses and then test these hypotheses through acting on them. It is important to note that some hypotheses and some actions are better than others. Productive thinking, then, involves generating smart hypotheses and smart tests (i.e., hypotheses that are more plausible and tests that generate useful information or feedback and that are relatively safe). This is consistent with Lindblom's idea of incrementalism - making small, safe adjustments to slowly 'feel' the way to a satisfying outcome. It is also consistent with Gigerenzer's idea of ecological rationality and the smart use of heuristics.

A key aspect of smart or expert muddling is to utilize the natural constraints of situations to reduce the space of possibilities and to minimize the consequences of errors. The aiming off strategy used by sailors and orienteers to solve navigation problems provides a good example of how structure inherent in a problem can provide the basis for heuristic solutions that greatly simplify computational demands. In the sailing context, consider the problem of navigating across the vast Atlantic Ocean from London to Boston in the days before global positioning systems. The ship’s pilot would need to frequently compute the position using whatever landmarks were available (e.g., the stars, the sun, etc.). These computations can be very imprecise, and on a long trip errors can accumulate so that when the ship initially sights the North American continent - it may not be exactly where intended. In fact, Boston may not be in sight.

A similar problem arises in orienteering, which involves a race across forested country from waypoint to waypoint using a compass and topographic map for navigation. When the next waypoint is a distant bridge across a river, because of the uncertainties associated with compass navigation, there is a high probability that due to accumulated errors, the orienteer will not be able to hit the river at exactly the location of the bridge. What does she do when she gets to the river and the bridge is not visible?

Skilled sailors and skilled orienteers use a strategy of aiming off to solve the problem of error in the computational approaches to navigation. That is, rather than setting their course to Boston or to the bridge, they set their course for a point on the coast south of Boston or to the nearest point on the river below the bridge. That is, they purposely ‘bias’ their path to miss the ultimate target. Why? Is this an ‘error’?

Using a computational solution, when you reach the coast or the river and the target is not in sight, which way do you go? If you use the aiming off strategy you know exactly which way to go. When you see the coast, you should be able to sail with the current, up the coast to Boston. When you reach the river, you know which direction to follow the river in order to find the bridge. With the aiming off strategy, rough computations are used to get into a neighborhood of the goal (to reach the boundary constraint), and then, the local boundary constraint is used to zero-in on the target using directly perceivable feedback. The structural association between the boundary (coast line or river) and the target (Boston or bridge) is information (i.e., a sign or landmark) that specifies the appropriate actions.
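The logic of aiming off can be made concrete with a small Monte Carlo sketch. The travel distance, error bound, and offset angle below are illustrative assumptions, not measurements: the bridge sits at lateral position 0 on the river, the compass heading is off by up to ±5°, and the deliberate bias (8°) is chosen to exceed the worst-case error. Aiming directly leaves the arrival scattered on both sides of the bridge; aiming off puts every arrival on one known side.

```python
import math
import random

DISTANCE = 10_000.0   # metres of travel to the river (illustrative)
MAX_ERROR = 5.0       # compass error, uniform within +/- 5 degrees
AIM_OFF = 8.0         # deliberate bias, larger than the worst-case error

def arrival_offset(aim_deg):
    """Lateral position (m) where we hit the river; the bridge is at 0."""
    error = random.uniform(-MAX_ERROR, MAX_ERROR)
    return DISTANCE * math.tan(math.radians(aim_deg + error))

random.seed(1)
direct = [arrival_offset(0.0) for _ in range(1000)]
biased = [arrival_offset(AIM_OFF) for _ in range(1000)]

# Direct aiming: arrivals fall on both sides, so at the river you
# cannot tell which way the bridge lies.
both_sides = any(d < 0 for d in direct) and any(d > 0 for d in direct)
# Aiming off: since the bias exceeds the worst-case error, every arrival
# is on the same side - you always know which way to turn.
one_side = all(b > 0 for b in biased)
print(both_sides, one_side)  # -> True True
```

The point is not that the biased path is more accurate (it deliberately is not); it is that the boundary constraint plus a guaranteed sign of error converts an ambiguous computation into directly perceivable feedback.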

As autonomous analytical technologies are integrated into organizations, it is important to also consider the role that smart muddling will play in achieving the goals of the organization. This smart muddling can be supported through the design of direct manipulation/perception interfaces (e.g., Bennett & Flach, 2011; Shneiderman, 2022) that allow people to utilize the power of AI/ML systems to discover patterns (natural constraints), to test hypotheses, and to anticipate the potential risks associated with alternative actions. An important question for designers is

How can we leverage the power of AI/ML systems to help people to muddle more skillfully?


Bennett, K.B. & Flach, J.M. (2011). Display and Interface Design: Subtle Science, Exact Art. Boca Raton, FL: CRC Press.

Shneiderman, B. (2022). Human-centered AI. Oxford: Oxford University Press.