
Kurt Lewin's quote that "nothing is more practical than a good theory" has been repeated so often that it has become trite. However, few appreciate a complementary implication of this truism: that "the strongest test of a theory is design." In other words, the ultimate test of a theory is whether it can be put to practical use. In fact, Pragmatists such as William James, C.S. Peirce, and John Dewey might have argued that 'practice' is the ultimate test of 'truth.'

William James was always skeptical of what he called "brass instrument" psychology (à la Wundt and others). In experimental science, an experiment is often 'biased' by the same assumptions that motivated the theory being tested. The result is that most experiments turn out to be demonstrations of the plausibility of a theory, NOT tests of the theory. That is, in deciding which variables to control, which to vary, and which to measure, the scientist plays a significant role in shaping the ultimate results. For example, in testing the hypothesis that humans are information processors, experiments often put people into situations (e.g., choice reaction time tasks) where successfully doing the task requires that the human behave like an information processing system. Thus, in experiments, hypotheses are tested against the reality as imagined by the scientist. The experiment rarely tests the limits of that imagination, because the scientist creates the experiment.

However, in design the hypothesis runs up against a reality that is beyond the imagination of the designer. A design works well, or it doesn't. It changes things in a positive way, or it doesn't. When a design is implemented in practice, the designer is often 'surprised' to discover that in framing her hypothesis she didn't consider an important dimension of the problem. Sometimes these surprises result in failures (i.e., products that do not meet the functional goals of the designers). But sometimes these surprises result in innovations (i.e., products that turn out to be useful in ways that the designer hadn't anticipated). Texting on smartphones is a classic example. Who would have imagined, before the smartphone, that people would prefer texting to speaking over a phone?

Experiments are typically designed to minimize the possibilities for surprise. Design tends to do the opposite. Design challenges tend to generate surprises. In fact, I would define 'design innovation' as simply a pleasant surprise!

So, I suggest a variation on Yogi Berra's quote "If you don't know where you're going, you might not get there."

If you don't know where you're going you might be headed for a pleasant surprise (design innovation). 

And if you don't reach a pleasant surprise on this iteration, simply keep going (iterating) until you do!


As of May 1st I have retired from Wright State University. I accepted an early retirement incentive that was offered due to severe economic conditions at the university. I am not at all ready to retire, but I am eager for a change from WSU.  I hope I still have things to offer and I know there is still much for me to learn.

It was great to see many of my former students at a research celebration that the Department of Psychology hosted in my honor on May 7th. It is amazing to see the work that these former students are doing.  Clearly, I didn't do too much damage!

I am looking forward to the next adventure!  Just waiting for the right door to open.

Taleri Hammack, Jehengar Cooper, John M. Flach & Joseph Houpt

ABSTRACT

This paper explores the ‘hot hand illusion’ from the perspective of ecological rationality. Monte Carlo simulations were used to test the sensitivity of typical tests for randomness (e.g., Wald–Wolfowitz) to plausible constraints on sequences of binary events (e.g., basketball shots). Most of the constraints were detected when sample sizes were large. However, when the range of improvement was limited to reflect natural performance bounds, these tests did not detect a success-dependent learning process. In addition, a series of experiments assessed people’s ability to discriminate between random and constrained sequences of binary events. The results showed that in all cases human performance was better than chance, even for the constraints that were missed by the standard tests. The case is made that, as with perception, it is important to ground research on human cognition in the demands of adaptively responding to ecological constraints. In this context, it is suggested that a ‘bias’ or ‘default’ that assumes that nature is ‘structured’ or ‘constrained’ is a very rational approach for an adaptive system whose survival depends on assembling smart mechanisms to solve complex problems.
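For readers unfamiliar with the Wald–Wolfowitz runs test mentioned in the abstract, a minimal sketch is given below. This is an illustrative implementation using the standard normal approximation for the run count, not the simulation code from the paper; the example sequences are made up for demonstration.

```python
import math

def runs_test(seq):
    """Wald-Wolfowitz runs test on a binary sequence (e.g., hits/misses).
    Returns (z, p): z < 0 means fewer runs than expected by chance
    (streaky), z > 0 means more runs (alternating). Two-sided p-value."""
    n1 = sum(seq)              # count of 1s (e.g., made shots)
    n2 = len(seq) - n1         # count of 0s (e.g., missed shots)
    n = n1 + n2
    # A run is a maximal block of identical outcomes.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    # Mean and variance of the run count under the null (random order).
    mu = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - mu) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# A 'streaky' sequence (long runs) vs. a maximally alternating one.
streaky = [1] * 10 + [0] * 10 + [1] * 10 + [0] * 10
alternating = [1, 0] * 20
z_s, p_s = runs_test(streaky)       # z_s is strongly negative
z_a, p_a = runs_test(alternating)   # z_a is strongly positive
```

The abstract's point can be seen in this framing: the test only flags departures from randomness that show up in the run count, so constraints that leave the run count near its null expectation (such as a bounded learning process) can pass undetected.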

Download PDF

Abstract

An alternative to conventional models that treat decisions as open-loop independent choices is presented. The alternative model is based on observations of work situations such as healthcare, where decision making is more typically a closed-loop, dynamic, problem-solving process. The article suggests five important distinctions between the processes assumed by conventional models and the reality of decision making in practice. It is suggested that the logic of abduction in the form of an adaptive, muddling-through process is more consistent with the realities of practice in domains such as healthcare. The practical implication is that the design goal should not be to improve consistency with normative models of rationality, but to tune the representations guiding the muddling process to increase functional perspicacity.

This paper has been accepted for publication in Applied Ergonomics: Access Article


FIGURE: The muddling dynamic is modeled as two coupled loops. The inner loop reflects the active control, driven by the current assumptions about the problem. The outer loop is monitoring performance on the inner loop for 'surprises' that might indicate that the current assumptions do not fit the situation.