The team at Mile Two recently created an app (CVDi) to help people make sense of the clinical values associated with cardiovascular health. The app is a direct-manipulation interface that lets people enter and change clinical values and get immediate feedback about the impact on overall health and treatment options.
The feedback about overall health is provided in the form of three Risk Models from published research on cardiovascular health. Each model is based on longitudinal studies that have tracked the statistical relations between various clinical measures (e.g., age, total cholesterol, blood pressure) and incidents of cardiovascular disease (e.g., heart attacks or strokes). However, the three models each use different subsets of that data to predict risk, and thus the risk estimates can vary considerably.
A number of people who have reviewed the CVDi app have suggested that this variation among the models might confuse users, or might lead people to cherry-pick the value that fits their preconceptions (e.g., someone who is skeptical about medicine might take the best value as justification for not going to the doctor, while a hypochondriac might take the worst value as justification for their fears). In essence, the suggestion is that the variability among the risk estimates is NOISE that will reduce the likelihood that people will make good decisions. These reviewers suggest that we pick one (e.g., the 'best') model and drop the other two.
We have an alternative hypothesis. We believe that the variation among the models is INFORMATION that provides the potential for deeper insight into the complex problem of cardiovascular health. Our hypothesis is that the variation will lead people to consider the basis for each model (e.g., whether it is based on lipids, or BMI, or whether C-reactive proteins are included). Our interface is designed so that it is easy to SEE the contribution of each of these variables to each of the models. For example, a big difference in risk estimates between the lipid-based models and the BMI-based model might signify the degree to which weight or lipids is contributing to the risk. We believe this is useful information in selecting an appropriate treatment option (e.g., statins or diet).
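The idea that a gap between model estimates is itself informative can be sketched with toy numbers. The following is a minimal, entirely hypothetical illustration: the risk functions, their inputs, and every coefficient are invented for demonstration and do not reproduce the published models behind CVDi.

```python
# Hypothetical illustration: two toy "risk models" that score the same
# person from different subsets of clinical values. All coefficients are
# invented and do NOT correspond to any published risk model.

def lipid_model(age, total_chol, hdl, systolic_bp):
    """Toy lipid-based risk score (hypothetical weights)."""
    return 0.04 * age + 0.02 * (total_chol - hdl) + 0.01 * systolic_bp

def bmi_model(age, bmi, systolic_bp):
    """Toy BMI-based risk score (hypothetical weights)."""
    return 0.04 * age + 0.15 * bmi + 0.01 * systolic_bp

# A person with elevated lipids but a normal body-mass index.
person = dict(age=55, total_chol=280, hdl=35, systolic_bp=140, bmi=24)

lipid_risk = lipid_model(person["age"], person["total_chol"],
                         person["hdl"], person["systolic_bp"])
bmi_risk = bmi_model(person["age"], person["bmi"], person["systolic_bp"])

print(f"lipid-based estimate: {lipid_risk:.2f}")
print(f"BMI-based estimate:   {bmi_risk:.2f}")

# A large gap between the estimates points at the variables the models
# weight differently -- here, lipids rather than weight are driving risk,
# which bears on the choice of treatment (e.g., statins vs. diet).
if abs(lipid_risk - bmi_risk) > 1.0:
    print("Models disagree: inspect the variables they weight differently.")
```

In this sketch the lipid-based score exceeds the BMI-based score because the person's lipids are high while their BMI is normal; the disagreement is the signal, not noise to be averaged away.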
The larger question here concerns the function of MODELS in cognitive systems or decision support systems. Should the function of models be to give people THE ANSWER, or should it be to provide insight into the complexity so that people are well informed about the problem - so that they are better able to muddle through to a satisfying answer?
Although there is great awareness that human rationality is bounded, there is less appreciation of the fact that all computational models are bounded. While we tend to be skeptical about human judgment, there is a tendency to take the output of computational models as the answer or as the truth. I believe this tendency is dangerous! I believe it is unwise to think that there is a single correct answer to a complex problem!
As I have argued in previous posts, I believe that muddling through is the best approach to complex problems. And thus, the purpose of modeling should be to guide the muddling process, NOT to short-circuit the muddling process with THE ANSWER. The purpose of the model is to enhance situation awareness, helping people to muddle well and increasing the likelihood that they will make well-informed choices.
Long ago we made the case that for supporting complex decision making, models should be used to suggest a variety of alternatives - to provide deeper insight into possible solutions - rather than to provide answers:
Brill, E.D. Jr., Flach, J.M., Hopkins, L.D., & Ranjithan, S. (1990). MGA: A decision support system for complex, incompletely defined problems. IEEE Transactions on Systems, Man, and Cybernetics, 20(4), 745-757.
Link to the CVDi interface: CVDi