As long as we employ “common sense” to guide our own actions, we can’t really go wrong, because there is an almost infinite supply of commonsense advice to cover our daily situations. If some of these commonsense-guided actions seem inconsistent, so be it; life goes on.
Similarly, when we employ common sense in decisions that would impact large numbers of people, we can usually find some commonsense explanations to cite when we are confronted with criticism. In a way, common sense becomes the shield for our hubris.
Politicians of any stripe can always find commonsense explanations that appeal to their supporters, however much those explanations evoke disbelief in their opponents. Managers can usually justify their decisions to their peers and superiors, but not to the people whose lives are most affected.
As I mentioned at the beginning of this “common sense” journey, one of the major problems with using common sense to predict others’ behavior is that we inevitably assume too much. Erroneous assumptions on a large scale lead to all kinds of unintended consequences.
Duncan Watts suggests that instead of using the familiar but unreliable “predict and control” model, we may want to consider switching to “measure and react.” As Watts points out in Everything Is Obvious, laypeople’s predictions are often no worse than the “experts’,” and frequently layman and expert are equally wrong. If the prediction is off, then the planning and control that follow from it are predisposed to go awry.
Just because we can’t confidently predict most complex systems doesn’t mean that we can’t use probability to help make decisions. Of course, we still need to understand the nature of the phenomenon we are confronting. It’s one thing to plan for social behavior that happens with regularity, such as flu season or holiday shopping; it’s another to plan for known but infrequent phenomena, such as a category 4 hurricane or the “ice bucket challenge,” the viral ALS fundraising campaign of 2014. Seriously, who could have predicted the success of the “ice bucket challenge”?
In addition, we should be cautious about relying on “experts’” opinions. Why? Watts explains that it is because we usually consult experts only one at a time. We would be much better off relying on polls of many people, experts and non-experts alike (or no experts at all), for input. Not only do experts cost more; they also tend to advocate more sophisticated models for “better control.”
In Watts’ many experiments, and in his reviews of others’ experiments, we learn that simple models predict just as well as the more sophisticated ones. Or rather, the more sophisticated models don’t return enough on the investment of all the additional information you have to acquire (at a cost, of course). Watts uses the example of sports games. The key factors for predicting which team might win are whether it’s a home game and what the historical data tells us about the teams. All the additional nuanced information helps only a little, not enough to make any significant difference.
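To make the “simple model” idea concrete, here is a minimal sketch in the spirit of Watts’ sports example: pick the team with the better historical win rate, adjusted by a small home-field bonus. The team names, win rates, and bonus value are all hypothetical, invented for illustration; this is not a model from Watts’ book.

```python
# A deliberately simple game predictor: historical win rate plus a
# hypothetical home-field bonus. All numbers are made up for illustration.

def predict_winner(home_team, away_team, win_rates, home_bonus=0.06):
    """Return the team with the higher adjusted historical win rate.

    home_bonus is a hypothetical home-field adjustment, not an
    empirically estimated value.
    """
    home_score = win_rates[home_team] + home_bonus
    away_score = win_rates[away_team]
    return home_team if home_score >= away_score else away_team

# Made-up historical win rates for two hypothetical teams.
win_rates = {"Eagles": 0.55, "Giants": 0.45}
print(predict_winner("Giants", "Eagles", win_rates))  # Eagles: 0.55 > 0.51
```

A fancier model could fold in injuries, weather, and travel schedules, but Watts’ point is that such refinements tend to move the needle only slightly beyond what these two crude factors already capture.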
Through experimentation, especially for organizations, a “measure and react” approach would provide more immediate information about what the next step should be and how to implement it. For example, a company can run advertising in one geographic area or with one demographic group and compare the results against similar markets. Of course, not every decision allows experimentation; imagine launching a military surge in one town but not in others.
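The advertising example can be sketched as a simple test-versus-control comparison. The conversion counts below are hypothetical, chosen only to show the arithmetic of “measure and react”; any real campaign would also need a check that the difference isn’t just noise.

```python
# Minimal sketch of "measure and react": compare a test market (where the
# ad ran) against a similar control market. Numbers are hypothetical.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who took the desired action."""
    return conversions / visitors

test_rate = conversion_rate(conversions=240, visitors=4000)     # ad shown
control_rate = conversion_rate(conversions=180, visitors=4000)  # no ad

# Relative lift of the test market over the control market.
lift = (test_rate - control_rate) / control_rate
print(f"test={test_rate:.1%} control={control_rate:.1%} lift={lift:.0%}")
```

The “react” half is what matters: if the measured lift holds up, expand the campaign; if it doesn’t, try a different message in the next round rather than defending the original plan.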
Watts offers these additional principles: “local knowledge,” “bright spot success stories,” and “bootstrapping,” and they are all connected. Local knowledge brings more accurate information and focused skills to specific problems; in other words, one size cannot possibly fit all. “Local” personnel have a much better grasp of whom to contact, what resources are needed and in what amounts, and where to focus those resources. Most of the issues organizations face are not brand new, so it is efficient to look for ideas that have already been tried. But don’t just copy; by studying those ideas closely you can see how to adapt them to your needs. Underlying all this is the notion of “humility.” Watts quotes William Easterly:
“A Planner thinks he already knows the answer; he thinks of poverty [or whatever issue] as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors…and hopes to find answers to individual problems by trial and error… A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.”
Watts further drives the point home with this observation: “[Planners] develop plans on the basis of intuition and experience alone. Plans fail, in other words, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.”
Of course, what I’ve been presenting in this space is based on my own “expertise and experience,” which likely commits the same commonsense fallacy even as I learn from Mr. Watts. So, I strongly suggest that you read Everything Is Obvious: How Common Sense Fails Us for yourself.
Till next time,
Staying Sane and Charging Ahead.
Direct Contact: email@example.com
Editor’s note: Dr. Yang has a PhD in Management from the Wharton School of the University of Pennsylvania. She taught at Wharton for a number of years, and has consulted for small groups and small organizations, including on cross-cultural issues. Her professional worldview comprises three pillars: 1. All organizations are social systems in which elements are interrelated. 2. To improve organizations, the focus should be on the positive dimensions on which to build; this philosophical foundation is Appreciative Inquiry. 3. Yang subscribes to the methodological perspective that she is part of the instrument with which to gain quality data from respondents, and with which to compare and contrast others’ realities.