Management and the Problems of Overdetermination and Underdetermination

The Wall Street Journal has posted a story entitled “Management Research Is Fishy, Says New Management Research.” The article is based on a paper, “The Chrysalis Effect: How Ugly Initial Results Metamorphosize into Beautiful Articles.” According to the WSJ, the paper is forthcoming from the Journal of Management (note, however, that as of this writing, the paper was not available from the JOM website).

As reported by the WSJ, the paper finds that at the dissertation stage, 82 hypotheses were supported for every 100 that were unsupported (i.e., about 45% of hypotheses were supported), meaning that researchers’ theories were disconfirmed by their findings more often than not. By the time the papers made it into journals, however, the ratio had shifted to 194:100, meaning that roughly 66% of hypotheses were supported. This pattern is commonly known as publication bias. In a prior version of the paper, the authors interpreted this finding as evidence of “questionable research practices” (QRP).
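To make the arithmetic behind these ratios explicit (a quick check using only the figures reported above):

    82 / (82 + 100) ≈ 0.45, i.e., about 45% of dissertation hypotheses supported
    194 / (194 + 100) ≈ 0.66, i.e., about 66% of published hypotheses supported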

Implicit in this logic is an assumption that every dissertation should be published in a journal; how else could the discrepancy between dissertation results and published results be closed? The logic also seems to imply that both supported and unsupported hypotheses are inherently unproblematic and require no further qualification. In essence, all hypotheses are intrinsically fit to print, and they are assumed to give us some kind of direct access to the “truth” of the matter. But what does it mean when a hypothesis is supported or not supported?

This discussion prompted me to reflect a bit on the problems of overdetermination and underdetermination. “Overdetermination” refers to situations in which a particular effect could arise from any one of many possible causes (Hannan, 1971; Meyer & Goes, 1988). Or, as Weick (1996: 308) put it: “Overdetermination is simply another way of stating Thompson’s first point that people have multiple, interdependent, socially coherent reasons for doing what they do.” Other organizational theorists have described such circumstances in terms of means-ends ambiguity, or situations in which there are multiple plausible alternatives (Hambrick, 2007). Overdetermination can also occur when mechanistic notions of causality overwhelm alternative plausible explanations for what is happening (Boje, 2001).

“Underdetermination” refers to situations in which the “facts” are not clear or strong enough to establish a definitive explanation (Giddens, 1979, 1984: 17). This could be because the facts themselves possess “interpretive flexibility” (Pinch & Bijker, 1987), meaning they are open to more than one plausible reading. Or it could be that the available empirical evidence is limited or derived from narrow contexts (Shrivastava, 1986). In both cases, the available evidence is compatible with more than one theory or explanation. Gathering more facts may not resolve the problem; indeed, “science” can even make matters worse (Sarewitz, 2004). As Giddens (1979: 243) put it: “no amount of accumulated fact will in and of itself determine that one particular theory be accepted and another rejected, since by the modification of the theory, or by other means, the observations in question can be accommodated to it.”

One famous example, Allison’s (1971) analysis of the Cuban missile crisis, has elements of both overdetermination and underdetermination. In this case, “the same event is explained by three completely different theories, each of which nevertheless is able to highlight clear and distinct insights into the origin, unfolding, and resolution of the crisis” (Burgelman, 2011: 597). More generally, viewed through the lenses of overdetermination and underdetermination, we might hypothesize that not every study will work out: some hypotheses will be supported, some will not. But if they are to be useful, any such findings will need to be translated. After all, we don’t live in a world of variables.

But in that case, how do we know whether a study is fit to print? In a widely cited paper, Davis (1971) offered one explanation, arguing that “interesting” studies are more likely to be published and popular. No doubt other explanations are possible. Whether such circumstances are evidence of questionable research practices depends on the meaning that is given to the evidence. Can a question such as this even be put to a hypothesis test? My sense is that it cannot. Instead, questions such as these entail what I have called values work: conclusions and their sustenance depend on the network of values practices in which one is entangled, and on the continued performance of the implicated social and material network.

Selected References

Allison, G. T. 1971. Essence of Decision: Explaining the Cuban Missile Crisis. Boston: Little, Brown & Co.

Boje, D. M. 2001. Narrative Methods for Organizational and Communication Research. Thousand Oaks, CA: Sage.

Burgelman, R. A. 2011. Bridging History and Reductionism: A Key Role for Longitudinal Qualitative Research. Journal of International Business Studies, 42: 591–601.

Davis, M. S. 1971. That’s Interesting! Towards a Phenomenology of Sociology and a Sociology of Phenomenology. Philosophy of the Social Sciences, 1: 309–344.

Giddens, A. 1979. Central Problems in Social Theory: Action, Structure, and Contradiction in Social Analysis. Berkeley: University of California Press.

Giddens, A. 1984. The Constitution of Society: Outline of the Theory of Structuration. Berkeley: University of California Press.

Hambrick, D. C. 2007. The Field of Management’s Devotion to Theory: Too Much of a Good Thing? Academy of Management Journal, 50: 1346–1352.

Hannan, M. T. 1971. Aggregation and Disaggregation in Sociology. Lexington, MA: Lexington Books.

Meyer, A. D., & Goes, J. B. 1988. Organizational Assimilation of Innovations: A Multilevel Contextual Analysis. Academy of Management Journal, 31: 897–923.

Pinch, T. J., & Bijker, W. E. 1987. The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other. In W. E. Bijker, T. P. Hughes, & T. J. Pinch (Eds.), The Social Construction of Technological Systems: 17–50. Cambridge, MA: MIT Press.

Sarewitz, D. 2004. How Science Makes Environmental Controversies Worse. Environmental Science & Policy, 7: 385–403.

Shrivastava, P. 1986. Is Strategic Management Ideological? Journal of Management, 12: 363–377.

Weick, K. E. 1996. Drop Your Tools: An Allegory for Organizational Studies. Administrative Science Quarterly, 41: 301–313.

One thought on “Management and the Problems of Overdetermination and Underdetermination”

  1. Hi Joel,

    I saw this link last week, but I avoided weighing in because it felt weird being the first to comment on/share a post about my own article. I’m still the first, but you bring up a few interesting points worth addressing. First, in top-tier publications the gap between article acceptance and “online first” publication can be a few weeks, sometimes months. As your CV grows, you’ll learn this frustration, but I advise you to be patient, and if there is ever a concern, implied or explicit, feel free to verify the forthcoming status with the accepting journal. Second, there was no assumption in our paper that all dissertations should be published in journals. This is an odd critique, as we never implied or recommended that all dissertations be published in journals. It almost seems as if you haven’t read the paper or made any effort to contact the authors or journal to get it. The bottom line is that our research found evidence that changes to data and hypotheses coincided with increases in the ratio of significant to non-significant hypotheses more often than vice versa. Hence the term “questionable research practices” rather than academic misconduct.
