When Does “No” Mean “No”?

Note: This article was published in The Globe and Mail on March 3, 2015. The version below includes additional hyperlink references not published in the original.


On big resource projects, when does ‘no’ mean ‘no’?

By Joel Gehman and Michael Lounsbury
March 3, 2015

A recent column lamented that getting to “yes” on energy projects in Canada has never been tougher: Fossil-fuel developments, pipelines, mines, dams, transmission lines, and even wind turbines “are frequently contested, delayed or blocked.” But do such outcomes mean there is a problem? And if so, what kind of problem is it?

The argument – ‘Getting to Yes’ – assumes that “yes” is somehow on the side of the angels. But a critical element of any great strategy is saying “no.” It’s Strategy 101. No organization – whether a corporation, a nation-state or a non-profit – can say “yes” to everything. Choices must be made. In his classic article “What Is Strategy?,” Harvard professor Michael Porter put it bluntly: “The essence of strategy is choosing what not to do.”

Clearly then, “no” is often the better strategic choice. And yet, organizations often fall into a “yes” trap. This is because, once set in motion, strategies are hard to reverse. There are sunk costs, learning effects, organizational inertia and network externalities, among other issues. And so, an organization can easily escalate its commitment to a losing course of action. But in real time, as these strategic decisions unfold, the folly is often hard to stop.

One famous example is New York’s Shoreham Nuclear Plant. First proposed in April, 1966, the plant was expected to cost $75-million and come online by 1973. The plant was eventually completed in October, 1985, only to be decommissioned in March, 1989, having never sold any electricity. By that point total costs had ballooned to $5.5-billion. Predictably, the plant’s owner, Long Island Lighting Company, was unable to survive as an independent company. All because it refused to take “no” for an answer.

On the heels of President Obama’s recent veto, some advocates of the Keystone XL pipeline have proudly proclaimed they won’t take “no” for an answer. Perhaps their persistence in the face of “no” will prove prescient. Or perhaps Keystone XL is another Shoreham Nuclear Plant in the making. Only time will tell. But all of this suggests that perhaps Canada doesn’t have a “yes” problem; perhaps Canada has a “no” problem.

Entrepreneurs in Silicon Valley have a saying: “If you’re going to fail, fail fast.” By comparison, getting to “no” on Canadian energy projects has been taking longer and longer. That prompts some interesting questions. Why has Canada been taking so long to get to “no”? How can we get to “no” faster? Why do so many organizations keep chasing “yes” in the face of “no”? And, perhaps most importantly, what are the costs to Canada of not taking “no” for an answer?

Joel Gehman (@joelgehman) is assistant professor of strategic management and organization and Southam faculty fellow at the Alberta School of Business. Michael Lounsbury is associate dean of research, professor of strategic management and organization and Thornton A. Graham chair at the Alberta School of Business.

Sociomaterial Networks and Moral Agencements

A good friend of mine recently sent me this TEDx talk in which Nitin Nohria, Dean of Harvard Business School, explores what he calls moral overconfidence and argues for the practice of moral humility as an antidote.

According to the talk’s abstract: “Whenever we see examples of ethical or moral failure, our knee-jerk reaction is to say ‘that was a bad person.’ We like to sort the world into good people who have stable and enduringly strong, positive characters, and bad people who have weak or frail characters. So why then do seemingly good people behave badly?”

The centerpiece of Dean Nohria’s talk is the Milgram Experiment, which is typically taken to show that — given a strong enough situation — even “good people” will do “bad things.” More particularly, following Stanley Milgram’s own interpretation, most read the experiment as demonstrating the potentially dangerous consequences of blind obedience to authority.

In light of my own research on values work, it seems the entire line of inquiry may be a false start — it presupposes from the outset that good and bad are individually located. An alternative interpretation of the Milgram Experiment might start by taking notice of the many heterogeneous social and material actors that had to be enrolled in the performance of “bad things”: Yale University, newspaper advertisements, experimental designs, subjects, confederates, experimenters, lab coats, electricity, shock machines, voltages, vocabulary tests, payments. In short, the experiment required the enrollment of an ensemble of sociomaterial actors. If any of them had resisted, the experiment might have “failed.” So why is the actor at the end of the network the one to blame?

Such an interpretation is broadly consistent with actor network theory, in which the explanation for action can no longer be reduced to individual agency. In fact, such attributions are themselves part of what is in need of sociological explanation. What if the Milgram Experiment says more about the culture in which it is located than it does about the subjects it tested? After all, what kind of society is required for test subjects to be held responsible for the actions of an entire network, without which their performances could not have gone off? One can well imagine alternative societies in which different conclusions might have been drawn from the very “same” experiment.

In other words, we need to pose a more fundamental question. As Latour puts it, where is the morality? Is it in me, or in the objects? After reflecting on automobiles, seat belts and police officers, he concludes that morality is located in a network of humans and things. Networks make me (im)moral. Rather than being an individual attribute, the definition, recognition and performance of good and evil are the result of moral agencements; moral agency is sociomaterially constituted.

See: Bruno Latour, 1992, “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts,” in Wiebe E. Bijker and John Law, eds., Shaping Technology/Building Society: Studies in Sociotechnical Change, MIT Press, pp. 225–258.