A good friend of mine recently sent me this TEDx talk in which Nitin Nohria, Dean of Harvard Business School, explores what he calls moral overconfidence and argues for the practice of moral humility as an antidote.
According to the talk’s abstract: “Whenever we see examples of ethical or moral failure, our knee-jerk reaction is to say ‘that was a bad person.’ We like to sort the world into good people who have stable and enduringly strong, positive characters, and bad people who have weak or frail characters. So why then do seemingly good people behave badly?”
The centerpiece of Dean Nohria’s talk is the Milgram Experiment, which is typically taken to show that, given a strong enough situation, even “good people” will do “bad things.” More specifically, following Stanley Milgram’s own interpretation, most read the experiment as demonstrating the dangerous consequences of blind obedience to authority.
In light of my own research on values work, it seems the entire line of inquiry may be a false start: it presupposes that good and bad are individually located. An alternative interpretation of the Milgram Experiment might start by taking notice of the many heterogeneous social and material actors that had to be enrolled for the “bad things” to be performed: Yale University, newspaper advertisements, experimental designs, subjects, confederates, experimenters, lab coats, electricity, shock machines, voltages, vocabulary tests, payments. In short, the experiment requires the enrollment of an ensemble of sociomaterial actors. If any of them had resisted, the experiment might have “failed.” So why is the actor at the end of the network the one to blame?
Such an interpretation is broadly consistent with actor-network theory, in which the explanation for action can no longer be reduced to individual agency. In fact, such attributions are themselves part of what is in need of sociological explanation. What if the Milgram Experiment says more about the culture in which it is located than it does about the subjects it tested? After all, what kind of society is required for test subjects to be held responsible for the actions of an entire network, without which their performances could not have come off? One can well imagine alternative societies in which different conclusions might have been drawn from the very “same” experiment.
In other words, we need to pose a more fundamental question. As Latour puts it: where is the morality? Is it in me, or in the objects? After reflecting on automobiles, seat belts, and police officers, he concludes that morality is located in a network of humans and things. Networks make me (im)moral. Good and evil are not individual attributes: their definition, recognition, and performance are the result of moral agencements; moral agency is sociomaterially constituted.
See: Bruno Latour (1992), “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts,” in Wiebe E. Bijker and John Law (eds.), Shaping Technology/Building Society: Studies in Sociotechnical Change, MIT Press, pp. 225–258.