The existence of moral disengagement shows that some moral intuitions are, at least in part, consequences of reasoning from known principles. It also appears to be a source of objections to each of the theories of moral intuitions we have so far considered, as well as (to anticipate) to Greene’s dual-process theory.
We have now understood the theory of moral disengagement and seen evidence that it occurs and can explain an interesting range of morally-relevant judgements and actions. No doubt, then, it is interesting in its own right. But why are we focussing on it at this point in the course on moral psychology?
UPDATE 2: This is incorrect. Many or all of the principles typically used in moral disengagement are false. (Thank you Isabel!) For instance, it is untrue that ‘Some people have to be treated roughly because they lack feelings that can be hurt’ (Bandura, Barbaranelli, Caprara, & Pastorelli, 1996). Such principles cannot, therefore, be known. What I should have written is this:
UPDATE 1: This claim does not imply that moral intuitions are ever conclusions of reasoning from known principles (thank you Emily H). Since we defined moral intuitions as unreflective, this would be a contradiction. The claim, rather, is that moral intuitions are consequences of reasoning in this sense: People sometimes anticipate that they will have certain moral intuitions and reason from known principles in order to avoid having them. (This is illustrated with the Tale of the Great and Glorious Leader near the start of Question Session 03.)
Because moral disengagement is implicated in a wide range of inhumane actions, from small-scale bullying (Pelton, Gound, Forehand, & Brody, 2004) through executions of individuals (Osofsky, Bandura, & Zimbardo, 2005) to the use of military force where civilian casualties are expected (McAlister, Bandura, & Owen, 2006), its effects cannot be dismissed as marginal. Invoking moral disengagement is unlike observing that philosophers sometimes reason about ethical dilemmas.
The role of reason in moral disengagement—and therefore in moral intuition—is incompatible with views on which ‘basic values are implemented in our psychology in a way that puts them outside certain practices of justification’ (Prinz, 2007, p. 32). It is also incompatible with the view that ‘moral reasoning is […] usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment’ (Haidt & Bjorklund, 2008, p. 189).2,3
Moral disengagement indicates that reasoning often functions to support moral intuitions in ways that do not provide justification (because the reasoning is so bad; e.g. ‘Kids who get mistreated usually do things that deserve it’ (Bandura et al., 1996, p. 374)).4 Although not directly our concern in moral psychology, this may be a source of objections to theories of moral intuitions based on analogies with language (for example, Mikhail, 2007).
In short, moral disengagement appears to be a source of objections to each of the theories of moral intuitions we have so far considered.
According to Sinnott-Armstrong, Young, & Cushman (2010, p. 256), moral intuitions are ‘strong, stable, immediate moral beliefs.’
Royzman, Landy, & Goodwin (2014) provide an independent source of evidence for this conclusion. (Why not use this as a shortcut rather than discussing the more complicated research on moral disengagement? Because, as noted below, there are some further conclusions that we can draw from the existence of moral disengagement.) ↩ ↩2
Dahl & Waltzer (2018, p. 241) offer a conflicting interpretation: according to them, the findings about moral disengagement are ‘consistent with recent proposals that decisions about moral issues do not typically follow from reasoning about moral principles […] Instead, decisions are said to happen before moral reasoning in most situations. […] moral reasoning happens primarily when people later seek to justify their decisions to themselves or others.’ I reject their interpretation because I do not know how to reconcile it with Bandura’s (2002, p. 102) point that moral disengagement requires anticipating the effects of self-regulation; this appears to require reasoning in order to make or sustain a moral judgement. ↩
Much of the research on moral disengagement does appear to support these authors’ claims about the social role of reason. But note that these are independent claims. We can consistently hold that moral reasoning influences moral judgements both intra- and inter-individually. ↩
Hindriks (2014, pp. 206–7) attempts to argue that individual differences in the propensity to morally disengage do suggest there is a role for reason in justifying moral judgements. I think the findings of Royzman et al. (2014) would provide a more direct route to this conclusion. ↩