Reason and Atrocity: Hindriks’ Observation
Moral reasoning appears to enable humans to condone and commit atrocities. Yet it is quite widely held that reasoning is ‘usually engaged in after a moral judgment is made’ (Haidt & Bjorklund, 2008, p. 189). Hindriks observes (in effect) that it is hard to see how both views could be correct (Hindriks, 2014; Hindriks, 2015).
Notes
One compelling reason for studying moral psychology is that ethical abilities appear to play a central role in atrocities:
‘The massive threats to human welfare stem mainly from deliberate acts of principle, rather than from unrestrained acts of impulse’ (Bandura, 2002, p. 116).
Further, the principles that underpin humans’ capacities to perform inhumane acts often appear to be manufactured and maintained through reasoning to fit a particular situation.1
This observation appears to be in tension with views on which reason can play only an indirect role in motivating morally-relevant actions (for example, harming or helping another person).
As one example of a view on the limits of reason, consider Prinz. Commenting on moral dumbfounding, Prinz (2007, p. 29) writes:
‘If we ask people why they hold a particular moral view, they may offer some reasons, but those reasons are often superficial and post hoc. If the reasons are successfully challenged, the moral judgment often remains. When pressed, people’s deepest moral values are based not on decisive arguments that they discovered while pondering moral questions, but on deeply inculcated sentiments.’
From this Prinz draws a bold conclusion:
‘basic values are implemented in our psychology in a way that puts them outside certain practices of justification. Basic values provide reasons, but they are not based on reasons. … basic values seem to be implemented in an emotional way’ (Prinz, 2007, p. 32).
Prinz appears to be ignoring a key feature of the experiment he is discussing: it is structured as a comparison between harmless and harm-involving cases, and subjects’ levels of dumbfounding differ between the two (see Moral Dumbfounding). The evidence he is (misre)presenting in favour of his view actually challenges it.
Haidt and Bjorklund articulate a slightly less radical view:
‘moral reasoning is an effortful process (as opposed to an automatic process), usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment’ (Haidt & Bjorklund, 2008, p. 189).2
Hindriks observes (in effect) that even this less radical view appears to conflict with the idea that moral reasoning is often necessary for condoning and performing inhumane acts (Hindriks, 2014; Hindriks, 2015). Affective support for judgements about not harming, and affective obstacles to deliberately harming other people, can both be overcome with reason. This should not be possible if reasoning usually occurs after a moral judgement is made and enables people only to provide post hoc justification for it.3
So is moral reasoning ‘usually engaged in after a moral judgment is made’? Or is it essential for overcoming affective support for judgements about not harming? This discussion can be sharpened by considering moral disengagement.
1. To take just one example, Osofsky, Bandura, & Zimbardo (2005) investigated prison workers who were tasked with work related to executions. They observe:
‘The executioners, who face the most daunting moral dilemma, made the heaviest use of all of the mechanisms for disengaging moral self-sanctions. They adopted moral, economic, and societal security justifications for the death penalty, ascribed subhuman qualities to condemned inmates, and disavowed a sense of personal agency in the taking of life’ (Osofsky et al., 2005, p. 387).
2. This is only half of those authors’ view about reasoning. They also claim that ‘Moral discussion is a kind of distributed reasoning, and moral claims and justifications have important effects on individuals and societies’ (Haidt & Bjorklund, 2008, p. 181). Their idea, very roughly, is that moral discussion can have a long-term effect on affect, which can in turn modulate individuals’ judgements and actions.
3. Hindriks focuses on a normative question about justification for moral judgements. The fact that Bandura and other social scientists tend to study abysmal bits of moral reasoning (e.g. ‘Kids who get mistreated usually do things that deserve it’ (Bandura, Barbaranelli, Caprara, & Pastorelli, 1996)) is therefore a potential problem he needs to resolve (Hindriks, 2014, p. 205). We need not consider this problem because our primary concern is only to understand the causal role of reason in how moral judgements are acquired.