
Reason and Atrocity: Hindriks’ Observation

Moral reasoning appears to enable humans to condone and commit atrocities. Yet it is quite widely held that reasoning is ‘usually engaged in after a moral judgment is made’ (Haidt & Bjorklund, 2008, p. 189). Hindriks observes (in effect) that it is hard to see how both views could be correct (Hindriks, 2014; Hindriks, 2015).


Notes

One compelling reason for studying moral psychology is that ethical abilities appear to play a central role in atrocities:

‘The massive threats to human welfare stem mainly from deliberate acts of principle, rather than from unrestrained acts of impulse’ (Bandura, 2002, p. 116).

Further, the principles that underpin humans’ capacities to perform inhumane acts often appear to be manufactured and maintained through reasoning to fit a particular situation.1

This observation appears to be in tension with views on which reason can play only an indirect role in motivating morally-relevant actions (for example, harming or helping another person).

As one example of a view on the limits of reason, consider Prinz. Commenting on moral dumbfounding, Prinz (2007, p. 29) writes:

‘If we ask people why they hold a particular moral view, they may offer some reasons, but those reasons are often superficial and post hoc. If the reasons are successfully challenged, the moral judgment often remains. When pressed, people’s deepest moral values are based not on decisive arguments that they discovered while pondering moral questions, but on deeply inculcated sentiments.’

From this Prinz draws a bold conclusion:

‘basic values are implemented in our psychology in a way that puts them outside certain practices of justification. Basic values provide reasons, but they are not based on reasons. … basic values seem to be implemented in an emotional way’ (Prinz, 2007, p. 32).

Prinz appears to ignore a key feature of the experiment he is discussing: it is structured as a comparison between harmless and harm-involving cases, and subjects’ level of dumbfounding differs between the two (see Moral Dumbfounding). The evidence he is (mis)representing in support of his conclusion actually challenges it.

Haidt and Bjorklund articulate a slightly less radical view:

‘moral reasoning is an effortful process (as opposed to an automatic process), usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment’ (Haidt & Bjorklund, 2008, p. 189).2

Hindriks observes (in effect) that even this less radical view appears to conflict with the idea that moral reasoning is often necessary for condoning and performing inhumane acts (Hindriks, 2014; Hindriks, 2015). Affective support for judgements about not harming can be overcome with reason. Affective obstacles to deliberately harming other people can be overcome with reason. Neither should be possible if moral reasoning usually occurs only after a moral judgement is made and merely provides post hoc justification for it.3

So is moral reasoning ‘usually engaged in after a moral judgment is made’? Or is it essential for overcoming affective support for judgements about not harming? This discussion can be sharpened by considering moral disengagement.

Glossary

moral disengagement : Moral disengagement occurs when self-sanctions are disengaged from inhumane conduct. Bandura (2002, p. 103) identifies several mechanisms of moral disengagement: ‘The disengagement may centre on redefining harmful conduct as honourable by moral justification, exonerating social comparison and sanitising language. It may focus on agency of action so that perpetrators can minimise their role in causing harm by diffusion and displacement of responsibility. It may involve minimising or distorting the harm that follows from detrimental actions; and the disengagement may include dehumanising and blaming the victims of the maltreatment.’
moral dumbfounding : ‘the stubborn and puzzled maintenance of an [ethical] judgment without supporting reasons’ (Haidt, Bjorklund, & Murphy, 2000, p. 1).
Moral Foundations Theory : The theory that moral pluralism is true, that moral foundations are innate but also subject to cultural learning, and that the Social Intuitionist Model of Moral Judgement is correct (Graham et al., 2019). Proponents often claim, further, that cultural variation in how these innate foundations are woven into ethical abilities can be measured using the Moral Foundations Questionnaire (Graham, Haidt, & Nosek, 2009; Graham et al., 2011). Some empirical objections have been offered (Davis et al., 2016; Davis, Dooley, Hook, Choe, & McElroy, 2017; Doğruyol, Alper, & Yilmaz, 2019). See Moral Foundations Theory: An Approach to Cultural Variation.
Social Intuitionist Model of Moral Judgement : A model on which intuitive processes are directly responsible for moral judgements (Haidt & Bjorklund, 2008). One’s own reasoning does not typically affect one’s own moral judgements; instead (outside philosophy, perhaps) it typically serves only to provide post-hoc justification after a moral judgement is made. Reasoning can, however, affect others’ moral intuitions, and so provides a mechanism for cultural learning.

References

Bandura, A. (2002). Selective Moral Disengagement in the Exercise of Moral Agency. Journal of Moral Education, 31(2), 101–119. https://doi.org/10.1080/0305724022014322
Bandura, A., Barbaranelli, C., Caprara, G., & Pastorelli, C. (1996). Mechanisms of Moral Disengagement in the Exercise of Moral Agency. Journal of Personality and Social Psychology, 71(2), 364–374. https://doi.org/10.1037/0022-3514.71.2.364
Davis, D., Dooley, M., Hook, J., Choe, E., & McElroy, S. (2017). The Purity/Sanctity Subscale of the Moral Foundations Questionnaire Does Not Work Similarly for Religious Versus Non-Religious Individuals. Psychology of Religion and Spirituality, 9(1), 124–130. https://doi.org/10.1037/rel0000057
Davis, D., Rice, K., Tongeren, D. V., Hook, J., DeBlaere, C., Worthington, E., & Choe, E. (2016). The Moral Foundations Hypothesis Does Not Replicate Well in Black Samples. Journal of Personality and Social Psychology, 110(4). https://doi.org/10.1037/pspp0000056
Doğruyol, B., Alper, S., & Yilmaz, O. (2019). The five-factor model of the moral foundations theory is stable across WEIRD and non-WEIRD cultures. Personality and Individual Differences, 151, 109547. https://doi.org/10.1016/j.paid.2019.109547
Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2019). Moral Foundations Theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of Moral Psychology. New York: Guilford Publications.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. https://doi.org/10.1037/a0015141
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the moral domain. Journal of Personality and Social Psychology, 101(2), 366–385. https://doi.org/10.1037/a0021847
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about moral psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol 2: The cognitive science of morality: Intuition and diversity (pp. 181–217). Cambridge, Mass: MIT press.
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript, University of Virginia.
Hindriks, F. (2014). Intuitions, Rationalizations, and Justification: A Defense of Sentimental Rationalism. The Journal of Value Inquiry, 48(2), 195–216. https://doi.org/10.1007/s10790-014-9419-z
Hindriks, F. (2015). How Does Reasoning (Fail to) Contribute to Moral Judgment? Dumbfounding and Disengagement. Ethical Theory and Moral Practice, 18(2), 237–250. https://doi.org/10.1007/s10677-015-9575-7
Osofsky, M. J., Bandura, A., & Zimbardo, P. G. (2005). The Role of Moral Disengagement in the Execution Process. Law and Human Behavior, 29(4), 371–393. https://doi.org/10.1007/s10979-005-4930-1
Prinz, J. J. (2007). The emotional construction of morals. Oxford: Oxford University Press.
  1. To take just one example, Osofsky, Bandura, & Zimbardo (2005) investigated prison workers who were tasked with work related to executions. They observe:

    ‘The executioners, who face the most daunting moral dilemma, made the heaviest use of all of the mechanisms for disengaging moral self-sanctions. They adopted moral, economic, and societal security justifications for the death penalty, ascribed subhuman qualities to condemned inmates, and disavowed a sense of personal agency in the taking of life’ (Osofsky et al., 2005, p. 387).

  2. This is only half of those authors’ view about reasoning. They also claim that ‘Moral discussion is a kind of distributed reasoning, and moral claims and justifications have important effects on individuals and societies’ (Haidt & Bjorklund, 2008, p. 181). Their idea, very roughly, is that moral discussion can have a long-term effect on affect which can in turn modulate individuals’ judgements and actions. 

  3. Hindriks focuses on a normative question about justification for moral judgements. The fact that Bandura and other social scientists tend to study abysmal bits of moral reasoning (e.g. ‘Kids who get mistreated usually do things that deserve it’ (Bandura, Barbaranelli, Caprara, & Pastorelli, 1996)) is therefore a potential problem he needs to resolve (Hindriks, 2014, p. 205). We need not consider this problem because our primary concern is only to understand the causal role of reason in how moral judgements are acquired.