Notes
Our overall question is: What do humans compute that enables them to track moral attributes?
From the two bodies of research on moral dumbfounding and moral disengagement, we can conclude that any answer to this question must be consistent with the discovery that moral intuitions are sometimes, but not always, a consequence of reasoning from known principles.
In addition to the three puzzles we have already seen …
[emotion] Why do feelings of disgust (and perhaps other emotions) influence moral intuitions? And why do we feel disgust in response to moral transgressions? (see Moral Intuitions and Emotions: Evaluating the Evidence)
[structure] Why do patterns in moral intuitions reflect legal principles humans are typically unaware of? (see Moral Attributes Are Accessible)
[dumbfounding-disengagement] Why are moral intuitions sometimes, but not always, a consequence of reasoning from known principles?
To understand the roles of feeling and reasoning in moral intuitions, we must identify or create a theory that can solve the puzzles, is theoretically coherent and empirically motivated, and generates novel testable predictions.
Glossary
characteristically consequentialist : According to Greene, a judgement is characteristically consequentialist (or characteristically utilitarian) if it is one ‘in favor of characteristically consequentialist conclusions (e.g., “Better to save more lives”)’ (Greene, 2007, p. 39). According to Gawronski, Armstrong, Conway, Friesdorf, & Hütter (2017, p. 365), ‘a given judgment cannot be categorized as [consequentialist] without confirming its property of being sensitive to consequences.’
characteristically deontological : According to Greene, a judgement is characteristically deontological if it is one ‘in favor of characteristically deontological conclusions (e.g., “It’s wrong despite the benefits”)’ (Greene, 2007, p. 39). According to Gawronski et al. (2017, p. 365), ‘a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms.’
Drop : A dilemma; a variant of Footbridge. A runaway trolley is about to run over and kill five people. You can hit a switch that will release the bottom of a footbridge and one person will fall onto the track. The trolley will hit this person, slow down, and not hit the five people further down the track. Is it okay to hit the switch?
moral disengagement : Moral disengagement occurs when self-sanctions are disengaged from inhumane conduct. Bandura (2002, p. 103) identifies several mechanisms of moral disengagement: ‘The disengagement may centre on redefining harmful conduct as honourable by moral justification, exonerating social comparison and sanitising language. It may focus on agency of action so that perpetrators can minimise their role in causing harm by diffusion and displacement of responsibility. It may involve minimising or distorting the harm that follows from detrimental actions; and the disengagement may include dehumanising and blaming the victims of the maltreatment.’
moral intuition : According to this lecturer, moral intuitions are unreflective ethical judgements. According to Sinnott-Armstrong, Young, & Cushman (2010, p. 256), moral intuitions are ‘strong, stable, immediate moral beliefs.’
tracking an attribute : For a process to track an attribute is for the presence or absence of the attribute to make a difference to how the process unfolds, where this is not an accident. (And for a system or device to track an attribute is for some process in that system or device to track it.) Tracking an attribute is contrasted with computing it. Unlike tracking, computing typically requires that the attribute be represented. (The distinction between tracking and computing is a topic of Two Questions about Moral Intuitions.)
Trolley : A dilemma; also known as Switch. A runaway trolley is about to run over and kill five people. You can hit a switch that will divert the trolley onto a different set of tracks where it will kill only one. Is it okay to hit the switch?
References
Bandura, A. (2002). Selective Moral Disengagement in the Exercise of Moral Agency. Journal of Moral Education, 31(2), 101–119. https://doi.org/10.1080/0305724022014322
Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. https://doi.org/10.1037/pspa0000086
Greene, J. D. (2007). The Secret Joke of Kant’s Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3 (pp. 35–79). MIT Press.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875
Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., & Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107(3), 1144–1154. https://doi.org/10.1016/j.cognition.2007.11.004
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript, University of Virginia.
Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17(3), 145–171. https://doi.org/10.1016/0162-3095(96)00041-6
Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137. https://doi.org/10.1016/j.cognition.2015.04.015
Sinnott-Armstrong, W., Young, L., & Cushman, F. (2010). Moral intuitions. In J. M. Doris & the Moral Psychology Research Group (Eds.), The moral psychology handbook (pp. 246–272). Oxford University Press.