
Conflicting Evidence against a Dual-Process Theory of Moral Judgement

Several studies provide results which partially undermine the evidence in favour of our stripped-down dual-process theory of moral cognition. Here we consider two of the most compelling (Bago & Neys, 2019; Gawronski, Armstrong, Conway, Friesdorf, & Hütter, 2017). Taken together these studies are puzzling: as well as individually conflicting with the evidence for our dual-process theory, the two studies also appear to conflict with each other. It is hard to identify a view that is consistent with taking the results from all of the studies at face value.

The recording is also available on stream (no ads; search enabled) and on youtube, and the slides can be viewed separately (no audio or video).

Notes

We have a stripped down dual-process theory of moral judgement (see A Dual Process Theory of Ethical Judgement) and an auxiliary hypothesis (see Dual Process Theory and Auxiliary Hypotheses). According to these:

Two (or more) ethical processes are distinct in this sense: the conditions which influence whether they occur, and which outputs they generate, do not completely overlap.

One process is faster than another: it makes fewer demands on scarce cognitive resources such as attention, inhibitory control and working memory.

Only the slow process ever flexibly and rapidly takes into account differences in the more distal outcomes of an action.

Earlier we saw that there is some evidence which appears to support the predictions of this theory (in Evidence for Dual Process Theories). Is there also evidence disconfirming any of its predictions?

While it is hard to find evidence directly against this theory,1 there are some studies that undermine our earlier assessment of which studies provide evidence in favour of the dual-process theory.

Time Pressure

Recall that Suter & Hertwig (2011) provide evidence that time pressure makes participants less sensitive to distal outcomes. Bago & Neys (2019) consider what happens when subjects first make a moral judgement under time pressure and extraneous cognitive load and then, just after, make another moral judgement (in answer to the same question) with no time pressure and no extraneous cognitive load. They report:

‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’ (Bago & Neys, 2019, p. 1794).

As explained in the recording, this is an obstacle to considering Suter & Hertwig (2011)’s study as evidence for our dual-process theory of moral judgement.
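To make the quoted finding concrete, here is a minimal sketch of how two-response data of this kind can be summarised. The response labels and counts are invented for illustration (they are not Bago & Neys’ data); the point is only that the key statistic is the proportion of final consequentialist responses that were already consequentialist at the initial, time-pressured stage.

```python
from collections import Counter

# Each trial yields a pair (initial_response, final_response), where 'C' marks a
# characteristically consequentialist response and 'D' a characteristically
# deontological one. These counts are invented purely for illustration.
trials = (
    [("C", "C")] * 70    # consequentialist from the start, unchanged
    + [("D", "C")] * 10  # corrected towards the consequentialist response
    + [("D", "D")] * 55  # deontological throughout
    + [("C", "D")] * 15  # corrected away from the consequentialist response
)

counts = Counter(trials)
final_consequentialist = sum(n for (_, final), n in counts.items() if final == "C")
already_consequentialist = counts[("C", "C")]

# The quoted 'critical finding' corresponds to this ratio being high: most final
# consequentialist responses were already consequentialist under time pressure
# and load, so little 'deliberate correction' occurred.
print(already_consequentialist / final_consequentialist)  # 0.875 in this toy example
```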

Process Dissociation

Recall that Conway & Gawronski (2013) use process dissociation to provide evidence for the prediction that higher cognitive load reduces sensitivity to more distal outcomes.
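As a reminder of how process dissociation works here, the sketch below recovers a utilitarian parameter U and a deontological parameter D from response rates in congruent and incongruent dilemmas, assuming the standard two-parameter processing tree commonly used for such models. The input rates are invented; treat this as an illustration of the technique, not a reconstruction of Conway & Gawronski’s analysis.

```python
def process_dissociation(p_unacc_congruent: float, p_unacc_incongruent: float):
    """Recover utilitarian (U) and deontological (D) parameters from the rates of
    'harm is unacceptable' judgements in congruent and incongruent dilemmas,
    assuming the standard two-parameter processing tree:

        p(unacceptable | congruent)   = U + (1 - U) * D
        p(unacceptable | incongruent) = (1 - U) * D
    """
    U = p_unacc_congruent - p_unacc_incongruent
    D = p_unacc_incongruent / (1 - U)
    return U, D

# Invented rates for a low-load and a high-load condition, purely for illustration.
print(process_dissociation(0.85, 0.45))  # U = 0.40, D = 0.75
print(process_dissociation(0.80, 0.55))  # a lower U would look like reduced sensitivity to outcomes
```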

Gawronski et al. (2017) note that what looks like reduced sensitivity to more distal outcomes could instead be a consequence of a general preference not to do anything when under cognitive load. They therefore extend the process dissociation model to include a parameter for a general preference for inaction.

Separating sensitivity to distal outcomes from preferences not to act changes the picture:

‘The only significant effect in these studies was a significant increase in participants’ general preference for inaction as a result of cognitive load. Cognitive load did not affect participants’ sensitivity to morally relevant consequences’ (Gawronski et al., 2017, p. 363).

They conclude:

‘cognitive load influences moral dilemma judgments by enhancing the omission bias, not by reducing sensitivity to consequences in a utilitarian sense’ (Gawronski et al., 2017, p. 363).
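To see how separating these factors could produce this pattern, here is a schematic rendering of the extended (‘CNI’) model’s processing tree, with parameters for sensitivity to consequences (C), sensitivity to norms (N), and a general preference for inaction (I). The parameter values are invented and the function is a simplification; fitting the actual model involves multinomial modelling over several dilemma types.

```python
def p_action(C: float, N: float, I: float, benefits_outweigh_costs: bool,
             norm_prescribes_action: bool) -> float:
    """Predicted probability of choosing action under a CNI-style processing tree.

    With probability C the response follows the consequences; otherwise, with
    probability N, it follows the norm; otherwise a general inaction tendency I
    decides. A schematic sketch, not the fitted model.
    """
    from_consequences = C * (1.0 if benefits_outweigh_costs else 0.0)
    from_norms = (1 - C) * N * (1.0 if norm_prescribes_action else 0.0)
    from_default = (1 - C) * (1 - N) * (1 - I)
    return from_consequences + from_norms + from_default

# A load-induced rise in I lowers action rates without any change in C --
# the pattern described above as an enhanced omission bias.
for I in (0.2, 0.6):
    print(I, p_action(C=0.3, N=0.4, I=I, benefits_outweigh_costs=True,
                      norm_prescribes_action=False))
```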

While we should be cautious about putting too much weight on this study, these results do reveal that we cannot take Conway & Gawronski (2013) as evidence in favour of our dual-process theory and auxiliary hypothesis.

Conflicts in the Conflicting Evidence

The two studies which conflict with the evidence for our dual-process theory also appear to conflict with each other. If Gawronski et al. (2017) is right about cognitive load, the participants in Bago & Neys (2019)’s study should have appeared to be less ‘utilitarian’ (as they describe it) when under cognitive load. This is because avoiding action would lead one to make judgements that Bago and Neys classify as non-utilitarian.

So we cannot accept both Gawronski et al. (2017)’s and Bago & Neys (2019)’s conclusions.

This is a sign that there may be something wrong with the way the studies are constructed, perhaps because the dual-process theories they are targeting are not well specified (e.g. involve too many independent bets being made simultaneously).

Conclusion

We may not yet have found sufficient grounds to reject the stripped-down dual-process theory of moral cognition outright. But we should recognise that we do not have sufficient evidence to confidently assert that any of the candidate auxiliary hypotheses are true (see Dual Process Theory and Auxiliary Hypotheses).

This matters for Greene (2014)’s attempt to link characteristically consequentialist judgements to slow processes. As things stand, we do not know that any such link exists. We should be correspondingly cautious in using the dual-process theory in defending a consequentialist ethical theory.

Appendix: Some Other Evidence

There is much evidence on how time pressure and cognitive load influence moral judgements. Understanding how it bears on the stripped-down dual process theory is complicated, in part because many studies target features of Greene’s dual process theory that are not features of the stripped-down dual process theory. Here my focus is on studies that can be interpreted as finding evidence against the theory.

Białek & De Neys (2017) provide direct evidence against our auxiliary hypothesis: time pressure and cognitive load do not appear to influence the extent to which participants take into account the distal outcomes of an action in making moral judgements.

Tinghög et al. (2016) find no evidence for effects of time pressure or cognitive load on moral judgements. They conclude that ‘intuitive moral decision-making does not differ from decisions made in situations where deliberation before decision is facilitated.’

Baron & Gürçay (2017) offer a meta-analysis of response time findings, but focus on an auxiliary hypothesis which we have not used (the ‘default interventionist’ claim).

Koop (2013) and Gürçay & Baron (2017) both measure subjects’ movements as they make a decision, which can provide a window onto how the decision unfolds. Koop (2013) does not find evidence to support the conjecture that subjects increasingly consider distal outcomes later in the decision process. Gürçay & Baron (2017) do not find support for the conjecture that more thinking increases sensitivity to the distal outcomes of actions.

Capraro, Everett, & Earp (2019) examined the effects of telling (they say ‘priming’) people to use ‘emotion, rather than reason’. As background, they note that much of the research on dual-process theories concerns characteristically consequentialist judgements, which may confound two factors: reluctance to cause harm instrumentally and impartiality. The auxiliary hypothesis we have chosen is linked to the first factor (reluctance to cause harm instrumentally) but not the second. They find that when these factors are separated, priming intuition reduces willingness to cause harm instrumentally.2

Although Capraro et al. (2019)’s study supports the auxiliary hypothesis, I have included it here (in a section on evidence against our dual-process theory of moral judgement) because it illustrates a complication in interpreting studies which appear to provide evidence against the theory: none of them are focussed narrowly on sensitivity to distal outcomes specifically rather than on some broader contrast between characteristically consequentialist and characteristically deontological judgements.

Glossary

automatic : On this course, a process is _automatic_ just if whether or not it occurs is to a significant extent independent of your current task, motivations and intentions. To say that _mindreading is automatic_ is to say that it involves only automatic processes. The term ‘automatic’ has been used in a variety of ways by other authors: see Moors (2014, p. 22) for a one-page overview, Moors & De Houwer (2006) for a detailed theoretical review, or Bargh (1992) for a classic and very readable introduction.
characteristically consequentialist : According to Greene, a judgement is characteristically consequentialist (or characteristically utilitarian) if it is one in ‘favor of characteristically consequentialist conclusions (eg, “Better to save more lives”)’ (Greene, 2007, p. 39). According to Gawronski et al. (2017, p. 365), ‘a given judgment cannot be categorized as [consequentialist] without confirming its property of being sensitive to consequences.’
characteristically deontological : According to Greene, a judgement is characteristically deontological if it is one in ‘favor of characteristically deontological conclusions (eg, “It’s wrong despite the benefits”)’ (Greene, 2007, p. 39). According to Gawronski et al. (2017, p. 365), ‘a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms.’
cognitively efficient : A process is cognitively efficient to the degree that it does not consume working memory and other scarce cognitive resources.
distal outcome : The outcomes of an action can be partially ordered by the cause-effect relation. For one outcome to be more _distal_ than another is for it to be lower with respect to that partial ordering. To illustrate, if you kick a ball through a window, the window’s breaking is a more distal outcome than the ball’s moving.
dual-process theory : Any theory concerning abilities in a particular domain on which those abilities involve two or more processes which are distinct in this sense: the conditions which influence whether one process occurs differ from the conditions which influence whether another occurs.
fast : A fast process is one that is to some interesting degree automatic and to some interesting degree cognitively efficient. These processes are also sometimes characterised as able to yield rapid responses.
Since automaticity and cognitive efficiency are matters of degree, it is only strictly correct to identify some processes as faster than others.
The fast-slow distinction has been variously characterised in ways that do not entirely overlap (even individual authors have offered differing characterisations at different times; e.g. Kahneman, 2013; Morewedge & Kahneman, 2010; Kahneman & Klein, 2009; Kahneman, 2002): as its advocates stress, it is a rough-and-ready tool, not the basis for a rigorous theory.
outcome : An outcome of an action is a possible or actual state of affairs.

References

Bago, B., & Neys, W. D. (2019). The Intuitive Greater Good: Testing the Corrective Dual Process Model of Moral Cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. https://doi.org/10.1037/xge0000533
Bargh, J. A. (1992). The Ecology of Automaticity: Toward Establishing the Conditions Needed to Produce Automatic Processing Effects. The American Journal of Psychology, 105(2), 181–199. https://doi.org/10.2307/1423027
Baron, J., & Gürçay, B. (2017). A meta-analysis of response-time tests of the sequential two-systems model of moral judgment. Memory & Cognition, 45(4), 566–575. https://doi.org/10.3758/s13421-016-0686-8
Bartels, D. M. (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition, 108(2), 381–417. https://doi.org/10.1016/j.cognition.2008.03.001
Białek, M., & De Neys, W. (2017). Dual processes and moral conflict: Evidence for deontological reasoners’ intuitive utilitarian sensitivity. Judgment and Decision Making, 12(2), 148.
Capraro, V., Everett, J. A. C., & Earp, B. D. (2019). Priming intuition disfavors instrumental harm but not impartial beneficence. Journal of Experimental Social Psychology, 83, 142–149. https://doi.org/10.1016/j.jesp.2019.04.006
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104(2), 216–235. https://doi.org/10.1037/a0031021
Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. https://doi.org/10.1037/pspa0000086
Gawronski, B., & Beer, J. S. (2017). What makes moral dilemma judgments “utilitarian” or “deontological”? Social Neuroscience, 12(6), 626–632. https://doi.org/10.1080/17470919.2016.1248787
Greene, J. D. (2007). The Secret Joke of Kant’s Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3 (pp. 35–79). MIT Press.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875
Gürçay, B., & Baron, J. (2017). Challenges for the sequential two-system model of moral judgement. Thinking & Reasoning, 23(1), 49–80. https://doi.org/10.1080/13546783.2016.1216011
Kahneman, D. (2002). Maps of bounded rationality: A perspective on intuitive judgment and choice. In T. Frangsmyr (Ed.), Les Prix Nobel 2002 (pp. 416–499). Stockholm, Sweden: Nobel Foundation.
Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526. https://doi.org/10.1037/a0016755
Koop, G. J. (2013). An assessment of the temporal dynamics of moral decisions. Judgment and Decision Making, 8(5), 527.
Moors, A. (2014). Examining the mapping problem in dual process models. In Dual process theories of the social mind (pp. 20–34). Guilford.
Moors, A., & De Houwer, J. (2006). Automaticity: A Theoretical and Conceptual Analysis. Psychological Bulletin, 132(2), 297–326. https://doi.org/10.1037/0033-2909.132.2.297
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in Cognitive Sciences, 14(10), 435–440. https://doi.org/10.1016/j.tics.2010.07.004
Suter, R. S., & Hertwig, R. (2011). Time and moral judgment. Cognition, 119(3), 454–458. https://doi.org/10.1016/j.cognition.2011.01.018
Tinghög, G., Andersson, D., Bonn, C., Johannesson, M., Kirchler, M., Koppel, L., & Västfjäll, D. (2016). Intuition and Moral Decision-Making – The Effect of Time Pressure and Cognitive Load on Moral Judgment and Altruistic Behavior. PLOS ONE, 11(10), e0164012. https://doi.org/10.1371/journal.pone.0164012
  1. One potential source of evidence that directly opposes the theory is Białek & De Neys (2017) (mentioned below). Unfortunately I came across this too late to include it in the recording. 

  2. Bartels (2008) distinguished between subjects with more intuitive and more deliberative thinking styles. He found that moral judgement ‘(a) makes use of intuitive and deliberative process, (b) is influenced by the judgment-eliciting context, and (c) recruits representations of both deontological constraints and utilitarian considerations.’