Do discoveries in moral psychology reveal that not-justified-inferentially premises about particular moral scenarios cannot be used in ethical arguments? This section outlines a loose reconstruction of one strand of Greene (2014)’s argument which, if successful, shows that the answer is yes.
Greene (2014)’s argument has been interpreted in a variety of ways, and has ambitious aims (including establishing that a broadly consequentialist theory is preferable to any deontological theory). Since Greene’s argument has been the target of several objections, our strategy will be first to consider whether we can craft a loose reconstruction of one strand of the argument which aims to establish a conclusion more modest than Greene’s own (although one with interesting implications). If that succeeds, we may then consider whether further arguments for Greene’s more ambitious conclusions succeed.
Faster processes are unreliable in unfamiliar situations (see Cognitive Miracles: When Are Fast Processes Unreliable?).
Therefore, we should not rely on faster processes in unfamiliar situations [from 2].
When philosophers rely on not-justified-inferentially premises, they are relying on faster processes (see What Is the Role of Fast Processes In Not-Justified-Inferentially Judgements?).
Therefore, not-justified-inferentially premises about particular moral scenarios cannot be used in ethical arguments where the aim is to establish knowledge of their conclusions [from 3, 4 and 5].
The above argument implies that Thomson’s method of trolley cases is misguided (see Thomson’s Other Method of Trolley Cases), along with many other philosophical arguments in ethics.
The above argument, if successful, also implies the falsity of Audi’s view about ethics:
‘Episodic intuitions […] can serve as data […] beliefs that derive from them receive prima facie justification’ (Audi, 2015, p. 65).
The above argument does not favour one type (e.g. deontological vs consequentialist) of ethical theory, nor one approach to doing ethics (e.g. case-based vs systematic).1 (We will eventually consider whether further arguments succeed in establishing either such favouritism.)
The above argument does not imply that philosophers should give up on arguments involving not-justified-inferentially premises about particular moral scenarios. Aristotelian theories of the physical, although much less useful than the successors which arose when scientists moved away from reliance on not-justified-inferentially premises, remain useful in some situations. And in the case of ethics, there may be no better alternative approach.
The above argument implies that when using arguments involving not-justified-inferentially premises about particular moral scenarios (as in Thomson’s Other Method of Trolley Cases, for example), the aim should not be to establish knowledge of their conclusions. Instead it might be to characterise aspects of moral cognition (as Kozhevnikov & Hegarty (2001) use an Aristotelian theory of the physical to characterise physical cognition). Or the aim might be to understand what consistency with certain judgements would require.
Kumar & Campbell (2012) provide an alternative reconstruction of Greene’s argument (which, helpfully, refines Berker (2009)’s earlier reconstruction by way of a critique of it: Kumar and Campbell are probably easier to understand). They analyse Greene’s argument as a debunking argument. This means that (a) it depends on premises about which factors are morally relevant; and (b) it is open to the response that facts about which factors explain judgements are ethically irrelevant (see Rini, 2017, p. 14432).
Why bother with my loose reconstruction when I could just borrow Kumar & Campbell (2012)’s? While their reconstruction may be more faithful to the original (Greene, 2014), my loose reconstruction does not depend on premises about which factors are morally relevant, nor does it require the premise that facts about which factors explain why certain judgements are made are ethically relevant. This enables the loose reconstruction to avoid some objections (see Quick Objections to Greene’s Argument).
Since automaticity and cognitive efficiency are matters of degree, it is only strictly correct to identify some processes as faster than others.
The fast-slow distinction has been variously characterised in ways that do not entirely overlap (even individual authors have offered differing characterisations at different times; e.g. Kahneman, 2013; Morewedge & Kahneman, 2010; Kahneman & Klein, 2009; Kahneman, 2002): as its advocates stress, it is a rough-and-ready tool, not the basis for a rigorous theory.
Claims made on the basis of perception (_That jumper is red_, say) are typically not-justified-inferentially.
Why not just say ‘noninferentially justified’? Because that can be read as implying that the claim is justified, noninferentially. Whereas ‘not-justified-inferentially’ does not imply this. Any claim which is not justified at all is thereby not-justified-inferentially.
The loose reconstruction may appear to favour systematic over case-based approaches to ethics because its conclusion concerns judgements about particular moral scenarios. This appearance is misleading. The conclusion is framed in this way for simplicity. The argument can be straightforwardly generalised to cover not-justified-inferentially premises about moral principles too. ↩
In this passage, Rini cites Nagel (1997, p. 105) in support of the view that discoveries about moral psychology cannot ‘change our moral beliefs’. Note that the paragraph she cites from ends with a much weaker claim opposing ‘any blanket attempt to displace, defuse, or subjectivize’ moral concerns. Further, Nagel’s essay starts with the observation that moral reasoning ‘is easily subject to distortion by morally irrelevant factors … as well as outright error’ (Nagel, 1997, p. 101). So while one of Nagel’s assertions supports Rini’s interpretation, it is unclear to me that Rini is right about Nagel’s considered position. But I could easily be wrong. ↩