One of the most annoying myths of PHIL 101 is that any interest in people’s intentions (or quality of will) is inherently “Kantian”, or at least contrary to consequentialism. I disagree: there are obvious reasons to care about whether or not others are acting from good intentions, and consequentialists can say perfectly plausible things here (even though many, traditionally, have failed to do so).
Intention and Permissibility
Here’s a possible argument that might underlie the common confusion:
Intentions don’t feature in utilitarianism’s criterion of right action.
Moral theories are exhausted by their account of right action.
So, utilitarianism has nothing to say about intention.
The argument could be generalized to most other forms of consequentialism. The only problem is that premise 2 is blatantly false.
Now, I think there are good theory-neutral reasons to deontically assess acts in a way that abstracts from the agent’s motivations. (Famed non-consequentialist T.M. Scanlon agrees, for reasons I summarize here—see his book Moral Dimensions for the full story.) For one thing, it’s commonsensical that you can do “the right thing for the wrong reasons”. If doing the right thing built in that it was done for the right reasons, then this would be impossible. Secondly, it doesn’t seem that agents should generally introspect on their motives before deciding what to do;1 if permissibility depended on one’s motives, such introspection might often be necessary.
So I think that everyone, consequentialist and non-consequentialist alike, should agree that the deliberative question of what we have decisive reason to do does not generally depend upon our intentions.
What Else Matters
As I explain in Consequentialism Beyond Action, agents have various intrinsic normative properties (shared by their intrinsic duplicates), such as degrees of rationality, prudence, virtue/vice, praise vs blameworthiness, etc. These are not affected by the actual consequences of their actions. (Some instead depend upon how the agent responded to their available evidence concerning the expectable consequences. Others depend more on the agent’s underlying dispositions to respond appropriately to such evidence.)
There are many obvious reasons to care about these intrinsic properties. For a non-moral example, suppose you are selecting a mutual fund manager to direct your investments (index funds no longer exist). Firm A employs a bunch of random gamblers, while firm B hires prudent traders who offer average market returns. While most in Firm A have terrible track records, one struck it lucky and has the best track record of all. Should you prefer the lucky gambler with the best actual consequences (to date), or someone with traits like prudence and good judgment that make them more likely to succeed in future?
Clearly, we have instrumental reason to care about the intrinsic properties of decision-makers. Character traits and quality of will provide evidence about future prospects. Rational people are apt to achieve their aims; at a minimum, their adopting a goal makes it more likely. So it matters, instrumentally, whether people have moral aims. Good moral aims make good moral outcomes more likely.
Virtue might also have intrinsic value. If we imagine two possible populations, one good and one evil, but both equally happy (even in the long run), it seems like the world of good people is better. This departs from strict utilitarian evaluation, but maybe desert-adjusted welfarism is correct, and the welfare of morally worse people has less value.
Finally, we might care about motives for personal reasons, independent of value. As Strawson famously noted in Freedom and Resentment, we typically care immensely about others’ attitudes towards us. These attitudes contribute to determining the natures of our interpersonal relationships: e.g., whether we are trusted allies, on broadly friendly terms, or hated enemies. A harmful act can have a very different social meaning depending on whether the harm was accidental (and followed by an apology), negligent (revealing an objectionable lack of regard for others), or outright malicious.
To simply not care about these things would indicate a disturbing alienation from ordinary interpersonal relationships (including the minimal moral relationships we stand in to strangers and “weak tie” acquaintances). Sometimes we care too much about others’ attitudes.2 It’s an interesting theoretical question precisely when we ought to adopt the “participant” stance as opposed to the alienated “objective” stance. But part of Strawson’s point is that the theoretical question is practically moot, as most of us simply can’t help but care about interpersonal relationships (and so can’t help adopting “participant attitudes”) in practice. This basic fact about human nature is something that any moral theory ultimately needs to accommodate. That’s not to say that it needs to prioritize this concern above others. But it does need to answer to it, and so offer some account of when our reactive attitudes of gratitude, resentment, etc., are warranted.3
A Constraint on Warranted Hostility
It seems clear that we should be generally well-disposed towards virtuous people and think poorly of the vicious. ‘Virtuous’ here means something like good-willed, while ‘vicious’ covers both the actively malevolent and the merely “amoral” or negligent (malevolent agents being the worse of the two).
There are interesting questions about what precisely qualifies as sufficiently “good-willed”. I’m inclined to think it’s a fairly broad, undemanding category. It doesn’t generally require believing the correct moral theory.4 (Though sincere moral belief is not itself a defense against having actually wronged someone—if anything, a Nazi’s sincere belief in their cause makes them worse, not better.) It also doesn’t require having anything close to the morally ideal degree of concern for others. (We aren’t required to be saints.) It may additionally require a modicum of reasonableness. I don’t think that even having the objectively correct moral goals is sufficient defense for organ-harvesting naive instrumentalists, for example—see my discussion of (Q3) here. Hostility towards naive instrumentalists seems warranted to me, on the grounds that they are being recklessly, predictably harmful.
Putting this all together, if someone is being both (i) well-intentioned, and (ii) reasonable, then they don’t warrant hostility. To justify hostility, you need to demonstrate ill will and/or unreasonableness on the part of your target. Mere first-order policy disagreement is not enough.
(Wouldn’t it be nice if participants in online discourse more reliably abided by this constraint?)
1. Except in rare cases where their motives will have downstream effects that independently matter.

2. The extent of gratuitous hostility out there is the thing I’ve always found hardest about participating in online discourse. I often wish that I could simply care less.

3. For example, I see no barrier to supplementing utilitarianism with an account of fitting attitudes. You just need to remember that, if faced with a choice between fitting attitudes or value-promoting ones, you have more reason to (prefer and) choose the latter.

4. This is compatible with the now-standard “de re” view of moral motivation, on which good will consists in being motivated by the ultimate “right-making features” of actions. This is because, for reasons explained in my paper ‘The Right Wrong-Makers’, plausible moral theories will substantially overlap in their “ground-level” right-makers or normative reasons. They should all agree that individuals’ interests are among the moral grounds, for example—even if they disagree on some higher-level questions about precisely when or why this is so.
Two people in a car, one kidnapped the other:
Person 1: What do you intend to do?
Person 2: Why do you care? Intent is a Kantian notion!
I think this is a significant point of disagreement between you and me (including your openness to "fittingness" as a relevant concept).
I think consequentialists absolutely should care about intent, just as we should care about skill, full tanks of gas, clean energy, and lots of other things that are empirically instrumentally relevant to lots of things that matter. But I don't think there's any sort of principled special weight to put on intent.
One helpful point I've seen emphasized in some trans and disability activism is, "I don't care if you mean well and intend to be an ally - I care about whether you repeatedly harm me". It's true in many cases that someone with good intentions and a good will is more likely to do good things in the future. But empirically, there are some people who mean well and yet, through poor information or poor skill, keep doing things that are harmful. And there are other people whose intent is not particularly positive, but who have internalized habits that ensure they keep doing things that are helpful.
In the case of your fund managers, if there is one manager who really thinks hard about what will make the most profit, but is bad at it, and keeps buying at the peak and selling at the dip, and another manager who subjectively feels like they're just guessing, but is subconsciously reliably tracking future performance, then I'd rather have the second going forward.
Intention is a useful heuristic for future action, but it's not especially different in this way from various skills and abilities.