16 Comments
David:

> I like it, but I’m not sure how much confidence I have that I won’t change my thinking on several of the points here within the next few weeks or (even more likely) over the next few years.

Just wanted to drop by to say that this is one of my favorite things about your writing. You have opinions and write well on them, but freely admit your uncertainty. When someone admits freely what they do not know and allows themselves to change their mind, THAT is when I trust what they say.

Resident Contrarian:

Man, if you like it when I don't know something for sure, hold on to your butt - I might know very little about more things than any other living man.

Rina:

My disagreements with you probably ground out in my atheism, but regardless, here are some thoughts! Note that I’m not particularly interested in philosophy qua philosophy, so I’m sure a philosophy guy wouldn’t like me either.

I think that consequentialism is obviously correct and that you're probably a consequentialist in the trivial sense that you probably ultimately care about good outcomes. We may disagree about what those outcomes are, or how we might achieve them, but if we both ultimately care about good outcomes then I think we're consequentialists in this sort of trivial sense.

With regards to identifiable works, I’d point to effective altruism (EA). I like to perhaps idiosyncratically characterise them (us?) as ‘taking consequentialism seriously’ alongside some flavour of utilitarianism and physicalism. EA is particularly concerned with ameliorating global poverty, animal suffering, and existential risk, and while the movement's not perfect I certainly think it can really be said to be trying.

I think it's important to distinguish consequentialism, which seems rather trivial to me, from utilitarianism, which on my view is an arbitrary choice, as it seems to me that what we end up valuing is arbitrary. Hence concerns about AI alignment! (Indeed, I've recently come to appreciate that for Christians the most effectively altruistic thing to do is whatever converts the most people to Christianity, so I can respect that sort of thing.) Why choose some flavour of utilitarianism? Because 'flourishing for all sentient beings' seems like the most natural thing to value, at least as a human. I hope the same is true for AI, but I very much doubt it.

What’s the place for rules and virtues in a consequentialist framework? It’s simple: how do you calculate which actions will produce the most good? This is computationally intractable! In practice we have to rely on heuristics, such as, say, rules to follow or virtues to embody. These can't be specified in such detail that you can always rely on them; the world is too complicated for that. So sometimes we'll have to fall back on reasoning from first principles. But to do this, your capacity to perform this sort of reasoning should be developed enough to generate the sorts of rules and virtues by which you normally live. Consequentialism shouldn't lead you to bad consequences; the problem is with doing it wrong. That's easy to do, so one ought to be careful. And of course these rules or virtues—which, yes, we absorb from tradition—must be improved by reasoning from first principles to account for changes in the world.

It's not clear to me that there exists a moral system that's 'less technically correct' and yet 'results in more quantifiable good'. The 'less technically correct' approach may be a useful heuristic that improves outcomes. This does not make it any less technically correct; it simply means that your moral theory accounts for the physical and information-theoretic limitations of this universe, which is probably a nice feature. You cannot ignore the fact that there's a cost to obtaining information, and a cost to focusing on minutiae, and this must be accounted for in any sensible moral theory. Ultimately, I'm not sure what sort of system couldn't be understood in this heuristic sense.

Resident Contrarian:

One thing that's interesting about consequentialism is that, as you said, I think it's obviously correct - I just don't think it's obviously correct to the exclusion of all other things. One way I sometimes talk about it is saying that utilitarianism/consequentialism is a great module for other moral systems - like it can inform them and help them a great deal. Think: Christianity has "love your neighbor as yourself", which you can't really do without some at least primitive understanding of outcomes.

I think part of my thinking on this - where I do push back and disagree a little - is that consequentialism and utilitarianism are, to the extent they are true, tautologically true. So if a consequentialist says "I think good outcomes are good, and that's what my moral system is about" I go "Yeah, man, that's everyone. What else do you have?".

To flip it around a little bit: accept, for the sake of this argument, that anything a theoretically perfect Christian does promotes an eventual eternal and perfect rule of God, in the same way a consequentialist might think that their AI work is making a benevolent AI which will protect and nurture humanity, or something. That's a consequence, and a really big one! And once you get into that kind of system, it's pretty trivial to start making scenarios where obedience to a set of rules with minimal flexibility might be the net-positive to run after.

On the "Less technically correct" thing: pretend for a moment that there's no god or anything supernatural beyond just a magic plaque that exists somewhere and confirms that consequentialism is completely right - that the only thing that matters is consequences. Now imagine that there are some people who (completely in error, in this scenario) think the right thing to do is to follow a set of rules set down by a god (who doesn't exist in this scenario). They think that these things are right primarily because they satisfy a terminal value of "being obedient to god", where the fact that they ever produce good outcomes is a happy accident.

In this case, the latter-case rule followers are completely incorrect about morality; they believe that things are moral for a wrong reason, and care about the actual right thing to maximize as a secondary goal at best. That's what I mean by wrong - they are incorrect about what morality is, and how it works.

But that said, they might still outperform the people who know the truth. Maybe this is out of fear of hell, or out of adoration for their non-existent god, or even just out of a persistent belief that someone is watching them which reins in some of their worse impulses.

That's sort of what I'm talking about - there are plausible situations in which an all-consequentialist earth underperforms an all-divine-command-theory earth morally, not because the consequentialists' ideas of morality are wrong but because those ideas are inferior in terms of enforcing behavior.

Whether or not that's the case is a whole argument in and of itself. But that's what I was saying there - even if a particular system's *ideas of morality* are more correct on paper, that doesn't necessarily tell us a lot about what they actually do under fire.

Rina:

I suppose it just comes down to me being too much of a rationalist and too much of an EA to think that the noble lie believers could outperform the truth knowers in any reasonable situation. The problem with toy examples is that people like to impose conditions that are completely implausible. In such scenarios, your moral theory may tell you to take actions that go against your intuitions, but this is a feature, not a bug: your intuitions were not built for toy examples.

Consider the trolley problem, and its fat man and transplant variants. It's clear to me that you should pull the switch, but you should not push the fat man or harvest the organs. It's not obvious that pushing the fat man will actually save the people, and doing so engenders a culture in which people don't feel safe standing around in public, a serious but diffuse harm. Similarly, harvesting the organs for transplant engenders a culture in which people don't feel safe going to the doctor, and besides, the traveller is usually taken to be healthy and young, so their life is plausibly worth more than the extra life bought for the transplant patients. The moment you assume things like 'pushing the fat man will definitely save the people' or 'no one will find out that the doctor killed the traveller', as people are wont to do, you immediately step out of reality into a place where you should expect to have rubbish intuitions.

In this vein, here's an excellent article on integrity for consequentialists: https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists-1. To summarise, the costs of acting without integrity are actually very large, because it gives people evidence about the sort of person you are and the sorts of behaviours they can expect from you, hence you should almost always act with integrity, especially because it doesn't involve as much thought. More generally, the consequentialist truth knowers should be able to reproduce many of the beneficial rules followed by the noble lie believers, if indeed these rules are good rules, and it's not clear to me that there exists a reasonable situation in which fanatical devotion would be useful but could not be derived from consequentialist reasoning. Then again, as I said at the start, I'm a big cheerleader for team reasoning!

Craig:

>"Consider the trolley problem, and its fat man and transplant variants. It's clear to me that you should pull the switch, but you should not push the fat man or harvest the organs."

You have sound and convincing arguments about what kind of culture is engendered, but I think this is more clear: The fat man could volunteer to jump and stop the trolley; the healthy person could volunteer to donate all their organs; if they were 100% utilitarian, you might be able to convince them. Maybe 2 bodies could better stop the trolley. Maybe 2 sets of organs could save more people. Maybe your eulogies would compare you both to the greatest saints. Pushing the fat man or harvesting the organs makes you a murderer.

Resident Contrarian:

I think another thing that makes this messy is that EAs are an incredibly self-selected group. It's a group of maybe a couple hundred to several thousand mostly rich, successful people who have strongly signaled they are above-average into morality/giving/service type things.

Christianity has some similar divisions - like, there are merely nominal Christians, creasters (people who attend church exactly twice a year), frequent practicers, people who regularly serve within the church, etc.

If you take an average practicing EA, and thus like a 1-percenter atheist utilitarian, I'm going to expect exactly zero crime from them and not a lot of more banal "troublesomeness" - they are from a group selected to be rich, effective, and stable for the most part. Ditto if you take someone who goes to church 19 out of 20 Sundays - it's a person who is very interested in their moral system.

I think my objection to your last paragraph is always going to be something like "yes, but the whole point of saying you should act with integrity while making sure that everyone knows that's not a rule to be followed 100% of the time is to leave yourself room to disobey it". For some people that's going to mean, say, lying in some extreme "it's either this or people die" situations.

For other people it's going to mean different things. I've tried to be pretty careful here to not say "And so it must be that the utilitarian is always less moral" because I think that individual variation and personality are going to be big, real differences here that are sometimes going to wash out the difference between moral systems, if that makes sense.

Rina:

That's fair, I'm definitely in a filter bubble. But hey, if AI goes well and we manage to immanentise the eschaton... Maybe then the sort of reasoning I'm a fan of will become more broadly applicable!

Skivverus:

Alternative - "consequentialism" is incomplete, in the same way that a sentence without a verb is incomplete.

Yes, everyone wants good consequences, but how do you tell whether an outcome is good or not?

"If someone ends up dead, it's bad" sounds an awful lot like a "don't kill people" deontological rule, just in the passive voice.

Virtue ethicists would presumably argue that the noun is missing too.

Doctor Hammer:

Good essay; I feel inspired to build upon it. For now, I think one key point to think about is the distinction between rules and rules for coming up with other rules.

Utilitarianism strikes me as being about how you choose your rules for behavior. No self-proclaimed utilitarian goes through the moral calculus of every decision, both because the numbers aren't there and because no one has time for that. Instead they try to work out general heuristics that work most of the time, based on the best numbers they can conjure and their general sense of the good. They reserve the right to change those heuristics based on changing senses of the numbers and/or their general sense of the good.

Deontology, or rather deontology as practiced, does pretty much the same thing, only with varying levels of openness to changing the rules, and perhaps less clear rules about how to change the rules. One could go farther and say "Deontology is only inflicted on people by those who make the rules; left to their own devices, everyone tries to pick rules that maximize their expected good outcome." That is, the only deontologists are those who want other people to follow rules the deontologist makes up, while everyone who chooses to follow rules does so because they think those rules are going to lead to more good.

As you say, everyone is sort of a consequentialist, but how they evaluate the consequences of actions and rules under the extreme uncertainty of real life varies a good bit, though less than the rationalist/utilitarians probably like to admit :)

Mark Kruger:

Hi Resident,

This was a thoughtful piece and I enjoyed reading it.

I think Utilitarians are pretty clear about what constitutes the "good".

The confusion arises because there is not a single definition of the good that is applied to each and every person.

This is from John Stuart Mill's On Liberty (which I got via https://en.wikipedia.org/wiki/On_Liberty):

"That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.... Over himself, over his body and mind, the individual is sovereign."

The connection with Utilitarianism is straightforward. Externally imposed standards are bound to limit the "sovereignty" of some folks.

So, the greatest good for the greatest number is achieved by laissez-faire, with guardrails for public safety.

The beauty of this approach is that it allows individuals to discover and pursue their own goodness.

The drawback is that there is no guarantee of success.

Resident Contrarian:

I think this applies really well to, say, law and government. But imagine this situation:

One of my friends asks me if a third party is having a party that weekend. The third party *is* having a party, but doesn't want my friend to go, because my friend is admittedly a bit of a turd in the punch bowl.

My friend has been very clear to me in the past that he doesn't want me to lie to him. But if I tell him what's going on, he's going to be a bit of a turd at both me and the party-thrower in a way I don't particularly want to deal with. I can also imagine that giving him accurate information would result in a net-loss for him, since it would hurt his feelings.

Does utilitarianism allow me to lie to him if I really think his feelings would be hurt and no good was served? Probably. Does it allow me to lie to him just because I want to? Probably, unless someone can make a compelling case that it's a net-loss, and I'm the only person present to make that judgment. Chances are I *do* lie to him - it's a gray area, lies aren't black-and-white-wrong (no distinct defined action is, in consequentialism), and I don't want the hassle.

When I say their idea of good is foggy, I'm talking about this interpersonal level for the most part, and not so much coercive-force-of-government stuff. It's the part you are talking about as I "pursue my own goodness". Once we dial down from the governmental authority stuff Mill is talking about there, it matters a lot that we have a working definition of Good that can guide my personal, non-governmental actions.

That's sort of my complaint in that one respect - if I ask a utilitarian how he's working on his day-to-day actions to maximize his moral situation, he's going to tell me that he's pursuing good. And I go "OK, what's good" and he says "you know, the one that makes people happy, the good thing, that does good stuff. that one." The definitions get a little better than that but not much.

At least that's what I'm saying. I'm sort of open to hearing more about the utilitarian's very specific definition of good if I'm wrong about it.

Mark Kruger:

Utilitarianism holds that the most ethical choice is the one that will produce the greatest good for the greatest number.

So, it seems that your first responsibility is to prevent your friend from spoiling the party. It looks like you also reached that decision. Were you performing a utilitarian calculus here?

Then on the question of whether to lie or not, the choice seems to come down to what's best for your friend.

In the short run, you can save him pain by lying. In the longer run, being honest with him could spark a positive change in his behaviour. The course you pursue would depend on a lot of specifics related to your relationship, your persuasiveness etc.

While utilitarianism is typically seen as a negative guide to ethical behaviour -- leave people alone and let them get on with their lives as they see fit -- it can also act as a framework for guiding one's actions.

Nevertheless, from a utilitarian perspective, the urge to "do good" has to be tempered by the understanding that what we may think is good, may not be good at all, when seen from the other's perspective.

Resident Contrarian:

So don't get me wrong - I wouldn't lie, or at least shouldn't. I'm a deontologist, more or less.

But my basic problem with that utilitarian calculus is that - as you pointed out - utility didn't actually tell you anything about whether or not you should lie to him. Both "lie to him" and "don't lie to him" have their points; utilitarianism allows you to do either (and feel good about it!) so long as you can come up with a compelling rationale for it.

Putting aside what kind of behavior that does or doesn't encourage, now let's switch gears: What can your friend expect from hypothetical-you? He can't expect truth except in a clock-right-twice-a-day way. And there's a variety of conclusions you might come to about whether to lie or not based on the data you have, mood, etc.

I tend to favor deontology for religious reasons, but this is my general misgiving with utilitarianism. Any given decision is based on pursuing good, which is whatever good seems like it would be at the time. But that's very variable and judgment based - even if you aren't letting your own biases and preferences have an inappropriate weight, it's still going to be all over the place.

Not that sets of rules don't have problems, but they are at least consistent-when-followed; when someone doesn't follow them, there are at least ways to criticize them on a moral basis.

As mentioned in the article, I don't know how well this actually translates to *actual behavior*. I'm really desperately trying to keep this constrained to theoretical mechanics more than "My side wins over your side" stuff.

Mark Kruger:

I think that life is messy and that the right thing to do depends deeply on the context.

Rather than saying we just have to "come up with a compelling rationale" for our actions, I would say that leading the good life requires that we exercise our moral judgement.

Rules like "don't lie", "don't steal" and "don't kill" are all ones we should live by pretty well all of the time. However, I am sure you can think of examples in which the "greater good" demands that we break them.

The ability of the utilitarian calculus to justify both lying and not lying is not a bug. It's a feature. It forces you to consider your actions carefully.

Robson:

I really really like the conclusion here. But I'm frustrated by the introduction and development.

I thought it would be obvious by now that all moral systems are arbitrary. The advantage of utilitarianism is that it's explicitly not a moral system. It's more of a moral framework where you can plug in your arbitrary moral preferences and then get some help on how to solve real-world issues case by case.

Religious morality is also arbitrary. We codify whatever arbitrary rules we like as some pretend commandments from a pretend higher being. Then we follow those rules as they satisfy our preferences as a group. When our arbitrary preferences change, religions change too; you can just study the history of Christianity to see how much it has changed over the last 1500 years to adapt to whatever arbitrary preferences we have in a given era. The history of all other religions will show you a similar pattern. Also, when the preferences change too much, some whole religions disappear and others take their place, but that's not super relevant as they are at a meta level interchangeable.

The case for utilitarianism is that it makes the meta game explicit. We know that the preferences are arbitrary, we think about them, we agree as a society what our preferences are, then we use the framework to apply the preferences. This allows us to constantly be aware of, and capable of adjusting, the preferences that serve as input to the system.

The case for religion is that people are not smart enough to play the utilitarian game explicitly. And they are also not capable of following rules that are known to be arbitrary. So we still have to agree on some arbitrary preferences, but we do it through some indirect social interactions. Then we codify these rules as coming from supernatural, made-up sources to increase the likelihood that people will follow them. That may increase rule observance, but it hides the preference-choosing part and makes it harder to be aware of it.

I'm not convinced which one is actually better in practice. But I do think there's a paradox: if religion is better, it requires people to never be fully aware of how the system operates, otherwise the whole system would come crumbling down. So I currently believe that utilitarianism would be a better answer, as it could allow people to be more aware of the system as a whole.

In the end though, I do agree that whatever system proves to be better will be the one that compels people to act. Right now I see neither as especially successful in that goal.
