71 Comments
Laura Creighton:

One of the more annoying things about Utilitarians, in general, is that they argue 'we've got the math on our side'. This irritates people who have studied basic Set Theory, who know that this isn't so. It is not as if mathematics is the best foundation for moral sentiments, but Utilitarianism has a strong appeal among people who pride themselves on their rationalism above all else.

Infinite sequences are a source of strange paradoxes. Most of them are not actually contradictory but merely indicative of a mistaken intuition about the nature of infinity and the notion of a set.

"What is larger," wondered Galileo Galilei in _Two New Sciences_, published in 1638, "the set of all positive numbers (1,2,3,4 ...) or the set of all positive squares (1,4,9,16 ...)?" (He wasn't the first to do this sort of wondering, of course, but it's a convenient starting point, i.e. there are links.)

For some people the answer is obvious. The set of all squares is contained in the set of all numbers, therefore the set of all numbers must be larger. But others reason that because every number is the root of some square, the set of all numbers is equal in size to the set of all squares. Paradox, no?

Galileo concluded that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all the numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.

See 'Galileo's Paradox' https://en.wikipedia.org/wiki/Galileo%27s_paradox .

We haven't changed our minds much about this in the mathematical world. We've become more rigorous in our thinking, and have invented fancy notation -- typographical conventions -- to talk about them, but the last big thing in such comparisons was the idea of 'Cardinality of infinite sets', thank you Georg Cantor. The cardinality of a set is how many items are in it. If the set is finite, then you just count the elements. Cantor wanted to make it possible to do certain comparisons with infinite sets. I could explain a whole lot of math here, which would bore most of the readership to tears, but people interested in this stuff can find it all over the internet. If you come from a country where set theory is taught in high school, you will have already learned this.

The bottom line is that the set of all numbers and the set of all squares are both the same size, the size being 'countable infinity' or 'aleph-null' in the jargon. Aha, you conclude. So where is the mistaken intuition that creates these paradoxes? The mistaken intuition is that you can compare infinite sets and conclude things like 'the set of all numbers is twice the size of the set of all even numbers'.
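If it helps to see Cantor's idea spelled out, here is a toy sketch (the pairing and the function name are just mine; any language would do):

```python
# Toy illustration of cardinality: two sets are the same size when their
# members can be paired off one-to-one with nothing left over. Pairing n
# with n*n matches every positive integer with exactly one square.
def first_pairs(count=10):
    """Return the first `count` (number, square) pairs of the pairing n -> n*n."""
    return [(n, n * n) for n in range(1, count + 1)]

print(first_pairs())
# [(1, 1), (2, 4), (3, 9), (4, 16), ...] -- no number is missing a square,
# and no square is missing a root, so neither set is 'bigger' than the other.
```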

Which brings us to the Utilitarian's favourite hobby horse, trolley problems. If you consider each human being on the track as 'an infinite set of potentials', not just metaphorically, but mathematically too, then you can no longer conclude that killing 1 person is better than killing 4. They've all got the same cardinality. (No, I cannot prove this one. But for a thought experiment, we can assume it.)

And this is, after all, what the non-utilitarian moral philosophers have been insisting for all this time. People are not fungible. Non-utilitarians still have to make the tough moral decisions about whether you let one person die to save four, but we don't get to hide behind a shallow mistaken intuition all the while singing the 'we're superior because we have the math on our side' song, as loudly as we can.

P.S. I think that this dreadful state where we all end up living in capsule-hotel accommodations feeling just a hair above misery, with only one duty, to fill the world with people in the same state is simply a restatement of Hilbert's Paradox of the Grand Hotel --

https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel

where the hotel has a particularly lousy rating for hospitality on Trip Advisor.

Doctor Hammer:

That's a hell of an effort comment, and really should be its own essay somewhere. Thank you!

Resident Contrarian:

Agreed. Does Laura have a blog?

Laura Creighton:

There isn't. I didn't think there would be much of a market for the weird thoughts that I have been having all my life. Even my close friends get the 'what is she going on about again' and 'oh, rats, she is about to unload a whole lot of mathematics on me again' look rather too often when I am around. But possibly I am mistaken about this generalising into a lack of demand. 'Long Tail' and all that .... :)

Resident Contrarian:

>>> There isn't. I didn't think there would be much of a market for the weird thoughts that I have been having all my life.

That's what a writer is! The friends not getting it thing is normal! No man is a prophet in his hometown and that!

All the stuff you just said is the stuff you say before you start writing and (with luck) everyone finds it and loves it.

Ester:

I'd read your blog if you started one.

walruss:

I still think self-dealing is the biggest practical problem with utilitarianism. While it's true that every moral system gets abused, utilitarianism is uniquely susceptible.

Imagine I decide to eat more healthily. I have two options: Banish cream-filled Hostess cupcakes from my life entirely, or acknowledge that cream-filled Hostess cupcakes are a sometimes food that should be eaten extremely rarely.

Now probably I would increase my amount of life enjoyment if I could usually not eat a cream-filled Hostess cupcake, but occasionally, on a special occasion, have one cream-filled Hostess cupcake. So the "math" says to go with option 2.

I do so. But oh no! Now every time I come across a cream-filled Hostess cupcake I have to do a mental analysis. Is this one of these rare occasions when I am permitted a cupcake? And hey, look at all this psych research that says I will *not* make that decision based on my calculating mind, but on the hind-brain that thinks it might die if I don't have that cupcake. So I eat the cupcake and tell myself that I've been dealing with a lot of stress lately and that this will help me cope with that stress, and probably I'll be more effective in my diet if I don't make myself miserable through denial, and etc. etc.

Or I go with option one, say "sorry, I don't eat cream-filled Hostess cupcakes" and miss out on a tiny bit of pleasure, but gain the advantage of this decision not being a decision.

For me the big lie in utilitarianism is less that you might reach a repugnant conclusion. As you point out the whole purpose of the project is to increase human flourishing in human terms, so if our logic chain hits a point of not doing that, at the point of action I think 99.9% of sane people will revise the chain.

And it's not that I don't think there is a right and wrong answer to moral questions: I absolutely am okay decreasing human happiness and flourishing today for drastically increased human happiness and flourishing in the future.

The big lie is that we are mentally capable of handling every single moral choice through calculation. We absolutely are not.

Resident Contrarian:

I do think that some situations are better suited than others, as far as "thinking through every situation based on this philosophy" goes. One of those does seem to be decisions made slowly at a distance, as "where do we spend this money" seems to potentially be.

That kind of thing ends up being a problem for me when I write stuff like this, because if I'm criticizing EA long-termists about something like this (admittedly sort of a corner-case issue) I then really want to spend the effort to be clear that at the end of the day, someone like MacAskill is doing a lot more work for charity than I am.

It's something like "Well, yes, I can map out all these problems with his philosophy and the ways it might go. But he still got up one day and dedicated his life to charity work, which I demonstrably do not do most mornings."

Basically I agree with you, but I want to point out post-hoc that whatever structure MacAskill is using, he's also doing what appears to be a lot of work for other people, and that counts for something.

asdf asdf:

The repugnant conclusion asserts that there is some sufficiently large number of people living lives worth living such that you should prefer it over a population of 10 billion people with very high quality of life. The repugnant conclusion does not say anything about what that size is, or should be.

Given your phrasing, I focus on "even if that makes it sadder". The important part is how *many* more people we are gaining at *how much* cost in happiness. For any given decrease in happiness that stays above some minimum level, there should be some increase in population size that is worth the happiness cost, but that size increase can be arbitrarily large.

If, for any specific scenario you're considering, you feel like the increase in people does not justify the decrease in happiness, then you're not considering a repugnant conclusion. You've ruled out those benefits as actually not being worth those costs. Your repugnant conclusion number is higher than the number in the scenario under consideration.
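To put toy numbers on that (entirely made up, just to show where the break-even point sits):

```python
# Toy total-utilitarian comparison with made-up numbers: a baseline world of
# 10 billion very happy people vs. a larger world of less-happy people. The
# "repugnant conclusion number" is the population at which the larger world's
# total utility first exceeds the baseline world's total.
BASELINE_POPULATION = 10e9   # "10 billion very-high-quality-of-life" people
BASELINE_HAPPINESS = 100.0   # arbitrary utility units per person

def break_even_population(happiness_per_person):
    """Smallest population whose total utility beats the baseline world's."""
    return BASELINE_POPULATION * BASELINE_HAPPINESS / happiness_per_person

print(break_even_population(1.0))   # 1e12 people needed at 1 unit each
print(break_even_population(0.01))  # 1e14 -- the worse each life, the larger the required world
```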

This is exactly why the repugnant conclusion does not imply we should rabidly increase the number of people with zero consideration for their happiness: that's not a plan that reliably gets enough people to be *worth* the decrease in happiness.

I assume without checking that What We Owe The Future mentions repugnant conclusion reasoning to broadly justify that we should be willing to make some real sacrifices today in order to safeguard the future of humanity. I assume that the larger populations involved there are populations like "every human that will exist for the rest of humanity's future".

One example mentioned in the ACX article is leaving surface deposits of coal. We can either have a slightly richer today by mining and burning that coal for energy, or we can give humanity a greater chance of recovering from a civilizational collapse that happens before we become interplanetary; failing to recover would cut off any chance of humanity eventually becoming interstellar, much less intergalactic.

This sounds like a very plausible tradeoff to me! Given the assumptions "surface deposits of coal will enable humanity to redevelop industry after civilizational collapse" etc, then the reasoning that we should preserve these surface deposits of coal is valid, and I would dispute "we should preserve surface deposits of coal" by disputing the assumptions, not the reasoning. For example, maybe there are many other great options besides surface coal deposits, or maybe civilizational collapse isn't plausible, or maybe interplanetary life just isn't plausible and humans can't colonize the galaxy.

The "problem" of the repugnant conclusion is that the same abstract reasoning here would also imply that if, counterfactually, hypothetically, you were presented with a scenario trading off between "10B very happy people", vs a much much larger population living some much lower quality of life that you still consider worth living by whatever standard you choose, that there should be a population size large enough that you should prefer the larger population.

Many people find this idea unintuitive, and use that feeling as a refutation of abstract ethical reasoning, because it's implied by so many systems of ethical reasoning. I think this is something about the conflict between "If a plan like this existed, that actually was enough people to be worth it, then I should choose it" vs "Every plan I imagine that maximizes the number of people alive asap is actually a terrible plan, and would not be worth it".

This is also why you see big parts being left undefined, because "repugnant conclusion" is a recurring property of ethical systems. Exactly which lives are "worth living" depends on the specific ethical system you're using.

For Repugnant Conclusion purposes, I propose you can check your own intuitions with a thought experiment. For a given quality of life, like a one-bedroom apartment in Space Tampa, would an entire galaxy full of people living that life be better than 10B people living much better lives on Earth? If not, then I'd say that the quality of life you're proposing is not one you actually consider worth living.

In some sense, the Repugnant Conclusion is kind of the converse of Wireheading. If you're evaluating populations by some combination of size and happiness, then you can make things better by making people happier or by making more people. Your optimal path depends on the relative cost of increasing utility by making more people vs. by making existing people happier.

In real situations, these values and costs relate to each other, and it's very rare for one of these to completely overwhelmingly dominate. People feel really weird about this math implying that they should accept strange-sounding hypothetical scenarios that are difficult to imagine due to their implausible unreal properties and won't happen in the real world, and use this feeling to reject ethical systems entirely. Take a look at the core section of the paper I linked:

We agree on the following:

1. The fact that an approach to population ethics (an axiology or a social ordering) entails the Repugnant Conclusion is not sufficient to conclude that the approach is inadequate. Equivalently, avoiding the Repugnant Conclusion is not a necessary condition for a minimally adequate candidate axiology, social ordering, or approach to population ethics.

2. The fact that the Repugnant Conclusion is implied by many plausible principles of axiology and social welfare is not a reason to doubt the existence or coherence of ethics and value theory (although we do not rule out that there may be other reasons for moral skepticism).

3. Further properties of axiologies or social orderings – beyond their avoidance of the Repugnant Conclusion – are important, should be given importance and may prove decisive.

This is *not* "population-maxxing ASAP pls". This is "we should not throw out every ethical system that implies that numbers matter".

Warmek:

I guess the thing I don't get in all of this is why all of the people have to live simultaneously?

Well, OK, it's because most people **really** cannot comprehend geologic and stellar time scales... There's plenty of time for all of those people to exist and *actually* be happy instead of just de minimis happy if we don't hyperpopulate and lock our species into that. I'm disappointed at this crowd failing at futuring so hard.

Resident Contrarian:

It's like a pure math thing (warning, I'm replying to you-not-stephen because I haven't reread his post at the time of this typing). If there's a non-infinite amount of years, and you max each one, that's a *higher number* and it's a number-maxing morality. If you say "Well, it's not infinite, but it's such a large amount of time I'm sure we could come up with an impressive happiness number everyone would like" the counter is going to be something like "yeah, but why not EVEN HIGHER?"

Warmek:

So they're afraid of paperclip-maximizing AI because they've looked in the mirror...

Hrm...

;)

Rina:

I haven't seen as much pushback on this as I'd like, so I felt the need to right that myself! A few things.

First, and I think I might've said this in a previous comment, I do see morality as arbitrary; my morality is merely a reflection of my preferences. But I don't think it's reasonable to hope for any more than this, so I don't see this as a problem! Fortunately I also think that, given the right environment, most people would end up choosing to value something like 'flourishing for all sentient life'. I'm an optimist like that! (Unfortunately, I don't think AI will converge on this without significant effort on our part. But hey, if that goes well, I think the best possible future is very likely.) And I think we could reasonably try to operationalise this as some combination of preference and hedonic utilitarianism. But I'm a consequentialist before I'm a utilitarian, so I'm happy to let the relevant utility function be whatever we determine to be sensible.

Second, I think the comment on infinities is rather bizarre. If you had $5500, would you rather donate it (at the margin) to AMF to buy malaria nets, thereby saving a life in expectation, or to some other cause or organisation? How would you make that decision? In a world like ours where resources are finite, if we genuinely care about doing the most good possible we have to think about the tradeoffs we're making when we allocate our resources. All else equal, I think most people would rather one person die than five! Why throw that away, especially in a decidedly finite observable universe?

Third, I mean, come on. This is ridiculous. Surely you have to realise this objection is ridiculous? I'm happy to bite the bullet that the world of the repugnant conclusion is better than the world of today. But a universe filled with people whose lives are barely worth living doesn't sound good! It sounds absolutely horrible! An obscene tragedy! Such a universe has squandered the immense resources involved in supporting so many lives—it has squandered roughly all potential value!

The relevant quantity to maximise is the utility derived from each unit of resources consumed. (And to be clear, I would be very surprised if the resulting system ended up being simple, or bland, or repetitive, because after all those don't sound like very positive words.) What the universe ends up filled with as a result depends on what humanity ends up valuing—it could be humans, dogs, digital minds, or even something else. This would depend on things like the weight we assign to the experiences of digital minds, which would lean on better understandings of neuroscience and artificial intelligence. (It's not going to be dogs, though. That would be dumb. There's no way dogs are an optimal use of resources.) The one thing I'm sure it won't end up filled with is miserable people whose lives are barely worth living. That would be awful!

I think the world we live in today is pretty repugnant. I hope that the future is filled with an enormous amount of lives each of which is far better than any life led today. I hope no utilitarian settles for anything less!

Warmek:

> He has to hope you will, because the moment you remember that the $1 car price argument was “happiness of the kind that makes intuitive sense to all of us, even if it’s just a trick of evolution” and not “The kind of stuff Goebbels comes up with after snorting an entire Monkey Paw”, he’s sunk.

I just want to start by saying thank you for how hard this made me laugh. I non-ironically slapped my knee. :D

Resident Contrarian:

Man, you are making THE ROUNDS through my back catalog. I get these little engagement notifications and the last couple of days it's been the Mek show. What is spurring this? I refuse to believe it's, as Bujold once wrote, "my native charm".

Warmek:

I binge read new stuff. Got here via the EA stuff via ... somewhere... the "wheel of blades" guy. :D

And no, I think "native charm" about covers it, actually. I dunno, man! You wrote it, and more to the point you posted it online, surely you were hoping people would read it? ;) I read the one post, liked it, and started clicking around. Now you're stuck. :P

AlexT:

Playing devil's advocate:

The happiness function *might be* very complicated. In fact, that's almost certainly the case, seeing that people are famously complicated. So, it might simply not be possible to tile the Earth with pod-hotels while average happiness is anywhere above "shoot me now".

The failure seems to be in the premise "for any human population with a given average happiness (however that might be calculated) you can add more people so that overall happiness increases". I strongly dispute this claim. For given assumptions about technology and how society works, there probably is an optimal population where increasing it *a little* makes everyone *much less happy*, so that overall happiness is strictly diminished. Trying to say this with words rather than math sux btw.
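So, roughly in code instead (the numbers and the crowding curve are made up, just to show the shape of the claim):

```python
import math

# Made-up assumption: average happiness decays exponentially as the planet
# gets more crowded. Under any curve like this, total happiness N * h(N)
# peaks at a finite population and then strictly falls -- it does not grow forever.
def average_happiness(population, capacity=8e9):
    return 10.0 * math.exp(-population / capacity)

def total_happiness(population):
    return population * average_happiness(population)

for pop in (1e9, 4e9, 8e9, 16e9, 32e9):
    print(f"{pop:.0e} people -> total happiness {total_happiness(pop):.2e}")
# Rises, peaks near the assumed capacity, then falls: past the optimum,
# adding people *a little* makes everyone *much less happy*.
```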

Seeing how we mostly hate having too many people around, I strongly suspect that optimum is *not* pod-hotel-tiled Earth. It might be the case that we're already above optimum population, although tech is changing rapidly so who knows. But it seems to me that this Repugnant-Conclusion argument is built on a flawed hypothesis and is therefore invalid.

I'm not a proponent of the Dreaded Philosophy, by the way; it seems to breezily over-simplify far too much. But the part where it says "Things should be good" is irrefutable. The required question is "OK, good how?" and then all hell breaks loose.

Charles Karelis:

Maximize total happiness utilitarianism seems to endorse increasing population by any amount, even if that lowers average happiness, as long as total happiness is increased. So if the average happiness now is 10 utils and there are 10 people, for a total of 100 utils, then we should opt to increase the population to 10,000, even if that means lowering the average happiness, as long as the total is more than 100 utils. For instance, we should prefer doing that increase in numbers even if the average happiness falls to 0.0101, for a total of 101. So we are required to produce a lot of very, very unhappy people.
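The arithmetic, spelled out:

```python
# The numbers from the paragraph above, spelled out.
small_world = 10 * 10.0        # 10 people at 10 utils each   -> 100 utils total
large_world = 10_000 * 0.0101  # 10,000 people at 0.0101 each -> 101 utils total
print(small_world, large_world)  # total-maximizing utilitarianism prefers the 101
```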

My solution of the paradox is that 0.0101 is not very unhappy. It's just a little bit happy, which is not the same thing. At 0.0101 everyone would have more than enough. That's because zero means sufficiency, and anything above zero means more than sufficiency. Very unhappy means suffering from an overwhelming quantity of troubles, problems, etc. Once we leave behind Epicurus's mistake of equating misery with a tiny bit of happiness, we should appreciate that the repugnant conclusion is not repugnant. 0.0101 is better than "not so bad"; it's good. It's true that the people at 10 would resist the prospect of losing happiness, if they survive into the new scene, but that has to be left out of the thought experiment.

Chris:

What's funny is that I've been thinking about "The Monkey's Paw" off and on all week.

Resident Contrarian:

A dangerous pastime.

Chris:

If you haven't seen it yet, watch (or, better, play through yourself) a run of Mothered (https://enigma-studio.itch.io/mothered). That's the precipitating event. Should take about four hours to go through, give or take one for personal play style.

The_Archduke:

You and xkcd are on the same page about dogs

https://xkcd.com/2672/

Apple Pie:

What bothers me the most about utilitarianism is that people take it seriously enough to talk about.

* I propose that morality entails hitting people with hammers! More hitting, harder hitting, and bigger hammers all translate to more good!

* I propose that humans are born good and become more evil as they age! The more effectively you commit suicide by 35, and the more children you have by then, the more good you brought the world!

* I propose that hurting people's feelings is the root of all evil! One must never hurt the feelings of another person! (Are cats people? Are politicians people? Who is a person? We need answers!)

Nobody bothers talking about these arbitrary moral codes. But then we have:

* I propose maximizing happiness is morally good!

And now suddenly everyone is trying to figure out who counts (or should count) as a happiness-haver, and whether this does or doesn't align with intuition. And boy is this important - the only reason anyone likes utilitarianism is that they find it intuitively satisfying.

But is relying on intuition really the way we should be reasoning about morality? About anything? A tactic that might seem at least vaguely sensible would be to ask questions, like, "Does morality exist? Where does it come from? How would we know?" This isn't what utilitarians do. They start with "Happiness is good, just, you know, because," and then talk about what hardcore rationalists they are. Seriously, I'm left standing here wondering - what does "rationality" even mean?

Myron:

"There is probably a strong moral argument to be made for, at some point in the future, killing off most people to make room for dogs."

This would be true only if you 1) are a consequentialist utilitarian, 2) who treats utility and happiness as synonyms. I am 1 but not 2 - there is clearly more to a good life than the kind of happiness experienced by a breed of dog with a particularly sunny disposition. If we treated that kind of happiness as all that mattered, there would also be an excellent case for developing really pleasant drugs, or machines that just stimulated the section of the brain responsible for the feeling you're looking to maximize.

Also, I didn't feel baited-and-switched when I first heard of discussions around the "repugnant" conclusion - partially because, as you've mentioned in comments below, it's not clear exactly how to define a life barely worth living, and it could be fine (and by whatever definition one uses, it must be worth living, so I guess maybe it's morally OK by definition, even if the definition isn't clear and specific?). The second reason is it was pretty clear to me pretty quickly that we were talking about hypothetical worlds that couldn't actually exist here, and this was unlikely to translate into an actualizable world. The repugnant conclusion is something like "for any given world with high average utility, it is possible to imagine a world with many more people who have lives that are barely worth living, where the sum of the utility of all the people involved in the second world is greater than that in the first." Which, fine, yes, I can accept that. It is theoretically possible for me to imagine one world where there are 10 billion people living their best lives however one defines best, and another world where people's lives are so close to being morally not worth living that 10 quintillion of those people getting to exist would have the same utility as me enjoying a very nice cheese bagel, but still, because there is a 1-with-a-sufficient-number-of-zeroes-after-it number of people living those sorts of lives in this hypothetical world, the math comes out that there is more total utility in the second world. Fine, I accept that it is possible to imagine such a scenario.

But this does not mean that in practice in the actual world we live in, we're going to have to make actual people have barely-good lives. Let's say we take the next hundred years and work on making the current world better, so we do have around 10 billion people living very good lives at the end of it. And someone says "well actually, wouldn't it be better if there were as many people living on Earth as it took for there to be a higher total utility, even if that means a quintillion people?" My response would be "A quintillion people will not fit."

I suspect, although I cannot prove, that we will reach a point where adding more people reduces total utility due to resource constraints. This is explicitly avoided in discussions of the repugnant conclusion by saying unrealistic things like "imagine that you can add more people without making the existing people worse off, because nobody uses extra resources - let's suppose you could create a second planet Earth by snapping your fingers". Of course, since utility is poorly defined and not something like "there are 15 apples, each apple weighs 100 grams, so there are 1500 grams of apples. Also each apple is worth 2 utils, so there are 30 utils of apples", there will be debates even among total utilitarians when that point of decreasing total utility has been reached, but still, my point is that even accepting the logic behind the repugnant conclusion does not compel one to try to bring about a state of affairs where everyone's life is far from great but there are a lot of people. If it would take 1 quintillion extra people to make up the utility of a cheese bagel, I'll just bring two cheese bagels into existence that wouldn't have otherwise existed (which is much easier than working towards a future state of the world where there are +1quintillion people anyway), and call it a good day. And the same logic with smaller numbers will likely apply at more realistic scales.

Oh, and: Despite the fact that I'm just posting stuff that disagrees with you, I liked your post a lot - got me thinking, and it was well thought out. I actually had filed the repugnant conclusion under "unsolved edge cases/things I'm not sure about", and now I feel like I have a better response to it. :)

cdh:

It seems to me that while utilitarian calculations might be a good way to distribute resources that a government has taken from the citizens (or, alternatively, exert power over the citizenry), or for a charity to distribute funds it received from donors, utilitarianism is not a good justification for the government to appropriate those resources or that power, or for a person to donate to the charity, in the first place.

This reminds me of that grand rat's nest of legally instantiated moral hazard and grift, conservatorship law. The moment grandma's memory gets a little hazy and she gets a little unsteady on her feet and maybe makes one or two unwise financial moves, an army of private and public "good samaritans" are called in to send grandma to a care home, divide half of her estate among the professionals working supposedly in grandma's best interest, and the other half to her squabbling heirs, lest she *gasp* continue to control her own destiny and maybe squander the estate before the heirs can get at it, all while grandma insists, Monty-Python-style, that she feels fine and wants to maintain her independence and might even go for a walk.

The point is that it *might* be better to let most of us go around like demented old folks trying to pursue our own happiness while we liquidate our own estates to the detriment of our heirs, rather than let our betters take all our stuff and determine what's best for us.

There should at least be a strong presumption that each individual's idea of his own good is at least as valid as, if not more valid than, our own conception of what's good for him.

There's a reason the US Declaration of Independence talks about the "pursuit of happiness" (which is not coincidentally listed third after life and liberty) and not happiness per se.

Randy M:

Okay, so I don't read all that much philosophy. But this:

"The long-termist repugnant conclusion salesmanship here is sneaky. It has successfully argued in the past that our concept of good should be based on what - by trick of evolution or otherwise - seems good to humans. It then said “Hey, since we all agree that we should be maximizing good is great, shouldn’t we do something that seems incredibly bad on an instinctual level to all but a very small percentage of human beings?” and hoped like hell you wouldn’t remember the work it took to get you into the dealership in the first place."

Is a good and new to me critique of materialist consequentialism.

Resident Contrarian:

It seems obvious enough to me that I'm guessing someone must have somewhere done the same thing, or else I'm missing something easy that invalidates it. But every now and again you really DO find a 20 on the ground, so who knows.

Monkyyy:

> But I’m arguably neither fair nor utilitarian, and my colloquial sanity is wobbly at best.

lovely

Doctor Hammer:

Excellent breakdown. That is such a sneaky move I hadn't noticed it myself.

I also request pictures of the miniatures your wife is painting.

Resident Contrarian:

Shoot me an email. It's probably too easy to dox me to do it entirely in public.

Vaquero:

I don't know why total utilitarianism gets as much flak for this scenario as it does from Scott and others. Nobody has a plan to tile the planet with barely livable slums. Meanwhile, the principle of prioritizing average welfare also leads to some repellant conclusions like "A population of 500 million very fulfilled people would be better than the world today", which is a much more dangerous idea. If we're picking a theory of population ethics to get mad at, why total utilitarianism?

The practical tradeoff that hinges on this distinction today is whether population control in poor countries is desirable or not: if it's good to let more people in Burundi be born even at a low standard of living. I would say yes, but I'm not a utilitarian at all and I think people are valuable in their own right, not just as a contribution to aggregate utility.

If all you want to argue is that utilitarianism itself is a bait-and-switch ("I was promised compelling, intuitive answers to moral dilemmas!") that's fine, but this is just about the least threatening nontrivial implication of utilitarian ethics: nobody is in a position to execute on it and even the people who find it intellectually persuasive think that it's aesthetically distasteful.

Resident Contrarian:

So there's a couple things that are happening here that I think are relevant:

1.

It's true that nobody has plans to tile the world in slums currently. But when someone accepts a really out-there, extreme implication of a philosophy, they are at least to some extent accepting everything *inside* of that extreme as well. So, say, in longtermism someone might be deciding whether or not to spend money on Haitian well-being, and might decide to do so in ways they expect to grow Haitian populations in the future rather than help existing Haitians now. Or really anything that accepts "big populations are better than happy populations" as a credo.

I'm not sure how this actually ends up looking in real life - it might, as you imply, be a non-issue we never get around to making important. But that leads into:

2.

If someone says "you know, if I had my druthers, I'd kill every single Irishman", and I object to it, and he then says "listen, it's not like I can make this happen right now - you know I'm poor and ineffective", I'm not sure that's entirely a fair counter to my objections. Like, yes, I can fully buy that Will MacAskill is talking about a theoretical thing. But he's talking about a theoretical thing in a book he wrote that he wants people to think of as important - responses to it should be able to piggy-back off of Will's assumption that this is a thing *worth talking about like it's important*.

3.

On the "why total utilitarianism" thing, I think it's important to note that what you are saying is "Listen, total utilitarians do a wierd kind of philosophical math almost nobody else does. If they apply that reasoning to your philosophical system in a way you wouldn't and come to a weird conclusion you wouldn't have come to, it should negate what you are arguing should be the correct views".

But "normal" people, in the sense that they aren't hyper-rationalism-utilitarian people, don't go:

"Oh, I think encouraging policies that maximize population growth beyond what's sensible/moderate straight until we get to a maximally large, miserable population is dumb; that means I must want to do something like "increase the average happiness" and that means, in turn, that if I extrapolate that out with weird utilitarian math as far as I possibly can that I'm actually in favor of massive genocides, and since I did that math I'm now morally bound to consider that correct and sort of move in that direction".

That's how the people they are criticizing think, and they are criticizing them for thinking like that! Like part of the criticism is "math isn't a dread god you are beholden to even when it tells you to do something every human finds instinctively revolting". When these people are running that "average happiness is good!" algo, they are coming to conclusions like "average happiness is good - I should probably do things like ease suffering in existing populations."

Vaquero:

As I mentioned in another comment, "we should attempt to make the population smaller in the future to improve average welfare" is not a hypothetical danger. It has been official government policy affecting millions of people in horrible ways. Worrying about the opposite position is not a hard-nosed, pragmatic approach to thinking about population ethics and their practical implications.

Resident Contrarian:

So I agree! Like, yes, one-child was pretty bad. And some kind of genocide motivated by population-numbers-control would be bad! But if you look at my article and even the comment you are talking about, I'm not actually ever criticizing big populations alone; I'm criticizing big populations where you will accept terrible outcomes to get them because math taken to extremes under a particular set of assumptions commanded you so. I'm also fine with small populations! But I'm not fine with neutron-bombing an entire region to get it, because math taken to extremes under a particular set of instructions told me so.

To put it another way, if LT people were saying "Listen, this is part of the picture - we want to counteract 'don't have kids for the environment' people a bit by proposing that a new person's happiness - that you created a new person, who can now be happy - has some value too!" I don't think anyone would object. Ditto someone saying "You know, resources aren't infinite - if we remain planet-bound, eventually we are going to have to have a hard discussion about quality-of-life's balance against quantity of life."

But then people on both sides either propose or actually initiate policies that try to get the nominal goals of both those lines of thinking (more or less population) by any means necessary and accepting any costs necessary, and it gets bad.

The criticism this article tries to make is that you can make crazy implications-of-cold-unfeeling-math arguments, but the initial argument for why you should accept utilitarianism's general form of "good" was something like "Listen, I think we all have pretty good intuitions on what good is. Let's feed some hungry people and pet some lonely cats and stuff, and it will be OK." Whatever the Repugnant Conclusion *is* in a precise sense, it's pretty far from that. Ditto population-control genocide.

I don't think taking off the common-sense intuition limiter is good, obviously, but at least I want it to be clear that that's what's proposed.

asdf asdf:

Who is actually calling for more population by any means necessary at any costs? Where have you seen these calls? What policies have you seen proposed for more population by any means necessary?

Resident Contrarian:

I think this is a fair criticism - that was poorly worded. I can think of one-child on one side, but I can't really think of forced breeding camps on the other. Where I can think of proposed initiatives to increase breeding, it's stuff that's explicitly tied to *increasing average utility*, i.e. not "let's do this even though it would make everyone's lives worse on average".

Laura Creighton:

Chattel slavery?

Vaquero:

I don't think we actually disagree about any of this—I just think the so-called Repugnant Conclusion isn't the best target for making your point about consequentialism.

cdh:

Can one even be a long-termist and a consequentialist at the same time without being clairvoyant?

Person A: Why are you planning to do what you're planning to do?

Person B: Because the consequences will show that it was the best thing to do.

A: What were the consequences?

B: I don't know, they haven't happened yet.

A: Aha!

Monkyyy:

If nothing else they are affecting politics; and I don't think it's very hard to find politics that would commit a genocide if left unchecked.

I find your mere suggestion that maybe you should influence birth rates in the 3rd world, which you likely don't live in, horrifying.

Vaquero:

I'm saying that if we're talking about population ethics, it cashes out in implications for real populations. Specifically my point is that average utilitarianism meshes with antinatalism, which I regard as bad because I like people and think they should be able to have as many babies as they want. This is a real threat that has resulted in millions of forced sterilizations and abortions, whereas the threat of people actually acting on the "Repugnant Conclusion" is phantasmagorical.

Edwardoo:

>Specifically my point is that average utilitarianism meshes with antinatalism, which I regard as bad because I like people and think they should be able to have as many babies as they want.

Right, and I assume you believe that other people should be forced, literally forced at the threat of imprisonment (and death if one resists hard enough), to pay for the cost of these babies if the parents can't afford to care for them? In which case, it's deeply dishonest to frame this as "people should be able to do as they want". All laws are enforced with the threat of violence.

cdh:

If you want to procreate, be sure to ask Edwardoo's permission.

cdh:

I don't know what "population ethics" means, but it sounds incredibly Orwellian.

Monkyyy:

China has been swinging back and forth on encouraging children and banning them; I'm not sure what's going on with that, but even having the discussion on the table to apply pressure to go either way may make swings in population worse, and whatever consequences come with that.

Vaquero:

The discussion is already happening. Again, this is not a hypothetical and we should think carefully about what the right thing to do is.

Monkyyy:

In China and in universities, when Bill Gates forcibly sterilized people in Africa, during Nazi Germany and racist times in America.

In mainstream America and Germany it's not currently on the table; so it's possible to stop the conversation; and we should really be telling the universities to stop entertaining the ideas.

Doctor Hammer:

I think you brush against the real objection here: whether or not one is justified in "letting" people be born or not.

Utilitarianism (or at least utilitarians) were making the argument of what we should prefer, what we should choose, going right past the question of whether or not we have any business making that choice, individually or as a group. The fact they are arguing over whether we should pick 500 million super happy people or 500 billion fairly miserable people is the problem in and of itself. That isn't a decision an individual or group gets to make for anyone else.

Edwardoo:

It's not a matter of what is going to happen. It's a check on the validity of utilitarian principles. If these principles have negative implications, then that suggests there's something wrong with them.

Vaquero:

You can work towards a state of affairs without coercing anybody into it: for that matter, what should you choose yourself? I don't think this objection is very forceful.

Doctor Hammer:

What people choose for themselves is one thing. A way of thinking that causes people to conclude that chucking a baby into a furnace is for the greater good, that's something else.

I would rather have a system that helps people make better decisions for themselves, sure. It is really important that the system doesn't encourage them to force horrors on other people "for the greater good". Especially when the greater good is something extremely abstract, like relative happiness of various population sizes. The question of "can we be sure that is better? can we get there from here on purpose? should we be deciding this?" needs to come up.

Vaquero:

In pragmatic terms, yes, ruling out coercion and violence prevents a lot of horrible things from happening and is a good heuristic.

Vaquero:

I also think people steal a base by only imagining lives that are unremittingly crappy and make *us* sad when we think about them. What if instead, "barely worth living" means drudgery punctuated by moments of aching, transcendent beauty and goodness that make everything else worth it?

Resident Contrarian:

I want to acknowledge this as true. There are some values of "barely worth living" I'd probably accept as reasonable here; I'm not sure how we could avoid there being some, actually.

Edwardoo:

>by moments of aching, transcendent beauty and goodness that make everything else worth it?

How many people in the world, especially in the poorest countries, experience something like this?
