Existential Comics' proposal of a "freedom monster", an original contribution to philosophy, is a particularly illuminating variant of this concept, and suggests a fundamental conflict within Nozick's notion of freedom:
...Nozick's conception of freedom is based largely on contracts revolving around property rights. That is to say, freedom for Nozick is freedom to own and control not just your own personhood, but any property that you own. Property, like resources devoted to increasing "utility", is a finite resource that could theoretically be entirely owned by a single "Freedom Monster", or maybe "Justice Monster", but perhaps best named "Property Monster". Like the comic imagines, a monster that lived forever and bent its entire will to owning more and more land could, theoretically, through entirely voluntary transactions, own all of the land. If this situation arose, the monster would have infinite leverage in any negotiation that it entered into, because everyone on earth would starve unless they made a deal with the monster. From Nozick's point of view, because neither party was physically coerced, and the monster's property came from a history of free transactions, the monster's ownership of all its property is just and free. However, the situation that it leads to seems to be one that severely lacks freedom. The monster could make any rules it wanted, and everyone on earth would be more or less "freely" forced to oblige it. Most people would not describe this situation as one where humanity is more free....
Yeah, it's not surprising that you can blow out a weighting system if you... blow it out.
If we're modelling utility in situations where human life could be endangered, we should probably tie some amount of negative utility to risk of death (or inevitability of death, given the cookie example), preferably on a ramping scale that approaches infinity as death becomes more imminent.
Maybe I'm missing the forest for the trees here, maybe the point is that it's impossible to create a moral system for allocating resources in a world where they are finite. I see little utility in such an argument, however.
Unfortunately for the risk-of-death idea, because of the way that scales, it implies we should bring everyone to the point of almost-certain death tomorrow in order to avoid one certain death right now.
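To make that concrete, here's a toy calculation. The penalty function 1/(1-p) is an invented stand-in for any "ramping scale" that diverges as death becomes certain; the population numbers are made up too:

```python
# Toy illustration of why a death-risk penalty that diverges as p -> 1
# produces perverse trades. penalty() is a hypothetical ramping function,
# not anything proposed upthread specifically.

def penalty(p: float) -> float:
    """Disutility of exposing one person to death probability p."""
    return 1.0 / (1.0 - p) if p < 1.0 else float("inf")

# Option A: one person dies for certain.
option_a = penalty(1.0)

# Option B: a million people are each brought to 99.9% risk of death.
option_b = 1_000_000 * penalty(0.999)

print(option_a)  # inf
print(option_b)  # ~1e9: finite, so the scheme "prefers" B
```

Because any finite aggregate of sub-certain risks is smaller than the single infinite penalty, minimizing this measure endangers everyone to save anyone, which is the objection above.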
I mean yes, the need to reduce values, emotions, and human life down to numbers is a known and justified challenge to a utilitarian ethic. But in many instances, it is generally allowable to do so, such as in the instance of taxation.
In that scenario, the utility value for the utility monster would have to be enormous. We're talking something that would probably also result in human deaths if it were to go unhandled, like triggering a war or nuclear meltdown.
I feel the greater ethical flaw in utilitarianism is the case where a utility value scales positively with suffering. What can you do if the utility monster derives positive utility value from others dying, as opposed to remaining linear? Utilitarian measures simply don't fare well when the actors are inherently antagonistic to one another.
This is why the strict core of microeconomics never allows one to make interpersonal utility comparisons. As soon as you start to try and figure out "best" outcomes across groups (beyond ability to pay), it gets really complicated, really quickly. This is the domain of fields like Social Choice theory.
> There's pretty good evidence for logarithmic marginal utility of money
1) This is, at best, controversial. Measuring cardinal utility is an unsolved, perhaps unsolvable, problem. The kinds of experiments that lead to these claims are things like "what more would you buy if you had $X" or "how much, out of 10, would your happiness increase if you had $X more". Needless to say, these are full of problems with self-reporting, perception, lack of skin in the game, etc.
The only utility-adjacent concept that can be tested rigorously is preference ordering in situations where the subject actually bears the costs and benefits of the choice. This makes most experiments you'd want to do very costly, and potentially unethical.
2) Even if we accept that everyone has a logarithmic curve describing their own utility for money, this doesn't mean you can make intersubjective comparisons. To give an analogy, it's a "units" problem. Bob's utility is measured in Bobutils, and Alice's utility is measured in Aliceutils. Even if Bobutils(dollars) and Aliceutils(dollars) are both logarithmic, this doesn't get you any closer to aggregating their utility or evaluating tradeoffs (can't add/subtract things with different units).
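A quick sketch of that units problem, with a made-up scale factor standing in for the unknowable "unit conversion" between people:

```python
import math

# Hypothetical cardinal utility curves: both logarithmic in dollars,
# but expressed in incomparable units (Bobutils vs Aliceutils).
# The factor 7.0 is arbitrary; any positive number works.
def bob_utils(dollars: float) -> float:
    return math.log(dollars)          # measured in Bobutils

def alice_utils(dollars: float) -> float:
    return 7.0 * math.log(dollars)    # measured in Aliceutils

# Ordinally, the curves agree: both people prefer more money to less.
assert bob_utils(200) > bob_utils(100)
assert alice_utils(200) > alice_utils(100)

# But the "aggregate" comparison of giving $100 to Bob vs Alice is
# driven entirely by the arbitrary scale factor, so it carries no
# information.
gain_bob = bob_utils(200) - bob_utils(100)
gain_alice = alice_utils(200) - alice_utils(100)
print(gain_alice / gain_bob)  # ~7.0, purely an artifact of the chosen units
```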
Edit: playing minor devil's advocate to myself on point 1, it's certainly true that there's good evidence of diminishing marginal utility of money (or of anything). I was specifically responding to the kinds of experiments that lead to the claim of a logarithmic form, which I consider to have many problems.
I may have misinterpreted utilitarianism, but I thought it was egalitarian because everyone's utility values were the same. Am I wrong?
Meaning the utility value of a cookie is the same for everyone, and the utility value of your first $100 is the same for everyone, and the utility value of anyone's second $100 is lesser than the first, etc, etc.
Obviously someone thought this Utility Monster idea warranted a wikipedia entry, so I assume there's some point in it, but it does nothing to break my head canon of what utilitarianism is.
It's useful to understand a concept in the context in which it was formulated before attacking it. Bentham's notion was intended as egalitarian. This does not mean it's unflawed, however.
Right, but I assume insulin has the same utility regardless of which diabetic we're talking about?
And there are people who want to live in the woods without interacting with people, with 0 use of $100. And people who don't like cookies at all, etc, etc.
I view it more from a statistical point of view. Your model doesn't have to consider individual preferences (or needs) to increase utility (it would have to in order to literally maximize it).
We can likely assume that out of 1000 people, 100 insulin pens would have more utility being distributed to the ~10% diabetics, rather than the same diabetic who "wants it more".
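A back-of-the-envelope version of that, with entirely invented utility numbers (the 100x figure is an assumption for illustration):

```python
# Toy allocation sketch: 1000 people, 100 pens, ~10% diabetic,
# and a pen assumed to be worth 100x more to a diabetic than to
# anyone else. All numbers invented.
people = ["diabetic"] * 100 + ["non_diabetic"] * 900
utility_per_pen = {"diabetic": 100.0, "non_diabetic": 1.0}
PENS = 100

# Utilitarian allocation: each pen goes to whoever values it most.
ranked = sorted(people, key=lambda p: utility_per_pen[p], reverse=True)
allocation = ranked[:PENS]
total_utility = sum(utility_per_pen[p] for p in allocation)

# All 100 pens land on the 100 diabetics, regardless of who
# "wants it more".
assert all(p == "diabetic" for p in allocation)
```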
In point of practicality - insulin does have differing utility for each diabetic.
A note I meant to put in the original comment is that utility is considered equally, not equal. So yes, if you have 100 insulin pens, they get distributed according to need.
The point of the original Wikipedia article is that it is almost always possible to construct a "utility monster" that takes this need-based reasoning to the extreme and sucks away all the utility.
In the diabetic/insulin-pen scenario this might be done by declaring that the monster is a giant of epic proportions, and that failure to take all 100 pens means death for it, whereas for the others it 'only' means severe complications.
It should be noted that the point at which insulin next becomes available alters the practicalities but not the gist; if people will die before more insulin arrives, our monster simply needs to claim more utility than those who would die, etc.
One way to frame the egalitarianism of utilitarianism is as a part of the equity vs equality debate; considering everyone equally but not as equals is fairer.
>And there are people who want to live in the woods without interacting with people, with 0 use of $100. And people who don't like cookies at all, etc,
Whether you're talking about utilitarianism or microeconomics, your utility function has to take those things into account.
If you use a statistical average model to distribute resources you'd end up with a very unhappy population.
Did you never do the experiment in school where you randomly distribute candy, calculate the class average utility score, then allow people to trade and calculate it again?
It's always much higher after trading. Trade creating value is one of the central tenets of economics, and it's based on the concept of utility heterogeneity.
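That classroom experiment is easy to simulate. A rough sketch with random made-up preferences, assuming kids only accept swaps that leave both parties strictly better off:

```python
import random

random.seed(42)
N_KIDS, N_CANDIES, N_TYPES = 10, 40, 4

# Each kid values each candy type differently (utility heterogeneity).
prefs = [[random.uniform(0, 10) for _ in range(N_TYPES)] for _ in range(N_KIDS)]

# Random initial distribution of candies (by type) to kids.
holdings = [[] for _ in range(N_KIDS)]
for _ in range(N_CANDIES):
    holdings[random.randrange(N_KIDS)].append(random.randrange(N_TYPES))

def total_utility() -> float:
    return sum(prefs[k][c] for k in range(N_KIDS) for c in holdings[k])

before = total_utility()

# Keep executing any 1-for-1 swap that makes both parties strictly
# better off, until no such swap remains.
improved = True
while improved:
    improved = False
    for a in range(N_KIDS):
        for b in range(N_KIDS):
            if a == b:
                continue
            for i, ca in enumerate(holdings[a]):
                for j, cb in enumerate(holdings[b]):
                    if prefs[a][cb] > prefs[a][ca] and prefs[b][ca] > prefs[b][cb]:
                        holdings[a][i], holdings[b][j] = cb, ca
                        improved = True

after = total_utility()
assert after >= before  # voluntary trade never lowers total utility here
```

Every executed swap strictly raises both traders' utility, so the loop terminates and the total can only go up, which is the "trade creates value" point.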
> But could you ever do that trading if one kid in class had 95% of all the candy from the off-set?
Hum... Yes. That's still economics 101. Everybody will still be better off after the trade. Also, it's very likely (though still dependent on the candy distribution) that everybody will prefer to trade with that one kid instead of with everybody else.
I don't know that much about diabetes, but I'm sure that there are people for whom insulin is more of an urgent need than the average diabetic. And if not diabetes, there's plenty of conditions where people have it in varying degrees of severity, where lack of treatment is life threatening for some and not a huge deal for others.
If you had, for example, 10 treatments for a disease, 10 severe cases, and 10 mild cases, I would assume that utilitarianism would posit that the 10 most severe cases should get it.
Insulin is an urgent need for any T1 diabetic. They could maybe make it a week depending on what they ate, but they could also be dead in a day. There isn't really such a thing as mild or severe Type 1, only well treated and poorly treated.
You could maybe say "insulin is an urgent need for a diabetic who just ate 3 cookies, but not as urgent for one who just ate a plate of vegetables". But both will need some amount of insulin within 24 hours.
Type 2 is a different story, and not all Type 2's need insulin.
But the flaw here is extrapolating this utility difference to its limit, so that something like a cookie becomes exponentially more valuable to one person than to their peers. The value of insulin is just that: life and death. But it certainly doesn't equate to a million lives or deaths.
Suppose a psychedelic drug that is incredibly difficult to synthesize makes someone inhumanly good at problem solving, but only for a few minutes.
Further suppose one of the easiest ways to create this drug is to harvest human brains. They have to be extracted while the donor is still alive, though, as the delicate compounds coming from the brain become unusable rapidly after death.
According to utilitarianism, if that person uses their insane problem solving abilities to increase utility for others, and they're good enough at it with this drug, then we should throw the people who generate the least utility for others (e.g. the poor and disadvantaged, who usually have few resources to offer and little training or skill) into the brain ripper so that the remaining people can get the gains.
This example may seem absurd, but many others like it can be constructed. Spend a few minutes trying and you'll come up with some. This one took me thirty seconds, if that.
See "The Ones Who Walk Away From Omelas" by Ursula K. Le Guin for the canonical illustration of this problem.
Not so absurd: Do you prefer that old people can die of Covid-19 (a few are tortured) or that public life is locked down (millions are mildly inconvenienced)?
Although the purpose of the lockdowns isn't to prevent old people dying, per se. It's to buy time and prevent system collapse while the government builds out systems to track and quarantine individuals who may be infected.
The lockdown can be eased once we have a better option.
You could argue that there was plenty of time to build such systems before COVID-19 made it to your country, and you'd probably be right. You could also argue that maybe your government hasn't been on the ball with the second half of this story, and you might be right, but that's neither here nor there.
We should make the people who discovered the drug take it and use its effects to figure out a way to synthesize more of it without shredding living human brains. Then we can get to the business of other gains.
For the first run we should just ask for volunteers. No need to pre-optimize, maybe some of those poor and disadvantaged people will end up contributing more on NZT.
These reductio ad absurdum arguments aren't very absurd, because of this clause: "if that person uses their insane problem solving abilities to increase utility for others, and they're good enough at it". That's a big if, in order to offset the disutility of harvesting brains, but if that is the premise, the conclusion is unsurprising.
To many of us, it seems obvious that any moral system which can lead to these conclusions is not a complete moral system, in much the same sense that Newtonian physics is not a complete physics system.
I imagine that gulf of perception is what actually needs to be crossed somehow in the debate between hardcore utilitarians and those of us who are not.
> any moral system which can lead to these conclusions is not a complete moral system
But "these conclusions" are unobjectionable, because they include the premise "things are net better". Because "net better" is imprecisely defined, I can substitute as strong a condition as desired up to and including impossible conditions. Which means they can subsume non-utilitarian moralities (I believe non-utilitarians are just utilitarians with binary step functions for certain utilities).
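For what it's worth, that "binary step function" reading can be sketched directly. The step-at-torture rule and all the numbers are invented, just to show the shape of the claim:

```python
# A "deontological" rule modeled as a utilitarian utility with a
# binary step: any torture at all sends utility to -infinity, so no
# amount of aggregate happiness can offset it. Toy numbers throughout.

def omelas_utility(happiness: list[float], anyone_tortured: bool) -> float:
    if anyone_tortured:
        return float("-inf")  # the step: an absolute prohibition
    return sum(happiness)

city_bliss = [100.0] * 10_000  # a very happy Omelas

# A single subsistence-level outsider beats the whole blissful city
# under this utility, because the city's bliss rests on torture.
assert omelas_utility([0.0], anyone_tortured=False) > omelas_utility(city_bliss, anyone_tortured=True)
```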
> I hope I would walk away, even at the price of slow starvation.
Because to you, Omelas is not net better. Which is perfectly understandable. But what basis do you have for rejecting Omelas, other than that you believe it to be worse?
My belief that there is an external standard of right and wrong that has little to do with utility, and that I know what that standard says about torturing people for your own gain.
It's not about utility, it's that it's wrong to hate and harm a few people. The "good" of the many does not have one iota of influence over the wrongness of that action.
To quote the book that defines so much of my moral code, "What does it profit a man if he gain the whole world but loses his soul?"
Side note: describing my worldview with language from yours does not mean yours subsumes mine.
Utilitarianism does not prescribe a specific utility function. Happiness is only used as a toy example that quickly breaks down if scrutinized (otherwise everyone would be getting free morphine injections).
Utility functions can and do vary between individuals. For one, children can become rhetorical utility monsters (think of the children!), and human lives can also become a utility monster (people often balk at putting price tags on lives saved). And consider how many animals we sacrifice in the name of human pleasure.
I think happiness would become too hard to define here, rather than breaking down.
I don't believe free morphine injections would provide a lot of long term happiness, additionally it wouldn't be a sustainable way to live economically, leading to poverty-induced unhappiness.
Children do deserve more happiness: they're more vulnerable, they have longer lifespans ahead of them, investing in children makes sense for humanity, and a parent's well-being is often connected to their children's well-being. Still, a child's reasonable utility demand is not _that_ much greater that we'd call them a monster; it's within limits.
Utilitarianism's problem is that it's hard to quantify; the logical basis seems pretty solid to me.
> Children do deserve more happiness, because they're more vulnerable
That does not sound like a utilitarian argument. Are you saying that more happiness should be allocated to children and that vulnerability entitles one to more happiness? Conversely, does that mean non-vulnerable people require less happiness?
> Still, a child's logical utility demand is not _that_ much more to call them a monster, it's within limits.
Well, logical demand is not the concern here but irrational "think of the children" thought-terminating cliches used to justify policy changes at the expense of others. The children are held up as purported utility monsters. I.e. they are rhetorically treated as if they were true utility monsters.
That was in fact the original idea, though generally pitched as equating utility preferences between social classes -- the utility of a noble and a commoner were considered equivalent, a radical idea at the time.
Clearly there are difficulties with the notion, and direct comparisons between individuals' preferences or utility functions are at best difficult, if not impossible, to measure. But unequal treatment by social class was the specific error Bentham was attempting to correct.
This aspect of Bentham's philosophy is specifically mentioned in Nigel Warburton's A Little History of Philosophy (2011), chapter 21.
It seems to me that the Utility Monster argument has a critical flaw: it actually does a great job of describing the way we distribute utility in reality.
We can't properly rationalise with a being that derives 100 times more utility than humans any better than we can imagine a fourth spatial dimension, so instead, let's go in the other direction.
What if humans were the beings that derived 100 times the utility? Would we then be able to justify consuming 100 times the resources?
Well, we can and we do. Case in point: literally any other animal.
That leaves a choice: either one accepts that being a utility monster is actually ok, or refrains from causing harm to any conscious being, human or otherwise.
Consider a population of mostly healthy people but one person has a rare cancer. There is a small supply of drugs that treat that cancer. The vast majority of the population would derive little utility from being allocated this drug (there is some small chance that they may get the same cancer in the future, so having it available does have value, just not much) whereas the person who has that cancer derives a much larger amount of utility from the resource. The cancer patient is in this case a utility monster and giving it cancer medicine is feeding the utility monster.
Consider even further the case where most of the population is middle class but some fraction is living in absolute poverty. A single dollar provides little utility to the middle class people but a tremendous amount of utility to the extremely poor. Taxing the population some small amount and allocating it to the poor individuals would not only be feeding a utility monster but it would be directly decreasing the utility of everyone else to do so.
Let's consider a third scenario: a researcher is trying to cure some terminal disease but requires test subjects who may die as various possible drugs are tried. The utility gain from curing a disease once and for all would be immense compared to the life of one or even a few individuals. Carrying out this research is again feeding a utility monster, and in this case literally sending people to their deaths to do so.
While people might disagree on how we'd go about feeding these monsters in practice, society has widely accepted at least in principle giving medicine to the sick, providing welfare to poor, and taking calculated risks in the name of progress.
The utility monster only sounds appealing as an ad absurdum argument because the existence of such a utility monster presupposes such an absurd situation. One can easily imagine that a utility monster which derives more utility from every resource than the whole of humanity combined could exist, but actually imagining such a monster is much harder. It feels wrong to support sacrificing the whole of humanity to one being because I can not truly comprehend a situation where one being could derive enough utility to justify such a sacrifice. But in this hypothetical scenario where the sacrifice actually was justified, I would by definition be able to imagine it, and thus I would apply the same logic I do to the utility monsters which I can imagine, and there is no reason to believe I would reach a different conclusion. Thus I would support feeding the utility monster if I were ever in such a scenario.
I don't think it does. You still have the issue of the unhappy monster: once everyone else has hit their "max" utility, you would spend all your resources on the monster rather than improving things for the rest of the group, because they are now deemed unimprovable.
No. As long as one entity can derive exponentially more utility than another, utility monsters are possible; such a monster cannot be satisfied with only polynomially many sacrifices.
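A toy version of the exponential argument, with invented numbers (a doubling marginal-utility curve for the monster, constant utility per unit for each human):

```python
# If the monster's marginal utility from its n-th unit of resource
# grows exponentially while each human's loss is constant, then past
# a small n, no population size outweighs the monster. Both curves
# here are invented for illustration.

POPULATION = 8_000_000_000          # roughly everyone on Earth
HUMAN_UTILITY_PER_UNIT = 1.0

def monster_marginal_utility(n: int) -> float:
    return 2.0 ** n                  # hypothetical exponential curve

# Find the first unit where feeding the monster beats the combined
# valuation of the entire human population.
n = 0
while monster_marginal_utility(n) <= POPULATION * HUMAN_UTILITY_PER_UNIT:
    n += 1
print(n)  # 33, since 2**33 > 8e9 >= 2**32
```

After just 33 units, the monster outbids all of humanity combined, and it only gets worse from there; that's the sense in which polynomially many sacrifices can never satisfy it.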
Maybe I'm missing something obvious but it seems surprising this argument would gain any traction. It's pretty thin...
Even before you get into debates between average and total utilitarianism, even for total/maximum utility, utilitarianism is predicated on the idea that all beings are equally considered, from which the supposed egalitarianism stems. The whole premise is that utility monsters don't exist.
This is how most welfare, insurance, medical, and education systems work today. The opposite is basic income. Somehow I see most people supporting both.
Not in and of themselves, no. They can be, but it isn’t a necessary state.
Even if you assume their wealth is stolen and they do nothing for job creation etc., there is no reason to assume that their gains are correlated with anyone else’s losses.
To be a utility monster, you have to say that the suffering they cause is still a net win specifically because of their pleasure.
I suppose you could say it about budget airlines who nickel-and-dime you for everything and take you to an airport that is “near” your destination in the way that Dublin is “near” Belfast — lots of small irritations causing a small number of people to be very happy.
Billionaires actually present the opposite problem. People's happiness tends to revert to a mean as they adapt to whatever situation they are in [1]. If somebody wins the lottery, or their startup makes a big win, then they will quickly adjust to their new normal. It takes more and more resources to add a single unit of happiness.
If you pick a random person on the street and give them a million dollars, it would be life-changing: paying off all debt, moving to a new location, buying a house, setting up investments to ensure that food and the mortgage are paid in perpetuity. If you give a million dollars to a billionaire, it is a rounding error on their total assets and does not substantively change their lifestyle. Conversely, taking a million dollars, or even nine hundred million dollars, from a billionaire will not significantly affect their lifestyle, and that amount could improve the lives of many others. This is one reason why taxation is ethical from a utilitarian perspective.
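Under the (contested, per upthread) assumption of log utility of wealth, and with invented starting figures, the arithmetic looks like this:

```python
import math

# Hedged illustration: assume log utility of wealth, a billionaire,
# and a recipient with $10k in assets. All figures invented.

def utility(wealth: float) -> float:
    return math.log(wealth)

billionaire, person = 1_000_000_000.0, 10_000.0
transfer = 1_000_000.0

loss = utility(billionaire) - utility(billionaire - transfer)
gain = utility(person + transfer) - utility(person)

print(loss)  # ~0.001 utils: a rounding error for the billionaire
print(gain)  # ~4.6 utils: life-changing for the recipient
assert gain > loss
```

The same transfer shows up as log(1e9 / 999e6) on one side and log(101) on the other, which is the diminishing-marginal-utility point in one line of arithmetic.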
I'd be interested to hear the conflict that you describe between utilitarianism and morals. From my perspective, the two are entirely orthogonal. Morals define what one should optimize (e.g. minimize the number of unnecessary deaths, improve human comfort), and utilitarianism defines how to achieve those goals (for each action, determine how much it achieves those goals).
Some naughty boy walking over a pond distributing his blood without a medical assay is no basis for a system of ethics. Supreme moral authority derives from a mandate from the masses, not from some farcical reverse-vampire ceremony.
More seriously, the OT Ten Commandments come with a whole bunch of asterisks — Thou Shalt Not Kill (except for war and executing criminals) — and even in the NT, when Christianity first popped into existence a number of the new converts immediately committed suicide right after baptism so that they would get into heaven with all their sins absolved and no chance of, e.g., dying immediately after an unwanted adultery boner [Matt 5:28].
http://existentialcomics.com/comic/259