Fairness
TLDR: Fairness is about the lopsidedness of expected rewards, rather than about unbiased rules or equality of opportunity.
Prerequisites: None
“When everyone is given an equal opportunity, treated without bias, and decisions are made without favoritism or prejudice, this concept embodies the principle of ensuring balance and impartiality in actions, decisions, and treatment. It means everyone has the same chance, regardless of their background or circumstances.”
- ChatGPT-4, defining “fairness” (October, 2023)
Notions of fairness are introduced to most children at an early age, particularly around games and toys. A game is typically considered unfair if the rules are prejudiced in one party’s favor. Use of a toy or playground equipment that doesn’t involve sharing it for some of the time is often unfair. And I think it’s pretty common for the story of what fairness is to stop there, with little expansion, even in higher education. If one goes hunting for academic inspection of fairness, it’s often equated with the absence of bias, or treated as a synonym for “justice”. Fair enough! We certainly need a word for unbiased/impartial and perhaps “fair” should be that word. And synonyms are fine and good.
I have a confession to make: for most of my life I misunderstood fairness, and I believed the definition I just gave. I equated fairness with unbiasedness and impartiality. I formed a story early in life that the adults around me were really into the rules of “games” being neutral to the players, and that if the rules weren’t neutral to you or a group you cared about, you needed to get upset and call it out for being “unfair.” This, of course, extended beyond literal games, and applied to things in life like the allocation of taxes, representation in media, or the ability to get high-paying jobs.
And, as a result of my mistake, I saw much of the world as hypocritical and irrational when it came to discussions of fairness. I repeatedly described myself as “not really caring about fairness” and criticized intellectual and social movements for being wrongheaded on the topic.
I now realize there’s a mathematical depth to fairness that only superficially resembles unbiasedness, and that I was the wrongheaded one.
But my younger-self’s error isn’t unique. Many others I’ve spoken to on the topic (like ChatGPT in the opening quote) use language of bias, impartiality, and equality first and foremost, without reference to anything more. In a better world the mathematical underpinnings of fairness would be more widely taught (and more widely known) and I expect this would lead to societies with a greater ability to negotiate, coordinate, and protect the disadvantaged.
But let’s start with my error. Let’s talk about impartiality and equality of opportunity as it applies to children playing with toys and games.
Bias Makes Games Fun
Is the game of chess biased? Certainly. The white player moves first, and has a significant advantage over black. But this is remedied easily enough: simply flip a coin to determine who plays which side. Now things are more unbiased, but are they totally unbiased? No.
We can see this remaining bias in the way that some demographics are disproportionately likely to win against others. For instance, 10-year-olds are significantly better at chess than 7-year-olds (on average). Russians, similarly, are better than Egyptians (on average). More directly, people in the 80th percentile of general intelligence will be better at chess than those in the 20th percentile (on average). And even more directly, people who have spent at least twenty hours studying chess will be more likely to win (on average) than those who haven’t.
It is the fact that different players are more or less likely to win that makes chess, and other games, fun. The most impartial, unbiased “games” are those like Candyland, which essentially amounts to randomly drawing cards to see who wins. And it’s not just skill differentials that come into play, either. Physical contests and contests of willpower can also be thrilling. But by their nature, all good competitions must be biased in favor of brains, skill, knowledge, strength, speed, determination, and/or teamwork.
Note that I’m not trying to claim that this makes these games unfair. Fairness is distinct from equality of outcome or equality of opportunity. My point is just that it is normal and good for games to favor some kinds of people winning over others (and thus be biased).
What about a footrace where someone gets a head-start, or a game where only one player knows the rules? Are these fair? In most contexts where children are playing on a playground, they aren’t. It’s unfair if one player in the game has an advantage like that. And when framed along these lines, I have empathy for my young self, who mistakenly concluded that the unfairness lay in the advantage, rather than in the “like that.” But I now claim that there are fair races where one player gets a head-start or knows more of the rules, and that the key to seeing them as fair is in how those advantages are acquired, and how the players feel about them.
Cake Cutting
Suppose we’re cutting a cake to divide between two people: Alice and Bob. The cake is 50% vanilla and 50% chocolate, and as it happens, Alice loves vanilla and is lukewarm about chocolate, while Bob has the exact opposite preference. There’s an algorithm as old as civilization that ensures that the two can decide on a way of splitting the cake such that neither would prefer the other’s piece (“envy free”): Alice cuts the cake into two pieces, Bob selects his piece, and then Alice gets the remaining piece.
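Divide-and-choose is simple enough to sketch in code. Here’s a minimal model of the procedure; the flavor weights and the two-flavor representation of the cake are illustrative assumptions, not anything from the original algorithm:

```python
# A minimal sketch of divide-and-choose. A piece of cake is modeled as
# a (chocolate, vanilla) tuple of amounts; a player's value for a piece
# is a weighted sum over flavors. All numbers here are illustrative.

def value(piece, weights):
    """How much a player values a (chocolate, vanilla) piece."""
    chocolate, vanilla = piece
    return weights["chocolate"] * chocolate + weights["vanilla"] * vanilla

def divide_and_choose(pieces, chooser_weights):
    """The cutter proposes two pieces; the chooser takes their favorite."""
    piece_a, piece_b = pieces
    if value(piece_a, chooser_weights) >= value(piece_b, chooser_weights):
        return {"chooser": piece_a, "cutter": piece_b}
    return {"chooser": piece_b, "cutter": piece_a}

# The whole cake is 0.5 chocolate + 0.5 vanilla. Alice (the cutter),
# knowing nothing about Bob, cuts two identical mixed pieces, so she
# can't envy whichever piece Bob leaves her.
symmetric_cut = ((0.25, 0.25), (0.25, 0.25))
bob_weights = {"chocolate": 2.0, "vanilla": 1.0}  # Bob prefers chocolate
allocation = divide_and_choose(symmetric_cut, bob_weights)
```

Whatever Alice’s cut, Bob walks away with his favorite piece, so the guarantee of envy-freeness rests entirely on Alice cutting pieces she values equally.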
It’s tempting to instinctually describe divide-and-choose as a naturally fair process. After all, each party gets “the best slice” according to their own subjective preferences. And indeed, if neither party knows anything about the other, it is fair. Alice, not knowing Bob’s tastes, must divide the cake into two pieces that are exactly equal according to her values: half chocolate and half vanilla. If she doesn’t make this symmetric cut, there’s a high chance that Bob will take the “better” piece and Alice will regret her choice. Thus by nature, each party will get a slice that’s half-yummy, according to their tastes. This is fair.
But it’s not good. (Goodness in this sense is sometimes called “efficiency” in the literature.)
Let’s suppose that Alice and Bob know each other, and in particular, Alice knows that Bob likes chocolate more than vanilla. She can then cut the cake into a chocolate half and a vanilla half. They then each get a large slice that’s to their tastes; an outcome that’s both fair and good (following the utilitarian rule).
But what if Alice slices the cake such that there’s a piece with 40% of the whole, composed entirely of chocolate, and another that’s 60%, which has all the vanilla and the remaining bit of chocolate? Perhaps Bob would rather have the smaller piece than the large piece, given his taste for chocolate. Thus this cut is envy-free. But I would argue it’s very unfair. Alice is exploiting her knowledge of Bob’s tastes to get more of the cake.
Ultimata
Suppose that Bob is in a position to, upon seeing Alice’s cut, throw all the cake on the floor and ruin it for everyone. This is the kind of situation that commonly goes by the name The Ultimatum Game in the literature. Alice proposes a split of some resource (like cake) and Bob can choose whether to accept her split or to ruin everything for everyone (at least within the bounds of the interaction).
Though this is a distilled example, we see situations like this show up all the time in collaborative relationships where parties are (at least a little) self-interested, such as trading goods, making business deals, or sharing homes and offices.
As another example, suppose Dave is selling an old car to Eve. They live in a small community where Eve is realistically the only potential buyer, and both of them know the other’s preferences pretty well. Outside of selling it to Eve, the car is worthless to Dave: he’s indifferent between having it and not having it. We say “Dave values the car at $0” (if the sale doesn’t happen). Eve likes the car, however, and would be indifferent between getting the car and getting $500.
If the car is sold for any price between $1 and $499, both people will be richer than before the trade, according to their own values. When they decide on a price between these two bounds, they’re essentially deciding on how to split the pile of dollars between them. If either party walks away, it’s as though they’re throwing all that value on the ground and deciding to be worse-off. Naively, if the car was sold for $1 it would be good for both people; Eve gets the car (which she valued at $500) for extremely cheap, and Dave (who saw the car as worthless outside of the trade) gets a dollar, which has value. But intuitively it should be obvious that this price is extremely unfair to Dave.
And if so, what’s the car’s fair sale price?
I think it’s $250, since each party gains the same amount of value by making the sale happen. Each party presumably put in about the same amount of work to make the trade happen, and it seems right to divide the value equally between them.
Now let’s say that Dave knows he can sell the car to Carol for $400, if he doesn’t sell it to Eve. Now, outside of the deal, it’s effectively worth $400 to him. I would say that this changes the fair price to $450 using the same logic. The resource to be divided is the gain in value by making that trade in particular, which is ($500 - $400 =) $100.
What if Eve had to put in more resources than Dave, to make the sale happen, such as by buying a $10 bus ticket? Now Eve’s effective gain is only $490 of value, meaning the sale price should drop by $5 to stay fair. Or at least, it should drop by $5 if we only count the bus ticket’s price; Eve probably also put in more time than Dave, and the price should be lower still, to compensate for the time cost. (Notice how introducing each factor and how it affects the fair price resembles haggling!)
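The arithmetic in the last few paragraphs can be captured in one small function. This is just a sketch of the section’s reasoning, with parameter names of my own choosing:

```python
def fair_price(buyer_value, seller_outside_value=0, buyer_cost=0):
    """Split the trade's surplus equally between buyer and seller.

    buyer_value: the price at which the buyer is indifferent to the trade.
    seller_outside_value: the seller's best alternative (e.g. another buyer).
    buyer_cost: extra costs the buyer pays to make the trade happen.
    """
    surplus = buyer_value - seller_outside_value - buyer_cost
    # Each party should gain surplus / 2; the price achieving this gives
    # the seller their outside value plus half the surplus.
    return seller_outside_value + surplus / 2

fair_price(500)            # basic sale: 250.0
fair_price(500, 400)       # Carol as a backup buyer: 450.0
fair_price(500, 400, 10)   # plus Eve's $10 bus ticket: 445.0
```

Note that in the last case Eve pays $445 plus the $10 ticket, leaving her a $45 gain, exactly matching Dave’s $45 gain over his $400 outside option.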
What if there’s a third-party, Fred, who helped the two connect? Does he get a cut? Naturally, yes, that would be fair, since the trade couldn’t happen without his involvement. But what if there was some chance of Dave and Eve finding each other and making the trade without Fred’s help?
The full theory involves calculating the Shapley Value, which is a mathematically robust procedure for determining how much each party contributes to the surplus that’s being divided. But for relatively simple cases we don’t need the general solution. The underlying principle is basically that a fair price is one where each party is rewarded according to their contribution to the trade. In a basic trade, each party contributes equally, and thus should be rewarded equally.
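For concreteness, here’s a brute-force Shapley calculation that averages each player’s marginal contribution over every order in which the players could join. The characteristic function below is a made-up rendering of the Dave/Eve/Fred story: the full trio realizes the whole $100 surplus, while Dave and Eve alone find each other only half the time (an assumed 50%, so $50 in expectation):

```python
from itertools import permutations

def shapley_values(players, coalition_value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for player in order:
            before = coalition_value(frozenset(coalition))
            coalition.add(player)
            after = coalition_value(frozenset(coalition))
            totals[player] += after - before
    return {p: totals[p] / len(orderings) for p in players}

def trade_surplus(coalition):
    # Assumed numbers: all three together realize the whole $100 surplus;
    # Dave and Eve without Fred find each other only half the time.
    if coalition == {"Dave", "Eve", "Fred"}:
        return 100.0
    if coalition == {"Dave", "Eve"}:
        return 50.0
    return 0.0

shapley_values(["Dave", "Eve", "Fred"], trade_surplus)
# → Dave and Eve each get about $41.67; Fred gets about $16.67
```

Under these assumptions Fred’s fair cut is about $16.67, and it shrinks toward zero as Dave and Eve become more likely to find each other without him.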
And now we can use this same frame to understand what makes playground experiences fair or unfair. A fair game is one where each player expects to be rewarded with the same amount of fun. (Note: actual fun may vary; what matters is the anticipation going in.) A race where one person gets a head-start can only be fair if each racer believes the head-start divides the expected fun at least as evenly as an even start would. And sharing roughly equal time on the swings is usually the fair move because it rewards each child evenly for peacefully coexisting.
Hidden Values
Wait. Hold on. Something isn’t right.
It can’t possibly be (universally) fair to share time on the swing-set equally. Some people get more joy per minute on the swings! If George gets $2/minute of value from swinging and Heidi gets $1/minute, wouldn’t that mean the fair allocation would give twice as much time to Heidi as to George?
In a certain sense, yes, that would be more fair! The perfectly fair, ideal world gives equal (expected) value to the children who visit the playground, not equal time on the swings. Similarly, if there’s someone who naturally gets less joy out of playing a game, it’s fair to bend the game more to meet their preferences. We see this logic come into play with people with disabilities, where there’s a common intuition that we should give them disproportionate affordances and opportunities to correct for an imbalance of how good things are for them by default.
But there are problems going down this road. The first is an obvious decrease in efficiency.
To illustrate this, consider farmers named John and Kathy who drew straws and randomly found themselves with two plots of land. John’s land is naturally very fertile and Kathy’s is rocky and bad. There’s a communal tractor that they share, which can work their lands to improve yield. Every day of tractor use creates $2k of value for John, while Kathy only gets $1k of value from a day’s use. The naively fair way to divide tractor time would be to let Kathy use it twice as much, meaning that each of them benefits the same amount from the tractor. But this means that the less useful the tractor is to Kathy, the more she gets it!
The smart way to allocate things is to let John use the tractor 100% of the time. Assuming no diminishing returns, this means he gets to use it three times as much, and produce three times as much money selling his crops at the market. He can then give half of that profit to Kathy and they’ll both be better off while keeping things totally fair. Once again, exchanging money saves the day.
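A quick back-of-the-envelope comparison of the two schemes, over a three-day window, using the rates from the example:

```python
# Rates from the example: John makes $2k per tractor-day, Kathy $1k.
JOHN_RATE, KATHY_RATE = 2, 1  # $k of value per tractor-day

# "Naively fair" split over 3 days: Kathy gets 2 days for John's 1,
# so each gains the same $2k, but total value created is only $4k.
naive_john = 1 * JOHN_RATE     # $2k
naive_kathy = 2 * KATHY_RATE   # $2k
naive_total = naive_john + naive_kathy  # $4k

# Efficient split with a side payment: John farms all 3 days ($6k)
# and hands half the proceeds to Kathy.
efficient_total = 3 * JOHN_RATE  # $6k
each_share = efficient_total / 2  # $3k apiece — better for both
```

Equal shares of a $6k pie beat equal shares of a $4k pie, which is the whole argument for letting the tractor go where it’s most productive and settling up in money afterward.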
Alas, clever solutions like this are hard to set up with kids on a playground. If George is really enjoying the swings, having him pay Heidi for the privilege of not sharing them is hard to imagine. We can perhaps imagine trading comparable goods (“I’ll let you hog the swings if I can hog the sandbox.”) but this is often tricky to arrange in practice, especially when there’s a sense of competition rather than cooperation.
The second major problem with tracking fair divisions of value rather than fair divisions of time or some other, legible resource is that we introduce an incentive to lie about one’s internal values. Suppose we decide to let Heidi use the swings more, since she needs more time with them to get the same amount of joy. A lesson will soon be learned: tell nobody what you truly value, lest they take it away from you in the name of “fairness.”
Dealing with Defectors
This last problem is extremely real. We often see it show up in haggling and other negotiations. Consider, again, the sale of Dave’s old car to Eve. Suppose that Eve lies about how much she cares about the car, claiming that she only values it at $420. If Dave is trying to be fair, he’ll settle on a lower price than if she’d been honest. Likewise, if Dave falsely claims that the car is worth $460 to him, he can get a higher price.
Part of what makes this problem so difficult is that while lying about whether the car runs is something that can easily be checked, there’s no simple test for how much someone cares about something. Thus, even with two people who are doing their best to be fair and honest, things often boil down to each party having a different guess at what the genuine fair price of something should be.
Thankfully, there’s also a clever way to deal with this, and it falls nicely out of considering how to play the ultimatum game on a noisy channel. Imagine that you’re playing the standard ultimatum game with someone, so there’s something like $100 to divide between you. Your partner will propose a split, and then you can either accept the amount they’ve offered or reject it and prevent either of you from getting paid. The fair split is, by our earlier logic, $50.
But suppose that sometimes your partner makes an error when dividing the money and they’ll offer you slightly more or slightly less. If you see that you’ve been offered $47 and they’re getting $53, what should you do?
If you simply assume that was an error, you leave yourself vulnerable to exploitation. Another player who isn’t interested in fairness might deliberately defect, taking advantage of your tolerance at every opportunity. The only thing that prevents defectors from getting away with unfairness, after all, is the willingness of some people to reject unfair deals, even if those deals would still (naively) benefit them.
But if you simply assume that the offer of $47 is the result of your partner trying to take advantage of you, you’ll end up burning bridges over genuine mistakes, leaving everyone worse off!
The key is to have a policy that discourages defecting while still preserving as much value as possible. To do this, we can behave probabilistically (a.k.a. “randomly”). If the deal is fair, always accept. If the deal is totally unfair, always reject. And if the deal is in between, and only somewhat unfair, randomly decide to accept or reject based on how unfair it seems. As long as your willingness to cooperate falls off fast enough, any partner (who knows your policy) will try to offer you the fair price. (An easy, but perhaps unnecessarily vindictive, rule of thumb is “give the other player the same expected reward as they offered you”: if they offer $47 and would keep $53, accept with the probability p that makes p × $53 = $47, which in this case is about 89%.)
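That rule of thumb is easy to turn into an explicit policy. A sketch, assuming a $100 pot and the “match their offered expected reward” rule (the function names are mine):

```python
import random

def acceptance_probability(offer, total=100):
    """Accept with probability p such that the proposer's expected
    take (p * their share) equals what they offered you."""
    their_share = total - offer
    if their_share <= 0:          # they kept nothing: gladly accept
        return 1.0
    return min(1.0, offer / their_share)

def respond(offer, total=100, rng=random.random):
    """Probabilistically accept (True) or reject (False) an offer."""
    return rng() < acceptance_probability(offer, total)

acceptance_probability(50)   # fair offer → 1.0, always accept
acceptance_probability(47)   # → 47/53 ≈ 0.887, accept ~89% of the time
acceptance_probability(10)   # lowball → ~0.11, usually reject
```

A proposer who knows this policy maximizes their own expected take by offering the fair $50, since any amount they skim off gets handed back to them as rejection risk.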
This trick also applies in situations where the uncertainty doesn’t just come from random mistakes, but also from not knowing whether your business partner (or roommate or whomever) is being honorable. If everyone follows a policy of always accepting fair deals and probabilistically walking away from unfair ones, it sets up an incentive for collaborators to know each other deeply, to err on the side of generosity, and to respect that sometimes when people fail to see eye-to-eye things fall apart.
Utopian Fairness
In Utopia, children learn the basics of fairness not as the neutral application of rules, nor through misleading phrases like “equality of outcome” or “equality of opportunity,” but through statements like “a fair situation is one where everyone’s expected reward matches their contribution,” along with an understanding that in simple contexts this means everyone gets an equal share of the surplus. To prepare young people for high-stakes deals, they play negotiation games that convey that (consensual) trades always involve an expected win for everyone involved, but that the size of each gain can be lopsided if things are unfair.
Utopians place more emphasis on being honorable and open, and will sometimes use random number generators to probabilistically refuse deals that seem exploitative. Because of the cultural recognition of how a small mistake might blow up into a large loss, people in utopia work hard to communicate clearly, including noting when they have a sense that they’re being taken advantage of. As a result, Utopia generally works hard to be fair to people, and to communicate exactly where costs and gains are coming from and be otherwise legible. In the service of this legibility, it is typical in Utopia to focus on fair divisions of time, money, and other concrete goods, rather than subjective value.