A **utility function** is a representation to define individual preferences for goods or services beyond the explicit monetary value of those goods or services. In other words, it is a calculation for how much someone desires something, and it is relative. For example, if someone prefers dark chocolate to milk chocolate, they are said to derive more utility from dark chocolate. A utility function of this relationship could look something like $U(C) = \log(C_d) + \frac{1}{2} \log(C_m),$ where $U(C)$ is the utility of eating dark $(C_d)$ and milk $(C_m)$ chocolates. In this example, a consumer derives half as much utility from milk chocolate as they do from dark.
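The chocolate utility function above can be sketched in a few lines of Python. The quantities passed in are hypothetical; the point is that, unit for unit, milk chocolate contributes half the utility of dark:

```python
import math

def utility(c_d, c_m):
    """Utility from consuming c_d units of dark and c_m units of milk chocolate:
    U(C) = log(c_d) + (1/2) log(c_m)."""
    return math.log(c_d) + 0.5 * math.log(c_m)

# Consuming e units of dark chocolate alone yields twice the utility of
# consuming e units of milk chocolate alone (natural log, so log(e) = 1).
dark_only = utility(math.e, 1)  # log(e) + 0.5*log(1) = 1.0
milk_only = utility(1, math.e)  # log(1) + 0.5*log(e) = 0.5
```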

Economists use utility functions to explain human behavior, particularly across different states, or where there is a probability that some state will occur. Someone might desire to go to a coffee shop to sit outside and drink espresso, but the utility they will get out of this depends on the state of the weather: whether they will sit in the rain or in sunshine, whether it will be warm or freezing outside, etc. And a calculation of utility can depend not just on whether some state occurs, but on the *considered probability* of these states occurring: the utility an espresso drinker gets changes when they consider that it might rain (whether or not it actually does).

This concept helps to explain (and prove mathematically) a number of social constructs, like insurance, different prices for similar goods, or the bundling of services. It dates back to the $18^\text{th}$ century philosophy of utilitarianism, led by Jeremy Bentham and John Stuart Mill, but is used today as a key concept in game theory, Nash equilibrium, and rational choice theory.

#### Contents

- Utility Functions Explained
- The Utility of Money
- St. Petersburg Paradox
- Expected Utility Theorem
- Utility Explains Insurance
- References

## Utility Functions Explained

Utility is the measure of value an individual gets from some good or service. For instance, a sick patient gets a great deal of utility from a lifesaving medicine, whereas a well-fed diner gets minimal utility from an additional slice of pizza. Economists believe that these, and other exchanges of goods and services, are measurable as quantities of utility.

Historically, philosophers believed that utility had a cardinal magnitude, i.e. that there were specific amounts of utility. So, of two athletes, the one who is parched and dehydrated might be able to say they get five times as much utility from a sports drink as the other who is well-hydrated. One way cardinal utility is useful is that it allows researchers to **bundle** preferences. For instance, a consumer might get more utility from a rain jacket than an umbrella, as a rain jacket not only keeps them dry but also warm. But that same consumer might get the same utility from a rain jacket as they do from a (sweater + umbrella) bundle. The preferences for this might be given by a function $U(p),$ where $p \in \{\text{nothing, umbrella, rain jacket, umbrella} + \text{sweater}\}$. In this case $U(\text{nothing}) = 0$, $U(\text{umbrella}) = 4$, and $U(\text{rain jacket}) = 6 = U(\text{umbrella} + \text{sweater})$, with 0, 4, and 6 representing some finite quantities of utility, sometimes denoted by the unit $\text{util}$, as in "during rainy weather a rain jacket has $6\,\text{util}$ to a particular consumer."
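Under a cardinal reading, these bundle values can be written down directly as numbers. A minimal sketch, using the util values from the text:

```python
# Cardinal utilities (in utils) for each option, per the rainy-weather example.
utils = {
    "nothing": 0,
    "umbrella": 4,
    "rain jacket": 6,
    "umbrella + sweater": 6,  # the bundle matches the rain jacket exactly
}

# With cardinal magnitudes, indifference and strict preference are
# plain numeric comparisons:
indifferent = utils["rain jacket"] == utils["umbrella + sweater"]
prefers_jacket_to_umbrella = utils["rain jacket"] > utils["umbrella"]
```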

Today, most economists think of utility as ordinal: not that there is some finite cardinal utility, but that the order of preferences matters. A dehydrated athlete values a sports drink more than they do a bouquet of roses, which they are likely to value more than an ordinary rock. The order of these preferences is what matters, and it is preserved under transformation. If the utility function of some consumer's preferences $U(p) = U(p_{1},p_{2},p_{3})$ is transformed by an order-preserving function $f,$ then $f\big(U(p)\big) = f\big(U(p_{1},p_{2},p_{3})\big)$ is said to be another utility function that represents that consumer's preferences.
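That any order-preserving transformation represents the same ordinal preferences is easy to check numerically. The utility values below are illustrative, and the transform is an arbitrary strictly increasing function:

```python
import math

# Illustrative utilities: sports drink > roses > rock for a dehydrated athlete.
u = {"sports drink": 10.0, "roses": 3.0, "rock": 0.5}

def transform(utilities, f):
    """Apply a function f to every utility value in the assignment."""
    return {good: f(value) for good, value in utilities.items()}

# exp(x) + 7 is strictly increasing, hence order-preserving.
v = transform(u, lambda x: math.exp(x) + 7)

# The ranking of goods is identical under u and under f(u),
# even though the numeric values are completely different.
rank_u = sorted(u, key=u.get)
rank_v = sorted(v, key=v.get)
```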

## The Utility of Money

For different people, a supposedly fixed asset can have different utility. For instance, an incremental dollar bill given to a billionaire has less utility than an incremental dollar bill given to someone with nothing. Or money received in the future is valued less than equal quantities of money received now, in what is sometimes referred to as the time value of money.

In the case of wealth, most people's utility function increases as wealth increases, but at a diminishing rate. Most people get little utility from $1, more from $10,000, and a lot of utility from $1,000,000. However, this growth flattens out due to diminishing marginal utility. A person's utility increases more when they go from $1 to $1,000,000 than it does when they go from $1,000,000 to $1,999,999, even though the dollar increase is exactly the same. This sharp increase in the beginning and flattening out is typically represented by a log function.
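The flattening is easy to see with a log utility of wealth. A sketch using the natural log (any base gives the same qualitative picture):

```python
import math

def wealth_utility(w):
    """Log utility of wealth: sharply increasing at first, then flattening."""
    return math.log(w)

# The same $999,999 increase in wealth, at two different starting points:
gain_poor = wealth_utility(1_000_000) - wealth_utility(1)          # large
gain_rich = wealth_utility(1_999_999) - wealth_utility(1_000_000)  # small

# The millionaire gains far less utility from an identical dollar increase.
```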

The differences in individual perceptions of a fixed asset like money famously took the form of the **St. Petersburg paradox.**

## St. Petersburg Paradox


The St. Petersburg paradox is a theoretical game first proposed by Nicolas Bernoulli, in which you pretend that you are a player in a casino playing a special coin toss game.

The casino starts with a guaranteed payout to you of $2. The game proceeds using a fair coin, tossed in succession until it flips a tails. After each flip where the coin is heads, the casino doubles the pot. So, if a tails appears right at the first toss, you get $2. If a tails does not arrive until the second toss, you win $4. If a tails arrives on the third toss, you win $8, and so on.

The challenge is that you have to pay some amount of money to be allowed to play this game. If you were the player and were told to act completely rationally, considering only the expected payout, and the casino places no limits on the maximum payout, what is the maximum amount you should be willing to pay to play this game?

This result is paradoxical for any number of reasons. For one, the player of the game is sure to walk away with finite winnings, so why should the player be willing to pay an unbounded amount to play? Another reason is that the odds of making $32 or less are high: $P(\text{32 or smaller payout}) = \frac{31}{32}.$ So paying even a large finite amount seems ridiculous.
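Both observations are easy to verify numerically. Tails on toss $k$ has probability $(1/2)^k$ and pays $2^k$ dollars, so every possible outcome contributes exactly $1 to the expected value, which therefore grows without bound. A sketch, ignoring any casino payout cap:

```python
def truncated_expected_value(max_tosses):
    """Expected payout if the game is cut off after max_tosses tosses."""
    # Tails on toss k: probability (1/2)**k, payout 2**k dollars.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_tosses + 1))

# Each term contributes exactly $1, so the sum equals the number of tosses.
ev_10 = truncated_expected_value(10)      # 10.0
ev_1000 = truncated_expected_value(1000)  # 1000.0

# Meanwhile the small payouts dominate: P(payout <= $32) = 31/32.
prob_small = sum(0.5 ** k for k in range(1, 6))
```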

Daniel Bernoulli's original resolution was the first to present the formal concept of diminishing marginal utility, the concept that each additional unit of something will offer smaller and smaller quantities of utility. For instance, think of slices of delicious chocolate cake; while one slice might be delicious, eating the second slice isn't as enjoyable, eating the third may be too much, the fourth may make an eater sick, etc. The happiness, or utility, an eater gets out of each additional slice is said to diminish. This concept extends to wealth and helps to resolve the St. Petersburg paradox.


The traditional resolution of the St. Petersburg paradox involves adding a utility function to the problem, taking into consideration diminishing marginal utility.

If the utility of winning $n$ dollars is a logarithmic function, specifically $\log(n)$, then what is the expected utility a player would get from playing the St. Petersburg game? Put another way, if the value of winning $n$ dollars grows only as $\log(n)$, what is the break-even point, such that a player should only pay less than this amount to play the game?
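Under log utility the series converges quickly: $\sum_{k \ge 1} 2^{-k}\log(2^k) = 2\log 2 = \log 4$, so the wealth whose utility matches the game is $4. A numerical sketch, truncating the infinite series (the truncation point is arbitrary; the tail is negligible):

```python
import math

def expected_log_utility(terms=200):
    """Expected utility of the payout under u(n) = ln(n), truncated after
    `terms` tosses. The k-th term is (1/2)**k * ln(2**k) = (1/2)**k * k*ln(2)."""
    return sum((0.5 ** k) * k * math.log(2) for k in range(1, terms + 1))

eu = expected_log_utility()  # converges to 2*ln(2) = ln(4)
break_even = math.exp(eu)    # the dollar amount with the same utility: $4
```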

There are many other proposed resolutions to the paradox, including ones that factor in time, as opposed to utility; that use sampling across a large number of trials (which have revealed about a $10 average payout to players); and Whitworth's formulation, in which a player gambles a percentage of their remaining capital.

The St. Petersburg paradox even appears in popular culture, with the TV show "Who Wants to Be a Millionaire" representing a modified version of the game.[1]

## Expected Utility Theorem

**Expected utility theory** was developed by von Neumann and Morgenstern to determine utility in situations of quantifiable risk.

**Expected Utility Theorem.** Rational agents, faced with a probabilistic choice, will act to maximize the expected value of their utility: $E[u(x)] = \sum\limits_{o=1}^{O} p_ou(x_o),$ where $E[u(x)]$ is the expected utility, the possible outcomes are indexed $o = 1,2,\ldots,O$ with probabilities $p_1,p_2,\ldots,p_O$ of occurring, and $x_o$ is the return from outcome $o$ occurring. In other words, it is the sum of the utilities from each specific outcome, times the probability of each of those outcomes occurring.

Suppose you're deciding whether to bring an umbrella with you today. You might do a calculation of the expected utility of bringing it versus the expected utility of leaving it at home.

If you bring it, there are three possible outcomes: you lose it (20% chance), you carry it around unnecessarily (50% chance), or you use it to keep you dry (30% chance).

If you do not bring it, there are also three possible outcomes: you lose it (0% chance), you never need it (62.5% chance), or you need it (37.5% chance).

Your expected utility if you bring an umbrella:

$\begin{aligned}E_u[u(x)] &= p_1u(x_1) + p_2u(x_2) + p_3u(x_3) \\\\&= 20\%\times u(\text{losing your umbrella}) + 50\%\times u(\text{carry umbrella around unnecessarily}) + 30\%\times u(\text{umbrella keeps you dry}).\end{aligned}$

Your expected utility of no umbrella:

$\begin{aligned}E_n[u(x)] &= p_1u(x_1) + p_2u(x_2) + p_3u(x_3) \\\\&= 0\%\times u(\text{losing your umbrella}) + 62.5\%\times u(\text{umbrella never needed}) + 37.5\%\times u(\text{you get wet}).\end{aligned}$

For someone who hates getting wet, the utility of staying dry, $u(\text{umbrella keeps you dry}),$ might be much larger than the lost utility from carrying an umbrella around unnecessarily, say $10\ \text{utils}$ versus $-1\ \text{util},$ and the utility of losing an umbrella might be only $-2\ \text{utils}$. In this case, the expected utility of bringing an umbrella would be

$E_u[u(x)] = 20\%\times(-2) + 50\%\times(-1) + 30\%\times(10) = 2.1 \text{ utils}$

versus

$E_n[u(x)] = 0\%\times (2) + 62.5\%\times (1) + 37.5\%\times (-10) = -3.125 \text{ utils}.$
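The two umbrella calculations can be reproduced directly from the theorem's formula. The util values below are the ones assumed in the example above:

```python
def expected_utility(outcomes):
    """Expected utility: sum over outcomes of probability * utility."""
    return sum(p * u for p, u in outcomes)

bring = expected_utility([
    (0.20, -2),   # you lose the umbrella
    (0.50, -1),   # you carry it around unnecessarily
    (0.30, 10),   # it keeps you dry
])

leave = expected_utility([
    (0.000, 2),    # can't lose what you didn't bring (probability 0)
    (0.625, 1),    # you never needed it
    (0.375, -10),  # you get wet
])

# bring = 2.1 utils, leave = -3.125 utils: bring the umbrella.
```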

## Utility Explains Insurance

Insurance, as a for-profit business, exists because customers are willing to pay a monetary premium beyond the expected monetary value of being insured. In an insurance pool, customers pool resources against large risks, and the company takes a profit beyond this. Over a series of payments, participants actually pay above and beyond their net expected monetary loss.

The expected value $V$ of some loss of wealth $w$ can be expressed as $V(w)=pw,$ where $p$ is the probability that a loss occurs, for instance, that the insured person loses their house in a fire. If the insured person were simply maximizing expected wealth, they would be unwilling to pay more than $V(w),$ which would mean no premium, or profit, for the company. But, actually, people are willing to pay $U\big(V(w)\big),$ where $U()$ is their utility function. They apply some additional value to maintaining their wealth beyond the wealth itself. People are so afraid of what would happen if their house burned down (or some other wealth was lost) that they overestimate the damage it would cause, and pay for that.

For example, suppose an individual has a chance of losing wealth $w,$ let's say a $\$100,000$ apartment, in a fire with probability $p = 0.001,$ or $1$ in $1,000,$ over a 10-year period. In an actuarially fair scenario, meaning the insurer makes no profit, that individual should pay $pw = \$100,000 \times 0.001 = \$100$.

What the customer actually pays reflects $U(pw),$ and this increases depending on how risk-averse the customer is, or as the wealth at stake increases.

A customer might be willing to pay $1 per month for the 10 years they live in that apartment. Assuming inflation is 2% annually, the customer pays a total of $120 over 10 years, with an inflation-adjusted present value of $107.56. The inflation-adjusted premium is therefore $7.56 over the actuarially fair $100, or 7.56%.
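This arithmetic can be sketched as follows. Note the exact present value depends on the discounting convention (the sketch below discounts each monthly payment at one twelfth of the annual rate), so it lands near, rather than exactly on, the $107.56 figure quoted above:

```python
payment = 1.0            # dollars per month
months = 120             # 10 years of payments
annual_inflation = 0.02
monthly_rate = annual_inflation / 12  # simple monthly discounting assumption

total_paid = payment * months  # $120 nominal over 10 years

# Present value of the monthly payment stream, discounting each payment.
present_value = sum(payment / (1 + monthly_rate) ** t
                    for t in range(1, months + 1))

fair_premium = 100.0  # the actuarially fair price p*w from above
markup = present_value - fair_premium  # what the customer pays beyond fair value
```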

## References

- Richards, D. *The St. Petersburg Paradox and the Quantification of Irrational Exuberance*. Retrieved May 25, 2016, from http://sites.stat.psu.edu/~richards/bgsu/bgtalk1.pdf