Revision as of 18:48, 26 November 2022 by Student10MA279F22

Part 1: Introduction and Terms

What is game theory and where did it come from?

Game theory is a discipline of mathematics that gives an axiomatic way to represent and examine systems of rational actors. It uses the suspected preferences of the actors to predict outcomes of games with certain rules and/or conditions. For example, we could use game theory to determine the prices that two firms competing in a local duopoly should set in order to maximize each of their profits.

While some situations studied in modern game theory can trace their roots all the way back to ancient Greece, game theory first emerged as a formal mathematical sub-discipline from the works of mathematician John von Neumann and economist Oskar Morgenstern in the 1940s. In their work, game theory was extremely limited in scope and dealt almost entirely with games limited to two actors and of the “zero-sum” variety, meaning one player's losses are always the winnings of the other player. Since then, game theory has been applied to a huge range of other fields, most notably economics but also political science, evolutionary biology, computer science, management, philosophy, epidemiology, and any other discipline that involves competition among self-interested agents.

Key Definitions

There are many types of games, conditions, and actors that can be studied in game theory. For completeness, most of those terms are defined here even when they are not needed for the applications below, since they are important from a pedagogical point of view and help illustrate the variety of settings in which game theory can be useful.

Player - a rational actor looking to maximize their own outcome in a game (agent, actor, and player are used interchangeably and mean the same thing)

Game - a set of strategies that actors/players can undertake whose combinations can lead to different possible outcomes

Strategy Space - the set of all possible pure strategies with which a player can play the game; the ways a single player X can play the game form player X's strategy space

Pure Strategy - a strategy is pure if it provides a definitive move to make in every possible situation

Mixed Strategy - a strategy is mixed when a player assigns a probability distribution over the pure strategies in their strategy space. Mixed strategies may be difficult to grasp at first, so consider the case of a penalty shootout: the kicker has a dominant foot, so a pure strategy might be to kick toward the same side every time, but the goalie would then defend that side more heavily, leading to a worse outcome. By randomizing between sides, the kicker employs a mixed strategy that creates a better expected outcome. [Randomizing was observed to be the better strategy in real life by Chiappori, Levitt, and Groseclose (2002)]

Dominant Strategy - a strategy a player should always undertake to reach their optimal outcome, regardless of other players' actions

Strategy Profile - a group formed by choosing one strategy for every player

Outcome Space - the total space of all possible outcomes that can be reached in a single game

Simple Game - a game where each player has only two outcomes, winning or losing

Cooperative Game - a game is cooperative when agents can form binding alliances/coalitions that cannot be violated; a game is noncooperative otherwise

Simultaneous Game - a game is simultaneous if players all move at the same time, or they lack knowledge of previous moves making them effectively simultaneous. If a game is not simultaneous it is considered sequential.

Symmetric Game - a game in which the outcomes are determined solely by the strategies employed, not by who is playing them; in other words, all the players are interchangeable. An example of this type is the prisoner's dilemma, which we will showcase as our prime introductory example

(In)finite Game - games that can be finished in finitely many moves are considered finite; games that can go on forever, with no player able to achieve a winning or losing strategy, are considered infinite. The games we will discuss are finite, as mathematically rigorous infinite games are scarce to study; however, an interested reader may want to examine a potential, not so rigorous, foreign-policy application of the infinite game here (https://youtu.be/0bFs6ZiynSU)

Best Response Correspondence - the choice, or response, a player makes to maximize their own outcome given another player's actions. Notation: in this project we'll represent the best response of player $ i $ to an opposing strategy $ X $ with $ BR_i(X) $

Nash Equilibrium - a strategy profile where no player benefits from unilaterally altering their chosen strategy; essentially it is a “solution” to a noncooperative game. John Nash proved in 1951 that every game with finitely many players and finitely many actions has at least one such equilibrium, provided mixed strategies are allowed.

We can represent games with tables and matrices where each dimension represents a player and the corresponding cells represent the expected outcomes for each combination of individual players’ strategies. We can use these tables to quickly find equilibria and make best response calculations/curves. We will use representations such as these in the coming examples. Now that we've covered all the background, we'll start to make the block of definitions more concrete and interesting with the classic example of game theory, the prisoner's dilemma.
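The penalty-kick story from the mixed-strategy definition above can be made concrete with a small computation. The scoring probabilities below are invented for illustration (they are not from the Chiappori, Levitt, and Groseclose data); in a 2x2 zero-sum game, each player's equilibrium mix is found by making the opponent indifferent between their two pure strategies:

```python
# Hypothetical 2x2 zero-sum penalty-kick game.
# Rows: kicker aims Left/Right; columns: goalie dives Left/Right.
# Entries are illustrative scoring probabilities (assumed, not real data).
A = [[0.3, 0.9],   # kicker Left  vs goalie (Left, Right)
     [0.8, 0.4]]   # kicker Right vs goalie (Left, Right)

# In a mixed equilibrium, each player's mix makes the opponent indifferent.
# Kicker aims Left with probability p so both goalie dives pay the same:
#   p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1]
p = (A[1][1] - A[1][0]) / (A[0][0] - A[1][0] - A[0][1] + A[1][1])

# Goalie dives Left with probability q so the kicker is indifferent:
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])

value = q * A[0][0] + (1 - q) * A[0][1]  # expected scoring probability
print(p, q, value)
```

With these numbers the kicker aims left 40% of the time, the goalie dives left half the time, and the kicker scores with probability 0.6; a pure strategy by either player would be exploited and do strictly worse.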

Nash Equilibrium and the Prisoner's dilemma

A strong and popular example of game theory applied to decision making is the age-old “Prisoner's Dilemma.” In this thought experiment, the authorities have arrested two criminals, whom we'll call prisoners A and B, and are holding them in separate rooms where they cannot speak to each other or receive any indication of each other's intended decisions. From this description and our definitions above, we note this game is simultaneous, as neither player is informed of the other's actions. Next, the authorities realize they don't have enough hard evidence to convict either prisoner, so they have to rely on a confession. The prosecutor offers A a deal: if he snitches on B, he will go free if B stays silent, or serve 5 years if B also snitches on him; if he stays quiet while B snitches, he will serve 10 years. The same deal is offered to B. If both stay quiet, they each get 2 years on a lowered charge the prosecutors can prove. Note that in this game we impose the condition that staying quiet on the basis of honor or loyalty is irrational. The outcomes are represented in the following table.

Outcome Space for Prisoner's Dilemma (years served, A / B):

                    B stays quiet     B snitches
A stays quiet        2 / 2            10 / 0
A snitches           0 / 10            5 / 5

Trivially, we can see that both prisoners staying quiet is the mutually ideal solution. However, we can show that the Nash equilibrium is when both prisoners snitch. If A stays quiet, he runs the risk of being imprisoned for 10 years, so it benefits him to switch to snitching, where he will either go free or serve 5 years. The same goes for B, so neither benefits from staying quiet. So the equilibrium is not the ideal solution: they could both serve only 2 years, but without knowledge of the other prisoner's intentions neither can risk staying quiet. In game theory we say this outcome is not Pareto efficient. An outcome is Pareto efficient, or Pareto optimal, when no player can be made better off without making another player worse off; here, both snitching is worse for both players than both staying quiet.
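The best-response reasoning above can be checked mechanically. The short sketch below encodes the sentence table (years served, so each prisoner wants to minimize) and tests every strategy profile for the no-profitable-deviation property:

```python
# Pure-strategy Nash equilibrium check for the Prisoner's Dilemma above.
# years[(a, b)] = (years for A, years for B); strategy 0 = stay quiet, 1 = snitch.
years = {
    (0, 0): (2, 2),    # both stay quiet
    (0, 1): (10, 0),   # A quiet, B snitches
    (1, 0): (0, 10),   # A snitches, B quiet
    (1, 1): (5, 5),    # both snitch
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither prisoner can cut their
    own sentence by unilaterally switching strategies."""
    a_ok = all(years[(a, b)][0] <= years[(alt, b)][0] for alt in (0, 1))
    b_ok = all(years[(a, b)][1] <= years[(a, alt)][1] for alt in (0, 1))
    return a_ok and b_ok

equilibria = [profile for profile in years if is_nash(*profile)]
print(equilibria)  # only (snitch, snitch), i.e. (1, 1), survives
```

Mutual quiet fails the check because each prisoner can walk free by deviating, which is exactly the argument in the paragraph above.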

Now that we’ve seen a trivial example to familiarize ourselves with the concepts, the following non-trivial applications should become clearer.

Part 2: Cournot Competition

Nash Equilibrium in Economics

In the game of the economy, we are interested in the strategic interactions between players in a market/industry, and how the strategy of one player is affected by the existence of another. In 1838, before John Nash formalized the proof of Nash equilibrium, indeed before his birth, Antoine Augustin Cournot showed that a stable equilibrium exists in his model of competition between two producers: "If either of the producers, misled as to his true interest, leaves it temporarily, he will be brought back to it."

To demonstrate Cournot's model of competition, we will look into a simple example of the wall clock market of West Lafayette. Suppose Boilermaker Clockery is the only player in the market, denoted by $ BC $, and $ S_{BC} $ is the set of its possible strategies. In this case, the quantity of wall clocks produced is the only independent variable we take into consideration (we will consider price as a function of quantity), so we will use $ q \in S_{BC} $ for it. Suppose that the potential demand of wall clocks in West Lafayette is $ 20{,}000 $, and we will use a naive model

$ D(p)=20{,}000-200p $

for the demand, where $ p $ is the price of a clock and each dollar of increment in price would lead to $ 200 $ fewer clocks in demand. Suppose the total production cost is

$ C(q)=5q $

i.e., $ c=5 $ dollars for each clock produced, and Boilermaker Clockery will always produce enough clocks to match the demand as long as it's profitable,

$ \begin{align} q=&D(p)\\ =&20{,}000-200p \end{align} $

we will have the following function for the market unit price for each clock,

$ p(q)=\frac{20{,}000-q}{200} $

and the total revenue would be

$ \begin{align} R(q)=&p(q) \cdot q\\ =&\frac{(20{,}000-q) \cdot q}{200} \end{align} $

Since this is a monopoly, Boilermaker Clockery can pick the best strategy by maximizing the total profit

$ \begin{align} P(q)=&R(q)-C(q)\\ =&\frac{20{,}000q}{200}-\frac{q^2}{200}-5q \end{align} $

To do so, we need to know how the total profit responds to a change in the quantity of clocks produced, i.e., the marginal profit,

$ \begin{align} MP(q)=&\frac{d}{dq}P(q)\\ =&\frac{20{,}000}{200}-\frac{2q}{200}-5\\ =&95-\frac{q}{100} \end{align} $

The total profit is maximized when

$ \begin{align} MP(q)=&95-\frac{q}{{100}}=0\\ q=&9500 \end{align} $

i.e., the point at which increasing production no longer generates more profit. It is obvious that this extremum is a maximum in this case, but in general we can verify it by computing the second derivative and showing that it is negative,

$ \frac{d^2}{dq^2}P(q)=-\frac2{200} $

The unit price would be

$ \begin{align} p(9500)=&\frac{20{,}000-9500}{200}\\ =&52.5 \end{align} $

The maximum total profit would be

$ \begin{align} P(9500)=&9500\cdot\left(p(9500)-c\right)\\ =&9500\cdot(52.5-5)\\ =&451{,}250 \end{align} $
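The monopoly optimum derived above is easy to verify numerically. The sketch below encodes the same inverse demand $ p(q)=(20{,}000-q)/200 $ and unit cost $ c=5 $, then searches all integer production levels for the profit-maximizing quantity:

```python
# Numeric check of Boilermaker Clockery's monopoly optimum derived above.
def price(q):
    """Inverse demand: market-clearing unit price at output q."""
    return (20_000 - q) / 200

def profit(q):
    """Total profit P(q) = R(q) - C(q) with unit cost c = 5."""
    return q * price(q) - 5 * q

# Brute-force search over every feasible integer output level.
best_q = max(range(20_001), key=profit)
print(best_q, price(best_q), profit(best_q))  # → 9500 52.5 451250.0
```

The search lands on exactly the $ q=9500 $, $ p=52.5 $, and maximum profit of $ 451{,}250 $ found with the marginal-profit condition.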

However, the monopoly is no longer possible, as the infamous IU Horology Inc. decides to enter the West Lafayette market. There are now two clock producers in the market, denoted by the set $ \{BC, IU\} $, and $ \forall i \in \{BC, IU\} $, let $ S_i $ be the set of strategies that

......
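The duopoly derivation breaks off above, but under the same demand and cost assumptions the standard Cournot analysis can be sketched numerically. Each firm's profit is $ q_i \cdot p(q_i+q_j) - 5q_i $; setting its derivative with respect to $ q_i $ to zero gives the best response $ q_i = (19{,}000 - q_j)/2 $, and iterating best responses converges to the Cournot-Nash equilibrium, echoing Cournot's remark that a deviating producer "will be brought back":

```python
# Sketch of the Cournot duopoly outcome under the same assumptions as above:
# inverse demand p(Q) = (20000 - Q)/200 and unit cost c = 5 for both firms.
def best_response(q_other):
    # From d/dq_i [ q_i*(20000 - q_i - q_other)/200 - 5*q_i ] = 0
    return (19_000 - q_other) / 2

q_bc, q_iu = 9_500.0, 0.0   # start from the old monopoly output
for _ in range(100):        # alternate best responses until they settle
    q_bc = best_response(q_iu)
    q_iu = best_response(q_bc)

total = q_bc + q_iu
print(round(q_bc, 1), round(q_iu, 1), round((20_000 - total) / 200, 2))
```

Both firms converge to $ q = 19{,}000/3 \approx 6333.3 $ clocks, and the market price falls to about $ 36.67 $, below the monopoly price of $ 52.5 $, illustrating how competition erodes the monopolist's markup.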


References

Khan Academy, Prisoner's Dilemma: https://www.khanacademy.org/economics-finance-domain/ap-microeconomics/imperfect-competition/oligopoly-and-game-theory/v/more-on-nash-equilibrium

Mixed strategy in penalty shootouts: https://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf

Nash, John. “Non-Cooperative Games.” Annals of Mathematics, vol. 54, no. 2, 1951, pp. 286–95. JSTOR, https://doi.org/10.2307/1969529. Accessed 26 Nov. 2022.

Stanford Encyclopedia of Philosophy, Game Theory: https://plato.stanford.edu/entries/game-theory/#Mot

University of Illinois's CS440 Artificial Intelligence, Prisoner's Dilemma: https://courses.engr.illinois.edu/ece448/sp2020/slides/lec35.pdf

University of Maryland's ECON414 Game Theory, Simultaneous Games: https://terpconnect.umd.edu/~dvincent/econ414/lec04.pdf

Wikipedia, Prisoner's Dilemma Table: https://en.wikipedia.org/wiki/Prisoner%27s_dilemma
