Game Theory Questions


Game theory questions with solutions are given here for practice and to help you understand game theory as a decision theory. In operations research, game theory is a mathematical theory that deals with decision-making in competitive situations.

The theory of games originated in the 20th century and was proposed by John von Neumann and Oskar Morgenstern. It uses the minimax principle to decide the strategy within a competitive situation.

Some terms related to game theory:

Competitive situations

A situation with two or more opponent parties (players), where each player has a finite list of possible moves and the players' interests conflict with each other. The action of one player depends on the action of the other.

Finite and infinite game

A game with only a finite number of moves is called a finite game.

A game which is not finite is called an infinite game.

Strategy

It is a predetermined plan that a player uses during the course of the game. There are two types of strategy:

Pure strategy

A particular game plan in a deterministic game situation (where a player knows what the other player is going to do), whose objective is to maximise the gain.

Mixed strategy

When a player is guessing the next move of the opponent, a probabilistic situation is created whose objective is to maximise the expected gain.

Thus, a mixed strategy is to select among pure strategies with a fixed probability.

Zero-sum game

A game in which the sum of all the payments to all the players after a play of the game is equal to zero is called a zero-sum game.

Two-person zero-sum game (rectangular game)

A game with only two players, in which the gain of one player is the loss of the other, so that the net gain of both players is zero.

Pay-off matrix

A matrix that shows the payment to each player for a particular strategy after a play, or at the end, of the game.

Value of the game

It is the maximum gain to the maximizing player if both players use their best strategies. It is denoted by ‘V’ and is unique.

If the value of the game for player A (the maximizing player) is ε, then the value for the opponent player will be –ε (in a rectangular game).

Maximin-Minimax principle

The maximising player lists all his minimum gains from the respective strategies, and selects the strategy which gives the maximum gain out of the selected minimum gains. The minimising player lists all his maximum losses from the respective strategies, and selects the strategy which gives the minimum loss out of the selected maximum losses.

Saddle point

Saddle point of a pay-off matrix is that position where the maximum of the row minima coincides with the minimum of the column maxima.

For a rectangular game,

Maximin of A = Minimax of B, and this common value is attained at the saddle point of the game.
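The maximin-minimax test is mechanical enough to code directly. The sketch below (function name and return format are our own, for illustration) scans a pay-off matrix for every position where a row minimum coincides with a column maximum:

```python
def find_saddle_points(matrix):
    """Return (maximin, minimax, saddle positions) for a pay-off matrix.

    A saddle point is a position whose entry is both the minimum of its
    row and the maximum of its column; positions are reported 1-indexed,
    matching the convention used in the worked questions below.
    """
    row_minima = [min(row) for row in matrix]
    col_maxima = [max(col) for col in zip(*matrix)]
    maximin = max(row_minima)   # maximising player's guaranteed gain
    minimax = min(col_maxima)   # minimising player's guaranteed cap
    saddles = [(i + 1, j + 1)
               for i, row in enumerate(matrix)
               for j, entry in enumerate(row)
               if entry == row_minima[i] == col_maxima[j]]
    return maximin, minimax, saddles
```

When maximin equals minimax, the game has a saddle point and that common value is the value of the game.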

Game Theory Questions With Solutions

Now let us solve some important questions asked on Game theory.

Question 1:

Solve the following pay-off matrix:

Strategies    I    II   III
I             6    8    6
II            4    12   2

We shall solve the given pay-off matrix by finding the saddle point:

Strategies        I         II    III       Row Minimum
I                 6         8     6         6 (Max.)
II                4         12    2         2
Column Maximum    6 (Min.)  12    6 (Min.)
The matrix has two saddle points, at (1, 1) and (1, 3). Thus, the solution of the game:

(i) Best strategy for player A is I

(ii) Best strategy for player B is I or III

(iii) Value of the game for A is 6 and for B is –6

Question 2:

Strategies    I    II   III   IV
I             1    7    3     4
II            5    6    4     5
III           7    2    0     3

Strategies        I    II   III       IV   Row minimum
I                 1    7    3         4    1
II                5    6    4         5    4 (Max.)
III               7    2    0         3    0
Column maximum    7    7    4 (Min.)  5

The solution of the game:

(i) Best strategy for player A is II

(ii) Best strategy for player B is III

(iii) Value of the game for A is 4 and for B is –4.


Question 3:

Strategies    I    II   III   IV   V
I             9    3    1     8    0
II            6    5    4     6    7
III           2    4    3     3    8
IV            5    6    2     2    1

Strategies        I    II   III       IV   V    Row Minimum
I                 9    3    1         8    0    0
II                6    5    4         6    7    4 (Max.)
III               2    4    3         3    8    2
IV                5    6    2         2    1    1
Column Maximum    9    6    4 (Min.)  8    8

The solution of the game:

(i) Best strategy for player A is II

(ii) Best strategy for player B is III

(iii) Value of the game for A is 4 and for B is –4.

Question 4:

Find the range of values of a and b for which the following pay-off matrix will have a saddle point at the (2, 2) position.

Strategies    I    II   III
I             2    4    5
II            10   7    b
III           4    a    6

Finding the row minimum and column maximum of the given pay-off matrix,

Strategies        I    II   III   Row minimum
I                 2    4    5     2
II                10   7    b     7
III               4    a    6     4
Column Maximum    10   7    6

Now, for (2, 2) to be a saddle point, 7 must be the minimum of row 2, so b ≥ 7; and 7 must be the maximum of column 2, so a ≤ 7. If a = 7, then (2, 2) is still a saddle point. But if b = 7, then (2, 3) also becomes a saddle point, so b must be strictly greater than 7.

∴ the range of values for a and b are a ≤ 7 and b > 7.

Question 5:

Strategies    I    II   III
I             1    3    1
II            0    –4   –3
III           1    5    –1

Strategies        I         II   III       Row minimum
I                 1         3    1         1 (Max.)
II                0         –4   –3        –4
III               1         5    –1        –1
Column maximum    1 (Min.)  5    1 (Min.)

The given pay-off matrix has two saddle points at (1, 1) and (1, 3).

(i) Best strategy for player A is I

(ii) Best strategy for player B is I or III

(iii) Value of the game for A is 1 and for B is –1

A pay-off matrix without any saddle point comes under a game of mixed strategy. A 2 × 2 matrix of this kind can be solved by the method of odds.

Steps:

(i) The odds of a row are the absolute difference between the two elements of the other row; the odds of a column are the absolute difference between the two elements of the other column.

(ii) Value of the game, V = (sum of the products of each first-column element and its row's odds) ÷ (sum of the row odds).

(iii) Each player plays a strategy with probability equal to that strategy's odds divided by the total odds.

Strategy    I    II
I           1    5
II          4    2

The matrix has no saddle point, so we solve it by the method of odds:

Strategy    I            II           Odds
I           1            5            |4 – 2| = 2
II          4            2            |1 – 5| = 4
Odds        |5 – 2| = 3  |1 – 4| = 3

Value of game, V = [1 × 2 + 4 × 4]/[2 + 4] = 18/6 = 3

Probabilities of selecting strategies:

Probability of strategies →    I     II
Players ↓
A                              1/3   2/3
B                              1/2   1/2
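The odds calculation above can be sketched as a small function. This is a minimal illustration (names are ours), valid only for a 2 × 2 matrix without a saddle point:

```python
def solve_by_odds(a11, a12, a21, a22):
    """Solve a 2x2 zero-sum game with no saddle point by the method of odds."""
    r1 = abs(a21 - a22)   # odds of row 1: difference of the *other* row
    r2 = abs(a11 - a12)   # odds of row 2
    c1 = abs(a12 - a22)   # odds of column 1: difference of the *other* column
    c2 = abs(a11 - a21)   # odds of column 2
    value = (a11 * r1 + a21 * r2) / (r1 + r2)
    p_a = (r1 / (r1 + r2), r2 / (r1 + r2))   # player A's mixed strategy
    p_b = (c1 / (c1 + c2), c2 / (c1 + c2))   # player B's mixed strategy
    return value, p_a, p_b
```

On the matrix above it reproduces V = 3, with A playing (1/3, 2/3) and B playing (1/2, 1/2).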

Question 6:

Strategy    I    II
I           –2   8
II          5    –1

The matrix has no saddle point; solving by the method of odds (keeping the signs of the entries):

Strategy    I                II             Odds
I           –2               8              |5 – (–1)| = 6
II          5                –1             |–2 – 8| = 10
Odds        |8 – (–1)| = 9   |–2 – 5| = 7

Value of game, V = [(–2) × 6 + 5 × 10]/[6 + 10] = (–12 + 50)/16 = 19/8

Probability of strategies →    I      II
Players ↓
A                              3/8    5/8
B                              9/16   7/16

The main principle behind the dominance method is that if one strategy of a player dominates another in all conditions, then the latter (dominated) strategy is ignored.

Rules of dominance method:

(i) If every element of a row is less than or equal to the corresponding element of another row, the dominated row is deleted.

(ii) If every element of a column is greater than or equal to the corresponding element of another column, the dominated column is deleted.

(iii) A row (or column) can also be deleted when it is dominated by the average of two or more other rows (or columns).

The objective is to reduce the given pay-off matrix to a 2 × 2 matrix, which can then be solved by the odds method.
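Element-wise dominance can be applied mechanically. The sketch below (our own helper, for illustration) repeatedly deletes dominated rows and columns; note that it does not handle dominance by averages, which some matrices also require:

```python
def reduce_by_dominance(matrix):
    """Delete rows/columns dominated element-wise; return the reduced
    matrix together with the surviving row and column indices."""
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    changed = True
    while changed:
        changed = False
        # A row is deleted if some other surviving row is >= element-wise.
        for i in rows[:]:
            if any(k != i and all(matrix[i][j] <= matrix[k][j] for j in cols)
                   for k in rows):
                rows.remove(i)
                changed = True
        # A column is deleted if some other surviving column is <= element-wise.
        for j in cols[:]:
            if any(k != j and all(matrix[i][j] >= matrix[i][k] for i in rows)
                   for k in cols):
                cols.remove(j)
                changed = True
    return [[matrix[i][j] for j in cols] for i in rows], rows, cols
```

Applied to a 3 × 3 matrix, it returns the surviving sub-matrix plus the original indices of the rows and columns that were kept.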


Question 7:

\(\begin{array}{l}\begin{bmatrix} -1& -2 & 8 \\7 & 5 & -1 \\6 & 0 & 12 \\\end{bmatrix}\end{array} \)

Clearly, the given pay-off matrix has no saddle point. Let us apply the dominance rules to reduce the given pay-off matrix to a 2 × 2 matrix.

Elements of row 1 < Elements of row 3 ⇒ Row 1 is deleted

The resultant pay-off matrix

Strategies    I    II   III
II            7    5    –1
III           6    0    12

Elements of column 1 > Elements of column 2 ⇒ Column 1 is deleted

Strategies    II   III
II            5    –1
III           0    12

Now let us apply the odds method, to find the value and probability of strategies.

Strategies    II    III   Odds
II            5     –1    12
III           0     12    6
Odds          13    5

Value of the game = [5 × 12 + 0 × 6]/[12 + 6] = 60/18 = 10/3

Probabilities of selecting strategies:

Probability of strategies →    I    II      III
Players ↓
A                              0    2/3     1/3
B                              0    13/18   5/18

Question 8:

\(\begin{array}{l}\begin{bmatrix}3 & 2 & 4 & 0 \\2 & 4 & 2 & 4 \\4 & 2 & 4 & 0 \\0 & 4 & 0 & 8 \\\end{bmatrix}\end{array} \)

Clearly, the given pay-off matrix has no saddle point. Since every element of row I is less than or equal to the corresponding element of row III, row I is deleted, and we reduce the remaining matrix using the dominance rules:

Strategies    I    II   III   IV
II            2    4    2     4
III           4    2    4     0
IV            0    4    0     8

Elements of column 1 = Elements of column 3 ⇒ Column 1 is deleted

Strategies    II   III   IV
II            4    2     4
III           2    4     0
IV            4    0     8

Now, every element of column II is greater than or equal to the average of columns III and IV, (3, 2, 4), so column II is deleted and the matrix is reduced to

Strategies    III   IV
II            2     4
III           4     0
IV            0     8

The average of rows III and IV, (2, 4), is equal to the elements of row II, so row II is deleted and the matrix is reduced to

Strategies    III   IV
III           4     0
IV            0     8

Strategies    III   IV   Odds
III           4     0    8
IV            0     8    4
Odds          8     4

Value of the game = [4 × 8 + 0 × 4]/[8 + 4] = 32/12 = 8/3


Question 9:

\(\begin{array}{l}\begin{bmatrix} 30& 40 & -80 \\0 & 15 & -20 \\90 & 20 & 50 \\\end{bmatrix}\end{array} \)

Clearly, the given pay-off matrix has no saddle point. Let us reduce the given matrix using the dominance rules:

All elements of row 2 < all elements of row 3 ⇒ deleting row 2, we get the resultant pay-off matrix

Strategy    I    II   III
I           30   40   –80
III         90   20   50

All elements of column I > all elements of column III ⇒ deleting column I, we get the reduced pay-off matrix

Strategy    II   III
I           40   –80
III         20   50

Strategies    II    III   Odds
I             40    –80   30
III           20    50    120
Odds          130   20

Value of the game = [40 × 30 + 20 × 120]/[30 + 120] = 3600/150 = 24

Probabilities of selecting strategies:

Probability of strategies →    I      II      III
Players ↓
A                              1/5    0       4/5
B                              0      13/15   2/15

Practice Problems on Game Theory

1. Solve the following pay-off matrices

Strategies    I    II   III
I             1    2    3
II            –3   1    2
III           1    3    2

Strategies    I    II   III
I             –2   2    –1
II            1    1    1
III           3    0    1

Strategies    I    II   III   IV
I             10   0    7     4
II            2    6    4     7
III           5    2    3     8


Game Theory ¶

This chapter provides an introduction to game theory, where we learn to model and solve decision problems involving several rational decision makers. The focus is on problems in which each decision maker must select among various alternatives, with attention to both competition and collaboration.

Tutorials ¶

This section contains some basic tutorials:

Exercises ¶

In this section you have a collection of game theory problems sorted by difficulty:

Easy problems : You can tackle these problems after the first lesson on game theory.

Normal problems : These problems require that you are familiar with concepts like game equilibrium.

Hard problems : This is the type of problems that definitely get you ready for the exam.

Solved Exercises ¶

In this section you have the solution to the different exercises.




 


 

Operations Research - Game Theory

by

Elmer G. Wiens

Egwald's popular web pages are provided without cost to users.


Game Theory - Introduction

Max Euwe, 1935 Chess World Champion: Strategy requires thought; Tactics require observation.


Game theory studies situations in which parties compete, and also possibly cooperate, to influence the outcome of the parties' interaction to each party's advantage. The situation involves a conflict of interest between the participants — called players — because some outcomes favour one player at the expense of the other players. What each player obtains from a particular outcome is called the player's pay-off. Each player can choose among a number of actions to influence his pay-off. However, each player's pay-off depends on the other players' choices. Moreover, outcomes can also be influenced by chance events (see the Wikipedia's introduction to game theory.)

A zero-sum game has pay-offs that sum to zero over all players for every outcome. A game in which two players with strictly opposed interests play such a game is a two-person zero-sum game. The theory of two-person zero-sum games is the foundation of more complicated games, such as games with more than two players (n-person games), and games in which the players can benefit through cooperation, with or without collusion, side payments, or binding agreements. This introduction is primarily concerned with two-person zero-sum games.

The moves and counter-moves between the two players in a zero-sum game resemble manoeuvres between enemy states. Players make moves and counter-moves, until the rules declare the game is ended. The rules of engagement determine what each player can or must do at each stage (move) as the game unfolds. For example, in the game rock, paper, scissors, both players simultaneously make one move, with rock beating scissors, scissors beating paper, and paper beating rock. While this game consists of only one move, games like chess require many moves to resolve the conflict.

A strategy is defined as a plan of action that determines the player's move at each stage of the game (depending on the circumstances of the game at each stage), from the player's first move all the way to the player's final move. Different ploys for play produce different strategies. A player's tactics specify a player's move at a particular stage of the game. Different tactics yield different strategies. Suppose, for example, players agree to play "ten games" of rock, paper, scissors, a game of ten moves. One strategy is the tactic of selecting rock, paper, or scissors with probability 1/3 at each of the ten stages. In rapid play, however, choosing moves at random is difficult. Consequently, patterns of play could emerge. Another strategy is the tactic of replicating the opposing player's previous move in the 3rd, 7th, and 10th stage, and "going with intuition" at the other stages. An exhaustive strategy is the plan whereby each player's move is determined, at a given stage, by the moves both players made in the previous stages. Whether exhaustive strategies are feasible depends on such factors as the player's memory and the rate at which the game is played.

Thus, a player's strategy provides him with a move depending on what the other player has done, and/or chance events, as the game unfolds. Different moves and/or means to choose the move in a given situation constitute different strategies. A game with a finite number of stages and a finite number of moves at each stage is called a finite game. A finite game has a finite number of distinct strategies available to each player. The game of chess is a finite game, despite its complexity and vast number of strategies.

The choice of a particular strategy by each player determines the pay-off to each player.

The following illustrates the concepts introduced above.

Colonel Blotto must defend two cities with one indivisible regiment of soldiers. His enemy, Colonel Sotto, plans to attack one city with his indivisible regiment. City I has a value of 10 units, while City II has a value of 5 units. If Colonel Sotto attacks a defended city, Sotto loses the battle and obtains nothing. If Colonel Sotto attacks an undefended city, he obtains the value of the city, while Colonel Blotto loses the value of the city. Neither Colonel knows what the other plans to do.

Normal Form.

The following tableau summarizes this situation of conflict — a two-person, zero-sum game. This representation of the game is called the normal form. It lists Sotto's strategies vertically, and Blotto's strategies horizontally. The numbers represent the gains to Sotto and the losses to Blotto, for each choice of strategy.

                        Defend City I   Defend City II
                        D1              D2
Attack City I    A1     0               10
Attack City II   A2     5               0

Colonel Sotto has two "pure strategies," Attack City I and Attack City II; Colonel Blotto has two "pure strategies," Defend City I and Defend City II. Each Colonel weighs each pure strategy against his enemy's pure strategies.

Examining his options, Colonel Sotto thinks, "Blotto might defend City I, since City I is worth more than City II. Why don't I attack City II"? Conversely, Colonel Blotto thinks, "Sotto might attack City II, since he thinks I will defend City I. Why don't I defend City II"? Thinking further into the game, Sotto thinks, "If Blotto is trying to read my mind, he might defend City II. Maybe I should attack City I after all."

Each Colonel's attempt to read his enemy's mind degenerates into an infinite regression.

However, a solution concept lurks in the Colonels' "gedank" experiments . Sotto attacks each city with positive probability, while Blotto defends each city with positive probability. Furthermore, since City I is worth twice as much as City II, the probability that Blotto defends City I should be twice as great as the probability that he defends City II. Conversely, since Sotto loses in a head-to-head confrontation with Blotto, the probability that Sotto attacks City II should be twice as great as the probability that he attacks City I. Both probabilities for each Colonel must add-up to one. Therefore, the optimal strategy for Colonel Blotto is to defend City I with probability 2/3, and to defend City II with probability 1/3. In the meantime, the optimal strategy for Colonel Sotto is to attack City I with probability 1/3, and to attack City II with probability 2/3.

Strategies consisting of probability weighted pure strategies are called mixed strategies.

If Colonel Sotto plays the mixed strategy (Attack City I, Attack City II) = (1/3, 2/3), he expects to gain:

0 * 1/3 + 5 * 2/3 = 3 1/3 units if Blotto plays Defend City I, 10 * 1/3 + 0 * 2/3 = 3 1/3 units if Blotto plays Defend City II.

If Colonel Blotto plays the mixed strategy (Defend City I, Defend City II) = (2/3, 1/3), he expects to lose:

0 * 2/3 + 10 * 1/3 = 3 1/3 units if Sotto plays Attack City I, 5 * 1/3 + 0 * 2/3 = 3 1/3 units if Sotto plays Attack City II.

Colonel Sotto's strategy assures him of an expected gain of at least 3 1/3 units; Colonel Blotto's strategy assures him of an expected loss of not more than 3 1/3 units. The value of the game is 3 1/3 units.
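The expected-value bookkeeping behind these numbers is easy to check. A minimal sketch (variable names are ours) that recomputes both Colonels' expectations from the pay-off matrix:

```python
# Pay-offs to Sotto: rows = (Attack City I, Attack City II),
# columns = (Defend City I, Defend City II).
payoff = [[0, 10],
          [5, 0]]
sotto = [1/3, 2/3]    # Sotto's mixed strategy over (A1, A2)
blotto = [2/3, 1/3]   # Blotto's mixed strategy over (D1, D2)

# Sotto's expected gain against each of Blotto's pure strategies.
gains = [sum(sotto[i] * payoff[i][j] for i in range(2)) for j in range(2)]
# Blotto's expected loss against each of Sotto's pure strategies.
losses = [sum(blotto[j] * payoff[i][j] for j in range(2)) for i in range(2)]
```

Every entry of both lists comes out to 3 1/3 units, confirming the value of the game.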

Extensive Form.

The next diagram shows the extensive form, or game tree , for the Colonel Blotto Game. The blue circle represents Colonel Sotto's decision node, while the red circles represent Colonel Blotto's decision nodes. Colonel Blotto's decision nodes are contained in the yellow ellipse, because Blotto does not know which node he actually is in, since Blotto and Sotto choose their strategies simultaneously. The numbers at the green nodes are the pay-offs to (Sotto, Blotto) with the strategies chosen.

Colonel Blotto Game Tree

The yellow ellipse is called the information set for Colonel Blotto, since Blotto just knows that he is at one or the other node in the ellipse.

The game is symmetrical in the sense that I can start the game tree with Colonel Blotto's decision node instead.

"Doc" Holliday's Revenge

On October 26, 1881, the bad blood between the Earps, Clantons, and McLaurys came to a head at the OK Corral . Billy Clanton, Frank McLaury, and Tom McLaury were killed. Doc Holliday, Virgil and Morgan Earp were injured. Miraculously, Wyatt Earp was unharmed, while the unarmed Isaac "Ike" Clanton survived by running away. Many people believed that Doc Holliday shot-gunned an unarmed Tom McLaury in the back as he was attempting to flee the scene.

Ike Clanton and his friends and associates, known as the "cowboys," swore to get their revenge on the Earps and Holliday. In the ensuing months, Morgan Earp was murdered and Virgil Earp seriously wounded in an ambush. A few days later, Wyatt Earp apparently shot and killed Frank Stillwell, a Clanton associate, and another man believed involved in the ambush. Over the next few years, many more of the "cowboys" were killed.

Although close to death from tuberculosis, in 1887 Doc Holliday decides to look up Ike Clanton and to settle their differences once and for all. On June 1, 1887, Doc Holliday and J.V. Brighton corner Ike Clanton near Springerville, Arizona. Doc tells J.V. to stay out of it for the time being.

Ike and Doc each have a pistol and a shotgun. Ike and Doc, spurning their pistols in favour of their shotguns, press forward towards each other. At long range , Ike — with his cowboy background — is a better shot than Doc. At middle range , Doc — the seasoned gunfighter — out-guns Ike. Both desperadoes are deadly at close range . The probabilities that either rogue will kill the other with a blast from his single-shot shotgun appear in the following table:

Kill Probability    Long Range   Middle Range   Close Range
Ike Clanton         0.5          0.6            1.0
Doc Holliday        0.3          0.8            1.0

The pay-off to Doc is 10 units if Doc survives and Ike is killed, -10 units if Doc is killed and Ike survives. Doc and Ike's strategies are:

L: blast at long range
M: if enemy has not shot, blast at middle range; otherwise, blast at close range
C: blast enemy at close range

Normal Form.

Always the consummate gambler, Doc Holliday correctly computes his pay-off matrix as:

Shoot-out            Ike's Strategies
pay-off              IL    IM    IC
Doc's          DL    -2    -7    -7
Strategies     DM     0     2     6
               DC     0    -2     0

Furthermore, Doc realizes that strategy DM dominates strategies DL and DC, in that his expected pay-off is higher with DM no matter what Ike does. If Ike is rational enough — if Ike correctly computes his pay-off matrix as the negative of Doc's pay-off matrix — Ike will choose strategy IL, because IL dominates strategies IM and IC provided Doc chooses strategy DM.

The pair of pure strategies, the dominant strategies (DM, IL), is the "solution" to the game. The value of the game is 0, the pay-off when Doc chooses strategy DM and Ike chooses IL.

The pair of strategies, (DM, IL), determines a saddle point of the pay-off matrix. Notice that the pay-off matrix entry, "0," is the largest entry in the IL column and the smallest entry in the DM row. A strategy pair whose pay-off matrix entry is a saddle point is called an equilibrium strategy pair .

The next diagram shows the extensive form, or game tree , for Doc Holliday's Game. The blue circle represents Doc Holliday's decision node, while the red circles represent Ike Clanton's decision nodes. Ike Clanton's decision nodes are contained in the yellow ellipse, because Ike does not know which node he actually is in, since Ike and Doc choose their strategies simultaneously. The numbers at the green nodes are the pay-offs to (Doc, Ike) with the strategies chosen.

Doc Holliday's Game Tree

The yellow ellipse is called the information set for Ike Clanton, since Ike just knows that he is at one of the three nodes in the ellipse.

The game is symmetrical in the sense that I can start the game tree with Ike Clanton's decision node instead.

Doc Holliday survived the show-down, apparently dying from tuberculosis on November 8, 1887 in Colorado. Not so funny, after all.

Graphical Solution of a 2 x n or m x 2 Game

A two-person zero-sum game with a pay-off matrix with dimensions either 2 x n, or m x 2 can be solved graphically. For example, consider the game with the 2 by 4 pay-off matrix:

Pay-Off            Player II's Strategies
Matrix             II1   II2   II3   II4
Player I's    I1   4     3     1     3
Strategies    I2   0     5     2     1

Since this matrix does not have a saddle point, one must resort to mixed strategies for a solution. If Player I plays strategy I1 with probability ß, and strategy I2 with probability (1 - ß), his expected returns, R, against each of Player II's strategies are:

II's strategy   Player I's Return
II1             R1 = 4 × ß + 0 × (1 - ß) = 4ß
II2             R2 = 3 × ß + 5 × (1 - ß) = 5 - 2ß
II3             R3 = 1 × ß + 2 × (1 - ß) = 2 - ß
II4             R4 = 3 × ß + 1 × (1 - ß) = 1 + 2ß

Player I's expected return against a particular strategy of Player II is a function of ß, a straight line. The next diagram graphs these straight-line functions as ß varies from 0 to 1. The probability ß is graphed along the x-axis, while Player I's return Ri against Player II's ith strategy is graphed along the y-axis.

Game Graphical Solution

Player I's "Optimal" Strategy.

If Player I sets ß = 0.4, Player I can expect at least a return R = 1.6 against any strategy or combination of strategies that Player II chooses to play. The (ß, R) = (0.4, 1.6) point occurs at the intersection of the R1 and R3 lines.

Player II's "Optimal" Strategy.

Since Player I's mixed strategy (0.4, 0.6) assures him of an expected gain of 1.6 units, Player II wants a strategy that will limit his expected loss to 1.6 units. Moreover, Player II can restrict his choice of strategies to II1 and II3, because the diagram shows that strategies II2 and II4 expose him to losses greater than 1.6 units.

If Player II plays strategy II1 with probability µ, and strategy II3 with probability (1 - µ), his expected losses L against Player I's mixed strategies are:

          L = R1 × µ + R3 × (1 - µ)

          L = (4ß) × µ + (2 - ß) × (1 - µ), or

          L = (5µ - 1) × ß + (2 - 2µ)

If Player II sets µ = 1/5 = 0.2, the coefficient of ß vanishes, and his expected losses are:

          L = (5 × (1/5) - 1) × ß + (2 - 2 × (1/5)) = 8/5 = 1.6

Since Player II can limit his losses independently of Player I's choice of mixed strategy, Player II's mixed strategy (0.2, 0, 0.8, 0) guarantees him an expected loss of no greater than L = 1.6.

Game Solution.

Player I plays a mixed strategy of (0.4, 0.6) and Player II plays a mixed strategy of (0.2, 0, 0.8, 0). The value of the game is 1.6 units.
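The graphical argument can be mimicked numerically: sweep ß over a grid, take the lower envelope of the four return lines, and maximize it. A small sketch under the same pay-off matrix (the grid resolution is an arbitrary choice of ours):

```python
# Columns of the 2 x 4 pay-off matrix: the two rows are Player I's pure strategies.
payoff = [[4, 3, 1, 3],
          [0, 5, 2, 1]]

def lower_envelope(beta):
    """Player I's guaranteed return at the mixture (beta, 1 - beta)."""
    return min(a * beta + b * (1 - beta)
               for a, b in zip(payoff[0], payoff[1]))

# Maximise the lower envelope over a fine grid of beta values.
best_beta = max((k / 1000 for k in range(1001)), key=lower_envelope)
best_value = lower_envelope(best_beta)
```

The grid search lands on ß = 0.4 with a guaranteed return of 1.6, matching the intersection read off the diagram.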

The Min-Max Theorem for Matrix Games

A matrix game can be described by its pay-off matrix, an m by n matrix A = [a(i,j)], the set X of all probability distributions x on the integers {1, 2, ..., m}, and the set Y of all probability distributions y on the integers {1, 2, ..., n}. Thus

x = (x1, x2, ..., xm), with x1 + x2 + ... + xm = 1

y = (y1, y2, ..., yn), with y1 + y2 + ... + yn = 1

          | a11   a12   . . .   a1n |
A    =    | a21   a22   . . .   a2n |
          | . . . . . . . . . . . . |
          | am1   am2   . . .   amn |

If Player I selects a probability distribution x and Player II selects a probability distribution y, the pay-off to Player I is P(x, y) while the pay-off to Player II is -P(x, y), where:

P(x, y) = x^T · A · y

i.e., the expected pay-off equals the product of the row vector x^T and the matrix A (another row vector), and the column vector y.

Lower Value: Max-Min Game.

If Player I announces a strategy x, Player II responds with the y0 that minimizes Player I's pay-off:

P(x, y0) = min over y in Y of P(x, y)

Therefore, Player I should choose x0 in X so that:

vI = min over y in Y of P(x0, y) = max over x in X [ min over y in Y of P(x, y) ]

The number vI is called the lower value, or the maxmin value, of the game.

Upper Value: Min-Max Game.

If Player II announces a strategy y, Player I responds with the x0 that maximizes his pay-off:

P(x0, y) = max over x in X of P(x, y)

Therefore, Player II should choose y0 in Y so that:

vII = max over x in X of P(x, y0) = min over y in Y [ max over x in X of P(x, y) ]

The number vII is called the upper value, or the minmax value, of the game.

Min-Max Theorem for Matrix Games.

If x0 and y0 are the strategies chosen as indicated above, then:

vI = vII = v = the value of the game = P(x0, y0)

If x is any mixed strategy in X, and y is any mixed strategy in Y, then:

P(x, y0) ≤ P(x0, y0) ≤ P(x0, y)

Thus the strategy pair (x0, y0) is the solution to the game. The two strategies provide a saddle point in mixed strategies in the Cartesian product domain X × Y.

Linear Programming Solution for Matrix Games

One can conveniently illustrate the method used to turn a matrix game into a linear programming problem with an example (Hadley, 427). Suppose that player I has 3 strategies, player II has 5 strategies, and the pay-off matrix A is given by:

                  Player II
              1    2    3    4    5
Player I   1  3    5    0    9    6
           2  2    6    8    1    2
           3  1    7    4    9    3
If player I plays strategy 2, his payoff depends on which strategy player II has chosen to play. If player II knows player I is playing strategy 2, she should play her 4th strategy, because then she only pays him 1 unit. But if player I plays strategy 1 instead, she must pay him 9 units. I assume that players know the payoff matrix A, but that players 'announce' their strategies simultaneously. Player I wants to maximize the payoff; player II wants to minimize the payoff.

The Min-Max Theorem suggests each player determine the choice of strategies on the basis of a probability distribution over the player's list of strategies. That is, player I chooses row 1, 2, or 3 with probabilities x1, x2, and x3. Player II chooses the columns with probabilities y1, y2, y3, y4, and y5. Note that the x's and y's must be non-negative and each set must add to one.

To turn this into a linear programming problem write:

Primal Linear Program

Maximize P = x4, with x1, x2, x3, x4 nonnegative
3*x1 + 2*x2 + 1*x3 - 1*x4 >= 0
5*x1 + 6*x2 + 7*x3 - 1*x4 >= 0
0*x1 + 8*x2 + 4*x3 - 1*x4 >= 0
9*x1 + 1*x2 + 9*x3 - 1*x4 >= 0
6*x1 + 2*x2 + 3*x3 - 1*x4 >= 0
1*x1 + 1*x2 + 1*x3 + 0*x4 = 1

where the extra variable, x4, is the expected payoff (the value) of the game.

The equation,

Maximize P = x4, with x4 nonnegative,

is the objective function. Player I wants to maximize the objective function, the expected payoff.

Then there is an equation for each column of A. Why? If player II plays her 1st strategy, the expected payoff to player I is:

3*x1 + 2*x2 + 1*x3.

If player I chooses his probability distribution optimally, his expected payoff should be greater than or equal to x4, the expected value of the game. This argument applies to each of A's columns.

The dual linear program to the primal linear program is:

Dual Linear Program

Minimize D = y6, with y1, ..., y6 nonnegative
3*y1 + 5*y2 + 0*y3 + 9*y4 + 6*y5 - 1*y6 <= 0
2*y1 + 6*y2 + 8*y3 + 1*y4 + 2*y5 - 1*y6 <= 0
1*y1 + 7*y2 + 4*y3 + 9*y4 + 3*y5 - 1*y6 <= 0
1*y1 + 1*y2 + 1*y3 + 1*y4 + 1*y5 + 0*y6 = 1

The equation,

Minimize D = y6, with y6 nonnegative,

is the objective function. Player II wants to minimize the objective function, the expected payoff.

Then there is an equation for each row of A. Why? If player I plays his 1st strategy, the expected payoff to player II is:

3*y1 + 5*y2 + 0*y3 + 9*y4 + 6*y5.

If player II chooses her probability distribution optimally, her expected payoff should be less than or equal to y6, the expected value of the game. This argument applies to each of A's rows.

If one solves this linear programming problem using one of the available methods, the optimal programs:

x^T = (x1, x2, x3) = (0.667, 0.333, 0) and y^T = (y1, y2, y3, y4, y5) = (0.889, 0, 0.111, 0, 0),

are the Max-Min and Min-Max solutions, while the value, x4 = y6 = 2.667, is the value of the matrix game.
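The primal program above maps directly onto an off-the-shelf LP solver. A sketch assuming NumPy and SciPy are available (`linprog` minimizes by convention, so maximizing x4 becomes minimizing -x4):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[3, 5, 0, 9, 6],
              [2, 6, 8, 1, 2],
              [1, 7, 4, 9, 3]], dtype=float)
m, n = A.shape

# Variables are (x1, ..., xm, v), where v plays the role of x4, the game value.
c = np.zeros(m + 1)
c[-1] = -1.0                      # maximize v  <=>  minimize -v

# One constraint per column of A:  (A^T x)_j - v >= 0  <=>  -(A^T x)_j + v <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# The probabilities sum to one; v does not enter this row.
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (m + 1))
x_opt = res.x[:m]        # Player I's mixed strategy
value = res.x[-1]        # value of the game
```

For this matrix the solver recovers x = (0.667, 0.333, 0) and value 2.667, agreeing with the solution quoted above. (All pay-offs here are nonnegative, so no shifting of the matrix is needed to keep the value nonnegative.)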

Value of the Game - Nonnegative.

To ensure that the value of an arbitrary matrix game is nonnegative when using linear programming to find its solution, add the absolute value of the most negative entry of the pay-off matrix to each entry of the pay-off matrix. After solving the modified game, subtract this constant from its value to get the value of the original matrix game.


Properties of Optimal Mixed Strategies.

Suppose Player I plays his optimal strategy, x^T = (x1, x2, x3) = (0.667, 0.333, 0), and Player II plays one of her pure strategies. The vector x^T · A shows the amount Player II can expect to pay to Player I for each pure strategy she plays:

x^T · A = (2/3, 1/3, 0) · A = (2.6667, 5.3333, 2.6667, 6.3333, 4.6667)

Notice that only pure strategies 1 and 3 ensure Player II of paying the minimum of 2.6667 units, when Player I plays his optimal mixed strategy.

Likewise, the vector A * y shows the amount Player I can expect to receive from Player II for each pure strategy he plays against her optimal strategy.

A * y =
| 3  5  0  9  6 |
| 2  6  8  1  2 |  *  (8/9, 0, 1/9, 0, 0)^T  =  (2.6667, 2.6667, 1.3333)^T.
| 1  7  4  9  3 |

Notice that only pure strategies 1 and 2 ensure that Player I receives the maximum of 2.6667 units when Player II plays her optimal mixed strategy.
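Both expected-payoff vectors are easy to check numerically; a minimal sketch with NumPy:

```python
# Check the two expected-payoff vectors for the game in the text.
import numpy as np

A = np.array([[3, 5, 0, 9, 6],
              [2, 6, 8, 1, 2],
              [1, 7, 4, 9, 3]], dtype=float)
x = np.array([2/3, 1/3, 0])        # Player I's optimal mixed strategy
y = np.array([8/9, 0, 1/9, 0, 0])  # Player II's optimal mixed strategy

print(x @ A)  # columns 1 and 3 attain the minimum, 2.6667
print(A @ y)  # rows 1 and 2 attain the maximum, 2.6667
```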

Worthwhile Strategies.

Pure strategies that appear in an optimal strategy with positive probability are called worthwhile strategies (Thomas, 37).

Set of Optimal Mixed Strategies.

For each player in a two-person zero-sum matrix game, the set of optimal mixed strategies is a closed, convex set (Karlin, 36).

For each player, the optimal strategy calculated from the equivalent linear programming problem is an extreme point of the player's set of optimal strategies. This follows because in a linear programming problem, an optimal program is attained at a vertex of the feasible set, and the feasible set must contain the set of optimal strategies.

Pure Strategy as a Mixed Strategy.

A pure strategy is a mixed strategy in which the component corresponding to that pure strategy equals 1 and all other components equal 0.

The Snow-Shapley Theorem for Matrix Games.

Consider again the two-person zero-sum game with pay-off matrix:

A =
| 3  5  0  9  6 |
| 2  6  8  1  2 |
| 1  7  4  9  3 |

This game has the optimal mixed strategies computed above, x^T = (0.667, 0.333, 0) and y^T = (0.889, 0, 0.111, 0, 0).

If one strikes out the rows and columns of A corresponding to strategies played with zero probability, one gets the reduced matrix game with pay-off matrix M:

M =
| 3  0 |
| 2  8 |

This procedure of striking out "worthless strategies" produces a matrix M whose inverse M^(-1) exists:

M^(-1) =
|  0.3333  0      |
| -0.0833  0.1250 |

Furthermore, letting e^T = (1, 1), i.e. the vector whose components are all one, the value of the game, v, is given by:

v = 1 / (e^T * M^(-1) * e) = 2.6667.

Moreover, the optimal strategies for Player I and Player II in the reduced game are:

x_0 = v * e^T * M^(-1) = (0.6667, 0.3333), and y_0 = v * M^(-1) * e = (0.8889, 0.1111).

Finally, the value of the original game is also v, and the optimal strategies for Player I and Player II in the original game are obtained by setting to zero the probability of playing a worthless strategy.
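The reduced-game computation above can be reproduced in a few lines (a sketch, assuming NumPy):

```python
# Recover the value and optimal strategies of the reduced game from M's inverse.
import numpy as np

M = np.array([[3.0, 0.0],
              [2.0, 8.0]])
Minv = np.linalg.inv(M)
e = np.ones(2)

v = 1.0 / (e @ Minv @ e)  # value of the game: 2.6667
x0 = v * (e @ Minv)       # Player I's reduced strategy: (0.6667, 0.3333)
y0 = v * (Minv @ e)       # Player II's reduced strategy: (0.8889, 0.1111)
```

Padding x0 and y0 with zeros in the positions of the struck-out strategies gives the optimal strategies of the original 3x5 game.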

Snow-Shapley Theorem.

Consider any two-person zero-sum matrix game with pay-off matrix A. If x is an extreme point of X_0, Player I's set of optimal strategies, and if y is an extreme point of Y_0, Player II's set of optimal strategies, there exists a square, nonsingular submatrix M of A such that:

v = 1 / (e^T * M^(-1) * e),  x_0 = v * e^T * M^(-1),  y_0 = v * M^(-1) * e,

where e is the vector of the same dimension as M all of whose components equal 1, v is the value of the game, and x_0 and y_0 correspond to the non-zero components of x and y, respectively.

  • Black, Max. Perplexities. Ithaca: Cornell, 1990.
  • Brams, Steven J. Biblical Games: A Strategic Analysis of Stories in the Old Testament. Cambridge, Mass.: MIT Press, 1980.
  • Brams, Steven J. Theory of Moves. Cambridge: Cambridge UP, 1994.
  • Dresher, Melvin. Games of Strategy: Theory and Applications. Englewood Cliffs: Prentice-Hall, 1961.
  • Hadley, G. Linear Programming. Reading, Mass.: Addison-Wesley, 1962.
  • Howard, Nigel. Paradoxes of Rationality: Theory of Metagames and Political Behavior. Cambridge: MIT, 1971.
  • Intriligator, Michael D. Mathematical Optimization and Economic Theory. Englewood Cliffs: Prentice-Hall, 1971.
  • Karlin, Samuel. Mathematical Methods and Theory of Games, Programming, and Economics. Reading, Mass.: Addison-Wesley, 1959.
  • Nash, John F. "Noncooperative Games." Annals of Mathematics 54 (1951): 286-295.
  • Oxford Dictionary: The Concise Oxford Dictionary of Current English. 5th ed. Ed. H.W. Fowler and F.G. Fowler. Oxford: Oxford UP, 1964.
  • Owen, Guillermo. Game Theory. 2nd ed. New York: Academic, 1982.
  • Rapoport, Anatol. Two-Person Game Theory: The Essential Ideas. Ann Arbor: U Michigan, 1966.
  • Restrepo, Rodrigo A. Theory of Games and Programming: Mathematics Lecture Notes. Vancouver: University of B.C., 1967.
  • Thomas, L. C. Games, Theory and Applications. New York: John Wiley, 1984.
  • Wiens, Elmer G. Reduction of Games Using Dominant Strategies. Vancouver: UBC M.Sc. Thesis, 1969.


