Operations Research - Game Theory
by
Elmer G. Wiens

Game Theory - Introduction

Max Euwe, 1935 Chess World Champion: "Strategy requires thought; tactics require observation."

Game theory studies situations in which parties compete, and possibly also cooperate, to influence the outcome of the parties' interaction to each party's advantage. The situation involves a conflict of interest between the participants — called players — because some outcomes favour one player at the expense of the other players. What each player obtains from a particular outcome is called the player's pay-off. Each player can choose among a number of actions to influence his pay-off. However, each player's pay-off depends on the other players' choices. Moreover, outcomes can also be influenced by chance events (see Wikipedia's introduction to game theory). A zero-sum game has the property that the players' pay-offs sum to zero at every outcome: whatever one player wins, the other players lose. A game in which the players' gains and losses need not cancel is a nonzero-sum game. The theory of two-person zero-sum games is the foundation of more complicated games, such as games with more than two players (n-person games), and games in which the players can benefit through cooperation, with or without collusion, side payments, or binding agreements. This introduction is primarily concerned with two-person zero-sum games.

The moves and counter-moves between the two players in a zero-sum game resemble the manoeuvres between enemy states. Players make moves and counter-moves, until the rules of the game declare the game is ended. The rules of engagement determine what each player can or must do at each stage (his available moves) as the game unfolds. For example, in the game rock, paper, scissors both players simultaneously make one move, with rock beating scissors, scissors beating paper, and paper beating rock. While this game consists of only one move, games like chess require many moves to resolve the conflict. A player's strategy is defined as a plan of action that determines the player's move at each stage of the game (depending on the circumstances of the game at each stage), from the player's first move all the way to the player's final move. Different ploys for play produce different strategies.
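The uniform random tactic for a single round of rock, paper, scissors can be checked directly: against any pure strategy it earns an expected pay-off of zero, so it cannot be exploited. A minimal sketch in Python, assuming the conventional +1/0/-1 win/tie/loss pay-offs (an assumption, not stated in the text):

```python
from fractions import Fraction

# Row player's pay-off in rock-paper-scissors: +1 win, 0 tie, -1 loss
# (rows and columns ordered rock, paper, scissors).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

mix = [Fraction(1, 3)] * 3  # play each move with probability 1/3

# Expected pay-off of the uniform mix against each opposing pure strategy.
expected = [sum(mix[i] * A[i][j] for i in range(3)) for j in range(3)]
print(expected)  # prints: [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```

Repeating the computation with any non-uniform mix produces a column with a negative entry, i.e. a pure strategy the opponent could exploit, which is why the 1/3 mix is the optimal strategy for this symmetric game.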
A player's tactics specify a player's move at a particular stage of the game. Different tactics yield different strategies. Suppose, for example, players agree to play "ten games" of rock, paper, scissors, a game of ten moves. One strategy is the tactic of selecting rock, paper, or scissors with probability 1/3 at each of the ten stages. In rapid play, however, choosing moves at random is difficult. Consequently, predictable patterns of play could emerge. Another strategy is the tactic of replicating the opposing player's previous move in the 3rd, 7th, and 10th stages (an arbitrary choice), and "going with intuition" at the other stages. An exhaustive strategy is a plan whereby each player's move is determined, at a given stage, by the moves both players made in the previous stages. Whether exhaustive strategies are practical depends on such factors as the player's memory and the rate at which the game is played. Thus, a player's strategy provides him with a move depending on what the other player has done, and/or chance events, as the game unfolds. Different moves, and/or different means of choosing the move in a given situation, yield different strategies. A game with a finite number of stages and a finite number of moves at each stage is called a finite game. A finite game has a finite number of distinct strategies available to each player. The game of chess is a finite game, despite its complexity and vast number of strategies. The choice of a particular strategy by each player determines the pay-off to each player.

The following example illustrates the concepts introduced above. Colonel Blotto must defend two cities with one indivisible regiment of soldiers. His enemy, Colonel Sotto, plans to attack one city with his indivisible regiment. City I has a value of 10 units, while City II has a value of 5 units. If Colonel Sotto attacks a defended city, Sotto loses the battle and obtains nothing. If Colonel Sotto attacks an undefended city, he obtains the value of the city, while Colonel Blotto loses the value of the city. Neither Colonel knows what the other plans to do.

Normal Form.
The following tableau summarizes this situation of conflict — a two-person, zero-sum game. This representation of the game is called the normal form. It lists Sotto's strategies vertically, and Blotto's strategies horizontally. The numbers represent the gains to Sotto and the losses to Blotto, for each choice of strategy.

|                     | Defend City I (D1) | Defend City II (D2) |
| Attack City I (A1)  | 0                  | 10                  |
| Attack City II (A2) | 5                  | 0                   |

Colonel Sotto has two "pure strategies," Attack City I and Attack City II; Colonel Blotto has two "pure strategies," Defend City I and Defend City II. Each Colonel weighs each pure strategy against his enemy's pure strategies. Examining his options, Colonel Sotto thinks, "Blotto might defend City I, since City I is worth more than City II. Why don't I attack City II?" Conversely, Colonel Blotto thinks, "Sotto might attack City II, since he thinks I will defend City I. Why don't I defend City II?" Thinking further into the game, Sotto thinks, "If Blotto is trying to read my mind, he might defend City II. Maybe I should attack City I after all." Each Colonel's attempt to read his enemy's mind degenerates into an infinite regress. However, a solution concept lurks in the Colonels' Gedanken experiments. Sotto attacks each city with positive probability, while Blotto defends each city with positive probability. Furthermore, since City I is worth twice as much as City II, the probability that Blotto defends City I should be twice as great as the probability that he defends City II. Conversely, since Sotto loses in a head-to-head confrontation with Blotto, the probability that Sotto attacks City II should be twice as great as the probability that he attacks City I. Both probabilities for each Colonel must add up to one. Therefore, the optimal strategy for Colonel Blotto is to defend City I with probability 2/3, and to defend City II with probability 1/3.
In the meantime, the optimal strategy for Colonel Sotto is to attack City I with probability 1/3, and to attack City II with probability 2/3. Strategies consisting of probability-weighted pure strategies are called mixed strategies. If Colonel Sotto plays the mixed strategy (Attack City I, Attack City II) = (1/3, 2/3), he expects to gain:

0 * 1/3 + 5 * 2/3 = 3 1/3 units if Blotto plays Defend City I,
10 * 1/3 + 0 * 2/3 = 3 1/3 units if Blotto plays Defend City II.

If Colonel Blotto plays the mixed strategy (Defend City I, Defend City II) = (2/3, 1/3), he expects to lose:

0 * 2/3 + 10 * 1/3 = 3 1/3 units if Sotto plays Attack City I,
5 * 2/3 + 0 * 1/3 = 3 1/3 units if Sotto plays Attack City II.

Colonel Sotto's strategy assures him of an expected gain of at least 3 1/3 units; Colonel Blotto's strategy assures him of an expected loss of not more than 3 1/3 units. The value of the game is 3 1/3 units.

Extensive Form.

The next diagram shows the extensive form, or game tree, for the Colonel Blotto Game. The blue circle represents Colonel Sotto's decision node, while the red circles represent Colonel Blotto's decision nodes. Colonel Blotto's decision nodes are contained in the yellow ellipse, because Blotto does not know which node he actually is in, since Blotto and Sotto choose their strategies simultaneously. The numbers at the green nodes are the pay-offs to (Sotto, Blotto) with the strategies chosen. The yellow ellipse is called the information set for Colonel Blotto, since Blotto just knows that he is at one or the other node in the ellipse. The game is symmetrical in the sense that one could start the game tree with Colonel Blotto's decision node instead.

"Doc" Holliday's Revenge

On October 26, 1881, the bad blood between the Earps, Clantons, and McLaurys came to a head at the OK Corral. Billy Clanton, Frank McLaury, and Tom McLaury were killed. Doc Holliday, Virgil Earp, and Morgan Earp were injured.
Miraculously, Wyatt Earp was unharmed, while the unarmed Isaac "Ike" Clanton survived by running away. Many people believed that Doc Holliday shotgunned an unarmed Tom McLaury in the back as he was attempting to flee the scene. Ike Clanton and his friends and associates, known as the "cowboys," swore to get their revenge on the Earps and Holliday. In the ensuing months, Morgan Earp was murdered and Virgil Earp seriously wounded in an ambush. A few days later, Wyatt Earp apparently shot and killed Frank Stillwell, a Clanton associate, and another man believed involved in the ambush. Over the next few years, many more of the "cowboys" were killed. Although close to death from tuberculosis, in 1887 Doc Holliday decides to look up Ike Clanton and to settle their differences once and for all.

On June 1, 1887, Doc Holliday and J.V. Brighton corner Ike Clanton near Springerville, Arizona. Doc tells J.V. to stay out of it for the time being. Ike and Doc each have a pistol and a shotgun. Ike and Doc, spurning their pistols in favour of their shotguns, press forward towards each other. At long range, Ike — with his cowboy background — is a better shot than Doc. At middle range, Doc — the seasoned gunfighter — out-guns Ike. Both desperadoes are deadly at close range. The probabilities that either rogue will kill the other with a blast from his single-shot shotgun appear in the following table:

| Kill Probability | Long Range | Middle Range | Close Range |
| Ike Clanton      | 0.5        | 0.6          | 1.0         |
| Doc Holliday     | 0.3        | 0.8          | 1.0         |

The pay-off to Doc is 10 units if Doc survives and Ike is killed, -10 units if Doc is killed and Ike survives. Doc and Ike's strategies are:

L: blast at long range
M: if the enemy has not shot, blast at middle range; otherwise, blast at close range
C: blast the enemy at close range

Normal Form.
Always the consummate gambler, Doc Holliday correctly computes his pay-off matrix as:

| Shoot-out Pay-off | IL | IM | IC |
| DL                | -2 | -4 | -4 |
| DM                | 0  | 2  | 6  |
| DC                | 0  | -2 | 0  |

(For example, if Doc blasts at long range while Ike holds his fire, Doc kills Ike with probability 0.3, gaining 10 units; with probability 0.7 he misses and is killed at close range, losing 10 units: an expected pay-off of 0.3 * 10 - 0.7 * 10 = -4.)

Furthermore, Doc realizes that strategy DM dominates strategies DL and DC, in that his expected pay-off is at least as high with DM no matter what Ike does. If Ike is rational enough — if Ike correctly computes his pay-off matrix as the negative of Doc's pay-off matrix — Ike will choose strategy IL, because IL yields Ike his best pay-off given that Doc chooses strategy DM. The pair of pure strategies (DM, IL) is the "solution" to the game. The value of the game is 0, the pay-off when Doc chooses strategy DM and Ike chooses IL. The pair of strategies (DM, IL) determines a saddle point of the pay-off matrix. Notice that the pay-off matrix entry, "0," is the largest entry in the IL column and the smallest entry in the DM row. A strategy pair whose pay-off matrix entry is a saddle point is called an equilibrium strategy pair.

The next diagram shows the extensive form, or game tree, for Doc Holliday's Game. The blue circle represents Doc Holliday's decision node, while the red circles represent Ike Clanton's decision nodes. Ike Clanton's decision nodes are contained in the yellow ellipse, because Ike does not know which node he actually is in, since Ike and Doc choose their strategies simultaneously. The numbers at the green nodes are the pay-offs to (Doc, Ike) with the strategies chosen. The yellow ellipse is called the information set for Ike Clanton, since Ike just knows that he is at one of the three nodes in the ellipse. The game is symmetrical in the sense that one could start the game tree with Ike Clanton's decision node instead.

Doc Holliday survived the show-down, apparently dying from tuberculosis on November 8, 1887, in Colorado. Not so funny, after all.
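Doc's matrix and its saddle point can be reproduced from the kill probabilities. The timing rules coded below (shots at the same range are simultaneous; if the first shooter misses, the survivor closes in and kills for certain) are one reading of the story above, not a formula from the text:

```python
# Reconstruct Doc's expected pay-off matrix from the kill probabilities,
# then find the saddle point.
ike = {"L": 0.5, "M": 0.6, "C": 1.0}   # Ike's kill probability by range
doc = {"L": 0.3, "M": 0.8, "C": 1.0}   # Doc's kill probability by range
order = {"L": 0, "M": 1, "C": 2}       # earlier range = fires first

def payoff(d, i):
    """Expected pay-off to Doc when Doc plays d and Ike plays i."""
    if order[d] == order[i]:           # simultaneous volleys at range d
        return round(10 * doc[d] * (1 - ike[i]) - 10 * ike[i] * (1 - doc[d]), 9)
    if order[d] < order[i]:            # Doc fires first; a miss is fatal
        return round(10 * doc[d] - 10 * (1 - doc[d]), 9)
    return round(-10 * ike[i] + 10 * (1 - ike[i]), 9)  # Ike fires first

A = {(d, i): payoff(d, i) for d in "LMC" for i in "LMC"}

# A saddle point is an entry that is the minimum of its row and the
# maximum of its column.
saddle = [(d, i) for (d, i), v in A.items()
          if v == min(A[(d, j)] for j in "LMC")
          and v == max(A[(k, i)] for k in "LMC")]
print(saddle, A[("M", "L")])  # prints: [('M', 'L')] 0.0
```

Under these assumptions the computed matrix agrees with the dominance and saddle-point analysis, with the long-range row working out to (-2, -4, -4) and the equilibrium pair (DM, IL) worth 0.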
Graphical Solution of a 2 x n or m x 2 Game

A two-person zero-sum game whose pay-off matrix has dimensions either 2 x n or m x 2 can be solved graphically. For example, consider the game with the 2 by 4 pay-off matrix:

| Pay-Off Matrix | II1 | II2 | II3 | II4 |
| I1             | 4   | 3   | 1   | 3   |
| I2             | 0   | 5   | 2   | 1   |

Since this matrix does not have a saddle point, one must resort to mixed strategies for a solution. If Player I plays strategy I1 with probability ß, and strategy I2 with probability (1 - ß), his expected returns, R, against each of Player II's strategies are:

II1: R1 = 4 * ß + 0 * (1 - ß) = 4 * ß
II2: R2 = 3 * ß + 5 * (1 - ß) = 5 - 2 * ß
II3: R3 = 1 * ß + 2 * (1 - ß) = 2 - ß
II4: R4 = 3 * ß + 1 * (1 - ß) = 1 + 2 * ß

Player I's expected return against a particular strategy of Player II is a function of ß, a straight line. The next diagram graphs these straight-line functions as ß varies from 0 to 1. The probability ß is graphed along the x-axis, while Player I's return Ri against Player II's ith strategy is graphed along the y-axis.

Player I's "Optimal" Strategy. If Player I sets ß = 0.4, Player I can expect at least a return R = 1.6 against any strategy or combination of strategies that Player II chooses to play. The (ß, R) = (0.4, 1.6) point occurs at the intersection of the R1 and R3 lines.

Player II's "Optimal" Strategy. Since Player I's mixed strategy (0.4, 0.6) assures him of an expected gain of 1.6 units, Player II wants a strategy that will limit his expected loss to 1.6 units. Moreover, Player II can restrict his choice of strategies to II1 and II3, because the diagram shows that strategies II2 and II4 expose him to losses greater than 1.6 units. If Player II plays strategy II1 with probability µ, and strategy II3 with probability (1 - µ), his expected losses L against Player I's mixed strategies are:

L = R1 * µ + R3 * (1 - µ),
L = (4 * ß) * µ + (2 - ß) * (1 - µ), or
L = (5 * µ - 1) * ß + (2 - 2 * µ).

If Player II sets µ = 1/5 = 0.2, his expected losses are: L = (5 * (1/5) - 1) * ß + (2 - 2 * (1/5)) = 8/5 = 1.6
Since Player II can limit his losses independently of Player I's choice of mixed strategy, Player II's mixed strategy (0.2, 0, 0.8, 0) guarantees him an expected loss of no greater than L = 1.6.

Game Solution. Player I plays the mixed strategy (0.4, 0.6) and Player II plays the mixed strategy (0.2, 0, 0.8, 0). The value of the game is 1.6 units.

The Min-Max Theorem for Matrix Games

A matrix game can be described by its pay-off matrix, an m by n matrix A = [a(i,j)], the set X of all probability distributions x on the integers {1, 2, ..., m}, and the set Y of all probability distributions y on the integers {1, 2, ..., n}. Thus:

x = (x1, x2, ..., xm), with x1 + x2 + ... + xm = 1,
y = (y1, y2, ..., yn), with y1 + y2 + ... + yn = 1,

and

A = | a(1,1)  a(1,2)  ...  a(1,n) |
    | a(2,1)  a(2,2)  ...  a(2,n) |
    |   ...     ...   ...    ...  |
    | a(m,1)  a(m,2)  ...  a(m,n) |

If Player I selects a probability distribution x and Player II selects a probability distribution y, the pay-off to Player I is P(x, y) while the pay-off to Player II is -P(x, y), where:

P(x, y) = x^T * A * y,

i.e., the expected pay-off equals the product of the row vector x^T and the matrix A (yielding another row vector), and the column vector y.

Lower Value: Max-Min Game. Against any strategy x of Player I, Player II will choose the strategy y0 that minimizes Player I's pay-off:

P(x, y0) = min over y in Y of P(x, y).

Therefore, Player I should choose x0 in X so that:

vI = min over y in Y of P(x0, y) = max over x in X [ min over y in Y of P(x, y) ].

The number vI is called the lower value, or the maxmin value, of the game.

Upper Value: Min-Max Game. Against any strategy y of Player II, Player I will choose the strategy x0 that maximizes his pay-off:

P(x0, y) = max over x in X of P(x, y).

Therefore, Player II should choose y0 in Y so that:

vII = max over x in X of P(x, y0) = min over y in Y [ max over x in X of P(x, y) ].

The number vII is called the upper value, or the minmax value, of the game.

Min-Max Theorem for Matrix Games. If x0 and y0 are the strategies chosen as indicated above, then:

vI = vII = v = the value of the game = P(x0, y0).

If x is any mixed strategy in X, and y is any mixed strategy in Y, then:

P(x, y0) <= P(x0, y0) <= P(x0, y).

Thus the strategy pair (x0, y0) is the solution to the game. The two strategies provide a saddle point in mixed strategies in the Cartesian product domain X x Y.

Linear Programming Solution for Matrix Games

One can conveniently illustrate the method used to turn a matrix game into a linear programming problem with an example (Hadley, 427). Suppose that Player I has 3 strategies, Player II has 5 strategies, and the pay-off matrix A (rows for Player I, columns for Player II) is given by:

A = | 3  5  0  9  6 |
    | 2  6  8  1  2 |
    | 1  7  4  9  3 |

If Player I plays strategy 2, his pay-off depends on which strategy Player II has chosen to play. If Player II knows Player I is playing strategy 2, she should play her 4th strategy, because then she only pays him 1 unit. But if Player I plays strategy 1 instead, she must pay him 9 units. I assume that players know the pay-off matrix A, but that players "announce" their strategies simultaneously. Player I wants to maximize the pay-off; Player II wants to minimize the pay-off. The Min-Max Theorem suggests each player determine the choice of strategies on the basis of a probability distribution over the player's list of strategies. That is, Player I chooses row 1, 2, or 3 with probabilities x1, x2, and x3. Player II chooses the columns with probabilities y1, y2, y3, y4, and y5. Note that the x's and y's must be non-negative and each set must add to one. To turn this into a linear programming problem, write:

Primal Linear Program
Maximize P = x4, with x nonnegative, subject to:

3*x1 + 2*x2 + 1*x3 - 1*x4 >= 0
5*x1 + 6*x2 + 7*x3 - 1*x4 >= 0
0*x1 + 8*x2 + 4*x3 - 1*x4 >= 0
9*x1 + 1*x2 + 9*x3 - 1*x4 >= 0
6*x1 + 2*x2 + 3*x3 - 1*x4 >= 0
1*x1 + 1*x2 + 1*x3 + 0*x4  = 1

where the extra variable, x4, is the expected pay-off (the value) of the game. The equation, Maximize P = x4, with x4 nonnegative, is the objective function. Player I wants to maximize the objective function, the expected pay-off. Then there is a constraint for each column of A. Why? If Player II plays her 1st strategy, the expected pay-off to Player I is: 3*x1 + 2*x2 + 1*x3. If Player I chooses his probability distribution optimally, his expected pay-off should be greater than or equal to x4, the expected value of the game. This argument applies to each of A's columns. The dual linear program to the primal linear program is:

Dual Linear Program
Minimize D = y6, with y nonnegative, subject to:

3*y1 + 5*y2 + 0*y3 + 9*y4 + 6*y5 - 1*y6 <= 0
2*y1 + 6*y2 + 8*y3 + 1*y4 + 2*y5 - 1*y6 <= 0
1*y1 + 7*y2 + 4*y3 + 9*y4 + 3*y5 - 1*y6 <= 0
1*y1 + 1*y2 + 1*y3 + 1*y4 + 1*y5 + 0*y6  = 1

Minimize D = y6, with y6 nonnegative, is the objective function. Player II wants to minimize the objective function, the expected pay-off. Then there is a constraint for each row of A. Why? If Player I plays his 1st strategy, the expected pay-off to Player II is: 3*y1 + 5*y2 + 0*y3 + 9*y4 + 6*y5. If Player II chooses her probability distribution optimally, her expected pay-off should be less than or equal to y6, the expected value of the game. This argument applies to each of A's rows.

If one solves this linear programming problem using one of the available methods, the optimal programs:

x^T = (x1, x2, x3) = (0.667, 0.333, 0) and y^T = (y1, y2, y3, y4, y5) = (0.889, 0, 0.111, 0, 0),

are the Max-Min and Min-Max solutions, while x4 = y6 = 2.667 is the value of the matrix game.

Value of the Game - Nonnegative. To ensure that the value of an arbitrary matrix game is nonnegative when using linear programming to find its solution, add the absolute value of the most negative entry of the pay-off matrix to each entry of the pay-off matrix. After solving the modified game, subtract this constant from its value to get the value of the original matrix game.
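The optimal programs above can be verified without running a simplex solver: a pair of mixed strategies is optimal exactly when Player I's guaranteed floor, the minimum entry of x^T * A, equals Player II's guaranteed ceiling, the maximum entry of A * y. A sketch of this check in exact arithmetic:

```python
from fractions import Fraction as F

# Pay-off matrix of the example above.
A = [[3, 5, 0, 9, 6],
     [2, 6, 8, 1, 2],
     [1, 7, 4, 9, 3]]

x = [F(2, 3), F(1, 3), F(0)]              # Player I's optimal program
y = [F(8, 9), F(0), F(1, 9), F(0), F(0)]  # Player II's optimal program

# Player I's expected pay-off against each of Player II's pure strategies.
xTA = [sum(x[i] * A[i][j] for i in range(3)) for j in range(5)]
# Player I's expected pay-off from each of his pure strategies against y.
Ay = [sum(A[i][j] * y[j] for j in range(5)) for i in range(3)]

# x guarantees Player I at least min(xTA); y concedes at most max(Ay).
# The two bounds coincide at 8/3 = 2.667, so both programs are optimal.
print(min(xTA), max(Ay))  # prints: 8/3 8/3
```

The same check applies to any claimed solution of a matrix game: if the two bounds differ, at least one of the two strategies is not optimal.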
Properties of Optimal Mixed Strategies. Suppose Player I plays his optimal strategy, x^T = (x1, x2, x3) = (0.667, 0.333, 0), and Player II plays one of her pure strategies. The vector x^T * A shows the amount Player II can expect to pay to Player I for each pure strategy she plays:

x^T * A = (2/3, 1/3, 0) * A = (2.6667, 5.3333, 2.6667, 6.3333, 4.6667).

Notice that only pure strategies 1 and 3 ensure Player II of paying the minimum of 2.6667 units, when Player I plays his optimal mixed strategy. Likewise, the vector A * y shows the amount Player I can expect to receive from Player II for each pure strategy he plays against her optimal strategy:

A * y = (2.6667, 2.6667, 1.3333)^T.

Notice that only pure strategies 1 and 2 ensure Player I of receiving the maximum of 2.6667 units, when Player II plays her optimal mixed strategy.

Worthwhile Strategies. Pure strategies that appear in an optimal strategy with positive probability are called worthwhile strategies (Thomas, 37).

Set of Optimal Mixed Strategies. For each player in a two-person zero-sum matrix game, the set of optimal mixed strategies is a closed, convex set (Karlin, 36). For each player, the optimal strategy calculated from the equivalent linear programming problem is an extreme point of the player's set of optimal strategies. This follows because in a linear programming problem, an optimal program obtains at a vertex of the feasible set, and the feasible set must contain the set of optimal strategies.

Pure Strategy as a Mixed Strategy. A pure strategy is a mixed strategy with the component corresponding to the probability of playing the pure strategy equal to 1.

The Snow-Shapley Theorem for Matrix Games.
Consider again the two-person zero-sum game with pay-off matrix:

A = | 3  5  0  9  6 |
    | 2  6  8  1  2 |
    | 1  7  4  9  3 |

This game has the optimal mixed strategies x = (2/3, 1/3, 0) and y = (8/9, 0, 1/9, 0, 0). If one strikes out the rows and columns of A corresponding to strategies played with zero probability, one gets the reduced matrix M corresponding to the reduced matrix game with pay-off matrix M:

M = | 3  0 |
    | 2  8 |

This procedure of striking out "worthless strategies" produces a matrix M whose inverse M^(-1) exists:

M^(-1) = (1/24) * |  8  0 |
                  | -2  3 |

Furthermore, letting e^T = (1, 1), i.e. the vector whose components are all one, the value of the game, v, is given by:

v = 1 / (e^T * M^(-1) * e) = 1 / (9/24) = 24/9 = 2.6667.

Moreover, the optimal strategies for Player I and Player II in the reduced game are:

x0 = v * e^T * M^(-1) = (24/9) * (6/24, 3/24) = (0.6667, 0.3333), and
y0 = v * M^(-1) * e = (24/9) * (8/24, 1/24) = (0.8889, 0.1111).

Finally, the value of the original game is also v, and the optimal strategies for Player I and Player II in the original game are obtained by setting to zero the probability of playing a worthless strategy.

Snow-Shapley Theorem. Consider any two-person zero-sum matrix game with pay-off matrix A. If x is an extreme point of X0, Player I's set of optimal strategies, and if y is an extreme point of Y0, Player II's set of optimal strategies, there exists a square, nonsingular submatrix M of A such that:

v = 1 / (e^T * M^(-1) * e),
x0 = v * e^T * M^(-1),
y0 = v * M^(-1) * e,

where e is the vector of the same dimension as M all of whose components equal 1, v is the value of the game, and x0 and y0 correspond to the non-zero components of x and y, respectively.

References

- Black, Max. Perplexities. Ithaca: Cornell, 1990.
- Brams, Steven J. Biblical Games: A Strategic Analysis of Stories in the Old Testament. Cambridge, Mass: MIT Press, 1980.
- Brams, Steven J. Theory of Moves. Cambridge: Cambridge UP, 1994.
- Dresher, Melvin. Games of Strategy: Theory and Applications. Englewood Cliffs: Prentice-Hall, 1961.
- Hadley, G. Linear Programming. Reading, Mass.: Addison-Wesley, 1962.
- Howard, Nigel. Paradoxes of Rationality: Theory of Metagames and Political Behavior. Cambridge: MIT, 1971.
- Intriligator, Michael D. Mathematical Optimization and Economic Theory . Englewood Cliffs: Prentice-Hall, 1971.
- Karlin, Samuel. Mathematical Methods and Theory of Games, Programming, and Economics. Reading, Mass: Addison-Wesley, 1959.
- Nash, John F. "Noncooperative Games." Annals of Mathematics. 54 (1951): 286-295.
- Oxford Dictionary: The Concise Oxford Dictionary of Current English. 5th ed. Ed. H.W. Fowler and F.G. Fowler. Oxford: Oxford UP, 1964.
- Owen, Guillermo. Game Theory, 2nd Edition. New York: Academic, 1982.
- Rapoport, Anatol. Two-Person Game Theory: The Essential Ideas . Ann Arbor: U Michigan, 1966.
- Restrepo, Rodrigo A. Theory of Games and Programming: Mathematics Lecture Notes . Vancouver: University of B.C., 1967.
- Thomas, L. C. Games, Theory and Applications . New York: John Wiley, 1984.
- Wiens, Elmer G. Reduction of Games Using Dominant Strategies. Vancouver: UBC M.Sc. Thesis, 1969.
In this survey of areas of application of game theory some problems which have been completely formulated, solved, and are of immediate practical value have been discussed in Sections 4 and 5, but for those interested in the deeper and more important long-range problems which confront the researcher in the behavioral sciences, Section 6 indicates where some of the work of yesterday has taken ...
Institute for Operations Research and the Management Sciences. 5521 Research Park Drive, Suite 200. Catonsville, MD 21228 USA. phone 1 443-757-3500. phone 2 800-4INFORMS (800-446-3676) fax 443-757-3515. email [email protected]. Get the Latest Updates. Discover INFORMS.
Game theory is divided into two main branches. The first is cooperative game theory, which assumes that the players can communicate, form coalitions, and sign binding agreements. Cooperative game theory has been used, for example, to analyze voting behavior and other issues in political science and related fields. The second is non-cooperative game theory, in which the players act independently; this introduction deals exclusively with non-cooperative games.
Key solution concepts include pure strategies, mixed strategies, saddle points, and the arithmetic method for solving 2x2 games. Typical problems involve finding each player's optimal strategy and determining whether a game is fair or strictly determinable.
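For a 2x2 game without a saddle point, the arithmetic (oddments) method gives each player's optimal mixed strategy directly: each row's probability is proportional to the absolute difference of the pay-offs in the other row, and likewise for the columns. A minimal sketch, using an illustrative pay-off matrix of my own choosing:

```python
# Sketch: solving a 2x2 zero-sum game with no saddle point by the
# arithmetic (oddments) method.  The pay-off matrix is an
# illustrative example, not taken from the text.

a, b = 2, -1   # pay-offs to the row player: [[a, b],
c, d = -1, 1   #                              [c, d]]

# Oddments: each row's (column's) probability is proportional to the
# absolute pay-off difference in the OTHER row (column).
row_oddments = (abs(c - d), abs(a - b))
col_oddments = (abs(b - d), abs(a - c))

p = row_oddments[0] / sum(row_oddments)   # prob. row player picks row 1
q = col_oddments[0] / sum(col_oddments)   # prob. column player picks col 1

# Value of the game: expected pay-off when both sides use their
# optimal mixtures, computed from the row player's side.
value = p * (q * a + (1 - q) * b) + (1 - p) * (q * c + (1 - q) * d)

print(p, q, value)   # 0.4 0.4 0.2
```

Here both players mix 2/5 : 3/5 and the value of the game is 1/5, so the game favors the row player and is not fair; a fair game would have value zero.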
A game in operations research rests on the following assumptions:
1. The number of competitors is finite.
2. The players act rationally and intelligently.
3. Each player has available a finite set of possible courses of action.
4. The players attempt to maximize gains and minimize losses.
5. The outcome of the game is affected by the choices made by all the players.