Collective Decision Making with Elections

What is this?

In the introduction to his Game Theory and Algorithms course, Dave Pritchard, who is currently a Post-Doc in our group at EPFL, mentioned both Hotelling's election game and Collective Decision Making games.

Hotelling's game is often cited in connection with the observation that political parties tend to seem indistinguishable from each other. The story goes as follows. Let's say we put all voters in a country on a line indicating their political alignment from left to right, and let's assume that they vote for whichever candidate is closest to them on that line. Then if there are only two candidates, and each candidate's goal is to be elected (that is, get more votes than their opponent), both candidates will move to where the median of the voter distribution lies and end up in a tie there. The case with three or more candidates is more complicated.
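To make the two-candidate argument a bit more concrete, here is a tiny illustrative sketch (not part of the simulation; the voter positions are made up): voters on a 1-D line vote for the closer candidate, so whichever candidate sits closer to the median wins the comparison, and best-response moves therefore push both candidates toward the median.

```javascript
// Count the votes a candidate at position a gets against an opponent at b,
// for voters on a 1-D political line (exact ties are ignored for simplicity).
function votesFor(a, b, voters) {
    var count = 0;
    for (var i = 0; i < voters.length; i++) {
        if (Math.abs(voters[i] - a) < Math.abs(voters[i] - b))
            count++;
    }
    return count;
}

// Illustrative voter positions with median 0.5: moving from 0.2 towards the
// median strictly increases the vote count against an opponent at 0.8, so
// repeated best-response steps drive both candidates to the median.
var voters = [0.1, 0.3, 0.45, 0.5, 0.55, 0.7, 0.9];
console.log(votesFor(0.2, 0.8, voters));  // 3
console.log(votesFor(0.45, 0.8, voters)); // 5
```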

In a Collective Decision Making game, there is a group of people who have to set some policy, which we can again think of as a point in some space of possible policies, let's say an unbounded line. Each member of the group has a true desired policy, and each member states a desired policy (which may be different from the true desire) in discussion. By some mechanism, the stated desires are aggregated to form the actual outcome policy, and the goal of every group member is to get the actual outcome policy to be as close as possible to their true desire. If the actual outcome is determined simply by averaging all stated desires, then the group members have an incentive to exaggerate their desire, and if the space of policies is unbounded, the group members' stated desires are expected to move further and further apart: the game has no Nash equilibrium. This is also reminiscent of techniques that one commonly sees in the political process.
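The exaggeration incentive under plain averaging can be sketched in a few lines (illustrative only; the function name is my own): if the outcome is the mean of all stated desires on an unbounded line, a player's best response is to state whatever value pulls the mean exactly onto their true desire, and iterating these best responses makes the statements drift apart without bound whenever the true desires differ.

```javascript
// Best response of one player when the outcome is the plain average of all
// stated desires on an unbounded line: choose the statement x that solves
// (sumOfOthers + x) / n = trueDesire, i.e. x = n * trueDesire - sumOfOthers.
function bestResponseUnderAveraging(trueDesires, stated, player) {
    var n = stated.length;
    var sumOfOthers = 0;
    for (var i = 0; i < n; i++) {
        if (i !== player)
            sumOfOthers += stated[i];
    }
    return n * trueDesires[player] - sumOfOthers;
}

// Two players with true desires -1 and +1: each best response overshoots the
// other's statement, so the stated desires diverge step by step.
var trueDesires = [-1, 1];
var stated = [-1, 1];
for (var step = 0; step < 3; step++) {
    stated[0] = bestResponseUnderAveraging(trueDesires, stated, 0);
    stated[1] = bestResponseUnderAveraging(trueDesires, stated, 1);
}
console.log(stated); // [-11, 13]: the statements keep drifting further apart
```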

What you see above is a combination of these two games. There is a space of possible policies, represented by a square - you could think of it as the Political Compass. So every point in the square represents a possible policy. There is a distribution of voters' preferences across this policy space - I used a (two-dimensional) normal distribution that is restricted to the square. Finally, there are players (the "politicians"), who each have a private true desire. Their goal is to get the actual outcome, that is, the implemented policy, as close as possible to their true desire. To achieve this, they can state a desired policy. Voters vote for the politician whose stated desire is closest to them, and then the actual outcome is determined by weighting the politicians' stated desired policies according to how many votes they got.
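To pin down the mechanism, here is a rough sketch of how such an outcome could be computed (the data layout and identifiers are my own assumptions, not taken from the actual code): every voter sample is assigned to the politician whose stated desire is nearest, and the outcome is the vote-weighted average of the stated desires.

```javascript
// stated: array of {x, y} stated desires, one per politician.
// voters: array of {x, y, weight} samples of the voter density.
function computeOutcome(stated, voters) {
    var votes = [];
    for (var p = 0; p < stated.length; p++)
        votes.push(0);

    // Each voter sample votes for the nearest stated desire.
    for (var v = 0; v < voters.length; v++) {
        var best = 0, bestDist = Infinity;
        for (var q = 0; q < stated.length; q++) {
            var dx = voters[v].x - stated[q].x;
            var dy = voters[v].y - stated[q].y;
            var d = dx * dx + dy * dy;
            if (d < bestDist) {
                bestDist = d;
                best = q;
            }
        }
        votes[best] += voters[v].weight;
    }

    // The outcome is the average of stated desires, weighted by vote share.
    var total = 0, outX = 0, outY = 0;
    for (var r = 0; r < stated.length; r++) {
        total += votes[r];
        outX += votes[r] * stated[r].x;
        outY += votes[r] * stated[r].y;
    }
    return { x: outX / total, y: outY / total };
}
```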

The simulation

In the simulation, there are three players that start out with true and stated desires being equal. Voters are colored by which player they vote for, and the intensity of the colors represents the density of the voters, which you can think of as the number of voters per pixel. The actual outcome is represented by a small black square, and the utility of each player is minus the squared Euclidean distance of the actual outcome to the player's true desired policy.
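Written out as code, that utility looks roughly like this (the field names are my own assumption):

```javascript
// Utility of a player: minus the squared Euclidean distance between the
// actual outcome and the player's true desired policy.
function utility(player, outcome) {
    var dx = outcome.x - player.trueDesire.x;
    var dy = outcome.y - player.trueDesire.y;
    return -(dx * dx + dy * dy);
}
```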

When you click on the Step button, each player will evaluate whether they can improve their utility (that is, move the actual outcome closer to their true desire) by moving their stated desire to one of the adjacent pixels. With this type of local search, the stated desires will eventually converge close to a stable situation after a sufficiently large number of steps.
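One such step could look roughly as follows, reusing the computeOutcome and utility sketches from above (again an illustrative sketch with made-up identifiers, not the actual code): the player tries each of the eight adjacent pixels as their stated desire and keeps the move that improves their utility the most.

```javascript
// Evaluate the utility a player would get if they changed their stated desire
// to the given candidate position, keeping all other statements fixed.
function utilityWithStated(players, me, candidate, voters) {
    var stated = players.map(function (p) { return p.stated; });
    stated[me] = candidate;
    return utility(players[me], computeOutcome(stated, voters));
}

// One local-search step for a single player: try the eight adjacent pixels
// and move to the best one if it improves on the current utility.
function localStep(players, me, voters) {
    var current = players[me].stated;
    var best = current;
    var bestUtility = utilityWithStated(players, me, current, voters);
    for (var dx = -1; dx <= 1; dx++) {
        for (var dy = -1; dy <= 1; dy++) {
            if (dx === 0 && dy === 0)
                continue;
            var candidate = { x: current.x + dx, y: current.y + dy };
            var u = utilityWithStated(players, me, candidate, voters);
            if (u > bestUtility) {
                bestUtility = u;
                best = candidate;
            }
        }
    }
    players[me].stated = best;
}
```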

The simulation typically does not truly converge, but instead enters a short cycle in which all players' stated desires move around some attractor, because I have limited players to integral policies. The discretization prevents them from getting even closer to what the stable point would be in the idealized game with real numbers.

Note that a point that is stable for this type of local search is not necessarily a Nash equilibrium (though the converse is true: every Nash equilibrium is also stable for local search). You can select one of the radio buttons next to the True and Stated entries in the table and then click on the playing field to manually move a player's desire. In this way, it is possible to reach configurations in which no player can improve their utility (that is, get the outcome closer to their true desire) with local steps, but in which a player would be able to improve by performing a huge step, say to the opposite side of the square. The fact that this is possible is an artifact of the choice of local search. I chose local search because it runs faster and seems like a reasonable model for how people behave in the real world.

Does this all have some higher meaning?

Try the simulation, and you will see that some politicians move their stated desire closer to the center to increase their share of the vote - like in Hotelling's game - while others exaggerate their position to pull the outcome in the direction they want - like in the Collective Decision Making game with averaging. It is plausible and tempting to conclude that this explains why, in the real world, we also see a mixture of politicians fighting over the center and politicians wildly exaggerating their positions. It is a nice thought experiment, and it certainly makes for a nice story.

However, I would advise against taking it too seriously. First of all, the modeling of the game makes a lot of assumptions, many of which are likely implausible.

Second, the whole idea that game theory tells us anything about the real world is problematic. There are definitely important insights that can be learned from it, such as when it comes to the design of auctions. However, the very simple and common introduction to the field, the prisoner's dilemma, teaches us that even basic results from game theory may, though mathematically and philosophically interesting, be totally irrelevant to the real world.

In the prisoner's dilemma, there is only one Nash equilibrium: non-cooperation by both prisoners. Unfortunately, this is interpreted by many to say that non-cooperation is the natural outcome in the real world, but this interpretation is highly flawed. An intelligent player in the game will certainly have to think through some very circular lines of thought. From a state of cooperation, she might argue that she or her opponent might defect, which ultimately leads to a state of non-cooperation from both sides. Then again, both players can clearly see that the state of cooperation would be an improvement, and the cycle starts again. Any attempt to break the cycle ultimately does so either by fiat or by making the problem statement more precise, which is an act of modeling, which includes making assumptions. Therefore, it seems to me that the fact that non-cooperation is deemed a natural outcome of the prisoner's dilemma by game theorists is mostly a statement about the kind of people who study game theory. (This may be a bit harsh; perhaps it is mostly a statement about the kind of people who popularize game theory.)

How this circular reasoning tends to be broken when actual humans are involved in a single-instance run of the game is a question that mathematics cannot answer. That is what psychology and related disciplines are for - and, of course, to get the true out-of-the-box-thinking correct answer: talk to your lawyer.

Implementation notes

This is my first use of JavaScript beyond Hello World examples, and it was certainly an interesting learning experience. There may be things that I've missed simply because I am new to this world of "web programming", and comments in that direction are welcome - see below. It Works For Me(tm) in Firefox 3.6 and Chrome 9, but I have taken no special steps to ensure browser compatibility, so YMMV.

Generally, my impression was that JavaScript is actually a very neat language, but the state of JavaScript engines leaves much to be desired. Obviously, JavaScript was not originally intended for computation-heavy usage, but I was still rather shocked when my first naive implementation took 5 seconds on Firefox 3.6 to run one step of the simulation. Chrome was faster roughly by a factor of 5, but that is still horrible.

The most time-consuming part of the simulation is the function tally_votes(), which computes the number of votes every player gets. My very first implementation simply looped over the entire field and determined for each pixel which player was closest. However, the bad performance cited above was visible even after an algorithmic change that subdivides the field into 8x8 cells. The current algorithm first determines which player is closest to each of a cell's corners. If the same player is closest to all corners, then we can simply assign all voters in the cell to that player, without further computation, because the Voronoi cells of a Voronoi diagram under the Euclidean norm are convex. Otherwise, we process all pixels inside the cell individually. This needs to be done only for a few of the cells, so this is an algorithmic improvement - but the algorithm was horribly slow despite it.
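Here is a rough, unoptimized sketch of that cell-based shortcut (the real tally_votes() is structured differently and heavily inlined; the density function and identifiers are assumptions of mine):

```javascript
// players: array of {stated: {x, y}}; density(x, y) gives the voter density
// at a pixel. Returns the number of votes per player.
// (Assumes width and height are multiples of 8 to keep the sketch short.)
function tallyVotesSketch(players, density, width, height) {
    var votes = [];
    for (var p = 0; p < players.length; p++)
        votes.push(0);

    function closest(x, y) {
        var best = 0, bestDist = Infinity;
        for (var q = 0; q < players.length; q++) {
            var dx = x - players[q].stated.x;
            var dy = y - players[q].stated.y;
            var d = dx * dx + dy * dy;
            if (d < bestDist) {
                bestDist = d;
                best = q;
            }
        }
        return best;
    }

    for (var cy = 0; cy < height; cy += 8) {
        for (var cx = 0; cx < width; cx += 8) {
            // If the same player is closest at all four corners, the whole
            // cell lies inside that player's (convex) Voronoi cell.
            var corner = closest(cx, cy);
            var uniform =
                corner === closest(cx + 7, cy) &&
                corner === closest(cx, cy + 7) &&
                corner === closest(cx + 7, cy + 7);
            for (var y = cy; y < cy + 8; y++) {
                for (var x = cx; x < cx + 8; x++) {
                    var winner = uniform ? corner : closest(x, y);
                    votes[winner] += density(x, y);
                }
            }
        }
    }
    return votes;
}
```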

The culprit was function calls, which seem to be very inefficient, especially when it comes to closures. That is really a shame, because both function calls and closures are programming practices that should be encouraged in many situations. In any case, inlining all computations manually and rearranging the loops to reduce object attribute accesses reduced the time needed to compute one step of the simulation in Firefox to around 60ms on my computer. That's a pretty hefty improvement. It's also interesting to note that Chrome is still about twice as fast, so both JavaScript implementations were helped by the inlining, though at different scales.

What is most depressing, however, is that even quite stupid micro-optimizations, such as moving the computation of commonly computed values out of inner loops, had a significant further impact on the performance (around 10%) for both Firefox and Chrome. This is the kind of transformation that modern compilers really should be doing automatically, especially once it has been established that a program fragment is a hot spot. It's pretty likely that further micro-optimizations can improve the performance even more. For example, multiplications can be eliminated from the inner loops, but I didn't bother testing this (I could pretend that this was in the interest of keeping the code more understandable, but laziness definitely played a significant role as well).
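For illustration, this is the kind of transformation meant by moving commonly computed values out of inner loops (a made-up fragment, not taken from the actual code):

```javascript
// Sum of density-weighted squared distances from a player to every pixel:
// dy * dy depends only on the row, so it is hoisted out of the inner loop by
// hand instead of relying on the JavaScript engine to do it.
function sumWeightedSquaredDistances(player, density, width, height) {
    var px = player.stated.x;   // hoisted attribute accesses
    var py = player.stated.y;
    var total = 0;
    for (var y = 0; y < height; y++) {
        var dy = y - py;
        var dy2 = dy * dy;      // hoisted: does not change within the row
        for (var x = 0; x < width; x++) {
            var dx = x - px;
            total += density(x, y) * (dx * dx + dy2);
        }
    }
    return total;
}
```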

Finally, the implementation always computes distances to all players, which is easy to implement but does not scale too well with the number of players. An obvious algorithmic alternative would be to compute the Voronoi diagram explicitly first. Then computing the number of voters that vote for a player, which is conceptually computing an integral over the voter density, can be done by rasterization of the Voronoi cells.

Fine print

This site was created by Nicolai Hähnle in March 2011 and released under CC-BY-SA (see below) in the hope that it may be interesting and provide a non-typical example of JavaScript usage. Feedback is welcome, and should be sent to nhaehnle ta gmail dot com. The enclosed JavaScript code within this HTML file is released to the public domain.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.