or=Wikimedia y=2007 r=20071211 15:58 UTC
In psychology and cognitive science, confirmation bias is the tendency to search for or interpret new information in a way that confirms one's preconceptions, while avoiding information and interpretations that contradict prior beliefs.
or=Wikimedia y=2007 r=20071211 15:58 UTC
In attribution theory, the fundamental attribution error (also known as correspondence bias or overattribution effect) is the tendency for people to over-emphasize dispositional, or personality-based, explanations for behaviors observed in others while under-emphasizing situational explanations. In other words, people have an unjustified tendency to assume that a person's actions depend on what "kind" of person that person is rather than on the social and environmental forces influencing the person. Overattribution is less likely, perhaps even inverted, when people explain their own behavior; this discrepancy is called the actor-observer bias.
or=Wikimedia y=2007 r=20070912 20:37 UTC
In prospect theory, loss aversion refers to the tendency for people to strongly prefer avoiding losses over acquiring gains. Some studies suggest that losses are as much as twice as psychologically powerful as gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.
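The "twice as powerful" claim can be made concrete with the value function Tversky and Kahneman later estimated for prospect theory. The parameters below (alpha = 0.88 for diminishing sensitivity, lambda = 2.25 for loss aversion) are their 1992 estimates, used here purely as an illustrative sketch:

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or loss (x < 0) under prospect theory.

    Defaults are the Tversky-Kahneman (1992) parameter estimates; treat
    them as illustrative, not as values asserted by the text above.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $100 loss "hurts" more than twice as much as a $100 gain pleases:
print(round(value(100), 1))   # ~57.5
print(round(value(-100), 1))  # ~-129.5
```

With these parameters the loss/gain ratio for any symmetric stake is exactly lambda = 2.25, matching the "as much as twice" range reported above.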
or=Wikimedia y=2007 r=20070912 20:39 UTC
The endowment effect (or divestiture aversion) is a hypothesis that people value a good or service more once their property right to it has been established. In other words, people place a higher value on objects they own relative to objects they do not. In one experiment, people who had been given a coffee mug demanded a higher price to sell it than other people were willing to pay for an identical mug they did not yet own. The endowment effect was described as inconsistent with standard economic theory, which asserts that a person's willingness to pay (WTP) for a good should be equal to their willingness to accept (WTA) compensation to be deprived of the good. This hypothesis underlies consumer theory and indifference curves.
m=September y=2003 r=20070912
Frames are cognitive shortcuts that people use to help make sense of complex information. Frames help us to interpret the world around us and represent that world to others. They help us organize complex phenomena into coherent, understandable categories. When we label a phenomenon, we give meaning to some aspects of what is observed, while discounting other aspects because they appear irrelevant or counter-intuitive. Thus, frames provide meaning through selective simplification, by filtering people's perceptions and providing them with a field of vision for a problem.
m=October y=2003 r=20070912
Framing is the method by which we give situations meaning. Loss-oriented framing implies that people tend to frame any choice in terms of the possible risks involved. Studies have shown that, in new situations, people tend to consider and weigh the possible risks more strongly than the possible benefits. Work in rational-choice theory (a theory about how people make choices) illustrates that most people tend to value the consequences of losses more than the consequences of gains, and act in such a way as to minimize losses. Put another way, when risks are involved, they fear a bad outcome more than they expect or pursue a good one. This method of making choices is sometimes referred to as worst-case or loss-oriented framing.
or=The Beyond Intractability Project m=August y=2003 r=20070901
Game theory is a tool that can help explain and address social problems. Since games often reflect or share characteristics with real situations -- especially competitive or cooperative situations -- they can suggest strategies for dealing with such circumstances. Just as we may be able to understand the strategy of players in a particular game, we may also be able to predict how people, political factions, or states will behave in a given situation.
Simple mathematical models can provide insight into complex societal relationships, by showing that mutual cooperation can benefit even mutually distrustful participants.
or=Wikimedia y=2007 r=20070912 20:41 UTC
Will the two prisoners cooperate to minimize total loss of liberty or will one of them, trusting the other to cooperate, betray him so as to go free?
In game theory, the prisoner's dilemma (sometimes abbreviated PD) is a type of non-zero-sum game in which two players may each "cooperate" with or "defect" (i.e. betray) the other player. In this game, as in all game theory, the only concern of each individual player ("prisoner") is maximizing his/her own payoff, without any concern for the other player's payoff. In the classic form of this game, cooperating is strictly dominated by defecting, so that the only possible equilibrium for the game is for all players to defect. In simpler terms, no matter what the other player does, one player will always gain a greater payoff by playing defect. Since in any situation playing defect is more beneficial than cooperating, all rational players will play defect, all things being equal.
The unique equilibrium for this game is a Pareto-suboptimal solution—that is, rational choice leads the two players to both play defect even though each player's individual reward would be greater if they both played cooperate. In equilibrium, each prisoner chooses to defect even though both would be better off by cooperating, hence the dilemma.
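The dominance argument above can be checked directly on a payoff matrix. The numbers below (temptation T=5 > reward R=3 > punishment P=1 > sucker S=0) are the usual textbook choice, not values taken from the text:

```python
# Illustrative prisoner's dilemma payoffs; entries are
# (row player's payoff, column player's payoff).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def best_reply(opponent_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda me: PAYOFF[(me, opponent_move)][0])

# Defection is the best reply to either opponent move...
assert best_reply("C") == "D"
assert best_reply("D") == "D"
# ...yet the resulting equilibrium (1, 1) is worse for both than (3, 3).
print(PAYOFF[("D", "D")], PAYOFF[("C", "C")])  # (1, 1) (3, 3)
```

Because "D" is the best reply to both "C" and "D", defecting strictly dominates cooperating, while the all-defect outcome is Pareto-dominated by mutual cooperation, exactly the tension described above.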
In the iterated prisoner's dilemma the game is played repeatedly. Thus each player has an opportunity to "punish" the other player for previous non-cooperative play. Cooperation may then arise as an equilibrium outcome. The incentive to defect is overcome by the threat of punishment, leading to the possibility of a cooperative outcome. If the game is infinitely repeated, cooperation may be a Nash equilibrium, although both players defecting always remains an equilibrium as well.
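One concrete punishing strategy (an illustration; the text does not name one) is "grim trigger": cooperate until the opponent defects once, then defect forever. A minimal sketch of repeated play, using the standard illustrative payoffs T=5 > R=3 > P=1 > S=0:

```python
# Illustrative PD payoffs: (player A's payoff, player B's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def grim_trigger(opp_history):
    """Cooperate until the opponent's first defection, then always defect."""
    return "D" if "D" in opp_history else "C"

def always_defect(opp_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Run the iterated game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(grim_trigger, grim_trigger))   # (30, 30): cooperation sustained
print(play(always_defect, grim_trigger))  # (14, 9): one round of gain, then punished
```

Against a punishing opponent, unconditional defection earns 14 over ten rounds versus 30 for sustained mutual cooperation, which is how the threat of punishment makes cooperation viable.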
or=Wikimedia y=2007 r=20070912 20:42 UTC
Tit for tat is a highly effective strategy in game theory for the iterated prisoner's dilemma. It was first introduced by Anatol Rapoport in Robert Axelrod's two tournaments, held around 1980. The name comes from the English saying "tit for tat", meaning "equivalent retaliation": an agent using this strategy will initially cooperate, then respond in kind to the opponent's previous action. If the opponent previously was cooperative, the agent is cooperative. If not, the agent is not. This is similar to reciprocal altruism in biology.
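The strategy itself is two lines of logic: cooperate on the first move, then echo the opponent's last move. A self-contained sketch, again using the illustrative payoffs T=5 > R=3 > P=1 > S=0:

```python
# Illustrative PD payoffs: (player A's payoff, player B's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(strat_a, strat_b, rounds=6):
    """Iterated play; each strategy sees only the opponent's move history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (18, 18): mutual cooperation throughout
print(play(tit_for_tat, always_defect))  # (5, 10): exploited only on the first round
```

Tit for tat cooperates fully with itself, and a defector can exploit it only once before meeting retaliation every round thereafter, which is why it fared so well in Axelrod's tournaments.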