HUMANITARIAN LIBRARY ANDREY PLATONOV

Daniel Kahneman, Paul Slovic, Amos Tversky

Decision Making Under Uncertainty

The book presented to the reader contains the results of reflections and experimental studies by foreign scientists who are little known to the Russian-speaking reader.

It concerns the peculiarities of people's thinking and behavior when assessing and predicting uncertain events and quantities, such as the chances of winning or falling ill, electoral preferences, assessments of professional aptitude, the investigation of accidents, and much more.

As the book convincingly shows, when making decisions under uncertainty people usually make mistakes, sometimes quite significant ones, even if they have studied probability theory and statistics. These errors are subject to certain psychological laws that the researchers have identified and substantiated experimentally.

It must be said that not only the natural errors of human decisions under uncertainty are of interest; the very design of the experiments that reveal these natural errors is also highly interesting and practically useful.

It is safe to say that the translation of this book will be interesting and useful not only for domestic psychologists, physicians, politicians, and experts of various kinds, but also for many other people who are in one way or another concerned with assessing and forecasting essentially random social and personal events.

Scientific editor

Doctor of Psychology

Professor of St. Petersburg State University

G.V. Sukhodolsky,

Saint Petersburg, 2004

The approach to decision-making presented in this book is based on three lines of research that developed in the 1950s and 1960s: the comparison of clinical and statistical prediction pioneered by Paul Meehl; the study of subjective probability in the Bayesian paradigm, introduced into psychology by Ward Edwards; and the study of heuristics and reasoning strategies presented by Herbert Simon and Jerome Bruner.

Our collection also includes contemporary theory at the interface of decision making with another branch of psychological research: the study of causal attribution and everyday psychological interpretation, pioneered by Fritz Heider.

Meehl's classic book, published in 1954, established that simple linear combinations of cues outperform the intuitive judgments of experts in predicting significant behavioral criteria. The intellectual legacy of this work, which remains relevant today, and the heated controversy that followed it probably did not show that clinicians were doing poorly at a task which, as Meehl noted, they should not have undertaken in the first place.

Rather, it was a demonstration of a significant discrepancy between people's objective record of success in prediction tasks and their sincere belief in their own performance. This conclusion is not true only of clinicians and clinical prediction: people's opinions of how they draw conclusions, and of how well they do so, cannot be taken at face value.

After all, researchers practicing the clinical approach often used themselves or their friends as subjects, and the interpretation of errors and biases was cognitive rather than psychodynamic: impressions of errors, rather than actual errors, served as the model.

Since the introduction of Bayesian ideas into psychological research by Edwards and his colleagues, psychologists for the first time have been offered a holistic and clearly formulated model of optimal behavior under conditions of uncertainty, with which one can compare human decision-making. The conformity of decision making to normative models has become one of the main paradigms of research in the field of judgment in the face of uncertainty. This inevitably raised the issue of the biases that people gravitate towards in inductive inference, and the methods that could be used to correct them. These issues are addressed in most sections of this publication. However, much of the early work used a normative model to explain human behavior and introduced additional processes to explain deviations from optimal performance. On the contrary, the aim of research in the field of heuristics in decision making is to explain both right and wrong judgments in terms of the same psychological processes.

The emergence of cognitive psychology as a new paradigm has had a profound impact on the study of decision making. Cognitive psychology is concerned with internal processes, mental limitations, and the way these limitations shape those processes. Early examples of conceptual and empirical work in this area were the study of thinking strategies by Bruner and his colleagues, as well as Simon's treatment of reasoning heuristics and bounded rationality. Both Bruner and Simon were concerned with simplification strategies that reduce the complexity of decision tasks so as to make them fit the way people think. We have included most of the work in this book for similar reasons.

In recent years, a large body of research has been devoted to judgment heuristics, as well as the study of their effects. This publication takes a comprehensive look at this approach. It contains new works written specifically for this collection, and already published articles on judgments and assumptions. While the line between judgment and decision making is not always clear, we have focused on judgment rather than choice. The topic of decision making is important enough to be the subject of a separate publication.

The book is divided into ten parts. The first part contains early research on heuristics and biases in intuitive decision making. Part II deals specifically with the representativeness heuristic, which, in Part III, is extended to problems of causal attribution. Part IV describes the availability heuristic and its role in social judgment. Part V examines the understanding and study of covariation, and also shows the existence of illusory correlations in the decisions of ordinary people and specialists. Part VI discusses the testing of probabilistic estimates and documents the common phenomenon of overconfidence in forecasting and explanation. Biases in multistage inference are discussed in Part VII. Part VIII discusses formal and informal procedures for correcting and improving intuitive decision-making. Part IX summarizes research on the implications of these biases for decision making under risk. The final part contains some contemporary thoughts on several conceptual and methodological problems in the study of heuristics and biases.

For convenience, all references are collected in a separate list at the end of the book. Boldface numbers refer to material included in the book, denoting the chapter in which that material appears. We have used ellipses (...) to indicate material deleted from previously published articles.

Our work in preparing this book was supported by the Office of Naval Research, Grant N00014-79-C-0077 to Stanford University, and by the Office of Naval Research, Decision Research Contract N0014-80-C-0150.

We want to thank Peggy Rocker, Nancy Collins, Jerry Henson, and Don MacGregor for their help in preparing this book.

Daniel Kahneman

Paul Slovic

Amos Tversky

Introduction

1. Decision Making Under Uncertainty: Heuristics and Biases *

Amos Tversky and Daniel Kahneman

Many decisions are based on beliefs about the likelihood of uncertain events, such as the outcome of an election, the defendant's guilt in court, or the future value of the dollar. These beliefs are usually expressed in statements like "I think that...", "the likelihood is...", "it is unlikely that...", and so on. Sometimes beliefs about uncertain events are expressed numerically as odds or subjective probabilities. What determines such beliefs? How do people assess the likelihood of an uncertain event or the value of an uncertain quantity? This section shows that people rely on a limited number of heuristic principles that reduce the complex tasks of estimating probabilities and predicting the values of quantities to simpler judgments. In general, these heuristics are quite useful, but sometimes they lead to serious and systematic errors.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size. All of these estimates are based on data of limited validity, processed according to heuristic rules. For example, the estimated distance to an object is partly determined by its clarity: the sharper the object appears, the closer it seems. This rule has some justification, because in any scene more distant objects appear less sharp than closer ones. However, constant reliance on this rule leads to systematic errors in distance estimation. In poor visibility, distances are often overestimated because the contours of objects are blurred; on the other hand, distances are often underestimated when visibility is good because objects appear sharper. Thus, using clarity as a measure of distance leads to common biases. Similar biases can also be found in intuitive estimates of probability. This book describes three types of heuristics that are used to estimate probabilities and predict the values of quantities. The biases to which these heuristics lead are presented, and the practical and theoretical implications of these observations are discussed.

* This chapter first appeared in Science, 1974, 185, 1124-1131. Copyright (c) 1974 by the American Association for the Advancement of Science. Reprinted by permission.

Representativeness

Most questions about probability are of one of the following types: What is the probability that object A belongs to class B? What is the probability that event A originates from process B? What is the likelihood that process B will lead to event A? In answering such questions, people usually rely on the representativeness heuristic, in which likelihood is determined by the degree to which A is representative of B, that is, the degree to which A resembles B. For example, when A is highly representative of B, the probability that A originates from B is judged to be high. On the other hand, if A does not resemble B, the probability is judged to be low.

To illustrate judgment by representativeness, consider the description of a person by his former neighbor: "Steve is very withdrawn and shy; he is always ready to help me, but he has too little interest in other people and in reality in general." How do people assess the likelihood of Steve's profession (for example, farmer, salesman, airplane pilot, librarian, or doctor)? How do people rank these occupations from most to least likely? In the representativeness heuristic, the likelihood that Steve is a librarian, for example, is determined by the degree to which he is representative of, or conforms to, the stereotype of a librarian. Indeed, research into such problems has shown that people rank the occupations by probability and by similarity in exactly the same way (Kahneman and Tversky, 1973, 4). This approach to assessing likelihood leads to serious errors, because similarity, or representativeness, is not influenced by several factors that should influence the assessment of likelihood.

Insensitivity to the prior probability of the result

One factor that does not affect representativeness, but significantly influences likelihood, is the prior probability, or base-rate frequency, of the outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should enter into any reasonable assessment of the likelihood that Steve is a librarian rather than a farmer. Taking the base-rate frequency into account, however, does not affect Steve's conformity to the stereotypes of librarians and farmers. If people estimate probability by means of representativeness, they will therefore neglect prior probabilities. This hypothesis was tested in an experiment in which prior probabilities were manipulated (Kahneman and Tversky, 1973, 4). The subjects were shown short descriptions of several people, ostensibly drawn at random from a group of 100 professionals: engineers and lawyers. Subjects were asked to rate, for each description, the likelihood that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group from which the descriptions were drawn consisted of 70 engineers and 30 lawyers. In the other condition, subjects were told that the group consisted of 30 engineers and 70 lawyers. The odds that any given description belongs to an engineer rather than a lawyer should be higher in the first condition, where engineers are in the majority, than in the second, where lawyers are in the majority. Specifically, it follows from Bayes' rule that the ratio of these odds should be (0.7/0.3)^2, or 5.44, for each description. In gross violation of Bayes' rule, the subjects in the two conditions produced essentially the same probability estimates. Apparently, the subjects judged the likelihood that a particular description belonged to an engineer rather than a lawyer by the degree to which that description was representative of the two stereotypes, with little, if any, regard for the prior probabilities of these categories.
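As a hedged illustration (not from the original article), the following Python sketch shows how Bayes' rule combines the prior odds implied by the group composition with the likelihood ratio implied by a description; the likelihood-ratio value used here is an arbitrary placeholder, but the ratio of the two posterior odds depends only on the stated priors.

```python
# A minimal sketch of Bayes' rule for the engineer/lawyer problem.
# The likelihood ratio below is a hypothetical placeholder; only the
# prior odds (70:30 vs. 30:70) come from the problem statement.

def posterior_odds(prior_engineers, prior_lawyers, likelihood_ratio):
    """Posterior odds (engineer : lawyer) = prior odds * likelihood ratio."""
    prior_odds = prior_engineers / prior_lawyers
    return prior_odds * likelihood_ratio

lr = 2.0  # hypothetical: the description favors "engineer" 2:1

odds_70_30 = posterior_odds(70, 30, lr)   # group with 70 engineers
odds_30_70 = posterior_odds(30, 70, lr)   # group with 30 engineers

# The ratio of the two posterior odds depends only on the priors:
# (0.7/0.3) / (0.3/0.7) = (0.7/0.3)**2 ≈ 5.44
print(odds_70_30 / odds_30_70)            # ≈ 5.44 regardless of lr
```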

The subjects used prior probabilities correctly when they had no other information. In the absence of a personality description, they rated the probability that an unknown person is an engineer as 0.7 and 0.3, respectively, under the two base-rate conditions. However, prior probabilities were effectively ignored when a description was presented, even one that was completely uninformative. The reactions to the description below illustrate this phenomenon:

Dick is a 30-year-old man. He is married and has no children yet. A very capable and motivated employee, he shows great promise. He is well regarded by his colleagues.

This description was intended to convey no information about whether Dick is an engineer or a lawyer. Therefore, the probability that Dick is an engineer should equal the proportion of engineers in the group, as if no description had been given at all. The subjects, however, rated the likelihood that Dick is an engineer as 0.5 regardless of the stated proportion of engineers in the group (7 to 3 or 3 to 7). Evidently, people respond differently when no description is given and when a useless description is given. When no description is available, prior probabilities are used appropriately; when a useless description is given, prior probabilities are ignored (Kahneman and Tversky, 1973, 4).

Insensitivity to sample size

To estimate the likelihood of a particular outcome in a sample drawn from a specified population, people typically use the representativeness heuristic. That is, they estimate the likelihood of a result in a sample, for example, that the average height in a random sample of ten people will be 6 feet (180 centimeters), to the extent that this result is similar to the corresponding parameter (that is, the average height of people in the entire population). The similarity of statistics in a sample to a typical parameter in the entire population does not depend on the sample size. Therefore, if the likelihood is calculated using representativeness, then the statistical probability in the sample will be essentially independent of the sample size.

Indeed, when subjects evaluated the distribution of average height for samples of different sizes, they produced identical distributions. For example, the likelihood of obtaining an average height of more than 6 feet (180 cm) was estimated to be similar for samples of 1000, 100, and 10 people (Kahneman and Tversky, 1972b, 3). In addition, subjects failed to appreciate the role of sample size even when it was emphasized in the problem statement. Consider the following example.

Some city is served by two hospitals. In the larger hospital, approximately 45 babies are born every day, and in the smaller hospital, approximately 15 babies are born every day. As you know, approximately 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it can be higher than 50%, sometimes lower.
Within one year, each hospital kept records of the days when more than 60% of the babies born were boys. Which hospital do you think recorded more of these days?
The larger hospital (21)
The smaller hospital (21)
About the same (that is, within 5% of each other) (53)

The numbers in parentheses indicate the number of undergraduate students who chose each answer.

Most subjects judged the probability of obtaining more than 60% boys to be the same in the small and in the large hospital, perhaps because these events are described by the same statistic and therefore seem equally representative of the general population.

In contrast, according to sampling theory, the expected number of days on which more than 60% of babies born are boys is much higher in a small hospital than in a large one, because a deviation from 50% is less likely for a large sample. This fundamental concept of statistics is obviously not part of people's intuition.
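A small Python sketch (not part of the original text) makes the sampling-theory claim concrete: under a fair 50% rate, a day with more than 60% boys is far more likely in the 15-birth hospital than in the 45-birth one. Only the standard library is used.

```python
from math import comb

def prob_more_than_60_percent_boys(n_births, p_boy=0.5):
    """P(number of boys > 0.6 * n) for a binomial(n, p) day."""
    threshold = int(0.6 * n_births)          # strictly more than 60%
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(threshold + 1, n_births + 1))

print(prob_more_than_60_percent_boys(15))   # small hospital: ≈ 0.15
print(prob_more_than_60_percent_boys(45))   # large hospital: ≈ 0.07
```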

A similar insensitivity to sample size was recorded in estimates of the a posteriori probability, that is, the probability that the sample was drawn from one population rather than another. Consider the following example:

Imagine a basket filled with balls, of which 2/3 are of one color and 1/3 of another. One person takes 5 balls out of the basket and finds that 4 of them are red and 1 is white. Another person takes out 20 balls and discovers that 12 of them are red and 8 are white. Which of these two people should be more confident in saying that the basket contains 2/3 red balls and 1/3 white balls, rather than the reverse? What odds should each of these people give?

In this example, the correct answer is to estimate the posterior odds as 8 to 1 for the 4:1 sample and 16 to 1 for the 12:8 sample, assuming equal prior probabilities. However, most people feel that the first sample provides much stronger support for the hypothesis that the basket is mostly filled with red balls, because the proportion of red balls is greater in the first sample than in the second. This again shows that intuitive estimates are dominated by the proportion in the sample rather than by its size, which plays a decisive role in determining the actual posterior odds (Kahneman and Tversky, 1972b). In addition, intuitive estimates of posterior odds are much less extreme than the correct values. In problems of this type, underestimation of the impact of evidence has been observed repeatedly (W. Edwards, 1968, 25; Slovic and Lichtenstein, 1971). This phenomenon has been called "conservatism."
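The 8:1 and 16:1 figures follow directly from Bayes' rule: with equal priors, each red ball multiplies the odds in favor of the "mostly red" basket by 2 and each white ball divides them by 2. A minimal Python check (my addition, not from the text):

```python
def posterior_odds_red_basket(n_red, n_white):
    """Odds that the basket is 2/3 red vs. 2/3 white, given the sample.
    With equal priors, the odds equal the likelihood ratio:
    ((2/3)/(1/3))**n_red * ((1/3)/(2/3))**n_white = 2**(n_red - n_white)."""
    return 2 ** (n_red - n_white)

print(posterior_odds_red_basket(4, 1))    # 8  -> 8:1 for the 4:1 sample
print(posterior_odds_red_basket(12, 8))   # 16 -> 16:1 for the 12:8 sample
```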

Misconceptions of chance

People believe that a sequence of events generated by a random process represents the essential characteristics of that process even when the sequence is short. For example, when tossing a coin for heads or tails, people regard the sequence H-T-H-T-T-H as more likely than the sequence H-H-H-T-T-T, which does not seem random, and also more likely than the sequence H-H-H-H-T-H, which does not reflect the fairness of the coin (Kahneman and Tversky, 1972b, 3). Thus, people expect that the essential characteristics of the process will be represented not only globally, i.e. in the full sequence, but also locally, in each of its parts. However, a locally representative sequence deviates systematically from chance expectation: it has too many alternations and too few runs. Another consequence of the belief in representativeness is the well-known gambler's fallacy in the casino. Seeing red come up too many times in a row on the roulette wheel, for example, most people mistakenly believe that black is now due, because an occurrence of black would complete a more representative sequence than another occurrence of red. Chance is usually seen as a self-regulating process in which a deviation in one direction produces a deviation in the opposite direction in order to restore the balance. In fact, deviations are not corrected, but simply "dissolve" as the random process proceeds.
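To illustrate the claim that deviations are "dissolved" rather than corrected, here is a hedged simulation sketch (my own, not from the article): as a fair coin is flipped more and more times, the proportion of heads converges toward 0.5 even though the absolute excess of heads does not systematically shrink.

```python
import random

random.seed(0)

def excess_heads(n_flips):
    """Return (absolute excess of heads, proportion of heads) after n fair flips."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return abs(2 * heads - n_flips), heads / n_flips

for n in (100, 10_000, 1_000_000):
    excess, proportion = excess_heads(n)
    # The absolute excess typically grows (roughly like sqrt(n)),
    # yet the proportion still approaches 0.5: the deviation is diluted,
    # not compensated by deviations in the opposite direction.
    print(n, excess, round(proportion, 4))
```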

Misconceptions of chance are not limited to inexperienced subjects. A study of the statistical intuitions of experienced research psychologists (Tversky and Kahneman, 1971, 2) revealed a strong belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. The responses of these researchers reflected the expectation that a hypothesis that is valid for the entire population will be represented by a statistically significant result in a sample, with sample size being irrelevant. As a consequence, experts place too much faith in the results obtained from small samples and grossly overestimate the replicability of such results. In actual research, this bias leads to the selection of samples of inadequate size and to over-interpretation of the results.

Insensitivity to forecast reliability

People are sometimes forced to make numerical predictions such as the future price of a stock, demand for a product, or the outcome of a football game. Such predictions are based on representativeness. For example, suppose someone has received a description of a company and is asked to predict its future earnings. If the description of the company is very favorable, then, according to this description, very high profits would seem to be the most representative; if the description is mediocre, the most representative will seem to be an ordinary course of events. How favorable a description is does not depend on the credibility of the description or the extent to which it allows accurate predictions.

Therefore, if people make a prediction based solely on the favorableness of the description, their predictions will be insensitive to the reliability of the description and to the expected accuracy of the prediction.

This way of making judgments violates normative statistical theory, in which the extremeness and range of predictions depend on predictability. When predictability is zero, the same prediction should be made in all cases. For example, if company descriptions contain no information about profit, then the same amount (the average profit) should be predicted for all companies. If predictability is perfect, of course, the predicted values will match the actual values, and the range of the forecasts will be equal to the range of the outcomes. In general, the higher the predictability, the wider the range of predicted values.
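A minimal sketch of the normative rule described above (my formulation, assuming a simple linear-regression setting in which the input is measured on the same scale as the criterion): the prediction is pulled toward the mean in proportion to predictability, so zero predictability yields the mean for every case, and perfect predictability yields a prediction as extreme as the evidence.

```python
def regressive_prediction(evidence, mean, predictability):
    """Normative prediction under a simple linear model.
    evidence       -- observed input value (same scale as the criterion)
    mean           -- mean of the criterion in the population
    predictability -- correlation between input and criterion, 0.0 .. 1.0
    """
    return mean + predictability * (evidence - mean)

# A very favorable description (evidence far above the mean of 100):
for r in (0.0, 0.3, 1.0):
    print(r, regressive_prediction(130, 100, r))
# r = 0.0 -> 100 (predict the mean); r = 1.0 -> 130 (as extreme as the input)
```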

Several studies of numerical prediction have shown that intuitive predictions violate this rule and that subjects show little or no regard for considerations of predictability (Kahneman and Tversky, 1973, 4). In one of these studies, subjects were presented with several paragraphs of text, each describing the performance of a teacher during a particular practice lesson. Some subjects were asked to evaluate the quality of the lesson described in the text in percentile scores relative to a specified population. Other subjects were asked to predict, also in percentile scores, the standing of each teacher 5 years after this practice lesson. The judgments made under the two conditions were identical. That is, the prediction of a criterion remote in time (the teacher's success in 5 years) was identical to the evaluation of the information on which that prediction was based (the quality of the practice lesson). The subjects who made these predictions were undoubtedly aware of the limited predictability of teaching competence on the basis of a single trial lesson conducted 5 years earlier; nevertheless, their predictions were as extreme as their evaluations.

The illusion of validity

As we discussed earlier, people often make predictions by choosing the outcome (for example, a profession) that is most representative of the input (for example, a description of a person). The confidence they have in their prediction depends primarily on the degree of representativeness (that is, on the quality of the match between the selected outcome and the input), with little or no regard for the factors that limit predictive accuracy. Thus, people express great confidence in the prediction that a person is a librarian when given a personality description that matches the stereotype of a librarian, even if the description is scanty, unreliable, or outdated. The unwarranted confidence produced by a good fit between the predicted outcome and the input data may be called the illusion of validity. This illusion persists even when the judge is aware of the factors that limit the accuracy of his predictions. It is a common observation that psychologists who conduct selection interviews often have considerable confidence in their predictions, even when they are familiar with the extensive literature showing that selection interviews are highly fallible.

The continued reliance on the clinical selection interview, despite repeated demonstrations of its inadequacy, amply attests to the strength of this effect.

The internal consistency of a pattern of inputs is a major determinant of one's confidence in forecasts based on those inputs. For example, people express more confidence in predicting the final grade-point average of a student whose first-year record consists entirely of B's (4 points) than in predicting the average of a student whose first-year record includes many A's (5 points) and C's (3 points). Highly consistent patterns are most often observed when the input variables are highly redundant or correlated. Consequently, people tend to be confident in predictions based on redundant input variables. However, an elementary result in the statistics of correlation asserts that, given input variables of stated validity, a prediction based on several such inputs can achieve higher accuracy when the variables are independent of each other than when they are redundant or correlated. Thus, redundancy among inputs decreases accuracy even as it increases confidence, and people are often confident in predictions that are quite likely to be wrong (Kahneman and Tversky, 1973, 4).

Misconceptions about regression

Suppose a large group of children have been examined on two equivalent versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, they will usually prove somewhat disappointing on the second version of the test. Conversely, if one selects ten children from among those who did worst on the first version, they will on average be found to do somewhat better on the other version. More generally, consider two variables X and Y that have the same distribution. If one selects individuals whose X scores deviate from the mean of X by k units, then the average of their Y scores will usually deviate from the mean of Y by less than k units. These observations illustrate a general phenomenon known as regression toward the mean, which was discovered by Galton more than 100 years ago.
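The following simulation sketch (my addition) reproduces the aptitude-test example under simple assumptions: each child's score on either version is a shared "ability" term plus independent noise, so the children selected as best on version one score, on average, closer to the mean on version two.

```python
import random

random.seed(1)

N = 1000
children = []
for _ in range(N):
    ability = random.gauss(100, 10)          # shared component
    v1 = ability + random.gauss(0, 10)       # version 1 = ability + noise
    v2 = ability + random.gauss(0, 10)       # version 2 = ability + independent noise
    children.append((v1, v2))

top10 = sorted(children, key=lambda c: c[0], reverse=True)[:10]
mean_v1 = sum(c[0] for c in top10) / 10
mean_v2 = sum(c[1] for c in top10) / 10

# The ten best on version 1 score, on average, noticeably lower on version 2.
print(round(mean_v1, 1), round(mean_v2, 1))
```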

In everyday life we all encounter a large number of instances of regression toward the mean, comparing, for example, the heights of fathers and sons, the intelligence of husbands and wives, or the results of consecutive examinations. Nevertheless, people do not develop correct intuitions about this phenomenon. First, they do not expect regression in many contexts where it is bound to occur. Second, when they recognize the occurrence of regression, they often invent spurious causal explanations for it (Kahneman and Tversky, 1973, 4). We believe that the phenomenon of regression remains elusive because it is incompatible with the notion that the predicted outcome should be as representative of the input data as possible, and hence that the value of the outcome variable should be as extreme as the value of the input variable.

Failure to recognize the import of regression can be harmful, as illustrated by the following observation (Kahneman and Tversky, 1973, 4). In a discussion of flight training, experienced instructors noted that praise for an exceptionally smooth landing is typically followed by a poorer landing on the next try, while harsh criticism after a rough landing is usually followed by an improvement on the next try. The instructors concluded that verbal rewards are harmful to learning, while verbal punishments are beneficial, contrary to accepted psychological doctrine. This conclusion is unwarranted because of the presence of regression toward the mean. As in other cases where examinations follow one another, improvement will usually follow a poor performance and deterioration will follow an outstanding performance, even if the teacher or instructor does not respond to the student's achievement on the first attempt. Because the instructors had praised their trainees after good landings and admonished them after poor ones, they reached the erroneous and potentially harmful conclusion that punishment is more effective than reward.

Thus, the failure to understand the effect of regression leads one to overestimate the effectiveness of punishment and to underestimate the effectiveness of reward. In social interaction, as well as in training, rewards are typically administered when performance is good, and punishments when performance is poor. By regression alone, therefore, behavior is most likely to improve after punishment and most likely to deteriorate after reward. Consequently, it turns out that, by pure chance, people are rewarded for punishing others and punished for rewarding them. People are generally not aware of this circumstance. In fact, the elusive role of regression in determining the apparent consequences of reward and punishment seems to have escaped the notice of scientists working in this field.

Availability

There are situations in which people estimate the frequency of a class or the likelihood of an event based on the ease with which instances or occurrences can be brought to mind. For example, one may assess the risk of heart attack among middle-aged people by recalling such cases among one's acquaintances. Likewise, one may assess the likelihood that a business venture will fail by imagining the various difficulties it might face. This judgmental heuristic is called availability. Availability is very useful for estimating the frequency or likelihood of events, because instances of large classes are usually recalled better and faster than instances of less frequent classes. However, availability is influenced by factors other than frequency and likelihood. Consequently, reliance on availability leads to predictable biases, some of which are illustrated below.

Recoverability Bias

When the size of a class is estimated based on the accessibility of its elements, a class whose elements are easily recoverable in memory will appear more numerous than a class of the same size, but whose elements are less accessible and less likely to be remembered. In a simple demonstration of this effect, subjects were read out a list of famous people of both genders, and then asked to rate whether there were more male names than female names on the list. Different lists were provided to different groups of test takers. On some of the lists, men were more famous than women, and on others, women were more famous than men. On each of the lists, subjects erroneously believed that the class (in this case, gender) in which the better-known people were included was more numerous (Tversky and Kahneman, 1973, 11).

In addition to familiarity, there are other factors, such as salience, that affect the retrievability of events from memory. For example, a person who has witnessed a building on fire with his own eyes will probably regard such accidents as subjectively more probable than a person who has only read about the fire in the local newspaper. In addition, recent incidents are likely to be recalled somewhat more easily than earlier ones. It often happens that the subjective assessment of the likelihood of road accidents rises temporarily when a person sees an overturned car by the side of the road.

Search Direction Bias

Suppose a word (of three letters or more) is selected at random from an English text. Is it more likely that the word starts with the letter r, or that r is its third letter? People approach this problem by recalling words that begin with r (road) and words that have r in the third position (for example, car), and estimate the relative frequency by the ease with which these two types of words come to mind. Because it is much easier to search for words by their first letter than by their third, most people conclude that there are more words that begin with a given consonant than words in which the same consonant appears in the third position. They draw this conclusion even for consonants, such as r or k, that in fact appear more often in the third position than in the first (Tversky and Kahneman, 1973, 11).

Different tasks require different search directions. For example, suppose you are asked to estimate the frequency with which words with abstract meanings (thought, love) and words with concrete meanings (door, water) appear in written English. A natural way to answer this question is to think of contexts in which these words might appear. It seems easier to recall contexts in which an abstract word may be mentioned (love in romance novels) than contexts in which a concrete word (such as door) is mentioned. If the frequency of words is judged by the availability of the contexts in which they appear, words with abstract meanings will be judged relatively more numerous than words with concrete meanings. This bias was observed in a recent study (Galbraith and Underwood, 1973), which showed that the judged frequency of occurrence of words with abstract meanings was much higher than that of words with concrete meanings, even though their objective frequencies were equal. Abstract words were also judged to appear in a much wider variety of contexts than words with concrete meanings.

Bias due to imaginability

Sometimes one must estimate the frequency of a class whose instances are not stored in memory but can be generated according to a certain rule. In such situations one typically generates several instances and estimates frequency or probability by the ease with which the relevant instances can be constructed. However, the ease of constructing instances does not always reflect their actual frequency, and this mode of judgment leads to bias. To illustrate, consider a group of 10 people who form committees of k members, with 2 ≤ k ≤ 8. How many different committees of k members can be formed? The correct answer is given by the binomial coefficient C(10, k), which reaches a maximum of 252 for k = 5. Clearly, the number of committees of k members equals the number of committees of (10 - k) members, because any committee of k members defines a unique group of (10 - k) people who are not members of the committee.

One way to answer without computation is to mentally construct committees of k members and to estimate their number by the ease with which they come to mind. Committees with few members, say 2, are more available than committees with many members, say 8. The simplest scheme for constructing committees is a partition of the group into disjoint sets. One readily sees that it is easy to construct five disjoint committees of 2 members each, while it is impossible to generate even two disjoint committees of 8 members. Consequently, if frequency is assessed by imaginability, or by availability for mental construction, small committees will appear more numerous than large ones, in contrast to the correct bell-shaped function. Indeed, when naive subjects were asked to estimate the number of distinct committees of various sizes, their estimates were a monotonically decreasing function of committee size (Tversky and Kahneman, 1973, 11). For example, the median estimate of the number of committees of 2 members was 70, while the estimate for committees of 8 members was 20 (the correct answer is 45 in both cases).
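The correct counts cited above can be checked directly with the binomial coefficient; a short sketch using Python's standard library:

```python
from math import comb

# Number of distinct committees of k members drawn from 10 people.
for k in range(2, 9):
    print(k, comb(10, k))
# comb(10, 2) == comb(10, 8) == 45, and the count peaks at comb(10, 5) == 252,
# a bell-shaped function of k -- not the monotonically decreasing pattern
# produced by intuitive estimates.
```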

The ability to represent images plays an important role in assessing the likelihood of real life situations. The risk involved in a dangerous expedition, for example, is assessed by mentally replaying contingencies that the expedition does not have sufficient equipment to overcome. If many of these difficulties are vividly portrayed, the expedition may seem extremely dangerous, although the ease with which disasters are imagined does not necessarily reflect their actual likelihood. Conversely, if a possible hazard is difficult to imagine, or simply does not come to mind, the risk associated with an event can be grossly underestimated.

Illusory correlation

Chapman and Chapman (1969) described an interesting bias in judgments of the frequency with which two events co-occur. They presented naive subjects with information about several hypothetical patients with mental disorders. The data for each patient included a clinical diagnosis and a drawing made by the patient. The subjects later estimated the frequency with which each diagnosis (such as paranoia or persecution mania) had been accompanied by a particular feature of the drawing (such as a peculiar shape of the eyes). The subjects markedly overestimated the frequency of co-occurrence of naturally associated events, such as persecution mania and a peculiar shape of the eyes. This phenomenon is called illusory correlation. In their erroneous judgments of the data presented to them, the subjects "rediscovered" much of the common, but unfounded, clinical lore concerning the interpretation of the drawing test. The illusory correlation effect was extremely resistant to contradictory data. It persisted even when the actual correlation between the feature and the diagnosis was negative, and it prevented the subjects from detecting the relationships that were in fact present.

Availability provides a natural explanation of the illusory-correlation effect. The judgment of how frequently two events co-occur can be based on the strength of the associative bond between them: when the association is strong, one is likely to conclude that the events have frequently occurred together, and people will therefore judge strongly associated events to co-occur often. According to this view, the illusory correlation between the diagnosis of persecution mania and the peculiar shape of the eyes in the drawing, for example, arises because persecution mania is more readily associated with the eyes than with any other part of the body.

Lifelong experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that more probable events are easier to imagine than less probable ones; and that associative links between events are strengthened when the events frequently occur together. As a result, a person has at his disposal a procedure (the availability heuristic) by which the size of a class, the probability of an event, or the frequency with which events co-occur is estimated by the ease with which the corresponding mental operations of recall, construction, or association can be performed. However, as the preceding examples have shown, this valuable estimation procedure leads to systematic errors.

Correction and "anchoring" (anchoging)

In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, the adjustment is usually insufficient (Slovic and Lichtenstein, 1971). That is, different starting points lead to different estimates, which are biased toward those starting points. We call this phenomenon anchoring.

Insufficient "adjustment"

To demonstrate the anchoring effect, subjects were asked to estimate various quantities expressed in percentages (for example, the percentage of African countries in the United Nations). For each quantity, a number between 0 and 100 was assigned by a random draw carried out in the presence of the subjects. The subjects were first asked to indicate whether this number was greater or less than the value of the quantity itself, and then to estimate the value of the quantity by moving upward or downward from the given number. Different groups of subjects were offered different numbers for each quantity, and these arbitrary numbers had a significant effect on the subjects' estimates. For example, the median estimates of the percentage of African countries in the United Nations were 25 and 45 for the groups that received 10 and 65 as starting points, respectively. Monetary rewards for accuracy did not reduce the anchoring effect.

Anchoring occurs not only when the subject is given a starting point, but also when the subject bases his estimate on the result of some incomplete computation. A study of intuitive numerical estimation illustrates this effect. Two groups of high school students estimated, within 5 seconds, the value of a numerical expression written on the blackboard. One group estimated the value of the expression

8 x 7 x 6 x 5 x 4 x 3 x 2 x 1,

while the other group estimated the value of the expression

1 x 2 x 3 x 4 x 5 x 6 x 7 x 8.

To answer such questions quickly, people may perform a few steps of computation and estimate the value of the expression by extrapolation or adjustment. Because adjustments are typically insufficient, this procedure should lead to underestimation of the value. Furthermore, because the result of the first few steps of multiplication (performed from left to right) is higher in the descending sequence than in the ascending one, the first expression should be judged larger than the second. Both predictions were confirmed. The median estimate for the ascending sequence was 512, while the median estimate for the descending sequence was 2,250. The correct answer is 40,320 for both sequences.
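A small sketch (my addition) shows why the descending order produces the larger anchor: the product of the first few factors is already much larger when computed from 8 downward than from 1 upward, while the full product is 40,320 either way.

```python
from math import factorial, prod

descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = list(reversed(descending))

# Partial products after the first three steps -- the likely "anchors".
print(prod(descending[:3]))   # 8 * 7 * 6 = 336
print(prod(ascending[:3]))    # 1 * 2 * 3 = 6

print(factorial(8))           # 40320, the correct value for both sequences
```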

Bias in the evaluation of conjunctive and disjunctive events

In a recent study by Bar-Hillel (1973), subjects were given the opportunity to bet on one of two events. Three types of events were used: (i) a simple event, such as drawing a red ball from a bag containing 50% red and 50% white balls; (ii) a conjunctive event, such as drawing a red ball seven times in succession, with replacement, from a bag containing 90% red balls and 10% white balls; and (iii) a disjunctive event, such as drawing a red ball at least once in seven successive tries, with replacement, from a bag containing 10% red balls and 90% white balls. In this problem, a significant majority of subjects preferred to bet on the conjunctive event (whose probability is 0.48) rather than on the simple event (whose probability is 0.50). Subjects also preferred to bet on the simple event rather than on the disjunctive event, which has a probability of 0.52.
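The three probabilities quoted in this example follow from elementary probability; a quick check in Python:

```python
p_simple = 0.5                 # one draw from a 50% red bag
p_conjunctive = 0.9 ** 7       # red on all seven draws (90% red bag)
p_disjunctive = 1 - 0.9 ** 7   # at least one red in seven draws (10% red bag)

print(round(p_simple, 2))       # 0.50
print(round(p_conjunctive, 2))  # 0.48
print(round(p_disjunctive, 2))  # 0.52
```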

Thus, most subjects bet on the less likely event in both comparisons. These decisions illustrate a general finding: studies of choice among gambles and of probability estimates indicate that people tend to overestimate the probability of conjunctive events (Cohen, Chesnik, and Haran, 1972, 24) and tend to underestimate the probability of disjunctive events. These biases are readily explained by the anchoring effect. The stated probability of the elementary event (success at any one stage) provides a natural starting point for estimating the probabilities of both conjunctive and disjunctive events. Since adjustment from the starting point is typically insufficient, the final estimates remain too close to the probability of the elementary event in both cases. Note that the overall probability of a conjunctive event is lower than the probability of each elementary event, whereas the overall probability of a disjunctive event is higher than the probability of each elementary event. As a consequence of anchoring, the overall probability will be overestimated for conjunctive events and underestimated for disjunctive events.

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of a business venture, such as the development of a new product, typically has a conjunctive character: for the venture to succeed, each of a series of events must occur. Even if each of these events is highly likely, the overall probability of success can be quite low if the number of events is large.

The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in assessing the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the assessment of risk. A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the probability of failure of each component is small, the probability of failure of the entire system can be high if many components are involved. Because of anchoring, people tend to underestimate the probability of failure in complex systems. Thus, the direction of the anchoring bias can sometimes depend on the structure of the event: a chain-like structure of conjunctive links leads to overestimation of the event's probability, while a funnel-like structure of disjunctive links leads to underestimation.

"Binding" in assessing the distribution of subjective probability

In decision analysis, experts are often required to express their opinion about a quantity, for example the average value of the Dow Jones index on a particular day, in the form of a probability distribution. Such a distribution is usually constructed by choosing values of the quantity that correspond to specified percentiles of the person's subjective probability distribution. For example, an expert may be asked to choose a number, X90, such that the subjective probability that this number will be higher than the value of the Dow Jones average is 0.90. That is, he must choose the value X90 so that the odds are 9 to 1 that the Dow Jones average will not exceed this number. A subjective probability distribution for the value of the Dow Jones average can be constructed from several such estimates corresponding to different percentiles.

By accumulating such subjective probability distributions for different quantities, one can test the expert's calibration. An expert is properly calibrated (see Chapter 22) in a given set of problems if exactly Π percent of the true values of the assessed quantities fall below his stated values of XΠ. For example, the true values should fall below X01 for 1% of the quantities and above X99 for 1% of the quantities. Thus, the true values should fall in the interval between X01 and X99 in 98% of the problems.
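A hedged sketch of how calibration could be checked for a set of interval judgments (the data here are made-up placeholders): count how often the true values fall outside the stated X01-X99 interval, which should happen in only about 2% of problems for a well-calibrated expert.

```python
# Hypothetical data: (stated X01, stated X99, true value) for several problems.
judgments = [
    (10.0, 50.0, 42.0),
    (100.0, 300.0, 350.0),   # surprise: true value above X99
    (0.2, 0.9, 0.5),
    (5.0, 9.0, 4.0),         # surprise: true value below X01
]

surprises = sum(1 for lo, hi, true in judgments if true < lo or true > hi)
surprise_rate = surprises / len(judgments)

# A well-calibrated judge should be "surprised" on only about 2% of problems;
# the studies cited in the text report rates of roughly 30%.
print(surprise_rate)
```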

Several researchers (Alpert and Raiffa, 1969, 21; Stael von Holstein, 1971b; Winkler, 1967) have analyzed subjective probability distributions for many quantities obtained from a large number of experts. These distributions showed large and systematic departures from proper calibration. In most studies, the true values of the assessed quantities are either smaller than X01 or larger than X99 for about 30% of the problems. That is, subjects state overly narrow confidence intervals, which reflect more certainty than is justified by their knowledge of the assessed quantities. This bias is common to both trained and naive subjects, and it is not eliminated by introducing scoring rules that provide incentives for proper calibration. This effect is, at least in part, attributable to anchoring.

To select X90 for the value of the Dow Jones average, for example, it is natural to begin by thinking of one's best estimate of the Dow Jones and then to adjust this value upward. If this adjustment, like most others, is insufficient, then X90 will not be sufficiently extreme. A similar anchoring effect will occur in the selection of X10, which is presumably obtained by adjusting one's best estimate downward. Consequently, the confidence interval between X10 and X90 will be too narrow, and the assessed probability distribution will be too tight. In support of this interpretation, it can be shown that subjective probabilities are systematically altered by a procedure in which one's best estimate does not serve as an anchor.

Subjective probability distributions for a given quantity (the Dow Jones average) can be obtained in two different ways: (i) by asking the subject to select values of the Dow Jones average that correspond to specified percentiles of his probability distribution, and (ii) by asking the subject to estimate the probability that the true value of the Dow Jones average will exceed certain specified values. The two procedures are formally equivalent and should yield identical distributions. However, they suggest adjustment from different anchors. In procedure (i), the natural starting point is one's best estimate of the quantity. In procedure (ii), on the other hand, the subject may be anchored on the value stated in the question. Alternatively, he may be anchored on even odds, or a 50-50 chance, which is the natural starting point for estimating likelihood. In either case, procedure (ii) should yield less extreme estimates than procedure (i).

To contrast the two procedures, one group of subjects was given a set of 24 quantities (such as the air distance from New Delhi to Beijing) and asked to assess either X10 or X90 for each problem. A second group of subjects received the first group's median estimate for each of the 24 quantities. They were asked to estimate the odds that each of the given values exceeded the true value of the corresponding quantity. In the absence of any bias, the second group should recover the odds specified by the first group, that is, 9:1. However, if even odds or the stated value serves as an anchor, the odds given by the second group should be less extreme, that is, closer to 1:1. In fact, the median odds stated by this group, across all problems, were 3:1. When the judgments of the two groups were tested for calibration, it was found that subjects in the first group were too extreme in their assessments, in accord with earlier studies: the events whose probability they estimated at 0.10 actually occurred in 24% of the cases. In contrast, the subjects in the second group were too conservative: the events whose probability they estimated at 0.34 actually occurred in 26% of the cases. These results illustrate how the degree of calibration depends on the procedure of elicitation.

Discussion

This part of the book has examined cognitive biases that arise from reliance on judgmental heuristics. These biases are not attributable to motivational effects, such as wishful thinking or the distortion of judgments by rewards and penalties. Indeed, as reported earlier, several severe errors of judgment occurred despite the fact that subjects were encouraged to be accurate and were rewarded for correct answers (Kahneman and Tversky, 1972b, 3; Tversky and Kahneman, 1973, 11).

Reliance on heuristics and the prevalence of biases are not restricted to ordinary people. Experienced researchers are also prone to the same biases when they think intuitively. For example, the tendency to predict the outcome that is most representative of the data, without paying sufficient attention to the prior probability of that outcome, has been observed in the intuitive judgments of people with extensive training in statistics (Kahneman and Tversky, 1973, 4; Tversky and Kahneman, 1971, 2). Although the statistically sophisticated avoid elementary errors, such as the gambler's fallacy, their intuitive judgments are liable to similar errors in more intricate and less transparent problems.

It is not surprising that useful heuristics such as representativeness and availability are retained, even though they sometimes lead to errors in prediction or estimation. What is perhaps surprising is people's failure to infer from lifelong experience such fundamental statistical rules as regression toward the mean or the effect of sample size on sampling variability. Although everyone encounters, in the normal course of life, numerous situations to which these rules could apply, very few people discover the principles of sampling and regression on their own. Statistical principles are not learned from everyday experience because the relevant instances are not coded appropriately. For example, people do not discover that the average word length differs more between successive lines of a text than between successive pages, because they simply do not attend to the average word length of individual lines or pages. Thus, people do not learn the relation between sample size and sampling variability, although the data for such learning are abundant.

The lack of appropriate coding also explains why people usually do not detect the biases in their judgments of probability. A person could learn whether his judgments are well calibrated by keeping a tally of the proportion of events that actually occur among those to which he assigns the same probability. However, it is not natural for people to group events by their judged probability. In the absence of such grouping, a person cannot discover, for example, that only 50% of the predictions to which he assigned a probability of 0.9 or higher actually came true.

The empirical analysis of cognitive biases has implications for the theoretical and applied role of judged probabilities. Modern decision theory (de Finetti, 1968; Savage, 1954) regards subjective probability as the quantified opinion of an idealized person. Specifically, the subjective probability of a given event is defined by the set of bets about this event that such a person is willing to accept. An internally consistent, or coherent, measure of subjective probability can be derived if the person's choices among bets satisfy certain principles, that is, the axioms of the theory. The resulting probability is subjective in the sense that different people may have different estimates of the probability of the same event. The main contribution of this approach is that it provides a rigorous subjective interpretation of probability that is applicable to unique events and is embedded in a general theory of rational decision making.

It is perhaps worth noting that while subjective probabilities can sometimes be inferred from preferences among bets, they are normally not formed in this way. A person bets on team A rather than on team B because he believes that team A is more likely to win; he does not derive this belief from his betting preferences.

Thus, in reality, subjective probabilities determine preferences among bets, rather than being derived from them, as in the axiomatic theory of rational decision making (Savage, 1954).

The inherently subjective nature of probability has led many scientists to believe that coherence, or internal consistency, is the only valid criterion by which judged probabilities should be evaluated. From the standpoint of the formal theory of subjective probability, any set of internally consistent probability estimates is as good as any other. This criterion is not entirely satisfactory, because an internally consistent set of subjective probabilities can also be incompatible with other opinions held by the person. Consider a person whose subjective probabilities for all possible outcomes of a coin-tossing game reflect the gambler's fallacy. That is, his estimate of the probability of tails on a particular toss increases with the number of consecutive heads preceding that toss. The judgments of such a person could be internally consistent and therefore acceptable as adequate subjective probabilities according to the criterion of the formal theory. These probabilities, however, are incompatible with the generally held belief that a coin has no memory and is therefore incapable of generating sequential dependencies. For judged probabilities to be considered adequate, or rational, internal consistency is not enough. The judgments must be compatible with all the other beliefs held by the person. Unfortunately, there can be no simple formal procedure for assessing the compatibility of a set of probability judgments with the judge's total system of beliefs. The rational judge will nevertheless strive for compatibility, even though internal consistency is easier to achieve and to assess. In particular, he will attempt to make his probability judgments compatible with his knowledge about the subject matter, the laws of probability, and his own judgmental heuristics and biases.

This article describes three types of heuristics that are used in assessments under uncertainty: (i) representativeness, which is commonly used when people are asked to estimate the likelihood that an object or case A belongs to a class or process B; (ii) availability of events or scenarios, which is often used when people are asked to estimate the frequency of a class or the likelihood of a particular scenario; and (iii) adjustment from an anchor, which is commonly used in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve assessment and decision making in conditions of uncertainty.

Consider the mathematical foundations of decision-making under uncertainty.

Essence and sources of uncertainty.

Uncertainty is a property of an object expressed in its indistinctness, ambiguity, and lack of substantiation, which leaves the decision-maker with insufficient opportunity to comprehend, understand, and determine its present and future state.

Risk is a possible danger, an action taken at random in the hope of a favorable outcome, which requires, on the one hand, courage and, on the other, a mathematical justification of the degree of risk.

The practice of decision-making is characterized by a set of conditions and circumstances (a situation) that create certain relations, conditions, and positions in the decision-making system. Taking into account the quantitative and qualitative characteristics of the information at the disposal of the decision-maker, one can distinguish decisions made under the following conditions:

certainty (reliability);

uncertainty (unreliability);

risk (probabilistic certainty).

In conditions of certainty, decision-makers can determine the possible outcomes of each alternative quite accurately. In practice, however, it is difficult to assess all the factors that shape the conditions for decision-making, so situations of complete certainty are rare.

The sources of uncertainty about the expected conditions in the development of an enterprise can be the behavior of competitors, the organization's personnel, technical and technological processes, and market changes. In this case, the conditions can be subdivided into socio-political, administrative-legislative, production, commercial, financial. Thus, the conditions that create uncertainty are the impact of factors from the external to the internal environment of the organization. The decision is made in conditions of uncertainty, when it is impossible to assess the likelihood of potential results. This should be the case when the factors to be taken into account are so new and complex that it is not possible to obtain sufficient relevant information about them. As a result, the likelihood of a certain consequence cannot be predicted with a sufficient degree of certainty. Uncertainty is characteristic of some decisions that have to be made in a rapidly changing environment. The highest potential for uncertainty is possessed by the socio-cultural, political and science-intensive environment. Defense Department decisions to develop extremely sophisticated new weapons are often initially vague. The reason is that no one knows how the weapon will be used and whether it will happen at all, as well as what weapon the enemy can use. Therefore, the ministry is often unable to determine whether a new weapon will be truly effective by the time it enters the army, which may happen, for example, in five years. However, in practice, very few management decisions have to be made under conditions of complete uncertainty.

When faced with uncertainty, a leader can use two main opportunities. First, try to get additional relevant information and analyze the problem again. This often reduces the novelty and complexity of the problem. The manager combines this additional information and analysis with accumulated experience, judgment, or intuition to give subjective or perceived credibility to a range of outcomes.

The second possibility is to act in strict accordance with past experience, judgment or intuition and make an assumption about the likelihood of events. Time and informational constraints are essential when making management decisions.

In a situation of risk, using the theory of probability, it is possible to calculate the probability of a particular change in the environment; in a situation of uncertainty, the values ​​of the probability cannot be obtained.

Uncertainty manifests itself in the impossibility of determining the likelihood of the onset of various states of the external environment due to their unlimited number and the lack of assessment methods. Uncertainty is taken into account in various ways.

Rules and Criteria for Making Decisions in Conditions of Uncertainty.

Here are some general criteria for the rational choice of solutions from the set of possible ones. The criteria are based on an analysis of a matrix of possible environmental states and decision alternatives.

The matrix given in Table 1 contains: Аj, the alternatives, that is, the options for action, one of which must be selected; Si, the possible states of the environment; and aij, an element of the matrix denoting the value of the outcome (for example, the value of capital) obtained by alternative Аj under environmental state Si.

Table 1. Decision matrix

Various rules and criteria are used to select the optimal strategy in a situation of uncertainty.

Maximin rule (Wald criterion).

In accordance with this rule, the alternative аj that has the highest value of the indicator under the most unfavorable state of the external environment is selected. For this purpose, the minimum value of the indicator is recorded for each alternative, and then the maximum of these minima is chosen. The alternative a* with the largest of the smallest values is given priority.

The decision-maker in this case is minimally prepared for risk: he assumes the worst possible development of the external environment and takes into account the least favorable outcome of each alternative.

According to the Wald criterion, decision-makers choose the strategy that guarantees the maximum value of the worst payoff (the maximin criterion).
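For illustration, here is a minimal Python sketch of the maximin rule. The payoff matrix P and its numbers are hypothetical, and rows are assumed to correspond to the alternatives Аj while columns correspond to the environmental states Si:

    # Wald (maximin) rule: for each alternative take its worst payoff,
    # then choose the alternative whose worst payoff is largest.
    P = [
        [25, 35, 40],   # hypothetical payoffs of alternative A1 under states S1, S2, S3
        [70, 20, 30],   # alternative A2
        [35, 85, 10],   # alternative A3
    ]
    worst = [min(row) for row in P]                      # worst payoff of each alternative
    j_star = max(range(len(P)), key=lambda j: worst[j])  # best of the worst cases
    print("Wald choice: A%d, guaranteed payoff %d" % (j_star + 1, worst[j_star]))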

The maximax rule.

In accordance with this rule, the alternative with the highest achievable value of the estimated indicator is selected. At the same time, the decision maker does not take into account the risk from unfavorable changes in the environment. The alternative is found by the formula:

a* = {аj | maxj maxi Пij}

Using this rule, determine the maximum value for each row and select the largest one.
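Under the same assumptions (hypothetical payoffs, rows as alternatives, columns as states), a minimal Python sketch of the maximax rule:

    # Maximax rule: for each alternative take its best payoff,
    # then choose the alternative whose best payoff is largest.
    P = [
        [25, 35, 40],   # hypothetical payoffs of alternative A1 under states S1, S2, S3
        [70, 20, 30],   # alternative A2
        [35, 85, 10],   # alternative A3
    ]
    best = [max(row) for row in P]                      # best payoff of each alternative
    j_star = max(range(len(P)), key=lambda j: best[j])
    print("Maximax choice: A%d, best-case payoff %d" % (j_star + 1, best[j_star]))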

A big drawback of the maximax and maximin rules is the use of only one scenario for each alternative when making a decision.

Minimax rule (Savage criterion).

Unlike maximin, minimax is focused on minimizing not so much losses as regrets about lost profits. The rule allows for reasonable risk for the sake of additional profit. The Savage criterion is calculated by the formula:

min max П = mini [ maxj ( maxi Xij - Xij ) ]

where mini and maxj denote minimization over the alternatives (rows) and maximization over the states of the environment (columns), respectively.

The minimax calculation consists of four stages:

  • 1) Find the best result for each column separately, that is, maxi Xij (the best outcome for each market reaction).
  • 2) Determine the deviation from the best result for each individual column, that is, maxi Xij - Xij. The results obtained form a matrix of deviations (regrets), since its elements are the lost profits from unsuccessful decisions made through a mistaken assessment of the possible market reaction.
  • 3) For each row of the regret matrix, find the maximum value.
  • 4) Choose the alternative whose maximum regret is smaller than that of the others (these stages are sketched in the code below).
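The four stages can be sketched in Python as follows; the payoff matrix and its numbers are hypothetical, with rows taken as alternatives and columns as states (market reactions):

    # Savage (minimax regret) rule, following the four stages above.
    P = [
        [25, 35, 40],   # hypothetical payoffs of alternative A1 under states S1, S2, S3
        [70, 20, 30],   # alternative A2
        [35, 85, 10],   # alternative A3
    ]
    n_alt, n_states = len(P), len(P[0])
    col_best = [max(P[i][j] for i in range(n_alt)) for j in range(n_states)]            # stage 1
    regret = [[col_best[j] - P[i][j] for j in range(n_states)] for i in range(n_alt)]   # stage 2
    max_regret = [max(row) for row in regret]                                           # stage 3
    i_star = min(range(n_alt), key=lambda i: max_regret[i])                             # stage 4
    print("Savage choice: A%d, maximum regret %d" % (i_star + 1, max_regret[i_star]))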

Hurwitz rule.

In accordance with this rule, the maximax and maximin rules are combined by weighting the best and the worst outcomes of each alternative. This rule is also called the rule of optimism-pessimism. The optimal alternative can be calculated using the formula:

a* = maxi [ α · maxj Пij + (1 - α) · minj Пij ]

where α is the coefficient of optimism, 0 ≤ α ≤ 1; the index i runs over the alternatives and j over the states of the environment. At α = 1 the alternative is chosen according to the maximax rule, and at α = 0 according to the maximin rule. Given an aversion to risk, it is advisable to set α = 0.3. The alternative with the highest value of this weighted indicator is selected.

The Hurwitz rule thus takes more of the available information into account than the maximin and maximax rules.
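A minimal Python sketch of the Hurwitz rule under the same assumptions (hypothetical payoffs; rows as alternatives, columns as states):

    # Hurwitz rule: a weighted compromise between maximax and maximin.
    # alpha is the coefficient of optimism: alpha = 1 gives maximax, alpha = 0 gives maximin.
    P = [
        [25, 35, 40],   # hypothetical payoffs of alternative A1 under states S1, S2, S3
        [70, 20, 30],   # alternative A2
        [35, 85, 10],   # alternative A3
    ]
    alpha = 0.3         # a risk-averse setting, as suggested in the text
    score = [alpha * max(row) + (1 - alpha) * min(row) for row in P]
    i_star = max(range(len(P)), key=lambda i: score[i])
    print("Hurwitz choice: A%d, score %.1f" % (i_star + 1, score[i_star]))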

Thus, when making a management decision, in the general case, it is necessary:

predict future conditions, such as demand levels;

develop a list of possible alternatives;

evaluate the payback of all alternatives;

determine the likelihood of each condition;

evaluate alternatives according to the selected decision criterion.

The direct application of the criteria when making a managerial decision under conditions of uncertainty is considered in the practical part of this work.


Kahneman D., Slovik P., Tversky A. Decision Making in Uncertainty: Rules and Bias

I had been circling this book for a long time... I first learned about the work of the Nobel laureate Daniel Kahneman from Nassim Taleb's book Fooled by Randomness. Taleb quotes Kahneman often and with relish, and, as I learned later, not only there but also in his other books (The Black Swan: Under the Sign of Unpredictability, On the Secrets of Stability). Moreover, I found numerous references to Kahneman in other books: Evgeniy Ksenchuk, Systems Thinking: The Boundaries of Mental Models and a Systemic Vision of the World; Leonard Mlodinow, (Not) Perfect Coincidence: How Chance Rules Our Life. Unfortunately, I could not find Kahneman's book on paper, so I "had" to buy an e-book and download Kahneman from the Internet... And believe me, I did not regret it for a single minute...

D. Kahneman, P. Slovik, A. Tversky. Making Decisions in Uncertainty: Rules and Bias. - Kharkov: Publishing house Institute of Applied Psychology "Humanitarian Center", 2005. - 632 p.

This book is about the peculiarities of thinking and behavior of people when assessing and predicting uncertain events. As convincingly shown in the book, when making decisions under uncertain conditions, people usually make mistakes, sometimes quite significantly, even if they have studied the theory of probability and statistics. These errors are subject to certain psychological laws that have been identified and well substantiated experimentally by researchers.

Since the introduction of Bayesian ideas into psychological research, psychologists for the first time have been offered a holistic and clearly formulated model of optimal behavior in conditions of uncertainty, with which it was possible to compare human decision-making. The conformity of decision making to normative models has become one of the main paradigms of research in the field of judgment in the face of uncertainty.

Part I. Introduction

Chapter 1. Decision Making Under Uncertainty: Rules and Biases

How do people assess the likelihood of an uncertain event or the value of an uncertain quantity? People rely on a limited number of heuristic principles that reduce the complex tasks of estimating probabilities and predicting the values of quantities to simpler judgmental operations. These heuristics are very useful, but sometimes they lead to serious and systematic errors.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size.

Representativeness. What is the probability that process B will lead to event A? In answering, people usually rely on the representativeness heuristic, in which the probability is determined by the degree to which A is representative of B, that is, the degree to which A resembles B. Consider the description of a man by his former neighbor: “Steve is very withdrawn and shy, always ready to help me, but he has too little interest in other people and in reality in general. He is very meek and tidy, loves order, and has a passion for detail.” How do people rate the likelihood of Steve's profession (for example, farmer, salesman, airline pilot, librarian, or physician)?

In the representativeness heuristic, the likelihood that Steve is, for example, a librarian is determined by the degree to which he is representative of, or conforms to, the stereotype of a librarian. This approach to assessing likelihood leads to serious errors, because similarity, or representativeness, is not influenced by a number of factors that should influence the assessment of likelihood.

Insensitivity to the prior probability of the result. One of the factors that does not affect representativeness but significantly affects likelihood is the antecedent (prior) probability, or base-rate frequency, of the outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should necessarily be taken into account in any reasonable assessment of the likelihood that Steve is a librarian rather than a farmer. Taking the base-rate frequency into account, however, does not affect Steve's conformity to the stereotypes of librarians and farmers. If people estimate probability by means of representativeness, they will therefore neglect prior probabilities.

This hypothesis was tested in an experiment in which the prior probabilities were manipulated. The subjects were shown short descriptions of several people, chosen at random from a group of 100 specialists (engineers and lawyers). Subjects were asked to rate, for each description, the likelihood that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group from which the descriptions were drawn consisted of 70 engineers and 30 lawyers. In the other condition, subjects were told that the group consisted of 30 engineers and 70 lawyers. The odds that any particular description belongs to an engineer rather than a lawyer should be higher in the first condition, where engineers are in the majority, than in the second, where lawyers are in the majority. Specifically, it follows from Bayes' rule that the ratio of these odds should be (0.7/0.3)², or 5.44, for each description. In sharp violation of Bayes' rule, the subjects in both conditions produced essentially the same probability estimates. Apparently, the subjects judged the likelihood that a particular description belonged to an engineer rather than a lawyer by the degree to which the description was representative of the two stereotypes, with little or no regard for the prior probabilities of these categories.
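To see where the factor of 5.44 comes from, here is a minimal Python sketch; the likelihood ratio LR assigned to a description is a hypothetical number, and the point is that the ratio of posterior odds across the two conditions does not depend on it:

    # Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
    # With 70 engineers and 30 lawyers the prior odds are 0.7/0.3;
    # with 30 engineers and 70 lawyers they are 0.3/0.7.
    LR = 2.0                                   # hypothetical likelihood ratio of one description
    post_engineer_majority = LR * (0.7 / 0.3)  # posterior odds in the first condition
    post_lawyer_majority = LR * (0.3 / 0.7)    # posterior odds in the second condition
    print(post_engineer_majority / post_lawyer_majority)   # (0.7/0.3)**2, about 5.44, for any LR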

Insensitivity to sample size. People usually apply the representativeness heuristic here as well: they estimate the likelihood of obtaining a particular result in a sample by the extent to which this result resembles the corresponding population parameter. Because the similarity of a sample statistic to the population parameter does not depend on the size of the sample, if probabilities are assessed by representativeness then the judged probability of a sample statistic will be essentially independent of sample size. By contrast, according to sampling theory, the larger the sample, the smaller the expected deviation of the sample statistic from the population mean. This fundamental notion of statistics is evidently not part of people's intuitions.

Imagine a basket filled with balls, of which 2/3 are of one color and 1/3 of another. One person draws 5 balls from the basket and finds that 4 of them are red and 1 is white. Another person draws 20 balls and finds that 12 of them are red and 8 are white. Which of these two people should be more confident in saying that the basket contains 2/3 red balls and 1/3 white balls, rather than the reverse? In this example, the correct answer is to estimate the posterior odds as 8 to 1 for the sample of 5 balls and 16 to 1 for the sample of 20 balls (Fig. 1). However, most people feel that the first sample provides much stronger support for the hypothesis that the basket is mostly filled with red balls, because the proportion of red balls is larger in the first sample than in the second. This again shows that intuitive estimates are dominated by the sample proportion rather than by the sample size, which plays the decisive role in determining the actual posterior odds.

Fig. 1. Probabilities in the problem with the balls (see the formulas in the Excel file on the "Balls" sheet)
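A minimal Python sketch of the Bayesian calculation behind these odds, assuming equal prior odds for the two hypotheses ("2/3 red" versus "2/3 white"):

    # Posterior odds that the basket is 2/3 red rather than 2/3 white:
    # odds = (2/3)**red * (1/3)**white / ((1/3)**red * (2/3)**white) = 2**(red - white)
    def posterior_odds(red, white):
        return ((2 / 3) ** red * (1 / 3) ** white) / ((1 / 3) ** red * (2 / 3) ** white)

    print(posterior_odds(4, 1))    # about 8, for the sample of 5 balls
    print(posterior_odds(12, 8))   # about 16, for the sample of 20 balls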

Misconceptions of chance. People believe that a sequence of events generated by a random process represents the essential characteristics of that process even when the sequence is short. For example, with regard to coin tosses for heads or tails, people consider the sequence H-T-H-T-T-H to be more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H, which does not reflect the fairness of the coin. Thus, people expect the essential characteristics of the process to be represented not only globally, i.e., in the full sequence, but also locally, in each of its parts. A locally representative sequence, however, deviates systematically from chance expectation: it contains too many alternations and too few repetitions.

Another consequence of the belief in representativeness is the well-known gambler's fallacy. Seeing red come up too many times in a row on a roulette wheel, for example, most people mistakenly believe that black is now due, because an occurrence of black would complete a more representative sequence than yet another red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the balance. In fact, deviations are not corrected; they are merely diluted as the random process unfolds.
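A small simulation sketch (not from the book) illustrates this dilution: as the number of fair coin tosses grows, the proportion of heads approaches 1/2, while the absolute surplus of heads over an even split shows no tendency to shrink back to zero:

    import random

    random.seed(1)
    for n in (100, 10_000, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        surplus = heads - n // 2      # deviation from a perfect 50/50 split
        # The share heads/n approaches 0.5, yet the surplus is not "paid back" toward zero.
        print(n, round(heads / n, 4), surplus)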

Studies of the statistical intuitions of experienced research psychologists have revealed a strong belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. The responses of these researchers reflected the expectation that a hypothesis that is valid for the entire population will show up as a statistically significant result in a sample, with sample size being irrelevant. As a consequence, researchers place too much faith in the results obtained from small samples and grossly overestimate the replicability of such results. In the conduct of research, this bias leads to the selection of samples of inadequate size and to over-interpretation of findings.

Insensitivity to forecast reliability. People are sometimes called upon to make numerical predictions, such as the future price of a stock, the demand for a product, or the outcome of a football game. Such predictions are often made by representativeness. For example, suppose someone is given a description of a company and is asked to predict its future profit. If the description of the company is very favorable, a very high profit will appear most representative of that description; if the description is mediocre, an ordinary course of events will seem most representative. How favorable a description is does not depend on its reliability or on the extent to which it permits accurate prediction. Therefore, if people make predictions based solely on the favorableness of the description, their predictions will be insensitive to the reliability of the evidence and to the expected accuracy of the prediction. This mode of judgment violates normative statistical theory, in which the extremeness and the range of predictions depend on predictability. When predictability is zero, the same prediction should be made in all cases.

The illusion of validity. People are quite confident in predicting that a person is a librarian when given a description of his personality that matches the stereotype of a librarian, even if the description is meager, unreliable, or out of date. The unwarranted confidence produced by a good fit between the predicted outcome and the input data may be called the illusion of validity.

Misconceptions about regression. Suppose a large group of children was tested using two similar versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, they will usually prove disappointing on the second version of the test. These observations illustrate a common phenomenon known as regression to the mean, discovered by Galton more than 100 years ago. In everyday life we all encounter a large number of instances of regression to the mean, comparing, for example, the heights of fathers and sons. However, people do not develop correct intuitions about it. First, they do not expect regression in many contexts where it is bound to occur. Second, when they recognize that regression has occurred, they often invent spurious explanations for its causes.

Failure to recognize the meaning of regression can be detrimental. When discussing training flights, experienced instructors noted that praise for an exceptionally soft landing is usually accompanied by a more unsuccessful landing on the next attempt, while harsh criticism after a hard landing is usually accompanied by an improvement in results on the next attempt. The instructors concluded that verbal rewards are harmful to learning, while reprimands are beneficial, contrary to accepted psychological doctrine. This conclusion is untenable due to the presence of regression to the mean. Thus, the inability to understand the effect of regression leads to the fact that the effectiveness of punishment is valued too high, and the effectiveness of the reward is underestimated.
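A small simulation sketch (not from the book) of the aptitude-test example, assuming each test score is a stable ability plus independent test noise:

    import random

    random.seed(0)
    ability = [random.gauss(100, 10) for _ in range(1000)]   # hypothetical stable abilities
    test1 = [a + random.gauss(0, 10) for a in ability]        # version 1 = ability + noise
    test2 = [a + random.gauss(0, 10) for a in ability]        # version 2 = ability + new noise

    # Take the ten children with the best results on the first version.
    top10 = sorted(range(1000), key=lambda i: test1[i], reverse=True)[:10]
    mean1 = sum(test1[i] for i in top10) / 10
    mean2 = sum(test2[i] for i in top10) / 10
    print(round(mean1, 1), round(mean2, 1))   # mean2 is typically markedly lower: regression to the mean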

Availability. People rate the frequency of a class, or the likelihood of events, based on the ease with which they recall examples of incidents or events. When the size of a class is estimated based on the accessibility of its members, a class whose members are easily recoverable in memory will appear more numerous than a class of the same size, but whose members are less accessible and less likely to be remembered.

The subjects were read a list of famous people of both sexes and were then asked to judge whether the list contained more names of men or of women. Different lists were presented to different groups of subjects. In some of the lists the men were more famous than the women, and in others the women were more famous than the men. For each of the lists, the subjects erroneously concluded that the sex that had the more famous personalities was the more numerous one.

The ability to represent images plays an important role in assessing the likelihood of real life situations. The risk involved in a dangerous expedition, for example, is assessed by mentally replaying contingencies that the expedition does not have sufficient equipment to overcome. If many of these difficulties are vividly portrayed, the expedition may seem extremely dangerous, although the ease with which disasters are imagined does not necessarily reflect their actual likelihood. Conversely, if a possible hazard is difficult to imagine, or simply does not come to mind, the risk associated with an event can be grossly underestimated.

Illusory correlation. Lifelong experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that likely occurrences are easier to imagine than unlikely ones; and that associative connections between events are strengthened when the events frequently occur together. As a result, a person has at his disposal a procedure (the availability heuristic) for estimating the size of a class, the likelihood of an event, or the frequency of co-occurrences by the ease with which the corresponding mental operations of recall, reproduction, or association can be performed. However, these estimation procedures are systematically prone to error.

Adjustment and "snapping" (anchoring). In many situations, people make estimates based on an initial value. Two groups of high school students evaluated, for 5 seconds, the value of a numeric expression that was written on a blackboard. One group evaluated the value of the expression 8x7x6x5x4x3x2x1, while the other group evaluated the value of the expression 1x2x3x4x5x6x7x8. The average score for the ascending sequence was 512, while the average score for the descending sequence was 2250. The correct answer was 40 320 for both sequences.

Bias in the evaluation of compound events is particularly significant in the context of planning. The successful completion of a business venture, such as the development of a new product, typically has a conjunctive character: for the venture to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large. The general tendency to overestimate the probability of conjunctive events leads to unwarranted optimism in evaluating the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the probability of failure of each component is small, the probability of failure of the whole system can be high if many components are involved. Because of anchoring bias, people tend to underestimate the probability of failure in complex systems. Thus, the direction of the anchoring bias can sometimes be inferred from the structure of the event: a chain-like structure of conjunctive links leads to overestimation of the probability of the event, while a funnel-like structure of disjunctive links leads to underestimation.
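A minimal sketch with hypothetical numbers makes the asymmetry concrete: the probability of a conjunction of n likely steps is p**n, while the probability that at least one of n unlikely failures occurs is 1 - (1 - q)**n:

    # Conjunctive event: a plan succeeds only if every one of n steps succeeds.
    # Disjunctive event: a system fails if any one of n components fails.
    p_step, q_fail, n = 0.95, 0.02, 20           # hypothetical numbers
    p_plan_succeeds = p_step ** n                # about 0.36: low, although each step is likely
    p_system_fails = 1 - (1 - q_fail) ** n       # about 0.33: high, although each failure is unlikely
    print(round(p_plan_succeeds, 2), round(p_system_fails, 2))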

"Binding" when assessing the distribution of subjective probability. When analyzing decision making, experts are often required to express their opinion on a quantity. For example, an expert may be asked to select a number, X 90, so that the subjective probability that this number will be higher than the Dow Jones average is 0.90.

An expert is considered properly calibrated on a given set of problems if, for example, only 1% of the true values of the estimated quantities fall below his stated values of X01 and only 1% fall above his stated values of X99. Thus, the true values should fall between X01 and X99 in 98% of the problems.
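A minimal sketch of such a calibration check, with hypothetical stated fractiles and true values:

    # Calibration check for 98% intervals: the expert states (X01, X99) for each quantity;
    # he is well calibrated if about 98% of the true values fall inside his intervals.
    intervals = [(50, 120), (10, 35), (200, 450)]    # hypothetical stated (X01, X99) pairs
    true_values = [90, 40, 300]                       # hypothetical values revealed later
    hits = sum(lo <= v <= hi for (lo, hi), v in zip(intervals, true_values))
    print("hit rate:", hits / len(true_values))       # should be close to 0.98 over many problems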

Reliance on heuristics and the prevalence of biases are not unique to ordinary people. Experienced researchers are prone to the same biases when they think intuitively. It is surprising that people fail to infer from long life experience such fundamental statistical rules as regression to the mean or the effect of sample size. Although we all encounter throughout our lives numerous situations to which these rules could apply, very few people discover the principles of sampling and regression on their own from their experience. Statistical principles are not learned from everyday experience.

Part II. Representativeness

DECISION THEORY

Topic 5: Decision Making Under Uncertainty

Introduction

1. The concept of uncertainty and risk

2. Levels of uncertainty in assessing the effectiveness of management decisions

3. Classification of risks in the development of management decisions

4. Decision making technologies in conditions of stochastic risk

Conclusion

A managerial decision is made in conditions of certainty, if the manager knows exactly the result of the implementation of each alternative. It should be noted that a management decision is made in conditions of certainty rather rarely.

Uncertainties are the main reason for the emergence of risks. Reducing their volume is the manager's main task.


Managers often have to develop and make management decisions in conditions of incomplete and unreliable information, and the results of the implementation of management decisions do not always coincide with the planned indicators. These conditions are classified as circumstances of uncertainty and risk.

A managerial decision is made in conditions of uncertainty, when the manager does not have the ability to assess the likelihood of future results. This happens when the parameters to be taken into account are so new and unstructured that the likelihood of a particular consequence cannot be predicted with sufficient confidence.

Management decisions are made in conditions of risk, when the results of their implementation are not determined, but the probability of each of them occurring is known. The uncertainty of the result in this case is associated with the possibility of unfavorable situations and consequences for achieving the intended goals.

Uncertainty in decision-making is manifested in the parameters of the information used at all stages of its processing. Uncertainty is difficult to measure and is more often assessed in terms of quality (high or low). It is also estimated as a percentage (information uncertainty at the level of 30%).

Uncertainty is associated with the development of a management decision, and risk - with the results of implementation.


“Uncertainty is viewed as a phenomenon and as a process. If we consider it as a phenomenon, then we are dealing with a set of fuzzy situations, incomplete and mutually exclusive information. The phenomena also include unforeseen events that arise against the will of the leader and can change the course of planned events: for example, a sharp change in the weather led to a change in the program for celebrating the city's day.”

As a process, uncertainty is the activity of an incompetent manager who makes the wrong decisions. For example, when assessing the investment attractiveness of a municipal loan, mistakes were made, and as a result, the city budget did not receive 800 thousand rubles. In practice, it is necessary to consider uncertainty as a whole, since a phenomenon is created by a process, and a process forms a phenomenon.

Uncertainties are objective and subjective.

Objective ones do not depend on the decision maker, and their source is outside the system in which the decision is made.

Subjective ones are the result of professional mistakes, shortcomings, inconsistencies in action, while their source is located within the system in which the decision is made.

There are four levels of uncertainty:

Low, which does not affect the main stages of the development and implementation of management decisions;

Medium, which requires a revision of some stages of development and implementation of the solution;

High, which requires the development of new procedures;

Superhigh, which does not allow assessing and adequately interpreting the data on the current situation.

2. Levels of uncertainty in assessing the effectiveness of management decisions

Consideration of the levels of uncertainty allows you to analytically represent their use, depending on the nature of the manager's management activities.

Figure 1 presents the matrix of the effectiveness of management decisions as an interaction between the levels of uncertainty and the nature of the manager's activities.

Effective decisions are those that are well-reasoned, well-developed, feasible, and understandable to the performer; ineffective decisions are unreasonable, incomplete, impracticable, and hard to implement.

Within the framework of stable management activities, standard, repetitive procedures are carried out in conditions of weak disturbing influences of the external and internal environment.

The corrective nature of management activity is used in case of medium disturbing influences of the external and internal environment, when the leader has to adjust the key processes of the management system.

Innovative management activities are characterized by a constant search and implementation of new processes and technologies to achieve the set goals.

The combination of a low level of uncertainty with a stable or corrective nature of activities (areas A1 and B1) allows the leader to make informed decisions with minimal implementation risk. With an innovative nature of activity and a low level of uncertainty (area C1), deterministic information will slow down the process of making effective decisions.

The combination of a medium level of uncertainty with the corrective and innovative nature of management activities yields the area of effective decisions (areas B2 and C2).

A high level of uncertainty combined with a stable nature of management activities leads to ineffective decisions (area A3), but it suits the innovative nature of management activities well (area C3).


Fig. 1. Matrix of the effectiveness of management decisions

“An extremely high level of uncertainty leads to ineffective decisions, since poorly structured, difficult to perceive and unreliable information makes it difficult to make effective decisions.”



Oleg Levyakov

There are no unsolvable problems, there are unsolved solutions.
Eric Berne

Decision making is a special kind of human activity aimed at choosing a way to achieve a goal. In a broad sense, a decision is understood as the process of choosing one or more options for action from a variety of possible ones.

Decision making has long been considered the primary responsibility of the ruling elite. This process is based on the choice of direction of activity in conditions of uncertainty, and the ability to work in conditions of uncertainty is the basis of the decision-making process. If there was no uncertainty about which direction of activity should be chosen, there would be no need to make a decision. Decision-makers are assumed to be reasonable, but that reasonableness is "limited" by a lack of knowledge about what should be preferred.


A well-formulated problem is a half-solved problem.
Charles Kettering

In 1979, Daniel Kahneman and Amos Tversky published the article "Prospect Theory: An Analysis of Decision under Risk," which gave rise to so-called behavioral economics. In this work, the scientists presented the results of their psychological experiments, which showed that people cannot rationally assess the magnitudes of expected benefits or losses, let alone the quantitative probabilities of random events. It turns out that people tend to err when assessing probability: they underestimate the likelihood of events that are quite likely to occur and overestimate much less likely events. The scientists found that even mathematicians who know probability theory well do not use their knowledge in real-life situations but proceed instead from their stereotypes, prejudices, and emotions. Instead of decision theories based on probability theory, D. Kahneman and A. Tversky proposed a new theory, prospect theory. According to this theory, a normal person is not able to assess future benefits correctly in absolute terms; in fact, he evaluates them in comparison with some generally accepted standard, trying above all to avoid a worsening of his position.


You will never solve a problem if you think the same way as those who posed it.
Albert Einstein

Making decisions in the face of uncertainty does not even presuppose knowing all the possible gains and their probabilities. It rests on the fact that the probabilities of the various scenarios are unknown to the subject making the risky decision. In this case, when choosing among decision alternatives, the subject is guided, on the one hand, by his risk preference and, on the other, by an appropriate criterion for selecting from all the alternatives. In other words, decisions are made in the face of uncertainty when it is impossible to assess the likelihood of potential outcomes. The uncertainty of the situation can be caused by various factors, for example: a significant number of objects or elements in the situation; lack of information or its inaccuracy; a low level of professionalism; time constraints, and so on.

So how does probability estimation actually work? According to D. Kahneman and A. Tversky (Decision Making in Uncertainty: Rules and Biases. Cambridge, 2001), subjectively. We estimate the likelihood of random events, especially in a situation of uncertainty, extremely imprecisely.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size. So, the estimated distance to an object largely depends on the clarity of its image: the clearer the object is seen, the closer it seems. That is why the number of accidents on the roads during fog increases: in poor visibility, distances are often overestimated, because the contours of objects are blurred. Thus, using clarity as a measure of distance leads to common biases. Such biases also manifest themselves in the intuitive assessment of probability.


There is more than one way to look at a problem, and they may all be correct.
Norman Schwarzkopf

The activity connected with choice is the main activity in decision making. If the degree of uncertainty about the outcomes and the ways of achieving them is high, decision-makers apparently face the nearly impossible task of choosing a particular sequence of actions. The only way forward is inspiration, and individual decision-makers act on a whim or, in special cases, rely on divine intervention. In such circumstances errors are considered possible, and the task is to correct them with subsequent decisions. In this conception, the emphasis is on decision making as a choice within an uninterrupted chain of decisions: as a rule, the matter does not end with a single decision; one decision entails the need to make the next, and so on.

Often, decisions are made representatively, i.e., by a kind of projection, a mapping of one thing onto another; what is meant is the internal representation formed in the course of a person's life, in which his picture of the world, of society, and of himself is contained. Most often, people estimate probability by means of representativeness, while prior probabilities are neglected.


The difficult problems we face cannot be solved at the same level of thinking that we were at when they were born.
Albert Einstein

There are situations in which people judge the likelihood of events based on the ease with which they recall examples of incidents or events.

The easy accessibility of recalling events in memory contributes to the formation of biases in assessing the likelihood of an event.


It is true that which corresponds to the practical success of the action.
William James

Uncertainty is a fact that all forms of life have to contend with. At all levels of biological complexity, there is uncertainty about the possible consequences of events and actions, and at all levels, action must be taken before the uncertainty is clarified.

Kahneman's research has shown that people respond differently to equivalent (in terms of the ratio of gains and losses) situations, depending on whether they lose or gain. This phenomenon is called an asymmetric response to changes in welfare. A person is afraid of loss, i.e. his feelings of loss and gain are asymmetric: the degree of a person’s satisfaction from an acquisition is much lower than the degree of frustration from an equivalent loss. Therefore, people are willing to take risks in order to avoid losses, but are not inclined to take risks in order to gain benefits.

His experiments showed that people are prone to misjudgment of probability: they underestimate the likelihood of events that are likely to occur, and they overestimate much less likely events. Scientists have discovered an interesting pattern - even mathematics students who know the theory of probability well do not use their knowledge in real life situations, but proceed from their stereotypes, prejudices and emotions.

Thus, Kahneman came to the conclusion that human actions are governed not only and not so much by the mind of people as by their stupidity, since a great many actions performed by people are irrational. Moreover, Kahneman experimentally proved that the illogicality of human behavior is natural and showed that its scale is incredibly large.

According to Kahneman and Tversky, people do not compute or calculate; rather, they make decisions in accordance with their notions, in other words, they estimate. This means that people's inability to carry out a full and adequate analysis leads us, in conditions of uncertainty, to rely more on random choice. The likelihood of an event is assessed on the basis of "personal experience," i.e., on subjective information and preferences.

Thus, people irrationally prefer to believe what they know, flatly refusing to admit even the obvious fallacy of their judgments.