As you will discover, many researchers have studied human judgement by setting real-world problems for their participants and seeing whether their responses accord with the rules of probability or logic. Often they do not, but perhaps this should not be surprising. The mathematics that allows an understanding of probabilistic concepts only began to be developed in the seventeenth century, with the work of people such as Blaise Pascal (1623–1662) and Pierre de Fermat (1601–1665), whereas human cognitive processes have been evolving for considerably longer. Even with a modern education, many people struggle to understand and consciously apply the rules of probability unless they have had a considerable degree of education in, and practice using, probability theory.

For many domains of judgement, including those for which probabilistic reasoning would be helpful, researchers such as Kahneman and Tversky have proposed that people respond by applying general 'rules of thumb', or in other words simplifying operations, that psychologists refer to as heuristics. Heuristics are generally supposed to be useful, and may often be so, but they do not guarantee a correct solution to a problem. Tversky and Kahneman (1974) described three such heuristics that have subsequently been the subject of much investigation: representativeness, availability and anchoring (see also the influential collection of papers and original articles in Kahneman et al., 1982). In the following sections you will explore these heuristics, including trying out some of the problem-based tasks that Kahneman and colleagues used in the research that discovered them.

The logical principle that most people (85 per cent in the original study) fail to take into account in examples like Activity 11.2 is that the conjunction of two events can never be more likely than the occurrence of either event by itself. Tversky and Kahneman (1983) referred to people's responses on this kind of task as a conjunction fallacy. They proposed that this fallacy occurs when people apply the representativeness heuristic to the problem (the original work on the representativeness heuristic is described in Kahneman and Tversky, 1972). This heuristic involves making a judgement of resemblance between the specific person or thing being judged and a stereotypical example of the category to which that person or thing belongs. In the case of Toyah, her description does not seem to bear much resemblance to the stereotype of the category 'bank worker'. However, add 'environmental activist' to 'bank worker' and the resemblance to Toyah seems to increase. Critically, a judgement of resemblance is logically not the same as a judgement of probability, but the combination of the representativeness heuristic and the conjunction fallacy makes people respond as though it is.
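To see why the conjunction must be the less likely option, it can help to put illustrative numbers on it. The short Python sketch below applies the conjunction rule P(A and B) = P(A) × P(B | A); the probability values are invented purely for the example and are not from Tversky and Kahneman's study.

```python
# Illustrative sketch of the conjunction rule; the probabilities are
# invented for the example, not taken from the original study.
p_bank_worker = 0.05          # P(Toyah is a bank worker) - assumed value
p_activist_given_bank = 0.10  # P(activist | bank worker) - assumed value

# Conjunction rule: P(A and B) = P(A) * P(B | A). Since P(B | A) <= 1,
# the conjunction can never exceed P(A) on its own.
p_both = p_bank_worker * p_activist_given_bank

print(f"P(bank worker)              = {p_bank_worker:.3f}")  # 0.050
print(f"P(bank worker and activist) = {p_both:.3f}")         # 0.005
assert p_both <= p_bank_worker
```

Whatever values are substituted, the final assertion can never fail; that inequality is precisely the logical principle the conjunction fallacy violates.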
The conjunction fallacy is one way that judgements by representativeness can manifest themselves. Another is insensitivity to sample size, that is, to how many of something are being considered. You may already have some intuitive sense of sample size; for example, you would probably trust the results of a survey that sampled 1000 people more than one that sampled 10. Activity 11.3 will test how accurate your intuitions about sample size are.

Tversky and Kahneman (1971) refer to this principle as the 'law of small numbers', a corollary of the 'law of large numbers'. The law of large numbers states that a larger sample will be more representative of what is being sampled than a smaller one. The law of small numbers extends this by pointing out that smaller samples will therefore be less representative of what is being sampled, and are thus more likely to give extreme (i.e. far from the average) results, than larger samples. This point is important to remember, both academically and in the real world. When reading about scientific studies, always be cautious of those with very small sample sizes, as they may not be representative. (It is perhaps ironic that Tversky and Kahneman's original 'law of small numbers' study itself had a small sample size.) Similarly, when engaging with everyday data such as surveys and official statistics, always treat small sample sizes with caution.
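The point can be made concrete with a short simulation. The sketch below (illustrative only; the 40–60 per cent 'extreme' band is an arbitrary choice) tosses a fair coin in samples of different sizes and counts how often the observed proportion of heads strays outside that band.

```python
import random

random.seed(1)

def extreme_rate(sample_size: int, trials: int = 10_000) -> float:
    """Proportion of samples whose heads rate falls below 0.4 or above 0.6."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        proportion = heads / sample_size
        if proportion < 0.4 or proportion > 0.6:
            extreme += 1
    return extreme / trials

for n in (10, 100, 1000):
    print(f"n = {n:>4}: extreme result in {extreme_rate(n):.1%} of samples")
# Typical output: roughly a third of the 10-toss samples are 'extreme',
# only a few per cent of the 100-toss samples, and essentially none at 1000.
```

The smaller the sample, the more often it misrepresents the fair coin that generated it, which is exactly the pattern the law of small numbers describes.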
Another aspect of the representativeness heuristic is the gambler's fallacy. Someone who applies representative thinking to randomising devices (e.g. coins, dice, roulette wheels) will expect short sequences of outcomes to resemble the outcomes of longer sequences. For example, when thinking about a sequence of coin tosses, most people believe that the sequence HTHTTH is more likely than HHHTTT, which appears less reflective of a random process, and more likely than HHHHTH, which does not appear to reflect a fair coin (Kahneman and Tversky, 1972). In fact, all three sequences are equally likely. Think back to your and my coin toss sequences in Activity 11.1 and (unless you think one or both of us had a biased coin) the illogic of the gambler's fallacy should become clear.
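The arithmetic behind this point is simple: with a fair coin, any specific sequence of six tosses has probability (1/2)^6 = 1/64, however patterned or 'random' it looks. A minimal check:

```python
# Every specific sequence of six fair-coin tosses has the same probability,
# (1/2) ** 6 = 1/64, regardless of how 'random' it appears.
for seq in ("HTHTTH", "HHHTTT", "HHHHTH"):
    print(f"P({seq}) = {0.5 ** len(seq):.6f}  (= 1/64)")
```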
Using representative thinking in the context of gambling means treating the randomising device as though it has a memory: for example, as though the coin is somehow keeping track of how many times it has landed heads and how many times tails, and is wilfully aiming to balance them out. Of course, this is not the case: a coin does not have a memory, and each toss is independent of the one(s) that went before. Nonetheless, people do treat randomising devices as though they have a memory, sometimes with tragic consequences, as the news story in Extract 11.1 indicates.

This story presents a case where people expected a specific random event to occur, namely the number '53' coming up in an Italian lottery, simply because it had not occurred for some time. Lottery players also do the converse: they avoid picking numbers that have recently appeared in a winning combination. For example, a study of Maryland's 'Pick 3' lottery found that it took three months before winning numbers regained their popularity (Clotfelter and Cook, 1993; see also Terrell, 1994). A more astute lottery player might deliberately choose numbers that have recently won, on the basis that this reduces the chance of sharing the jackpot, should he or she be lucky enough to win!
John Haigh (2003) observed that the first 282 draws of the British National Lottery included 132 occasions when the winning combination contained an adjacent pair of numbers, which lottery players seldom choose because consecutive numbers do not 'seem' random. Consequently, there were fewer winners (330) than would be expected on the basis of genuinely random selection (514, given the number of tickets sold), resulting in a larger prize for those who did pick the winning combination. Lottery organisers do nothing to combat errors in thinking such as the gambler's fallacy, and even actively encourage them: the announcers typically give information on how often each number has come up before and how long it has been since a number last came up, and point out any clusters (e.g. 'number 42, that's the third week running').
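Haigh's figure is easy to sanity-check by simulation. The sketch below assumes the 6-from-49 format the British National Lottery used at the time and estimates how often a random draw contains at least one pair of consecutive numbers.

```python
import random

random.seed(42)

def has_adjacent_pair(draw: list[int]) -> bool:
    """True if the sorted draw contains two consecutive numbers."""
    draw = sorted(draw)
    return any(b - a == 1 for a, b in zip(draw, draw[1:]))

trials = 100_000
hits = sum(has_adjacent_pair(random.sample(range(1, 50), 6))
           for _ in range(trials))
print(f"Estimated P(at least one adjacent pair) = {hits / trials:.3f}")
# Comes out near 0.50, consistent with the 132/282 (about 0.47) rate
# Haigh observed in the actual draws.
```

In other words, nearly half of all genuinely random draws contain a consecutive pair, even though such combinations do not 'seem' random to players.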
Another phenomenon that can arise from representative thinking is the neglect of base rates. The term 'base rate' refers to the frequency with which certain events occur in the population, and an error of judgement that arises from misunderstanding or ignoring a base rate is called the base rate fallacy.
In one of their classic studies, Kahneman and Tversky (1973) presented university students with a description of ‘Tom W’. Tom W was said to have been chosen at random from the whole population of students at the university, and was described as being:
of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
(Kahneman and Tversky, 1973, p. 238)
Some students were asked how similar Tom was to a typical student in various fields of specialisation, while others were asked how likely it was that Tom was actually enrolled in each of these fields. Overall, people tended to think that Tom was a student on one of the less popular courses, even though logically it is more likely that a randomly selected student would be on a more popular course than a less popular one, simply because there are more students on the popular courses. For example, a randomly chosen Open University student is more likely to be studying psychology than economics, because The Open University has more psychology students than economics students. Moreover, judgements of how likely Tom was to be a student on any given course were almost perfectly correlated with judgements of how similar people thought Tom was to a 'typical' student in that subject. Thus, people appeared to be neglecting base rates (how many students were on each course) and relying instead on judgements of similarity, that is, on representativeness.
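What taking the base rates into account would look like can be sketched with Bayes' rule. The numbers below are invented for illustration and are not from Kahneman and Tversky's study; the point is only that the probability of a field given a description depends on the base rate as well as on the 'fit'.

```python
# Hedged illustration of Bayes' rule for the 'Tom W' problem.
# All probabilities are invented for the sketch.
base_rate = {"library science": 0.02, "business": 0.30}  # P(field) - assumed
fit = {"library science": 0.60, "business": 0.05}        # P(description | field) - assumed

# P(field | description) is proportional to P(description | field) * P(field).
posterior = {f: fit[f] * base_rate[f] for f in base_rate}
total = sum(posterior.values())
for field, p in posterior.items():
    print(f"P({field} | description): unnormalised {p:.4f}, "
          f"normalised {p / total:.2f}")
# library science: 0.012 -> 0.44; business: 0.015 -> 0.56
```

Here the description fits the rare field twelve times better, yet the popular field still comes out (slightly) ahead, because its base rate is fifteen times higher. Judging by resemblance alone drops the base-rate term entirely.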
The second major type of heuristic identified by Kahneman and colleagues is the availability heuristic: the assumption that more easily remembered things occur more often than less easily remembered things.
In the original study from which Activity 11.4 was adapted, participants were presented with lists of 39 names, containing either (a) 19 famous women and 20 less famous men, or (b) 19 famous men and 20 less famous women. After reading the lists, 81 per cent of participants (80 out of 99) judged that the gender with the more famous names was the more numerous. A second group of 86 participants was asked to actually recall the names that had been presented, and recalled on average 12.3 of the 19 famous names (63 per cent) and 8.4 of the 20 less famous names (43 per cent). In other words, it seems that familiarity with names made them easier to recall, and this influenced people's assessments of which gender was more frequent. Assuming that you were familiar with more of the women on the list than the men, you too probably found the women's names easier to remember and therefore assumed that there were more of them.
When people make judgements of probability and frequency based on the ease with which they can think of relevant examples, they are said to be using the availability heuristic (Tversky and Kahneman, 1973). In this heuristic, things that are more readily available to the conscious mind (i.e. those that are more easily remembered) are judged to be more frequent, common or likely than things that are less readily available. Sometimes this heuristic can lead to a correct inference, such as when easily remembered things really are more common. For example, whether you are a fan of them or not, you can probably recall more Beatles hits than hits by most other bands, which would not be surprising, as they are the biggest-selling pop group in history. However, although more frequent events do tend to be more easily brought to mind, other factors also have an influence. For example, things that happened recently tend to be more easily recalled than similar things that happened longer ago, and dramatic events that lend themselves to the construction of vivid mental images tend to be more easily recalled than bland or mundane events.
To return to the example used at the beginning of this chapter, aeroplane crashes are exceedingly rare, but when they do occur they are widely reported (and, as noted earlier, rare events often happen in clusters, which may make them seem more frequent than they truly are). They are also easy to visualise, regardless of whether pictures appear in the media. Such factors may increase the availability of fatal crashes in the minds of the public, at least for a period of time following a crash, feeding into their fear of such events. Having a fear that is out of proportion to the risk can have serious consequences. For example, people who are afraid to fly may seek an alternative mode of transport that is actually more dangerous, such as driving. An analysis of US traffic fatalities between October 2001 and September 2002 (the 12 months following the 9/11 terror attacks) found that there were 1600 more road deaths than would be expected in a typical year. This figure is six times the number of passengers and crew who were killed in the aircraft on 11 September (Gigerenzer, 2006; see also Gaissmaier and Gigerenzer, 2012).

As well as affecting the perception of events in the world, the availability heuristic can also affect people's perceptions of themselves. Activity 11.5 asks you to consider some aspects of your own past behaviour and how you see yourself in relation to those behaviours. Please try to answer the activity as honestly and accurately as you can.

Tversky and Koehler (1994) proposed that when people assess the likelihood of events, what they actually make a judgement about is a description of the event. These descriptions can be seen as analogous to scientific hypotheses, and the estimated likelihood of the event depends on the support that a person can call on for that hypothesis (for this reason the account is sometimes referred to as 'support theory'). Differences in the way an event is described can therefore lead to different assessments of its probability. Tversky and Koehler asked one group of people to estimate the likelihood of dying from 'heart disease, cancer, or any other natural cause', while another group was simply asked to estimate the likelihood of dying from 'natural causes'. The two groups should have given similar estimates, because 'natural causes' includes heart disease and cancer. However, the participants who had to consider only the 'natural causes' category underestimated the likelihood relative to those who also had to consider some specific categories of natural causes. The reason is that naming heart disease and cancer in the question brought them to mind when the participants were considering their judgements, so they were available to support an estimate of higher likelihood.

The third major heuristic considered in this section is the anchoring effect, sometimes known as the anchoring and adjustment heuristic, first identified by Tversky and Kahneman (1974). In this effect, when someone is asked to estimate a quantity, date, probability or anything else that can be expressed numerically, presenting them with some other number beforehand will influence the estimate they give: their estimate will tend to be closer to the anchor than the estimates of someone who was not presented with an anchoring value first. When an anchor provides no useful information (as was the case in Activity 11.6), being influenced by the anchor is a source of error. Anchoring and adjustment occurs when someone arrives at a numerical estimate by adjusting away from an initial value. Sometimes the anchor may be fairly logical and potentially useful, such as using the roughly nine-month human gestation period as a starting point for estimating the gestation period of an elephant (e.g. thinking 'elephants are bigger than humans, so their gestation period is probably more than nine months'; see Epley and Gilovich, 2001). Adjustments, however, are typically insufficient, so that the final value is often unreasonably close to the original anchor. Furthermore, even random numbers can form an anchor.
In one of the original demonstrations of anchoring and adjustment, Tversky and Kahneman (1974) spun a wheel of fortune, similar to a roulette wheel, in participants’ presence and asked them to assess whether some quantity (e.g. the percentage of African countries in the United Nations) was more or less than the number given by the wheel of fortune. Having answered a comparative question, the participants then estimated the exact percentage, in a similar way to the task you did in Activity 11.6 with Bach’s and Carroll’s ages of death. Tversky and Kahneman reported that estimates tended to be close to the anchor provided by the wheel of fortune. For example, when the anchor was 10 the median estimate was 25, and when the anchor was 65 the median estimate was 45.
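One common way of describing such results is an insufficient-adjustment model: the estimate starts at the anchor and moves only part of the way towards what the respondent actually believes. The sketch below is an illustrative model of that idea, not Tversky and Kahneman's own analysis; the 'belief' value and the adjustment fraction are invented.

```python
# Illustrative insufficient-adjustment model of anchoring. The belief
# value and the adjustment fraction are invented for the sketch.
def anchored_estimate(anchor: float, belief: float,
                      adjustment: float = 0.4) -> float:
    """Move only a fraction of the distance from the anchor to the belief."""
    return anchor + adjustment * (belief - anchor)

belief = 35.0  # the respondent's underlying belief (assumed value)
for anchor in (10, 65):
    print(f"anchor {anchor:>2} -> estimate {anchored_estimate(anchor, belief):.0f}")
# anchor 10 -> estimate 20; anchor 65 -> estimate 53. Estimates land between
# the anchor and the belief, qualitatively matching the medians of 25 and 45.
```

Because the adjustment stops short, the two groups end up with systematically different estimates of the same quantity, purely as a result of the number the wheel happened to produce.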
Subsequent research has identified anchoring effects in a wide range of domains, including many everyday contexts, indicating that anchoring may be an even more general cognitive process than the representativeness or availability heuristics (Keren and Teigen, 2004). In one of the most worrying examples, several researchers have found evidence of anchoring effects in legal contexts. Section 3.4 outlines some of these studies.

In a study by Birte Englich and Thomas Mussweiler (2001), 19 professional German criminal court judges were asked to read materials relating to a fictional trial. The materials had been developed by the researchers, in close collaboration with another group of experienced trial judges, to be as realistic as possible. After reading the information about the case, half the judges were told that the prosecutor had asked for the defendant to be given a jail sentence of 34 months, and the other half were told that the prosecutor had asked for a sentence of two months. The judges were asked to indicate whether the sentence requested by the prosecutor was too high, too low or about right, and then to specify what they thought the appropriate term would be. Judges who were told that the prosecutor had asked for 34 months suggested, on average, that an appropriate sentence would be 28.7 months. Those who were told that the prosecutor had asked for two months suggested an average sentence of 18.78 months: the anchoring effect had made a difference of around ten months' imprisonment in this case.

Similar results have been found for personal injury lawsuits. Gretchen Chapman and Brian Bornstein (1996) asked mock jurors (research participants acting as jurors for the purposes of the study, rather than people serving on a real trial) to consider a case in which a woman claimed that her birth control pill had caused ovarian cancer. Depending on the anchoring condition to which participants had been assigned, the plaintiff's claim was for $100, $20,000, $5 million or $1 billion. Among those participants who decided that the defendant was liable, the monetary award increased across these anchoring conditions. The practical implication, of course, is that claimants should ask for larger awards rather than more conservative ones, as that will provide the legal decision makers with a higher anchor.
You may be thinking that the use of anchors in the studies above is reasonable. After all, in real courtrooms prosecutors have information about the sentences typically given in similar cases, and plaintiffs will have assessed their losses before deciding on their demands. Perhaps, then, the professional judges and mock jurors in these studies were justified in using the prosecutor’s or plaintiff’s demands as a starting point from which to anchor and adjust. Perhaps. However, research by Englich and colleagues (2006) suggests that these anchoring effects were not based on such calculated logic. In a series of studies, using the same fictitious but realistic case scenario as Englich and Mussweiler (2001), and again using legal professionals, Englich and colleagues made the anchors increasingly – and increasingly obviously – irrelevant. In the first study the anchors were presented in the form of a question from a (hypothetical) journalist, who asked: ‘Do you think the sentence for the defendant in this case will be higher or lower than [one or three] years?’ Participants who had been presented with the ‘one year’ anchor gave an average sentence of 25.43 months, while participants who had been presented with the ‘three years’ anchor gave an average sentence of 33.38 months.
The potential for the media, even unintentionally, to influence sentencing decisions is worrying enough. However, two follow-up studies are even more worrying. In one of these the judges were told that the suggested sentence had been randomly determined, and yet a higher suggested sentence (i.e. anchor) increased the sentence that they said they themselves would give. In the final study the judges randomly determined the suggested sentence themselves by rolling a pair of dice. Unbeknown to the judges, the dice were loaded so that half of them ‘randomly’ rolled a low sentence and half a high sentence. Even knowing that the anchor had been determined by a dice roll, judges gave a longer sentence in the high anchor condition compared with the low anchor condition (and a no anchor control condition).
