Once I left the house in a red jacket and got caught in the rain. The next time I dressed the same way and went outside, it rained again. So every time I wear a red jacket, it rains? Of course not. But our thinking is set up in a way that can play tricks on us – now every time I put on a red jacket, I take an umbrella with me. This makes no sense at all, so why do I keep associating outerwear of a certain color with rain? The answer to this question, oddly enough, dates back to 1975, when researchers at Stanford University conducted a series of fascinating experiments in an attempt to understand how our beliefs are formed. The scientists invited a group of students and handed them pairs of suicide notes – in each pair, one note had been composed by a random person and the other had been written by someone who really did commit suicide – and asked them to distinguish the genuine notes from the fake ones. The results were surprising and have since been confirmed by numerous other studies.
How are beliefs formed?
According to The New York Times, which cited the study, some students discovered while reading the notes that they had a talent for spotting the genuine ones: out of twenty-five pairs, this group correctly identified the real note twenty-four times. Others realized they were hopeless at the task – they picked the real note only ten times.
As is often the case with psychological research, the whole set-up was a sham. While half of the notes were indeed genuine – they came from the Los Angeles County coroner’s office – the assessments were fictitious. Students who had been told they were almost always right were, on average, no more astute than those who had been told they were mostly wrong.
In the second phase of the study, the deception was revealed. The students were told that the real purpose of the experiment had been to gauge their reactions to thinking they were right or wrong. (This, as it turned out, was also a hoax.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly and how many they thought the average student had identified.
At this point, something curious happened: the students in the high-scoring group said they thought they had actually done quite well – significantly better than the average student – even though, as they had just been told, they had no reason to believe so. Conversely, the students in the low-scoring group reported that, in their own opinion, they had done significantly worse than the average student – a conclusion that was equally unwarranted. So what’s the point?
A few years later, a new set of Stanford students was recruited for a similar study. This time they were handed packets of information about a pair of firefighters, Frank K. and George H. Frank had a young daughter and loved to scuba dive; George had a young son and played golf. The packets also included the men’s responses to what the researchers called a “risky-conservative choice test.” In one version of the packet, Frank was a successful firefighter who almost always chose the safest option. In the other, Frank still chose the safest option but was a lousy firefighter who had received several warnings from his superiors.
In the middle of the study, the students were told that they had been deliberately misled and that the information they had received was completely fictitious. They were then asked to describe their own beliefs: how did they think a firefighter should view risk? Students who had received the first packet thought a firefighter would try to avoid risk, while students in the second group believed a firefighter would take risks.
It turns out that even after “the evidence for their beliefs has been completely refuted, people are unable to make appropriate changes to those beliefs,” the researchers wrote. In this case, the failure was “particularly impressive” because two data points would never have been enough information to generalize from.
Ultimately, the Stanford study became famous. The claim, made by a group of scientists in the seventies, that humans could not think straight sounded shocking at the time. It no longer does: thousands of subsequent experiments have confirmed the American researchers’ finding, and today any graduate student with a tablet can demonstrate that seemingly reasonable people are often completely irrational. Rarely has this insight seemed more relevant than it does now, has it not?