Tuesday, 16 April 2013

Failures of the human mind

As an executive of my local Centre for Inquiry branch, I recently helped host an event where we brought in a Professor of Psychology, Dr. Chris Oriet, from the University of Regina to give a talk on human reasoning titled "Are humans reasonable?"

Dr. Oriet holds a Ph.D. in Experimental Psychology from the University of Waterloo. He is currently an Associate Professor and the graduate program coordinator for Experimental and Applied Psychology.


His current research largely focuses on attention and its relationship with human information processing and perception. You can find more information about him on the UofR website: http://www.uregina.ca/arts/psychology/faculty-staff/faculty/oriet-chris.html

After the event, I was asked to do a brief synopsis. Unfortunately, I don’t do brief (Boxers only. Boxer-briefs, perhaps).

I reproduce for you my summary of the talk, which will be broken into two parts. The first part is an (unbiased) analysis of a paper by Daryl Bem that seeks to demonstrate the existence of psi phenomena. The second part discusses human reasoning in more general terms, with references to how our failure to reason under certain circumstances might apply to Bem's work.

You can find the full paper by Bem here, if you like: http://www.dbem.ws/FeelingFuture.pdf

Part 1



The particular psi phenomenon Bem discussed was precognition. In his paper, Bem presents evidence that ordinary undergraduate students have extraordinary powers to, for example, predict the location of a yet-to-be-presented evocative picture, or to avoid seeing a yet-to-be-presented threatening image. Particularly striking is his evidence that people can (retroactively) recall information better during a task if they study for the memory test afterwards.

So, if Bem is correct, you would be able to go into a final exam, write the test, then go home to study for it and retroactively increase your performance on the test. How would that work? Because during the test, you are (according to Bem) able to recall information before you have even learned/studied/reinforced it – hence, precognition. The catch, of course, is that you would still have to go home and actually study for the test afterwards (I know, total bummer).

Now, before you go thinking this is complete madness (Madness? This is Sparta!), Dr. Oriet wanted to point out that certain criticisms of this study are not fair. First, this was published by a very prestigious journal – The Journal of Personality and Social Psychology, which is ranked number 2 in that particular field of Psychology. This journal, in turn, is published by the American Psychological Association, which is a major name that lends a lot of credibility to the paper.

Second, Bem has a very credible education – he received his BA in physics and a PhD in social psychology, both from credible universities. He is employed at Cornell University – not exactly on the low-end of the academic totem pole. Furthermore, the methodology – though it had a few odd quirks – was extremely rigorous and very detailed. Bem has also been very open and forthcoming about the methods he used in his experiments and has been open to letting other people verify his results.

So, regarding the results, what Bem found is that there were very reliable deviations in people's performance from what is expected by guessing alone, mostly consistent across 9 different experiments, suggesting a very small psi effect. How small? As Dr. Oriet noted, the difference was something along the lines of participants "guessing" an additional 50 correct responses (each with a 50/50 chance of being correct purely by guessing) out of over 1,500 attempts. So while that's an extremely small effect (basically, no one will be describing in gruesome detail the time and manner of your death), the effect was nevertheless highly statistically significant.
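
To give a sense of how such a tiny deviation can still come out "highly significant", here is a rough back-of-the-envelope check in Python (standard library only). The figures below are just the rough numbers mentioned above, roughly 50 extra hits over 1,500 fifty-fifty guesses, so treat this as an illustrative sketch rather than Bem's actual data:

```python
# Rough sanity check: can ~50 extra hits in ~1,500 fifty-fifty guesses really be
# "highly significant"? These numbers are approximations from the talk, not Bem's data.
from math import comb

n = 1500          # total guesses, each with a 50% chance of being correct by luck
k = n // 2 + 50   # observed hits: chance expectation (750) plus ~50 extra

# Exact one-sided binomial tail probability: P(X >= k) when X ~ Binomial(n, 0.5)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(f"observed hit rate: {k / n:.3f} (chance = 0.500)")
print(f"one-sided p-value: {p_value:.4f}")  # roughly 0.005: a tiny but "significant" effect
```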

There were some issues with things like sample sizes. Bem apparently used some variation of power analysis to determine that a sample size of 100 was the ideal number to detect a statistically significant effect, so that was the basis for many of the experiments. The experiment with the greatest effect, however, also happened to have the fewest participants: 50. Naturally, fewer participants means fewer trials, and fewer trials means a noisier hit rate, so a strikingly large deviation is more likely to show up purely by chance.
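
You can see why smaller samples are more prone to impressive-looking flukes with a quick simulation of pure guessing. The trial counts below are made up for illustration (50 versus 100 participants making a dozen guesses each), not taken from Bem's paper:

```python
# Illustrative simulation: under pure guessing, smaller experiments produce more
# extreme hit rates by chance than larger ones. Trial counts are made up, not Bem's.
import random

random.seed(1)

def luckiest_hit_rate(n_trials, n_experiments=1000):
    """Highest hit rate observed across many all-chance experiments of n_trials each."""
    best = 0.0
    for _ in range(n_experiments):
        hits = sum(random.random() < 0.5 for _ in range(n_trials))
        best = max(best, hits / n_trials)
    return best

for n_trials in (600, 1200):  # e.g. 50 vs 100 participants making 12 guesses each
    print(f"{n_trials:>5} chance-level trials: luckiest hit rate = {luckiest_hit_rate(n_trials):.3f}")
# The smaller experiments routinely look more impressive, even though only luck is at work.
```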

Also noted was the fact that, when meta-analyses compare Bem's studies to similar experiments done by other researchers, Bem appears to be one of only a handful of researchers who have observed positive results.

But again, just to bring back the initial point, this particular study appears to have met all the benchmarks for what any other paper would have been required to meet to be published. The only distinguishing thing about this particular paper was how controversial the topic was.

For this reason, I felt there was a general consensus in the room that the journal was justified in publishing the paper. However, for some of the reasons alluded to above, and what will follow in a moment, that is not to say that this paper is therefore conclusive proof of the existence of psi.

Part 2

The second part of the talk got to the heart of human reasoning. While much of this portion is not specific to Bem's study, it has clear relevance to it.

One of the failures of human reasoning mentioned was the representativeness heuristic: the mental shortcut people use when judging how probable an event is, where they ask how closely it resembles their idea of a typical outcome rather than working out the actual probability. The example Dr. Oriet used was a coin toss. Generally speaking, unless the coin is actually magic (yes, magic – any sufficiently advanced technology is indistinguishable from magic. Ergo, loaded coins are magic!), it's going to land heads 50% of the time and tails the other 50% of the time.


However, most people think that this means in a given series of coin tosses, it should *look* random. For example, HTHT looks like a random series, while HHHH looks entirely non-random (you wouldn’t want to bet against the person who flips HHHH. You might, however, burn them at the stake – depending on your proclivities).

Yet both series have precisely the same probability of occurring. To see this, consider a smaller series of tosses. Here are all the possible ways a coin can flip in a series of 3:

HHH HHT HTH THH HTT THT TTH TTT

You'll notice that HHH, though it *looks like* a non-random sequence, appears only once. Likewise, HTH, which has the appearance of being entirely random, also appears only once. That's because both have exactly the same probability of showing up. Having said that, if you're looking for any sequence with exactly one tails flip in it, then there are 3 outcomes that qualify (THH, HTH, HHT), so that result is more probable.

Thus, a sequence like HHHHHHHHTH is just as likely to occur as HHHTHTTTHT – even though the latter appears more probable.
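
If you'd rather not take my word for it, a few lines of Python can enumerate the possibilities: every specific 3-flip sequence shows up exactly once (probability 1/8), while three of the eight sequences contain exactly one tail:

```python
# Enumerate every possible sequence of 3 coin flips.
from itertools import product

sequences = [''.join(seq) for seq in product('HT', repeat=3)]
print(sequences)  # 8 equally likely sequences, each with probability 1/8
print(len(sequences), "sequences, each with probability", 1 / len(sequences))

one_tail = [s for s in sequences if s.count('T') == 1]
print("exactly one tail:", one_tail)  # ['HHT', 'HTH', 'THH'] -> probability 3/8
```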

A related error is the Gambler's Fallacy: after tossing two heads in a row, it is common to think that a third head is now less likely. In reality, this is completely false. The third flip still has exactly a 50/50 chance of landing either heads or tails.
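
This is also easy to check with a quick simulation: flip a simulated coin a million times and look only at the flips that immediately follow two heads in a row. About half of them still come up heads:

```python
# Simulate a long run of fair coin flips and inspect the flip that follows
# every run of two heads: it still lands heads about half the time.
import random

random.seed(42)
flips = [random.choice('HT') for _ in range(1_000_000)]

following = [flips[i + 2] for i in range(len(flips) - 2)
             if flips[i] == 'H' and flips[i + 1] == 'H']

heads_after_hh = following.count('H') / len(following)
print(f"P(heads | previous two flips were heads) ~ {heads_after_hh:.3f}")  # ~0.500
```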

The relevance to Bem's paper is that the results may not be quite as unlikely or improbable as the paper would appear to suggest. Now, again, the sample size was (apparently) supposed to be large enough to smooth out such flukes, which is why the results still came out statistically significant. But since the effects were so minor (as small as one extra correct guess in a set of 12 per participant), it's not out of the ballpark to suspect these results may have happened purely by chance.

Another really important cognitive bias mentioned was confirmation bias. In this, people tend to look for evidence that supports their conclusion while dismissing or rejecting information that contradicts their opinions.

For example, if a professor has a very high estimation of his/her own teaching ability and receives course evaluations at the end of the year, and say he/she receives two absolutely glowing reviews, he/she might take those two and say “Hey, yeah, I AM a rockstar!” By contrast, if he/she receives 2 abysmal reviews, he/she might conclude: “Wait a second, I know who that is! That student got a 30 in my class, so of COURSE they gave me a bad review! What a knob!”

In Bem's case, we don't really know to what degree he actually sought out the kinds of trials and conditions that would yield results confirming his hypotheses. At the very least, confirmation bias suggests he almost certainly spent significantly more time dissecting the results that seemed to conflict with his hypothesis than the trials that gave him the results he did want. This isn't a problem specific to Bem; it applies to all research everywhere. Confirmation bias is always an issue.

However, what is specific to Bem is that, in a 1993 paper he wrote as advice to budding researchers, he specifically indicated how research should be organized such that it weaves a pretty story. It needs to be interesting, and it needs to show a positive result. This is extremely problematic because, if he’s willing to manipulate the data to weave a better story, what does this say for the true validity of the results?

Anyone can simply do trial after trial until they get the result they want, then keep the positive findings while discarding the null results. So what experiments might he have decided to omit from this paper? We’ll never know.

What we do know is that the order of the experiments in the paper is not the chronological order in which the experiments were actually done. They were reordered in such a way as to weave the narrative Bem intended to tell. This is not proof the study is wrong, but it is highly suspicious.

Conclusion

This was an incredible and enlightening talk, and I was very happy we were able to bring in such a great speaker. No conclusive evidence was presented for or against the existence of psi, which is exactly as intended. The talk served as a jumping-off point to discuss the larger issue of human reasoning in the context of Bem's paper.

Human reasoning, and the ways in which it can fail, is something of an interest of mine because it has such profound implications not just for superstition and paranormal beliefs, such as was the case with Bem’s study, but even for religion itself.

If we can do more work on understanding human cognition, I think we can make some major and perhaps even sweeping inroads into understanding the role of religion in our lives and why it continues to exist.
