This morning I remembered something I had not thought about in many years. It is relevant to my approach to institutional research, so I thought I would share it. I must have been six or seven years old when I had an argument with one of my slightly younger cousins. He was convinced that firefighters set people on fire, police killed people, and “ambulance drivers” made people sick. We had the childhood version of a heated debate about the nature of these professions, and no amount of reasoning would convince him he was wrong. At the time I had no idea why I couldn’t win what should have been a very winnable argument, and it is only now, viewing this phenomenon through the eyes of a psychologist, that I begin to understand what happened.
I should mention that this particular cousin is now very successful and probably no longer thinks that firefighters set people on fire. Given the national news over the past few months, I won’t focus on police shooting people, but I suspect that he understands the normal chain of events here as well.
I share this story because this specific example is amusing and, to adult eyes, an odd way of seeing the world. However, this type of error is remarkably common in the human experience and is not limited to the less experienced mind of an elementary school student. In fact, this type of reasoning is among the major reasons that scientists rely on math to provide evidence for our findings. You see, humans are remarkable at pattern recognition. We see patterns everywhere; we seem to be wired to do so, even with near-random stimuli (see pareidolia). Unfortunately, humans are pretty horrible statisticians. In this case, my young cousin saw the relationship between paramedics and illness, a relationship we might call a correlation. What he got wrong was the normal causal relationship between these two concepts. In other words, firefighters and fires often occurred together, but firefighters did not normally cause the fires.
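For readers who like to see the mechanics, here is a minimal simulation sketch of my cousin's mistake (not from the original story; the scenario and numbers are invented for illustration). A hidden common cause, the emergency itself, sends paramedics to the scene and also makes people ill, so the two are strongly correlated even though paramedics cause no illness at all:

```python
import random

random.seed(42)

# A lurking common cause: a medical emergency. Emergencies summon
# paramedics AND make people ill, so "paramedic present" and "ill"
# are correlated with no causal arrow between them.
n = 10_000
emergency = [random.random() < 0.1 for _ in range(n)]         # 10% of scenes are emergencies
paramedic = [e and random.random() < 0.9 for e in emergency]  # paramedics respond to emergencies
ill = [e and random.random() < 0.8 for e in emergency]        # emergencies cause illness

def rate(outcome, condition):
    """Estimate P(outcome | condition) from the simulated scenes."""
    hits = [o for o, c in zip(outcome, condition) if c]
    return sum(hits) / len(hits)

p_ill_with_medic = rate(ill, paramedic)                # high
p_ill_without = rate(ill, [not p for p in paramedic])  # low
print(f"P(ill | paramedic present) = {p_ill_with_medic:.2f}")
print(f"P(ill | no paramedic)      = {p_ill_without:.2f}")
```

Seeing illness so much more often when paramedics are around, a child (or a careless analyst) concludes the paramedics did it; the model that generated the data says otherwise.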
As a psychologist, I find these types of errors in reasoning particularly salient. This is not because I was born better at thinking about probability and causal relationships, but because I was trained over the course of many years to keep this type of thinking from infecting my professional output (I still occasionally mess this up in my personal life). This is why it is often so hard to get a simple yes-or-no answer from me, and why my reports are littered with terms like “may,” “might,” and “imply.” It is also why I prepare so many graphs when talking about trends or relationships, rather than discussing raw figures (which require very specific wording that may discourage those without extensive statistical training).
So the next time you sit down for a chat with your friendly neighborhood institutional researcher, keep this in mind. We are not trying to be obtuse; we are trying to be accurate.