Ariel Esperante, https://creativecommons.org/licenses/by/2.0/
In my last post, I wrote that I have been struggling with this concept of normal and wanted to thrash it out a bit, but that doing so would take two posts, starting with the use of normal. That post stirred some strong reactions, and I’m grateful for that. If I’m wrong, I want to hear it.
To me, normal in psychology is analogous to majority rule in American government. Majority rule isn’t perfect, and the founders of this country, men and women, didn’t think that it was. At its best, majority rule distills the collective wisdom of the masses. At its worst, majority rule merely produces the smallest possible number of grumpy people. The founding mothers and fathers built into government specific protections for the minority because the majority is always protected.
Before I return to the problems with normal and dig deeper into them, I want to describe an important principle in psychology: we really do not want to make major decisions based on a single data point. You may have heard the expression, “Measure twice. Cut once.” You would think that the same person (rater) with the same measuring tape (instrument) measuring the same construct (length of a board) would get the same result every time. That’s called reliability in psychology, and it is certainly the hope, the expectation, and the goal, but it doesn’t often happen, not exactly.
There are lots of reasons why that is, and a discussion of them would take this post far off track and add about 200 pages to it. Neither of us wants either of those to happen, so I will restrict this conversation to one of the many reasons why measurement is imprecise. Confirmation bias is the tendency to see things that fit our perspective on the world and ignore the things that do not fit. We all want to be right and so we perceive a version of the world as we expect it to be.
With that in mind, let’s look at the problems with normal.
From the previous post, remember that part of the definition of a mental illness is that it interferes with functioning and/or makes a person unhappy. I value better functioning over worse functioning and happiness over unhappiness. Some value judgment is intrinsic in every human endeavor. An oncologist wants to kill cancer and wants a patient to have no cancer. Those are values and they are necessary and reasonable.
However, the construct of normal often seems to imply that normal is right. Every human advance and innovation starts as a departure from the norm. Every star thinker and star athlete is abnormal, but nobody would regard these stars as inferior. We know the terms “strange” and “odd” and “weirdo,” and we know that they are insults, but Albert Einstein and Julius Erving were weirdos. So are Neil deGrasse Tyson and Nelson Mandela.
Science uses normal as a tool, that’s all. It’s a way to analyze information, with no “goodness” or “badness” meant at all. This is one of my problems with normal, that it falsely implies value.
Normal makes you numb
Knowing what is normal is a good way to identify danger. If you can normally climb five flights of stairs without becoming winded, and today you are gasping after the first flight, you have a problem. However, that is only one data point. If you have bronchitis, then you know what the problem is. If not, your doctor will have to collect data from several sources to determine what’s wrong.
As Gavin de Becker (1997) said, we know what is normal in our environment, know who belongs and who acts like they belong in the places where we go. Behaviors that don’t fit make us afraid and we need to listen to that fear.
The problem is that we want our environments to be normal, want to be safe. Whether you prefer Beck’s (1967) cognitive schema or Nietzsche’s schema of intelligibility, we have a sense of how the world is. We (think that we) know how things are in the world. We are confident that we can handle events in our version of the world. We have an investment in seeing it that way, which contributes to confirmation bias.
I’m going to recommend another good book to you. In Making Habits, Breaking Habits, author Jeremy Dean (2013) says that we form habits to conserve cognitive energy for other purposes. People have come to me scared because they drove their usual route to work and arrived and could not remember the trip. Yes, this is normal. I am 100% sure that if a child or an animal had jumped out in front of your car, you would have noticed and taken every possible step to avoid a collision. You don’t remember the trip because it wasn’t memorable and you were driving a habitual route to save cognitive energy to prepare for work. I’m betting that you don’t recall what you had for dinner five years ago today, either, unless the meal were associated with some major event in your life. You don’t remember because it’s not worth remembering, not worth using long-term storage for that event.
Concentration takes effort and energy. In our daily life, we run when we must and coast when we can. Habits and normal help us to do that, but they also make us numb and weak. We stop paying attention and we stop learning. That’s another problem with normal.
Sometimes the average is misleading
Humans in general, and psychology in particular, take the average as a single, representative summary of a group or a phenomenon. Statistically, the average flattens out the individual variations. The average is normal, and it often summarizes the group well, but sometimes it is badly misleading.
Suppose that one of my friends from Kentucky were to move to Pennsylvania. In preparation for the move, this average-wise friend asked me for the average high temperature here in January and July and I replied 40 degrees and 80 degrees, respectively. Yes, the humidity in July sucks, but bear with me for a moment. Kentucky Friend (not Fried – watch that confirmation bias) adds 40 to 80, divides by 2, and concludes that the average high temperature in the Philadelphia suburbs is 60 degrees and thus gives away his winter coat and his sandals before moving because he does not expect to need either.
Yes, the average of 40 and 80 is 60. So is the median (mid-point). We can’t derive a mode (most common value) for this distribution. Yes, January is often the coldest month and July can be the hottest month. The average high temperature in Philadelphia might be 60 degrees, but dressing for 60 degrees will leave you miserable on our 90-degrees/90% humidity days in summer and in danger on our 20-degrees/20 MPH wind/2 inches of snow days in winter. The process is sound and the calculations are correct but the interpretation is wrong.
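If you want to see how little those two data points tell you, the arithmetic is easy to reproduce. Here’s a minimal Python sketch using only the January and July figures from the example (which is exactly the problem):

```python
from statistics import mean, median, pstdev

# The only two data points the Kentucky friend used: the average
# January and July high temperatures (degrees F) from the example.
monthly_highs = [40.0, 80.0]

avg = mean(monthly_highs)       # the "dress for 60" conclusion
mid = median(monthly_highs)     # the mid-point, also 60
spread = pstdev(monthly_highs)  # the variation the average hides

print(avg, mid, spread)  # 60.0 60.0 20.0
# Neither the mean nor the median warns you about 20-degree winter
# days or 90-degree summer days; with two data points and no measure
# of spread, the calculation is correct but the interpretation fails.
```

A standard deviation of 20 degrees around an average of 60 is the statistical version of “keep the winter coat.”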
Returning to our example from the previous post, the Wechsler Intelligence Scale for Children, Fifth Edition (Pearson, 2014) calculates a child’s Full Scale IQ from the sum of the scaled subtest scores. It’s not exactly an average, and Pearson doesn’t publish the algorithm, but humans, and life, don’t always fit into neat little equations. Yes, I feel like Jeff Goldblum’s character in Jurassic Park, and I’m speaking heresy in the Google and Facebook era, but stay with me for a moment.
Intellectual disabilities typically present as “flat” score profiles. A student will have a bunch of index scores that are all below 70 and all within a few points of each other, and when that happens, it’s easy to be confident that the Full Scale IQ is an accurate single-number summary of the student’s cognitive ability. When we combine those results with the results of other assessments and all of the numbers are consistent, we can be fairly sure of the decision. We have many data points. That’s when the average works, when the comparison to normal creates a true picture.
Sometimes, though, a student will have a wide scatter, or variation of scores. I have met students with great mechanical abilities but few social skills or self-care skills. Parts of those students are disabled, and parts are not. When I review and interpret the scores from a cognitive assessment, I’m mostly interested in verbal comprehension (understanding verbal information and expressing understanding verbally), processing speed (keeping up with the rest of the class), and of course, my favorite and specialty, working memory (used for following multi-step directions and in reading, writing, and math). In fact, there is only one time when I am particularly interested in the Full Scale IQ, only one time when it makes any difference, and that leads me to the next problem with normal.
Life, death, and law
As we have already discussed, the process of measurement can be messy. I could go Total Geek here and talk about standard error of measurement and confidence intervals and the Hawthorne Effect. I can tell you more than you want to know about this stuff, and I’m not sure if that is a threat, a promise, or a boast.
There are some things that can be measured with objective certainty. If there are 40 peanut butter M&Ms in that bag and I say that there are 39 or 41 then I’m wrong. Really, if I can get that close to a bag of peanut butter M&Ms then the total will be zero very quickly, and counting them seems an abject waste, but many things, especially abstract concepts like verbal comprehension and most of the other qualities of the human experience and expression, are not easily or precisely measured.
That is why the ethical codes for psychologists run dozens of pages. That is why the plan for the study in my dissertation runs 50 pages and my application to the ethics review board runs another 25 or so pages, with every word and punctuation mark scrutinized multiple times for reliability.
This is why I sometimes have sleepless nights, thinking about work. I’m using the best instruments and the best science available to me, but what I do changes the lives of others, and that should never be done casually. The only proper and ethical use of a diagnosis or classification or evaluation, indeed of psychology overall, is to benefit the person being diagnosed, classified, or evaluated.
The science can be messy but the implications are profound. IQ scores of 69 and 71 are so close that they essentially represent the same level of ability. With other, consistent data, though, a Full Scale IQ of 69 qualifies a person for support in school and for life afterward – help with housing and employment and funding from the state and federal governments. Unless other data suggest that the 71 is more like 69, the 71 does not elicit the same support. A person with that level of ability may get support in school but probably not afterward. It’s a harder life than the one that comes with 69.
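The overlap between a 69 and a 71 can be made concrete with a confidence-interval sketch. The standard error of measurement used below (3 points) is an assumed, plausible value chosen for illustration, not the published figure for any particular test:

```python
# Why 69 and 71 are statistically indistinguishable: build a 95%
# confidence interval around each observed score.
SEM = 3.0    # assumed standard error of measurement, in IQ points
Z_95 = 1.96  # two-tailed 95% critical value

def ci_95(observed_score):
    margin = Z_95 * SEM
    return (observed_score - margin, observed_score + margin)

low_69, high_69 = ci_95(69)  # roughly 63.1 to 74.9
low_71, high_71 = ci_95(71)  # roughly 65.1 to 76.9

# The intervals overlap heavily, so the two observed scores are
# consistent with the same underlying ability.
intervals_overlap = max(low_69, low_71) <= min(high_69, high_71)
print(intervals_overlap)  # True
```

The math says the two students are the same; the cutoff says one gets lifelong support and the other does not.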
I also mentioned death. For about 20 years, I have been actively opposed to the death penalty. In Atkins v. Virginia (2002), the U.S. Supreme Court banned the execution of persons with Intellectual Disabilities. The court did not identify a standard for determining who has an Intellectual Disability. In one way, that was wise, because the professional standards can change, like they did in 2014, but the court left the standards to the states, and that creates problems because Intellectual Disability is subject to a vast amount of misunderstanding. For example, I recall a case in which a defendant threw a chair at the prosecutor, and a juror later said that this was proof that the defendant did not have an Intellectual Disability because it meant that he knew that the prosecutor was trying to punish the defendant. That’s a ridiculously low standard. Persons with ID know many things, and actually, I would take that poor impulse control, the lack of understanding of the nature of a trial, and actions that create the appearance of dangerousness in a setting where appearing dangerous can get you killed, as evidence of ID.
Putting all of this together, though, it means that a person with an IQ of 71, with essentially the same problems with impulse control and future planning and understanding of consequences as a person with an IQ of 69, could be sentenced to death. Nobody would say that the former has normal cognitive ability, but that this person’s ability is probably not far enough from normal to be a disability.
By the way, the Supreme Court extended this logic in Roper v. Simmons (2005), which banned the execution of persons who committed their capital crimes as juveniles. In the United States, a person becomes an adult, legally, at age 18, as the line has to be somewhere. However, we know that the prefrontal cortex, home to executive functioning, which includes impulse control and future planning, is not mature until at least age 21, with good research now saying 25. Think of the difference between a college freshman and a college senior. Maturity, yes? What is maturing, though? It’s the brain.
Sometimes our understanding of the dynamics beneath the standards changes, and sometimes the standards change for other reasons, and that takes us to my last problem with normal.
Normal isn’t constant
The Diagnostic and Statistical Manual is the set of rules for diagnosing mental health disorders, published by the American Psychiatric Association. The current edition is the fifth, the DSM-5 (APA, 2013). The first edition, published in 1952, listed homosexuality as a disorder. This classification continued in the second edition, published in 1968. In 1973, the APA discontinued the classification of homosexuality as a mental health disorder, and it was not listed as a disorder in the sixth printing of the DSM-II, in 1974. The DSM-III was published in 1980, and the only reference to homosexuality in it was ego-dystonic homosexuality. The revised edition, the DSM-III-R, removed ego-dystonic homosexuality in favor of sexual disorder not otherwise specified, which can include distress about one’s own sexual orientation.
Homosexuality is at least as old as Homo sapiens. The orientation did not “become” a disorder in 1952 any more than it “ceased” to be a disorder in 1973. The latter change is an example of the best part of the scientific process, the process of constantly learning and improving based on learning. The former change is an example of the worst part of psychology, when it hurts people. Homosexuality didn’t change at all and certainly was not affected by anything written in a book. The construct of normal changed. Imagine the harm done from 1952 to 1973.
Neil deGrasse Tyson said, “The good thing about science is that it's true whether or not you believe in it.” If the consensus in American culture in 1952 had been that gravity did not exist, or that the speed of light is 25 miles per hour, gravity and light would have been unchanged. Anyone making those claims would have been wrong.
Along the same lines, when psychological instruments are revised, the process includes a new comparison with the population for which those instruments are intended. Because of the Flynn effect, IQ scores have been steadily increasing for generations, meaning that later versions of a test make a comparison to a more capable peer group. At least one researcher (Willis, n.d.) has suggested a difference of 2 scaled-score points between the WISC-IV (Pearson, 2003) and the WISC-V (Pearson, 2014). Two scaled points equal 10 standard points, so a student who obtained a Full Scale IQ of 75 on the WISC-IV in 2013 could obtain a Full Scale IQ of 65, low enough to possibly be Intellectually Disabled (with other consistent data), on the WISC-V in 2015. The student is not suddenly less capable. Normal changed.
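The scaled-to-standard conversion behind that arithmetic is simple: subtest scaled scores have a mean of 10 and a standard deviation of 3, while composite standard scores have a mean of 100 and a standard deviation of 15, so each scaled point is worth 5 standard points. A quick Python sketch:

```python
# Convert a difference in scaled-score points (mean 10, SD 3) into
# the equivalent difference in standard-score points (mean 100, SD 15).
SCALED_SD = 3
STANDARD_SD = 15

def scaled_to_standard_points(scaled_points):
    return scaled_points * (STANDARD_SD / SCALED_SD)

shift = scaled_to_standard_points(2)  # the 2-point WISC-IV-to-WISC-V drift
print(shift)       # 10.0
print(75 - shift)  # 65.0 -- the re-normed Full Scale IQ in the example
```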
We need normal as a standard, the collective and the average as a way to analyze what we observe, but we also need to challenge the standard, to calibrate our assessments, and to make sure that we have construct validity, that we are really measuring what we think we are measuring. We need to practice the scientific method, part of which holds that any fact or rule or construct is subject to challenge and revision at any time, that we can discover a new set of facts that, when properly tested, can change everything. We need normal, but we have to accept the certainty that new understanding will change normal, and we should keep our eyes open to see the change when it happens.
Dean, J. (2013). Making habits, breaking habits. Boston: Da Capo Press.
De Becker, G. (1997). The gift of fear and other survival signals that protect us from violence. New York: Dell.