ENGL 2105 : Workplace-Based Writing and Research

Critical Thinking

Introduction

There's thinking and then there is critical thinking.

If a baseball bat and a ball cost a total of $1.10, and the bat costs $1 more than the ball, then how much does the ball cost?

If you answered 10¢, you were thinking. If you answered 5¢, you were thinking critically. The bat costs a dollar more than the ball, so if the ball cost 10¢, the bat would cost $1.10 and the pair would total $1.20; only a 5¢ ball makes the numbers work. This example comes from a book you should read called Thinking, Fast and Slow, by Daniel Kahneman. The problem is that our brain inclines us toward thinking the answer must be 10¢, and because we saw that "truth" in a flash, we embrace an error as if it were a fact, and might even fight to defend it.

If you dislike something you don't understand, that's thinking. If you like something because you are familiar with it, that's thinking. If you hang on to your initial opinions in the face of contradictory evidence, that's thinking. If you dislike people who disagree with you, that's thinking too. As you can tell from that list of inclinations toward error, you should give up thinking.

We resist critical thinking because thinking, going with our gut or our traditions, is fast and satisfying. The only thing better than getting the right answer is getting it quickly. Critical thinking, on the other hand, is slow and cognitively taxing. We don't always know the math (or the logic, or what someone else is thinking); we don't always know what we need to know, and that makes us want to jump to something we do know rather than take the time to learn what we need to learn. A couple of hours pondering hard problems and you're tired; everybody is.
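
A quick way to check the arithmetic is to write out the two conditions and substitute one into the other; a minimal sketch in Python, working in cents so the numbers stay exact:

```python
# Conditions: bat + ball = 110 cents, and bat = ball + 100 cents.
# Substituting: (ball + 100) + ball = 110, so 2 * ball = 10.
ball = (110 - 100) // 2       # 5 cents, not 10
bat = ball + 100              # 105 cents
print(ball, bat, bat + ball)  # -> 5 105 110
```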

Getting the right answer doesn't help if you asked the wrong question. Don't just look for an answer; interrogate the question first.

The goal of critical thinking is to overcome egocentricity. The goal of workplace-based writing and research is to overcome corporate bias.

In addition to inertia and overconfidence, critical thinking is impeded by how our brains work when unsupervised (cognitive biases), by errors in logic (fallacies), and by errors in statistical inference. What follows is an overview of those three sources of errors in judgment and decision making. You should make a point of remembering the concepts that follow. You should also make a point of searching daily for examples in your world, in your own thinking and in other people's. It is always easier to find these errors in other people's thinking than in our own, so you should seek feedback from people who don't see the world the way you do.

Be advised, however, that knowing these errors in thinking does not inoculate against them. These defaults are hard-wired into human beings. Being constantly vigilant (and skeptical) will help, but you will succumb to default thinking from time to time anyway. We all do.

Cognitive Biases

Naive Realism: Bird or Woman?
Magical Realism (apophenia): Can you see something hidden among the rose petals? (source)

Naïve realism -- believing that you see the world clearly and understand it perfectly. If you hear yourself saying "in reality" or "obviously," stop and think critically. What assumptions are you making, assumptions others might not share, that make it seem obvious to you? Two people can look at the same object and see something different, either because they are looking at the same thing from different angles or because they interpret the object differently. The latter happens most often when the object is abstract, like a concept or an idea. Just as the objects in our field of vision are assembled by our brains and interpreted by our minds, so our expectations, experiences, and assumptions influence what we infer about the parts of the world we can't see directly. "He must have been guilty or else why would he have run?" ("What It Means," Drive-By Truckers) is a rhetorical question for some people (the answer implicit in the question), but for others it's a real question. Perhaps he ran because experience taught him that cops are dangerous.

  • Just because it's obvious doesn't mean it's true.
  • Just because you're certain doesn't mean you're right.

Apophenia -- the human tendency to see patterns in random data, to mistake coincidence for meaning. Can you see the outline of a dolphin in the rose (image, right)? While there's nothing inherently wrong with seeing faces in toast or trees in clouds, basing arguments or beliefs on random patterns is very uncritical thinking, especially since once we come to believe something, it is very hard to un-believe (or unsee) it. [Technically, the dolphin is an example of pareidolia, which is a subset of apophenia.]

Given 1, 3, 5, 7, what's the next number? 9? Sure, if it is safe to assume that the first four items in the set are there because they are the odd numbers in sequence. What if that's not why they are there? What if you were looking at a code of some kind and the regularity behind the numbers hasn't yet manifested itself? Maybe the next number is 16 and the one after that is 13. In that case you made a false, albeit reasonable, assumption. If you ran a random number generator long enough, printing each number as it was generated, eventually you would see 1, 3, 5, 7 in a row. But that run wouldn't be the odd numbers in sequence, just randomly generated numbers that could easily be mistaken for a meaningful pattern.
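
That last point is easy to demonstrate. Here is a minimal simulation sketch: draw random digits until the run 1, 3, 5, 7 appears (the fixed seed is only there to make the run reproducible):

```python
import random

# Draw random digits until the run 1, 3, 5, 7 appears. The "pattern" is
# guaranteed to show up eventually, and it means nothing when it does.
rng = random.Random(42)
target = [1, 3, 5, 7]
window, draws = [], 0

while window != target:
    window = (window + [rng.randint(0, 9)])[-4:]  # keep the last four digits
    draws += 1

print(f"Saw 1, 3, 5, 7 after {draws} random draws")
```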

Let's say you are trying to figure out why people like to drink coffee in the coffee shop in the library instead of leaving the building to get coffee elsewhere, and you ask that question of the first person who walks in the door. Would you bet the next person would have the same reason? Maybe if you asked 20 people and discovered that there were only four answers, then you might bet a small amount of money on person number 21 giving one of those four answers. But that person might have a fifth, as-yet-unheard answer. How many library-goers would you need to ask before you achieved saturation, the point where you haven't heard a new answer in so long that you can safely treat any new answer as unique, or at least so rare as to be irrelevant? Well, how much longer can you afford to stand there asking people as they walk in?
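
Here is a toy simulation of that stopping rule. The five reasons and their weights are invented for illustration; the rule is "stop after fifteen interviews in a row with nothing new":

```python
import random

# Respondents give one of five reasons, some common, some rare.
reasons = ["convenience", "price", "studying here anyway", "meeting friends", "habit"]
weights = [40, 25, 20, 10, 5]   # hypothetical popularity of each reason
rng = random.Random(1)

seen, streak, interviews, patience = set(), 0, 0, 15
while streak < patience:
    answer = rng.choices(reasons, weights)[0]
    interviews += 1
    if answer in seen:
        streak += 1        # nothing new; saturation gets closer
    else:
        seen.add(answer)   # a new reason resets the clock
        streak = 0

print(f"Stopped after {interviews} interviews; heard {len(seen)} distinct reasons")
```

Notice that if one reason is rare enough, a short patience window can make you stop before you've heard it: saturation is a bet, not a guarantee.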

  • Just because you see it clearly doesn't mean it's real.

Confabulation -- making up a story to explain what you can't understand (mythologizing). People dislike doubt so much, and want so badly to feel smart, that we will fill any gap with a story and then accept the story as truth. Often, the story replaces the experience and fiction is all that's left.

  • Just because you can explain something doesn't mean you understand it.
  • Measure all explanations against the evidence.
  • Suspend judgment in the absence of conclusive evidence.

The illusion of coherence -- just as we will mythologize to avoid confronting our ignorance, we will assimilate (explain away) or dismiss (ignore) any evidence that contradicts our beliefs. Human beings, most of us anyway, are so troubled by ambiguity that we would rather be wrong than uncertain. But painting over the cracks doesn't fix the foundation.

Stories are powerful because they make sense of the world for us, they make action meaningful, and they justify behavior, good or bad. But they are just stories; they are not reality. Reality includes a bunch of noise and a bunch of data we mistake for noise, as well as a bunch of noise we mistake for data.

  • Embrace ambiguity.
  • Learn to live with uncertainty.
  • Look for contradictions and anomalies.

Confirmation bias -- we humans are attracted to evidence that supports our beliefs, and we tend to ignore or discount contradictory evidence. Once we believe something, we are hard-wired to find proof everywhere. Because we also tend to like people like us, that is, people who hold similar views, we tend to surround ourselves with people who confirm our prior beliefs, which makes it even harder to think critically. This bias for confirmation is exacerbated by the way our Internet practices filter what we see to conform to our preferences, our likes, our friends' likes, and so on. It's quite easy to mistake the block where you live for the whole world.

  • Just because you have evidence doesn't mean you're right.
  • Seek disconfirmation and counterarguments.

Fundamental attribution error -- believing that your failures are caused by bad luck and your successes by hard work, while others' failures are the result of character flaws and their successes just luck or a system rigged in their favor. People who tend to make excuses and blame others when they aren't getting what they want are gripped by the fundamental attribution error. The reality is that every one of us lives in a set of interrelated systems; some support us, others impede us, but none of us succeeds or fails alone.

  • Don't assume the best of yourself and the worst of others.
  • Own your own errors.
  • Acknowledge that luck and other people's contributions may have helped you succeed.

Normalcy bias -- thinking that everything is just fine when disaster looms. A great example of this kind of thinking en masse was the Atlanta "Snowpocalypse" of 2014. Two inches of snow was forecast for metro Atlanta, and everyone went about their business as usual. Next thing you know, the roads iced over, the highways backed up, the surface streets gridlocked, and thousands of people spent the night in their freezing cars.

Normalcy Bias: Atlanta Snowpocalypse, 2014

Availability bias -- letting your personal experience influence your perceptions and inferences. Your experience might not be representative or even relevant. If you watch the news, you are likely to have a more negative worldview than if you don't, largely because "the news" means homicides, domestic violence, kidnappings, spectacular car accidents, fires and so on. These dramatic images color your worldview if you don't contextualize them. The availability bias enables the "information bubble" that most of us live in because the examples we use to enable our thinking tend to come from reinforcing sources.

In the context of workplace-based writing and research, availability means you need to pay special attention to the order you ask questions in. One question may cause a person to think about something that changes their mood and thus influences how they answer the next question or questions. Put that question lower down in the list and the bias might go away.

  • Don't use personal experience, or current events, as evidence.
  • Get a broader perspective; increase the sample size.
Framing: Y-axis distortion effect (source)

Framing -- how you see something, how you define a situation (Is this a threat or an opportunity? A problem or a possibility?), directly affects how you respond. If you can look at things from different perspectives, you can capture insights that might lead you to different responses and therefore different decisions. If someone offers you a choice between two alternatives, you should always think: what about both, or neither? The wider your perspective, the more options you will see, and the less likely you are to get tunnel vision, like the person who is so focused on a goal that when he achieves it, he has no idea what to do next.
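
The caption above points to a common visual frame: the distorted Y-axis. A minimal sketch of the effect (the two values are invented):

```python
import matplotlib.pyplot as plt

# The same two values plotted twice; truncating the y-axis makes a
# roughly 2% difference look enormous.
labels, values = ["A", "B"], [98, 100]

fig, (full, truncated) = plt.subplots(1, 2, figsize=(8, 3))
full.bar(labels, values)
full.set_ylim(0, 110)           # axis starts at zero: A and B look similar
full.set_title("Full axis")
truncated.bar(labels, values)
truncated.set_ylim(97, 100.5)   # axis starts near the data: B towers over A
truncated.set_title("Truncated axis")
plt.tight_layout()
plt.show()
```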

Logical Fallacies

You've no doubt taken a basic Philosophy class and so are familiar with the logical fallacies. If you haven't or you don't remember them, you should look them up. For our purposes, there are only a few of special importance.

  • Post hoc ergo propter hoc (after this, therefore because of this)
    We all have a tendency to look for the cause of an event in whatever was happening just before the event, as if history were a line of dominos.

    When it comes to solving problems, our first impulse is to look for causes, which is a good idea. But you need to make sure you don't settle on a cause too quickly.
  • Correlation is not causation
    If you take a class in psychology you will hear this over and over again. Whenever two variables change together, we tend to assume the changes are directly related, that one causes the other. Often there is no causal relation. One changes and the other does too. Maybe there's something in the environment that is independently acting on both (see the simulation sketch after this list). Maybe they exist in completely separate contexts that happen to intersect at the moment we are observing them.
  • Arguments from authority, tradition, intimidation, social pressure, and other extraneous (aka red herring) factors.
    If there is evidence, focus on the evidence. If there is no evidence, then find it or recast the question.

    When it comes to your business proposal, you don't want to use any of these strategies. You don't want to strong-arm people into thinking they have a problem, especially when they don't, and you don't want to argue for change, or for resistance to change, when the best arguments you can make are based on the way things currently seem to be.
  • Over generalization
    If three people really hated something, does that mean it's lousy? Strong evidence isn't wide evidence; loud shouldn't be convincing on its own.

    If everybody you know loves (or hates, or is indifferent to) something, you may be inclined to think everybody feels the same way. That's an overgeneralization, because the people you know are not a representative sample of the whole population. Unless you are a connector with thousands of random people in your network, or an influencer who can shift public opinion just by holding one (Malcolm Gladwell, The Tipping Point), the people you know are a selected sample and therefore proof of nothing on their own.
  • False analogy
    If two things are the same, then what is true of one is true of the other. If they are not identical, however, then what is true of one will be true of the other only if they have the relevant characteristic in common, that is, if they are similar in the right way. If they differ in the relevant way, then no matter how much else they have in common, the argument is an argument from false analogy.

    The less we know about something, the more likely we are to miss subtle distinctions, distinctions that might render an argument from analogy false. So whenever you find yourself making an argument from analogy, from two things being similar, ask yourself if they are similar in the right way, or just similar in lots of ways, or even maybe just one big distracting way.

    False analogies often lead to false assumptions, especially when we don't notice that we are assimilating two different things. No child is identical to his or her parents. No member of a group necessarily has all of the characteristics of that group. Similar isn't the same. Or, as the popular Thai expression says it, same same but different.
  • The fallacy fallacy
    Just because an argument is faulty doesn't mean the conclusion is false; it just hasn't been proven. The absence of evidence isn't proof of innocence; it's only a clear indication that prosecution would be wrong, hence the presumption of innocence.

    A valid argument isn't necessarily true either. From a logical perspective, you can derive a valid conclusion from a false premise if you get the form right, even though what you've just "proven" is not true. Where your thinking comes from can dictate where it goes.
  • Argument by assertion
    Saying that something is true doesn't prove it is. Seems obvious, but in fact, we tend to believe what we've frequently heard, especially if it's being shouted at us.
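
As promised under "Correlation is not causation," here is a small simulation in which a hidden third variable drives two others; all the numbers are invented:

```python
import random

# Temperature independently drives both ice-cream sales and swimming
# accidents, so the two correlate strongly with no causal link between them.
rng = random.Random(0)
temps = [rng.uniform(10, 35) for _ in range(365)]        # daily highs, deg C
ice_cream = [3.0 * t + rng.gauss(0, 5) for t in temps]   # daily sales
accidents = [0.2 * t + rng.gauss(0, 1) for t in temps]   # daily incidents

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"ice cream vs. accidents: r = {pearson(ice_cream, accidents):.2f}")
```
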
Errors in Statistical Thinking

Basics of Statistical Reasoning

Workplace-based writing and research is not a class in statistics. You will get exposure to the following ideas in far greater detail in other courses. But just because this is primarily a writing and research class doesn't mean statistics are irrelevant. In fact, to make sense of most of what you read, you need to be able to tell whether the research behind the findings is likely sound, even if you won't do your own statistical analyses until later in your academic career.

One of the pillars of all critical thinking is a proper understanding of the nature of a given situation. From a statistical perspective there are three options. A problem is deterministic, stochastic, or "wicked."

Deterministic problems are cause to effect situations where something exists that you want to remove or something doesn't yet exist that you want to make happen. If A, then B. In deterministic situations, what you need to know is knowable and static or at least stable enough to be accurately predictable without random interference or noise of any kind. Precipitation turns to ice when the ambient temperature drops below 32 degrees Fahrenheit. It's raining, a cold front is forecast, the temperature is approaching 32, and will keep going. The roads are going to ice up; better think twice before driving, prepare for delays and perhaps accidents.

Stochastic problems are those where you have to rely on probabilities because there are gaps in the necessary knowledge that can't be filled by further research. If A, what are the chances of B? Often real-world problems are more like: what are the chances of A? What are the possible outcomes of A? What might follow from each possible outcome, and with what probability? Yikes! (A sketch contrasting deterministic and stochastic thinking follows.)
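
A sketch of the difference, reusing the icy-roads example from above (the forecast numbers and error spread are assumptions):

```python
import random

# Deterministic: precipitation at or below 32 F means ice. If A, then B.
def roads_ice(temp_f: float, precipitating: bool) -> bool:
    return precipitating and temp_f <= 32.0

# Stochastic: the forecast low is 33 F but carries error, so the best we
# can do is estimate the chance of icing by simulation.
def p_roads_ice(forecast_low=33.0, error_sd=2.0, trials=100_000):
    rng = random.Random(7)
    icy = sum(rng.gauss(forecast_low, error_sd) <= 32.0 for _ in range(trials))
    return icy / trials

print(roads_ice(31.0, True))    # True on every run: if A, then B
print(round(p_roads_ice(), 2))  # about 0.31: if A, what are the chances of B?
```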

Life is tough when the problems are stochastic, but it gets even tougher when the problems are "wicked." Wicked problems are those where the situation changes as the evidence comes in, so that there is never a stable place from which to base operations, no stable perceptions from which to draw reliable inferences, and everything may change almost as if at random, but not quite randomly. In other words, the patterns you are looking for aren't very distinct to begin with, and they are changing in unpredictable ways. It's not just that A doesn't cause B, or that the probability of A won't indicate the probability of B, but that the connection between A and B might include any number of dynamic intermediary points. Most of life's big questions are "wicked."

What does any of this have to do with making business decisions based on customer research? Well, you are likely to encounter almost no deterministic problems. In a perfect, frictionless world, if you make your mobile app more "user-friendly," use will go up and more users will download it. But how do you know who your users are? Do they all consider the same features "friendly," and what happens if your users aren't all the same? How do you design for many different kinds of people? The lowest common denominator? Different versions for different user types? If the company data scientist says that, based on the server data, there are only three kinds of users, where "type of user" is defined as a user who takes a specific path from start to checkout, what questions should you ask to make sure you understand, and to make sure they have discovered everything that needs to be known?

Common Statistical Errors

  • Naïve statisticism -- Believing that numbers don't lie, confusing statistical artifacts with actual phenomena ("average" is a numerical concept, not a real thing; there are no average people), failing to realize that numerical precision is not proof of accuracy.
  • Too narrow a sample space -- A sample space is the set of all possible outcomes. Human beings tend to narrow their options to lessen uncertainty and minimize complexity. We like simple. We like fast. And so we are inclined to underestimate our options. You may be familiar with the expression "the fool's choice": someone is offered A or B, and it doesn't occur to them that there might be a C, to say nothing of neither A nor B, or both A and B. You need to know what the possible outcomes are.
    Don't focus too narrowly or too quickly.
  • Non-representative sample -- If you need to make decisions based on customer input, you need to know that your informants represent all of your customers, or at least most of them. If your sample skews, your conclusions will be misleading. If you ask your most frequent customers what they want, their wish list might put less frequent customers off. The better someone understands something, for example, the more features and choices they want. Novice customers typically don't want lots of options; they want a clean, clear path. For the most part. But that's the problem. What might you do for people who aren't customers yet but just prospects? How do you find those people? How many different kinds of those people might there be?
  • Base rate error -- If a man is taller than 6 feet, is it more likely that he is a male professor or a professional basketball player? If you said basketball player, you were thinking. If you realized that the population of professors far exceeds the population of pro ballers, and concluded that the professor therefore had the edge, you were thinking critically. The frequency of over-six-footness among pro basketball players far exceeds the frequency among professors, but the sheer number of professors means there are far more six-footers among them. (How many pro ball players are there? 600, maybe? How many professors? 6,000,000, maybe? What is the chance of being six feet or taller among all men?) The arithmetic is worked out after this list.
    You need to know the base rate.
    Numbers without context are misleading.
  • Conjunction -- the likelihood of two attributes both being true is never greater than the likelihood of either one alone (P(A and B) ≤ P(A)), despite the fact that the accumulation of specific details leads our brains to think the chances are approaching 100%.
  • Texas sharpshooter fallacy -- a kind of false cause: treating a chance cluster of events as proof of something. A greater rate of cancer in an area doesn't necessarily mean a carcinogen is present; the alarming number might be merely a statistical artifact. Precise though numbers seem, even correctly derived statistics can be misleading if we attach the wrong assumptions or draw the wrong inferences from them.
    Accuracy is not the same as precision.
  • Cherry-picked data and distorted Y-axes -- even accurate numbers mislead when the data shown are selectively chosen or the chart's scale exaggerates differences (see the framing sketch earlier).
  • Emotion

    For hundreds of years people assumed that emotions warped decisions, because there were plenty of examples where a person decided to do something and later regretted the decision. From this perspective, a perfect world would be a perfectly rational world. We now think that pure rationality is rare and, when present, is in fact a cognitive disorder (called alexithymia). We need to feel in order to think critically. We just need to be aware of our own states and the states of others. If a person is indifferent, knowing why may be important. If a person is agitated or distracted, their state suggests they may change their mind later when they calm down. That doesn't mean what they think while stressed is necessarily wrong, only that they may think differently afterward.

    Feelings are caused by the feeler's interpretations and inferences, not by the actions of others. As the authors of Crucial Conversations say, other people don't make you mad; you make you mad. As the Daoist tradition has always maintained, suffering comes from within.
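
Finally, the base rate arithmetic promised earlier, using the text's hypothetical headcounts (the height percentages are also assumptions, chosen only to make the point):

```python
# Hypothetical: ~600 pro players, ~6,000,000 male professors.
players, professors = 600, 6_000_000
p_tall_player, p_tall_prof = 0.80, 0.15   # assumed shares over six feet

tall_players = players * p_tall_player    # ~480 tall players
tall_profs = professors * p_tall_prof     # ~900,000 tall professors

# Rate: a random player is far more likely to be tall than a random prof...
print(p_tall_player > p_tall_prof)        # True
# ...but count: a tall man drawn from the two groups combined is almost
# certainly a professor, because the base rate of professors dominates.
print(round(tall_profs / (tall_profs + tall_players), 4))  # 0.9995
```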