Ethical Realism

May 2, 2013

Cognitive Bias & Informal Fallacies

Daniel Kahneman wrote Thinking, Fast and Slow, a book about the psychological research on cognitive bias. Kahneman may well be the leading expert on cognitive bias at this point in time. His research concerns how people reason and why people often reason poorly, and it has important implications for critical thinking. I will describe three cognitive biases that he discusses in his book and explain how they relate to logical fallacies.

What are cognitive biases?

Critical thinking and logic are concerned with proper reasoning—the way we should reason about things. But human beings are not built to think properly all the time. What we call “cognitive bias” refers to systematic ways people reason poorly. Cognitive bias concerns how people actually reason as opposed to how they should reason.

“Cognitive bias” does not refer to prejudice, racism, a liberal preference in the media, or anything like that. Instead, it refers to how we consistently think in certain ways that often cause us to reason poorly. The human mind was not made to reason perfectly. Reasoning well can require a great deal of effort, and people aren’t usually going to spend hours doing formal logic to figure out whether their beliefs are well-reasoned. How we actually reason is often intuitive, immediate, and effortless. We use shortcuts and jump to conclusions based on limited information without thinking much at all. This works well for the most part. Spending too much time and energy reasoning about whether we should take a shower would obviously not be a good way to live our lives.

We now know of over a hundred cognitive biases (such as the confirmation bias, outcome bias, and halo effect), and these biases are likely to cause us to use fallacious reasoning—or to find the fallacious arguments given by others to be persuasive.

It shouldn’t be a surprise that we tend to think our own reasoning is perfectly good, even when it isn’t. The most important lesson we can learn from cognitive bias research is that biases are part of being human and we can’t stop being biased. Instead, we need to share our thoughts and reasoning with others, because we are much better at spotting the poor reasoning of others than our own. Others are likely to help spot the mistakes we make in our reasoning process, especially if they have a different view of the world.

Examples of biases and fallacies

Confirmation bias

Consider the following argument:

  1. All dogs are mammals.
  2. If all dogs are animals, then all dogs are mammals.
  3. Therefore, all dogs are animals.

People are more likely to think this is a good argument because the conclusion is obviously true. However, it’s actually an invalid argument; it commits the formal fallacy known as “affirming the consequent.” The confirmation bias makes us likely to think any argument in favor of a belief we agree with is well-reasoned, even when it’s actually fallacious.

The above argument has the following argument form:

  1. A.
  2. If B, then A.
  3. Therefore, B.

Another argument with this argument form is the following:

  1. All dogs are animals.
  2. If all dogs are reptiles, then all dogs are animals.
  3. Therefore, all dogs are reptiles.

Both premises are true, but the conclusion is false. Any argument with this form can have true premises and a false conclusion, which means the premises fail to give us a good reason to believe the conclusion. A good argument, by contrast, does give us a good reason to believe the conclusion.
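For readers who want to check invalidity mechanically, here is a minimal Python sketch (an illustration added for this point, not anything from Kahneman’s book): it brute-forces the argument form above by trying every truth-value assignment for A and B, and it prints any case where both premises are true while the conclusion is false.

  from itertools import product

  # The argument form under discussion:
  #   1. A.
  #   2. If B, then A.
  #   3. Therefore, B.
  # A form is invalid if some assignment makes every premise true
  # while the conclusion is false. Check all four assignments.
  for A, B in product([True, False], repeat=2):
      premise_1 = A
      premise_2 = (not B) or A  # material conditional: "if B, then A"
      conclusion = B
      if premise_1 and premise_2 and not conclusion:
          print(f"Counterexample: A={A}, B={B}")

Running it prints a single counterexample (A true, B false), which is exactly the dogs-and-reptiles situation: “all dogs are animals” is true, “all dogs are reptiles” is false, and both premises still come out true.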

The confirmation bias causes us to take any confirmation of our beliefs too seriously and any counter-evidence against them much less seriously than we should. We are also likely to fail to realize that some people disagree with what we believe, and to fail to learn about the counter-evidence against our beliefs.

For example, liberals are more likely to think that any given environmental regulation is a good idea and to fail to realize when it isn’t; and conservatives are more likely to think any given environmental regulation isn’t a good idea and to fail to realize when it is. People are more likely to read articles that support their beliefs, and they are more likely to dismiss or marginalize any evidence they read against their beliefs. Liberals would be more likely to read about the importance of an environmental regulation than about the problems with it, and they are also likely to think any problems with an environmental regulation aren’t really a “big deal.”

The confirmation bias is likely to cause us to find any fallacious argument for a conclusion we agree with to be a good argument when given by others, but it can also cause us to sincerely engage in fallacious reasoning of our own. In particular, it often causes us to engage in “one-sided reasoning.” We are then likely to reason that any evidence for a belief justifies the belief, even when there’s also evidence against it. We should consider “both sides” of a debate when we want to know what we should believe.

Consider the following one-sided argument:

  1. Some people say that robbing a bank to take its money would be wrong because it could cause people to needlessly die.
  2. However, the money we get from robbing a bank could be used to help pay the medical bills for poor people.
  3. Therefore, we should rob a bank.

The problem is that robbing a bank is clearly wrong because it could cause people to needlessly die, even though we could use the money to help some people. We can’t just dismiss the reasons against robbing a bank and assume that any reason in favor is good enough.

Outcome bias

The outcome bias causes us to think that decisions are morally right or wrong based on their consequences rather than the reasoning process behind them. Consider a person who drives drunk. If no one gets hurt, we are more likely to forgive her and judge her less harshly than if she causes a car accident that kills someone. Her decision to drive drunk was poorly reasoned either way, and getting someone killed did not actually make the decision more unethical.

It is important to consider what consequences our actions are likely to have, but the actual consequences do not determine how ethical our decisions are. A person who drives a car safely and ends up killing a child who jumps in front of the car does nothing wrong whatsoever, even though someone dies. And a person who drives drunk is doing something wrong, even if no one gets hurt.

One obvious case of the outcome bias is how we judge revolutionaries. When revolutionaries fail, we are much more likely to see them as terrorists or murderers; but if they succeed, then we are more likely to see them as heroes and to call them “freedom fighters.” The founding fathers of the USA are considered freedom fighters, but revolutionaries against the USA at any later point in time are considered terrorists.

The outcome bias is related to the one-sidedness fallacy because we are likely to take the outcome of a decision too seriously and the actual reasoning process less seriously than we should. If a decision was well-made but caused something bad to happen, then we will likely think the decision was unethical, even though it was well-reasoned. If a decision was poorly reasoned but caused something good to happen, then we are more likely to judge the decision less harshly.

Consider the following one-sided argument:

  1. Martha decided to go for a walk and got mugged.
  2. The area is known to be safe and no one had been mugged there for over a year, but Martha is a small woman and knew that such a thing could happen.
  3. Therefore, Martha should have known better and she shouldn’t have gone for a walk.

The problem is that Martha would not be said to have done anything wrong if she had gone for a pleasant walk without being mugged. Perhaps it’s one of her best opportunities to get some exercise. The area is known to be safe, and it is part of human life to take small chances. Driving a car safely can get someone killed, but it’s not wrong to drive a car. Going for a walk in a safe area is similarly a reasonable decision, even though there is a slight possibility of getting mugged.

Halo effect

The halo effect refers to the fact that we are likely to see a person as nearly all-good or all-bad based on our limited knowledge of the person. The halo effect is closely related to the power of first impressions, but the bias is not limited to first impressions. Even when we change our mind about a person over time, we are still likely to see that person as mostly all-good or all-bad.

If someone is attractive, then we are more likely to have a good first impression of that person and to see that person as mostly all-good. We are then more likely to think that person is qualified for any job, has superior abilities, and is more likely to give to charity.

We are more likely to trust and take seriously anyone we see as mostly all-good than someone we see as mostly all-bad. If a person is well-groomed, articulate, well-dressed, and attractive, then we are more likely to think that person’s arguments are well-reasoned, even when they aren’t. We are also more likely to think someone we dislike gives poorly reasoned arguments, even when the arguments are actually well-reasoned.

The halo effect is also likely to cause us to reason fallaciously using “hasty generalizations” and “ad hominems.”

Hasty generalizations – We often make judgments based on insufficient data. Knowing that a person is attractive does not tell us how qualified she is, what abilities she has, or how ethical she is. We should make those judgments based on much better information than that. However, that is exactly how we tend to think about attractive people.

An example of a hasty generalization is the following:

  1. The President has stated that he is against torture.
  2. If the President has stated that he is against torture, then he is a good person and is qualified for the job.
  3. Therefore, the President is qualified for the job.

The problem is that the limited information presented here about the President is not sufficient to know how qualified he is for the job.

A person’s limited information about a politician is likely to greatly influence her entire view of that politician. If the limited information sounds good, then the politician is more likely to be viewed as all-good. Of course, liberals are also more likely to view Democratic politicians as mostly all-good and Republican politicians as mostly all-bad, and conservatives are more likely to do the reverse.

Ad hominems – We are often likely to see a person as mostly all-bad based on limited information and to jump to conclusions based on that view. If we see a person as mostly all-bad, then we are also likely to view their arguments as poorly reasoned based on very little information. To fallaciously reject a person’s argument based on a negative view of the person is to commit the “ad hominem” fallacy.

An example of an ad hominem is the following:

  1. John wants us to legalize marijuana because it is much less harmful than cigarettes or alcohol.
  2. However, John smoked marijuana when in college.
  3. If John makes poor judgments like that, then we can’t take his argument seriously.
  4. Therefore, we should reject John’s argument.

We are given a reason to think we should legalize marijuana. That argument is no less reasonable because of any potentially negative characteristic of John’s. We should not reject the argument just because of such a characteristic; it’s irrelevant to whether the argument is any good.

Conclusion

Cognitive biases often cause us to reason fallaciously and to find fallacious arguments persuasive, but they are based on effortless, automatic thought processes that are often beneficial to us. It’s not always clear when our automatic processes will lead us into fallacious reasoning, and we ultimately can’t stop it from happening. However, we can watch out for our biases and try to make our reasoning process public, so that other people have a chance to correct any mistakes in our thinking.

You can follow Ethical Realism on Facebook or Twitter.

