Ethical Realism

August 26, 2010

Moral Reasoning Without Moral Theories

Someone once suggested that we might not need moral theories to reason about morality. I found the idea intriguing even though I didn’t fully understand how it could work, but I now see why moral reasoning does not require a moral theory. There are at least four ways to reason about morality without theory:

  1. Uncontroversial moral truths
  2. Analogies
  3. Theoretical virtues
  4. Thought experiments

Uncontroversial moral truths

There are many uncontroversial moral truths, such as the following:

  1. Suffering is bad.
  2. Happiness is good.
  3. If it is wrong for someone to do something in a situation, then it is wrong for anyone to do it in an identical situation.
  4. It is always or almost always wrong to torture children.
  5. It is often wrong to steal from people.

Such truths are sometimes called “moral truisms.” These truths are often taken for granted during moral reasoning. Such reasoning can be explicitly and clearly stated in the form of moral arguments, such as the following:

  1. It is always or almost always wrong to torture children.
  2. Whipping the neighbor’s child would be a case of torturing a child.
  3. I have no reason to think that whipping the neighbor’s child would be the right thing to do.
  4. Therefore, whipping the neighbor’s child is probably wrong.

The above argument uses a moral truth (it is always or almost always wrong to torture children) and combines it with two other uncontroversial facts to lead us to a moral conclusion (whipping the neighbor’s child is probably wrong).

Moral reasoning doesn’t require that we prove absolutely everything. It would be absurd to think that everyone has to know why torturing children is always or almost always wrong. It’s just obvious. We can use uncontroversial truths to lead us to moral conclusions. (Compare this to mathematical knowledge. I know that 2+2=4 even though I don’t know why it’s true.)

However, it might be possible to learn about “why torturing children is always or almost always wrong” through other uncontroversial truths. For example:

  1. We know that suffering is bad because we have experienced it.
  2. All things equal, we know it is wrong to cause bad things to happen.
  3. Therefore, all things equal, it’s wrong to cause suffering.
  4. Torture causes suffering.
  5. Therefore, all things equal, torture is wrong.

The first two premises are ones I believe to be uncontroversial moral truths. If they are false, then it will be up to someone else to prove it. In the meantime it seems quite rational to agree with the above argument.

I don’t want to suggest that there is never any reason to question uncontroversial truths, but being uncontroversial tends to be sufficient for justification. One way to justify an uncontroversial truth is by defending it from objections. If we have no reason to doubt an uncontroversial truth, then it makes good sense to believe it.


Analogies

Analogies help us draw general truths from less general cases. An analogy compares two things to find relevant similarities between them. For example, kicking and punching people tend to be analogous actions insofar as they are used to hurt people, and they are both often wrong for the same reason. Whenever it’s wrong to hurt people, it will be wrong to kick or punch them in order to hurt them.

Using analogies, we can justify new general moral truths from other uncontroversial moral truths. We know that kicking people is usually wrong, and we can figure out that punching people is usually wrong for the same reason. We can then use this comparison to discover a new general moral truth: hurting people is usually wrong. This general rule in turn helps us realize that torture and other forms of violence are also usually wrong.

Morality for each person is analogous to morality for everyone else. I can recognize that it’s generally wrong for others to kick me because it’s bad when I get hurt, and it’s not a big step to realize that other people are relevantly similar to me. It’s bad when I get hurt, and it’s bad when other people get hurt for the same reason. The disvalue of suffering is analogously similar for each person. It’s also usually wrong for me to cause others harm for the same reason it’s usually wrong for others to hurt me: because harming people is usually wrong. Additionally, there can be exceptions to general moral rules, and these exceptions apply analogously to each person. It is morally acceptable for me to harm others when necessary for self-preservation, and it is acceptable for others to harm me when necessary for self-preservation as well. Self-preservation seems to override the need to refrain from harming others in either case. We could speculate that this is because the value of one person’s life is greater than the value of another person’s interest in avoiding harm.

Theoretical virtues

I have discussed six theoretical virtues in the past, which help us determine when a hypothesis or belief is justified. (The virtues are self-evidence, logical consistency, observation, predictability, comprehensiveness, and simplicity.) The better a belief is supported by the six virtues, the more plausible the belief is.

First, some moral statements might be self-evident. Merely understanding the statement could be sufficient to justify the belief in it. For example, consider that “torturing children is always or almost always wrong.” Knowing that torture causes intense suffering, which is bad; that there is pretty much no good reason to cause intense suffering to a child; and that causing harm with no good reason is wrong seems sufficient to realize that “torturing children is always or almost always wrong” is true.

Second, we don’t want our moral beliefs to contradict one another (we want them to be logically consistent). If we must choose between rejecting an uncontroversial moral truth that we are certain is true (e.g. torture is usually wrong) and rejecting a controversial belief (e.g. whipping children is usually good), then we have reason to reject the controversial belief.

We might have a serious problem when two supposedly uncontroversial beliefs contradict one another, such as the belief that it’s never right to hurt people and the belief that self-preservation is always right. In that case it might be necessary to hurt someone for self-preservation. The solution here is to realize that these moral rules seem to have exceptions. However, it might at times be impossible to be logically coherent. We shouldn’t reject an uncontroversial moral truth “just because” it might contradict another moral truth. Sometimes observations also contradict our uncontroversial beliefs, but we simply can’t reject our uncontroversial beliefs without a new set of uncontroversial beliefs to replace them. For example, Newton’s theory of physics was contradicted by some observations, but scientists still believed it was true until Einstein provided them with a new scientific theory that was a clear improvement.

When we hold incoherent beliefs we have a reason to feel less certain about our beliefs, but that doesn’t mean our beliefs should all be rejected.

Third, observation provides information relevant to morality. We experience that pain is bad, and that experience is an observation that seems to support the hypothesis that all pain is bad.

Fourth, a hypothesis that succeeds at making risky predictions is more likely to be true. If I hypothesize that all pain is bad, then my predictions succeed until I observe that some pain isn’t bad. Of course, interpreting these observations is difficult. I don’t think masochism is an example of experiencing pain itself as good. Pain and pleasure can be experienced simultaneously, and physical and emotional pain (or pleasure) are two different aspects of our experiences. Masochism could be an experience of physical pain and emotional pleasure.

Fifth, the belief that all pain is bad is much more comprehensive than believing that the pain of touching fire is bad. If all pain is bad, then we could use that truth to help us do a great deal of moral reasoning as opposed to merely realizing that burning pain is bad.

Sixth, simple moral truths, such as “it’s usually wrong to hurt people,” give us more plausible hypotheses than much more complex moral truths, such as “it’s usually wrong to torture people, to punch people, to kick people, to stab people, to steal from people, and to shoot people.” The simple moral truth can determine that all of these other actions are wrong and more. Additionally, the simple moral truth makes fewer assumptions. We assume all of those actions are examples of hurting people, but we might find out that stealing isn’t technically hurting people. It is safer to make fewer assumptions rather than more, and simple truths make fewer assumptions.

Thought experiments

Thought experiments are stories, scenarios, and other contemplations that could lead to insight about the universe. Moral thought experiments are meant to give us insight into morality. For example, imagine that a woman puts a loaded gun to your head and demands your wallet. It seems like the best thing to do in this situation is to hand over your wallet, and it would be absurd to criticize someone for giving up their wallet in this way.

Another thought experiment was suggested by John Stuart Mill in Utilitarianism. He argued that it’s better to be a person dissatisfied than a pig satisfied. He thought we would realize that being a person is more enjoyable than being a pig: being a person gives us intellectual pleasures that are qualitatively better than the animalistic pleasures that pigs enjoy. A little bit of intellectual pleasure seems to be superior to a great amount of animalistic pleasure (eating, sleeping, and having sex).1

A more recent thought experiment comes from Peter Singer in his essay The Drowning Child and the Expanding Circle. He presents a thought experiment and then uses it to draw an analogy. He asks us to imagine that we can save a drowning child from a small pool of water at little cost to ourselves. Would we have an obligation to save the child, or would it be morally acceptable to walk on by? The answer seems clear: we have an obligation to save the child. It would be wrong not to. Why? He suggests that it’s wrong to refuse to help people when helping comes at little cost to oneself. Singer then argues that this situation is analogous to giving to charity. We can save lives through charity at very little cost to ourselves. (We might have to buy fewer DVDs, etc.) Therefore, we have an obligation to give to charity.

What exactly are thought experiments doing? We often say that they give us “intuitive support” for a belief. Intuitive support tends to be difficult to state explicitly in the form of arguments. Some intuitive support is thought to come from self-evidence, but some could also be based on personal experience and observation. For example, we can compare intellectual pleasures to the pleasures enjoyed by pigs because we have actually experienced such pleasures, and we can then compare how valuable each experience was. I wrote a great deal about intuition in my discussion, Objections Against Moral Realism Part 2: Intuition is Unreliable.


Moral reasoning is much like other forms of reasoning. We can make use of uncontroversial truths, draw analogies, appeal to theoretical virtues, and consider thought experiments. We even observe some values, such as the value of pleasure and the disvalue of pain.

Moral reasoning is not only compatible with moral theorizing; we need to reason about morality in order to theorize in the first place. The moral reasoning discussed above could be used to develop a moral theory, and we also need to know something about morality before we can decide whether a moral theory is plausible.

Some people have suggested that moral theories have failed us, so morality is probably a human invention. I don’t agree that our moral theories have failed us, but that’s irrelevant. Even if our theories have failed us, that wouldn’t give us a good reason to be skeptical about morality or moral reasoning. Our moral knowledge never depended on moral theories. We know a lot about morality prior to having moral theories.

Update (8/27/10): Added information about thought experiments and made a few other minor changes.


1 I suspect that we would prefer to live a dissatisfied life as a person rather than a satisfied life as a pig because we think human existence itself is worth more than the pleasures that could be offered to a pig. This could give us reason to suspect that pleasure is not the only thing we value. Merely existing as a human being could have a great deal of value.


  1. Great post. I wonder if moral theorizing is necessary at all. Perhaps it’s possible to develop a comprehensive and consistent set of ethical principles based on moral reasoning alone, instead of searching for one, universally applicable ethical principle, which may sometimes seem counter-intuitive.

    Comment by philosopher145 — February 20, 2011 @ 11:42 pm

    • You can theorize about morality without a single ethical principle, but as soon as you say one theory in particular contains the absolute truth, you might close yourself off to other possibilities. You don’t necessarily have to close yourself off in this way. It’s probably mostly philosophers who actually think about morality in terms of a single “comprehensive” theory. Perhaps no moral theory is “complete” in the sense of being able to identify all moral facts.

      Comment by James Gray — February 21, 2011 @ 12:51 am
