Science has occasionally appropriated philosophical fields. Physics and psychology were originally discussed by philosophers rather than scientists. Right now ethics is considered to be a philosophical domain, but we could imagine science taking over the field. Will ethics ever be taught in a science class? Will we learn right and wrong from natural science?
People who reject the possibility of a moral science generally appeal to skepticism about moral realism, the gap between facts and values, and the is-ought fallacy. I will respond to these concerns and explain why I don’t find any of them conclusive.
Science originally required scientists to focus only on the nonmental and value-free parts of reality. The existence of minds and values were both taken to be philosophical rather than scientific issues. But now psychologists tell us about mental activity, and economists tell us that some ways of reasoning are economically better than others.
Economics is not value-free, so perhaps economics is close to what a moral science would be like. Economics is supposed to tell us that certain actions are rational in some sense. Given a certain view of a person as purely self-interested, perfectly rational, and perfectly informed, economics tells us that the person would buy the more affordable generic medication that is identical to the name-brand product. (Why spend more money than you have to for an identical product?) Of course, people are not really purely self-interested, perfectly rational, or perfectly informed. And what it means to be rational is value-laden: a person who is irrational is not reasoning properly, and it is better to be rational than irrational. If economics is allowed to tell us that some actions are more economically rational than others, then perhaps an ethical science will someday be allowed to tell us that some actions are more ethically rational than others.
What is a moral science?
What exactly do I mean by a “moral science”?
By “science” I mean that the scientific method will be used and that we will have hypotheses and data that could falsify the hypotheses. I will consider economics to be a science, so the question is whether moral science could be as empirically justified as economics. I am not saying that moral science could be like physics, chemistry, or astronomy.
By “moral” I mean normative ethics—a field concerned with what we ought to do, with what’s morally rational, with what’s virtuous, or with what has intrinsic value. I am not talking about descriptive ethics, which is concerned with moral beliefs and attitudes. Descriptive ethics is already studied by anthropology and other scientific domains. Descriptive ethics tells us, for example, that people nearly unanimously agree that murder is wrong. Normative ethics could tell us that murder really is morally wrong, that it’s morally irrational to murder, that virtuous people don’t murder, or that human life has intrinsic value.
I am also going to assume that a moral science will require moral realism—the idea that there is at least one moral fact. Anti-realists might say that there is a sense that murder is wrong insofar as we all agree not to murder each other, we all have an interest not to be murdered, we all believe it’s wrong, we care about others, etc. A moral realist will say that morality is not merely about what we believe or desire, and that there are moral facts we can discover. Moral realists will agree that murder really is wrong, and that torturing people is usually (or always) wrong.
One reason some people reject the possibility of moral science is simply because they reject moral realism. They think that morality is really just about our beliefs and desires. There are many types of anti-realism, such as subjectivism, relativism, non-cognitivism, and error theory. Many philosophers support one or another of these views, and I don’t think we can reject anti-realism right off the bat. The debate continues among philosophers, and I don’t think it would be appropriate to dismiss anti-realist philosophy without spending several years studying it.
I am certainly not going to prove that moral realism is true here because it is a controversial issue that requires a great deal of argument. However, I do think we can understand the allure of moral realism when considering the following:
We experience that pain is bad in some sense. Pain can be useful to us (by being educational), so pain is not merely bad in the sense of undermining our goals. Some claim that pain is bad only in the sense that we desire not to experience it, but we have a good reason not to want to experience pain. It’s not just an arbitrary desire. Not wanting to experience pain makes sense. Also, it generally makes sense for us to prefer that other people avoid experiencing pain as well. It makes sense for us to care about other people and to want them to have good rather than bad experiences.
One common argument given against moral realism is that people disagree about what’s right or wrong. Some cultures say that we are morally justified to punch others in the face for insulting us, and other cultures say it’s wrong to do that. If people don’t agree about right and wrong, does that mean there’s no such thing? No. Philosophers disagree about lots of things, and we don’t think that means there’s no fact of the matter. For example, some philosophers think the mind is identical to certain brain states, and others think that certain mental states are merely caused by certain brain states. Perhaps we will never know one way or the other, but there has to be a right answer. Disagreement in the philosophy of mind does not seem to imply that minds don’t exist, and I don’t see why moral disagreement would imply that moral facts don’t exist either.
Even though people disagree about various moral issues, it should be mentioned that there is also a great deal of agreement. It seems plausible to think that some moral issues are easier to resolve than others. It is clear that torture is usually (or always) wrong, but it is not easy to know if we should legalize hard drugs that can cause a great deal of harm to people. (Especially if we agree that people should generally have the right to harm their own body.)
The gap between facts and values
David Hume talked about the is-ought gap, which is also called the gap between facts and values. There is obviously a difference between what is the case and what ought to be the case. There’s a difference between nonmoral facts and values. There’s a difference between nonmoral facts and moral facts. The question is—how can we know about what ought to be the case? How can we know about moral facts?
These are sometimes difficult questions to answer, but that does not mean that there is no answer (or that we don’t know the answer). How can we be sure that murder is wrong? I think we do know it, even if how we know it isn’t entirely clear. Knowing that murder is wrong does not depend on being able to prove to others that you know it.
Moreover, consider that there’s also an “is-thought” gap. There’s a difference between nonmental facts and mental facts. How can we know about our own thoughts and experiences? That is a difficult question, and I doubt psychologists have a philosophically satisfying answer to it. Even so, it seems obvious that we do know certain mental facts about ourselves. We know when we see a yellow banana, when we experience pain, and so on.
My own view is that we know about our thoughts and experiences just by having them. We can observe them insofar as we know about them when we have them. In a similar way I think we can know that some experiences are intrinsically bad. We have the experience and we know what it’s like to have it. We know that pleasure is an intrinsically good type of experience and pain is an intrinsically bad type of experience.
Psychologists do talk about our thoughts and experiences, and they seem to agree with me that we can know about them. They often rely on self-reports. The assumption is that we can generally trust people to know about their own experiences. If someone says something looks like a yellow banana, we trust them. We don’t require scientists to prove that each of these people knows what the color yellow looks like. It’s just considered obvious.
Economists also talk about economic rationality and claim that some decisions are economically better than others. How do they know what has economic value? They assume that satisfying desires has something to do with rationality. Why do they do that? They never proved it scientifically; we just consider it obvious as well.
Perhaps it is also obvious that pain is intrinsically bad and pleasure is intrinsically good. Do we need to prove it scientifically? Not everything in science is proven scientifically. It could be taken as an axiom or as a justified assumption.
The is-ought fallacy
A lot of critics seem to have the following fallacy in mind when criticizing the possibility of a moral science:
A deductive argument with nonmoral premises can’t have a moral conclusion.
- Punching Jack will cause him pain.
- Jill has no good reason to punch Jack.
- Therefore, it would be morally wrong for Jill to punch Jack.
The problem is that neither premise states that it’s morally wrong to cause pain for no good reason. The argument is logically invalid.
Of course, the argument could be missing a premise. If we add another premise, then we can make it valid:
- Punching Jack will cause him pain.
- Jill has no good reason to punch Jack.
- If punching Jack will cause him pain and Jill has no good reason to punch Jack, then it would be morally wrong for Jill to punch Jack.
- Therefore, it would be morally wrong for Jill to punch Jack.
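The repaired argument is an instance of modus ponens on the added conditional premise. As a sketch, this can even be checked mechanically, here in Lean, with the propositional variables P, R, and W standing in for the three claims about Jack and Jill:

```lean
-- P: punching Jack will cause him pain
-- R: Jill has no good reason to punch Jack
-- W: it would be morally wrong for Jill to punch Jack
-- The valid form: from P, R, and (P ∧ R) → W, conclude W.
example (P R W : Prop) (hP : P) (hR : R) (h : P ∧ R → W) : W :=
  h ⟨hP, hR⟩
```

Without the conditional hypothesis h, no proof of W exists, which is exactly the logical gap in the original two-premise argument.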
Many people will say that scientists only deal with nonmoral facts, and they can’t use those nonmoral facts to get moral conclusions. Scientists can’t use the scientific method to know that pain is intrinsically bad, that it’s wrong to cause pain for no good reason, etc. There are certain moral facts that will have to be known before scientists can answer moral questions, so a moral science will be impossible.
I don’t find this argument to be persuasive because it’s not clear that scientists need to use deductive reasoning in this way to prove everything. Scientists often take something to be too obvious to worry about. They don’t scientifically prove that people generally know what the color yellow looks like, that everyone knows what pain feels like, or that there’s something rational about satisfying desires. Scientists take those premises to be too obvious to prove or believe they are justified assumptions for some other reason. Perhaps moral scientists will also take certain moral premises to be too obvious to prove or will assume them to be justified assumptions. Asking a moral scientist to prove that pain is intrinsically bad might be like asking an economist to prove that it’s rational to satisfy desires, or like asking a psychologist to prove that people know when they experience pain.
Also, consider the following is-thought fallacy:
A deductive argument with nonmental premises can’t have a mental conclusion.
- Jill has her eyes open facing a yellow banana in ordinary light.
- Jill says, “I see a yellow banana.”
- Therefore, Jill sees a yellow banana.
The above argument is logically invalid. Perhaps Jill is a mindless body that merely imitates human behavior. Scientists will assume Jill isn’t a mindless body; they will just sweep these skeptical worries under the rug and assume Jill really does have a mind. Isn’t that fallacious? Can we have a psychological science? It seems obvious to me that psychology does require certain assumptions, but those assumptions are perfectly reasonable. We think it’s obvious that Jill is a regular living person with a mind, and that the evidence of her having a certain experience (from a self-report) is a reliable way for us to know what’s going on in her head.
Once more, moral scientists might need to start with certain assumptions about moral facts. The demand that scientists prove every moral premise seems to imply that they should prove all their premises. However, if scientists did need to prove all their premises, they would never be able to prove anything. We would need an argument for every premise and conclusion, every argument has premises of its own, and so we would need an infinite number of arguments to justify any argument. It would lead to an infinite regress.
Finally, it might seem obvious that we can know when we see a yellow banana or when we feel pain, but perhaps it is less obvious that murder is wrong or that pain is intrinsically bad. Perhaps that’s why we don’t have a moral science yet. It is often difficult to justify our values, and it’s not entirely clear what moral premises we need before a moral science could be successful. Even if a scientist agrees that murder is wrong and pain is intrinsically bad, it’s not entirely clear what they could conclude using those premises alone. But if a scientist has an entire “moral framework,” then it might be much more feasible for them to reach various conclusions.
I don’t know that we will ever have a moral science, and we might need to know which moral framework to accept before it will ever happen. Even so, I see no good reason to think it’s impossible. None of the objections against a moral science seem convincing, but many philosophers do find anti-realism plausible, and I doubt there will be a moral science unless these skeptical worries are resolved.
- Can Morality be known through science?
- FAQ on Intrinsic Values
- Debate over moral realism
- The is-ought gap: How do we get “ought” from “is?”
- Five meta-ethical theories
- What you need from formal logic
- What is argument mapping?
- An argument for moral realism
- Do we experience that pain is intrinsically bad?
- The persistence of moral disagreement: An objection to moral realism
- Is knowledge impossible?
- Moral theories (normative theories of ethics)