Ethical Realism

May 5, 2014

Normative & Descriptive Ethics

Filed under: ethics,philosophy — JW Gray @ 6:41 am

I believe that one source of confusion can be cleared up by the distinction between normative and descriptive ethics. Whenever people talk about cultural relativism or evolutionary theories of ethics, I think they have descriptive ethics in mind, but they often jump to the conclusion that whatever they are talking about has certain obvious normative implications. In particular, some people claim that morality comes from evolution and others claim that morality is relative. What they have in mind often doesn’t actually make sense, as I will discuss in detail.

Normative and descriptive ethics

Normative ethics

Normative ethics is about intrinsic value, right and wrong, and/or virtues. The following are claims concerning normative ethics:

  1. It is wrong to kill people just because they make you angry.

  2. We should fight to free slaves when necessary, even when doing so is illegal.

  3. Pain is intrinsically bad—we ought not cause pain without a good reason to do so.

  4. It is reasonable for a person to give charity to those in need, even when no reciprocation can be expected.

Normative ethics is about what actually has overriding importance for determining how we ought to act. Even if you want a million dollars, you ought not kill innocent people in order to get a million dollars in return. Etiquette is often said to be similar to normative ethics, except etiquette is not of overriding importance. Burping is considered to be rude, but it is not that big of a deal.

Descriptive ethics

Descriptive ethics is about what motivates pro-social behavior, how people reason about ethics, what people believe to have overriding importance, and how societies regulate behavior (such as by punishing people for doing certain actions). We know that empathy helps motivate pro-social behavior (such as giving to charity) and we know that our beliefs about what has overriding importance are shaped in part by the culture we live in.

What behaviors are punished in a society tells us something about what the people find to be of overriding importance, and the type of punishment I have in mind is basically just negative consequences. Punishment could even be social pressure, such as being criticized for doing something unethical. For example, Jonathan Haidt has talked about the importance of gossip and reputation for motivating ethical behavior. (See “New Synthesis in Moral Psychology” (PDF).)

There are certain predictable ways people reason about ethics (often in unreasonable ways). For example, people often overestimate the importance of consequences when considering how well reasoned people’s moral decisions are. (See information about the outcome bias (PDF).)

Evolution and ethics

People often claim that morality comes from evolution. I agree that we evolved an ability to reason about ethics, we evolved certain pro-social intuitions, and we evolved empathy. Scientists can explain quite a bit about why people often act ethically (in pro-social ways) and sometimes act unethically. That’s descriptive ethics.

I think it is clear that evolution and science in general have a lot to tell us about descriptive ethics, but what about normative ethics? Does evolution tell us that we ought not cause nonhuman animals needless suffering? Not everyone seems to have much empathy for nonhuman animals and I suspect we evolved that way precisely to make it easier for people to hunt nonhuman animals for food. I see no obvious way to jump from the results we get concerning descriptive ethics to reach conclusions concerning normative ethics. For example, it would be fallacious to think that anyone who didn’t evolve an automatic empathetic response towards nonhuman animals would therefore have no reason to care about the well-being of nonhuman animals. (See this for more information.)

Some people might think evolution (and other scientific facts) somehow explain away normative ethics—perhaps every belief we have about right and wrong is actually false. Maybe we reason about morality and have empathy because that’s how we evolved, but there are no moral facts. That could be right, but obviously much more would need to be said. We can’t just assume that normative ethics should be rejected without argument. I think we take ourselves to know certain things about normative ethics and it would not be appropriate to throw out those beliefs without a good reason.

Cultural relativism

Anthropologists have reported that different cultures have different moral beliefs (as did Herodotus thousands of years ago). It isn’t a big leap to realize that people’s beliefs about ethics have something to do with their cultural upbringing. This is sometimes described as “descriptive cultural relativism.” It is nothing more than the claim that our moral beliefs are influenced by our culture and that people of different cultures often disagree about certain moral issues. For example, some cultures think young girls should be circumcised and other cultures think it is child abuse.

However, the term “cultural relativism” tends to actually refer to “normative cultural relativism.” That view states that what is actually right or wrong is based on the culture we are in – whatever a culture says is right or wrong is true for that culture. In that case slavery might not be wrong for all people (because many people seemed to genuinely think it was ethical in the past) and it might not be unethical to cause nonhuman animals needless suffering for all people.

Some people have argued that morality is relative (with normative ethics in mind) by explaining how different cultures have different views about morality, but that argument doesn’t work. We can’t rationally jump to that conclusion about normative ethics by merely knowing that our moral beliefs are influenced by our culture.

How could anyone take normative cultural relativism seriously? Why think that “murder is wrong” is true for us only because our culture says it is wrong? The fact that cultures have different moral beliefs doesn’t guarantee that all cultures have true ethical beliefs.

Conclusion

I think that people have been confused and jump to strange conclusions because they have not properly differentiated normative and descriptive ethics. Some people seem to think that how we evolved can somehow tell us what we should believe about normative ethics, but our pro-social tendencies are actually a topic for descriptive ethics. Some people seem to think that morality is relative (in a normative sense), and people often argue for that position by talking about how descriptive ethics is relative (because different cultures have different views about ethics). I don’t think that their argument makes sense.


23 Comments

  1. James, in my experience it is well understood in the science of morality field that, like the rest of science, evolutionary theories of morality are only descriptive, have no innate normative power, and cannot tell us what our ultimate goals for ‘moral’ behavior ought to be.

    In addition, it is uncontroversial that this science of morality may be instrumentally useful in defining moral codes that are most likely to achieve common goals such as “increasing overall well-being in a society”.

    However, I also argue the science of morality provides some descriptive moral facts that 1) cannot be contradicted by normal moral philosophizing, and 2) have normative implications regarding moral ‘means’ (because there appears to be a fact of the matter about what morality’s universal function descriptively is.)

    Consider the following data set:

    1. All past and present enforced ‘moral’ norms (norms whose violators are commonly judged to deserve punishment)

    2. Biology based motivating ‘moral’ emotions such as empathy, loyalty, guilt, shame, and indignation.

    3. Haidt’s universal ‘moral’ foundations for making moral judgments: harm, liberty, fairness, loyalty, respect for authority, and purity

    Assume for a moment that it is empirically true that all of these advocate or motivate elements of strategies for increasing the benefits of altruistic cooperation in groups. (The variations, contradictions, and bizarreness of cultural moral codes are just different implementations of the one function. They differ largely in who is in in-groups or out-groups, which particular strategies are emphasized, and markers of membership in groups.)

    Then, the universal function of morality, the principal reason it exists, is to increase the benefits of altruistic cooperation in groups. Any philosophical argument that the ‘means’ of morality ought (normatively) to be something else contradicts what ‘morality’s’ universal function descriptively is as defined by the above data set.

    The above claim, if true, inductively proves a form of moral naturalism regarding morality’s universal function, the ‘means’ that are descriptively moral.

    I have gotten two bizarre classes of comments in response from moral philosophy majors:

    1. Science cannot ‘prove’ any form of moral naturalism; that can only be done by philosophical meta-ethical arguments (which I take to normally be deductive)
    2. How do you know that what people have thought morality was has anything to do with what morality really is?

    The moral philosophy majors seem to have little to no respect for the power of science to reveal ‘truth’ (science’s normal provisional truth) about morality’s function and thereby ground morality in reality.

    I do see confusion about descriptive and normative, particularly in the popular media reports on the science of morality. But I see the more important part of the problem you describe as due 1) to a lack of appreciation for the power of science’s inductive methods to reveal useful truth about morality’s function and 2) an unproductive reliance on philosophy’s deductive methods to define ‘means’ (function) in addition to ‘ends’.

    Any comments would be appreciated. The science of morality and moral philosophy seem to be playing two very different games, leading to much miscommunication. Cultural utility might be maximized if we could get everyone playing the same game.

    Comment by Mark Sloan — May 5, 2014 @ 9:44 am | Reply

    • Mark,

      Thanks for the reply. Here is my response.

      “James, in my experience it is well understood in the science of morality field that, like the rest of science, evolutionary theories of morality are only descriptive, have no innate normative power, and cannot tell us what our ultimate goals for ‘moral’ behavior ought to be.”

      I don’t know if that is true or not. People who don’t study philosophy have a good chance of being confused about it, and even those who study philosophy seem to be confused about it.

      I have heard that anthropologists are often cultural relativists thinking that goodness is really about the opinions of a culture, but that might be changing.

      My point is not to attack scientists, but to just help people think through these issues.

      “However, I also argue the science of morality provides some descriptive moral facts that 1) cannot be contradicted by normal moral philosophizing, and 2) have normative implications regarding moral ‘means’ (because there appears to be a fact of the matter about what morality’s universal function descriptively is.)”

      Do you think that philosophers are contradicting descriptive facts about morality?

      How we should be defining morality is an interesting question, but there could be a normative and a descriptive definition. That is precisely what the Stanford Encyclopedia of Philosophy says about it (as was linked above): http://plato.stanford.edu/entries/morality-definition/

      “Consider the following data set:

      1. All past and present enforced ‘moral’ norms (norms whose violators are commonly judged to deserve punishment)

      2. Biology based motivating ‘moral’ emotions such as empathy, loyalty, guilt, shame, and indignation.

      3. Haidt’s universal ‘moral’ foundations for making moral judgments: harm, liberty, fairness, loyalty, respect for authority, and purity”

      I don’t think anyone deserves punishment, but whether or not we should give negative consequences for certain behavior is an interesting issue. I mentioned punishment above as well. (I said, “What behaviors are punished in a society tells us something about what the people find to be of overriding importance, and the type of punishment I have in mind is basically just negative consequences.”)

      #2 is not a complete sentence. I don’t know what exactly you are saying. However, I also mentioned that, “We know that empathy helps motivate pro-social behavior.”

      #3 isn’t a complete sentence either. Haidt’s main interest is moral motivation and moral reasoning. That is descriptive ethics.

      However, Haidt does think there are better ways to reason about morality than others. That would be a normative issue.

      “Assume for a moment that it is empirically true that all of these advocate or motivate elements of strategies for increasing the benefits of altruistic cooperation in groups. (The variations, contradictions, and bizarreness of cultural moral codes are just different implementations of the one function. They differ largely in who is in in-groups or out-groups, which particular strategies are emphasized, and markers of membership in groups.)

      Then, the universal function of morality, the principal reason it exists, is to increase the benefits of altruistic cooperation in groups. Any philosophical argument that the ‘means’ of morality ought (normatively) to be something else contradicts what ‘morality’s’ universal function descriptively is as defined by the above data set.”

      That is a descriptive function of morality. It doesn’t necessarily tell you the normative definition of morality.

      “I have gotten two bizarre classes of comments in response from moral philosophy majors:

      1. Science cannot ‘prove’ any form of moral naturalism; that can only be done by philosophical meta-ethical arguments (which I take to normally be deductive)
      2. How do you know that what people have thought morality was has anything to do with what morality really is?”

      I don’t know that meta-ethics has to be deductive. Why would you think that?

      #2 seems to refer to the idea that there is an ideal type of morality. Culture can tell us what a culture believes about morality. Biology/psychology is about what people actually care about and what motivates them. That doesn’t necessarily tell you what is “truly right” or if pain is intrinsically bad (or not). It doesn’t tell you if there are moral facts (if moral realism is true) or if intrinsic values exist at all.

      “I do see confusion about descriptive and normative, particularly in the popular media reports on the science of morality. But I see the more important part of the problem you describe as due 1) to a lack of appreciation for the power of science’s inductive methods to reveal useful truth about morality’s function and 2) an unproductive reliance on philosophy’s deductive methods to define ‘means’ (function) in addition to ‘ends’.”

      I don’t know why you think those things about philosophers. It could very well be that philosophers are concerned with normative ethics and get confused when some people talk about science that involves morality, which is descriptive ethics. I see no reason to think philosophers unproductively rely on deductive methods.

      “Any comments would be appreciated. The science of morality and moral philosophy seem to be playing two very different games, leading to much miscommunication.”

      I agree with this point. I think that is probably happening quite a bit.

      Comment by JW Gray — May 12, 2014 @ 9:59 pm | Reply

      • “I don’t know if that is true or not. People who don’t study philosophy have a good chance at being confused about it, and even those who study philosophy seem to be confused about it.”

        I agree there is much confusion about descriptive and normative moral truth for people not familiar with either philosophy or the science of morality, as is too often evident in news media reports on such science.

        That said, my experience still is that people working in the science of morality field are cognizant of the fact that evolutionary theories of morality are only descriptive.

        “Do you think that philosophers are contradicting descriptive facts about morality?”

        Yes, that is my experience with philosophy graduate students and other apparently knowledgeable persons on the Ethics section of Philosophy Forums (if you are aware of that open site). But this may have been a failure to communicate. More in a moment when I describe their objections below.

        “How we should be defining morality is an interesting question, but there could be a normative and a descriptive definition. That is precisely what the Stanford Encyclopedia of Philosophy says about it (as was linked above): http://plato.stanford.edu/entries/morality-definition/”

        I am a fan of Bernard Gert, a very sensible fellow. As I said to Keegan, “Bernard Gert defines normative morality “to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.” Science provides descriptive truth about the function of that code of conduct, to increase the benefits of cooperation. Recognizing that the (below) data set of behaviors does have a universal function would ground morality discussions in reality in a way that I expect would be highly beneficial to moral philosophy.”

        “I don’t think anyone deserves punishment, but whether or not we should give negative consequences for certain behavior is an interesting issue. I mentioned punishment above as well. (I said, “What behaviors are punished in a society tells us something about what the people find to be of overriding importance, and the type of punishment I have in mind is basically just negative consequences.”)”

        Game theory shows that punishment of moral code violators is critical for maintaining cooperative, successful groups. Otherwise, free-riders and other exploiters quickly take over and destroy the benefits of cooperation. This punishment strategy is encoded in our biology in two ways. We feel indignation when we see other people violate moral norms and are motivated to punish them, at least by social disapproval. Second, we feel guilt and shame when we violate moral norms; this internal punishment can be much more efficient in enforcing moral codes than purely external punishment.

        This punishment strategy is also encoded in our cultural norms which require at least disapproval of moral violations, and in some cases require more serious punishment, usually under rule of law, up to and including death.

        What kind of punishment and who administers it so as to best increase the benefits of cooperation has been a hot topic in game theory. Those results might be consistent with your statement “the type of punishment I have in mind is basically just negative consequences.”
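
        To make the free-rider point concrete, here is a minimal public-goods simulation in Python. It is only a toy sketch: the payoff numbers, imitation rule, and group size are arbitrary assumptions for illustration, not a model from the game-theory literature cited here. Without punishment, imitating higher earners spreads defection; once cooperators can fine defectors, cooperation is sustained.

```python
# A toy public-goods simulation (illustrative only; all payoff numbers are
# arbitrary assumptions, not taken from any cited model).
import random

def play_round(strategies, endowment=10, multiplier=1.6, fine=6, cost=2, punish=False):
    """One public-goods round; returns a payoff for each player."""
    contributions = [endowment if s == "C" else 0 for s in strategies]
    share = sum(contributions) * multiplier / len(strategies)
    payoffs = [endowment - c + share for c in contributions]
    if punish:
        cooperators = [i for i, s in enumerate(strategies) if s == "C"]
        defectors = [i for i, s in enumerate(strategies) if s == "D"]
        for i in defectors:                      # each defector is fined by every cooperator
            payoffs[i] -= fine * len(cooperators)
        for i in cooperators:                    # punishing is costly for the punishers
            payoffs[i] -= cost * len(defectors)
    return payoffs

def evolve(punish, rounds=200, n=20, seed=1):
    """Imitation dynamics: each round a random player copies a better-earning player."""
    rng = random.Random(seed)
    strategies = ["C"] * (n // 2) + ["D"] * (n // 2)
    for _ in range(rounds):
        payoffs = play_round(strategies, punish=punish)
        a, b = rng.randrange(n), rng.randrange(n)
        if payoffs[b] > payoffs[a]:
            strategies[a] = strategies[b]
    return strategies.count("C") / n

print("share of cooperators without punishment:", evolve(punish=False))
print("share of cooperators with punishment:   ", evolve(punish=True))
```

        The specific values (endowment, multiplier, fine, cost) only matter qualitatively: defectors out-earn cooperators until punishment reverses the payoff ranking.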

        “#2 is not a complete sentence. I don’t know what exactly you are saying. However, I also mentioned that, “We know that empathy helps motivate pro-social behavior.””

        The period at the end was an error. Perhaps it would have been more clear to describe this part of the data set as 2) The existence of biology based emotions such as empathy and loyalty that motivate unselfish acts for others, and shame, guilt, and indignation that provide internal punishment or motivate external punishment of people who violate moral norms.

        “#3 isn’t a complete sentence either. Haidt’s main interest is moral motivation and moral reasoning. That is descriptive ethics.”

        It is only meant to describe a category of data. It is not meant to be a sentence. And yes, the entire data set and any inductive conclusions based on it are only descriptive.

        “Then, the universal function of morality, the principal reason it exists, is to increase the benefits of altruistic cooperation in groups. Any philosophical argument that the ‘means’ of morality ought (normatively) to be something else contradicts what ‘morality’s’ universal function descriptively is as defined by the above data set.”
        “That is a descriptive function of morality. It doesn’t necessarily tell you the normative definition of morality.”

        Right.

        “I have gotten two bizarre classes of comments in response from moral philosophy majors:
        1. Science cannot ‘prove’ any form of moral naturalism; that can only be done by philosophical meta-ethical arguments (which I take to normally be deductive)
        2. How do you know that what people have thought morality was has anything to do with what morality really is?”
        “I don’t know that meta-ethics has to be deductive. Why would you think that?”

        Arguments in the little I have read in meta-ethics were of the form premises, then logic, then conclusion. I have never seen any meta-ethical inductive arguments. If you are saying you see no reason that inductive arguments could not be a part of meta-ethics, I would be delighted.

        “I do see confusion about descriptive and normative, particularly in the popular media reports on the science of morality. But I see the more important part of the problem you describe as due 1) to a lack of appreciation for the power of science’s inductive methods to reveal useful truth about morality’s function and 2) an unproductive reliance on philosophy’s deductive methods to define ‘means’ (function) in addition to ‘ends’.”
        “I don’t know why you think those things about philosophers. It could very well be that philosophers are concerned with normative ethics and get confused when some people talk about science that involves morality, which is descriptive ethics.”

        I agree. I see a lot of the miscommunication problem being scientists talking about what morality is (uncontroversially in their view) and philosophy majors interpreting that as naïve claims about what morality ought to be. Since the philosophy major’s perhaps almost sole interest and focus for years has been what morality ought to be, this is perhaps understandable. What I have found befuddling is their inability to grasp what is being said, even after it is repeatedly pointed out to them that no normative claims are being made.

        Could you clarify if you think philosophers ought to, or at least logically could, include inductive arguments, for example to understand what the function of morality descriptively is, as part of meta-ethics?

        That is, I understand that any claim about what the universal function of morality ‘is’ is a meta-ethical claim, even if that claim has no necessary innate normative power (is purely descriptive). If that is true, then there is no logical problem with descriptive science proving that meta-ethical claim.

        But we would need to add a new kind of descriptive category to Gert’s descriptive forms of morality, which might then look like this:
        1. descriptively to refer
        a. to some codes of conduct put forward by a society or, some other group, such as a religion, or accepted by an individual for her own behavior or
        b. to conclusions about morality’s origins and function as a matter of science

        “I see no reason to think philosophers unproductively rely on deductive methods.”

        All conclusions from deductive arguments are inherent in their premises, so their utility is necessarily limited. Yes, they are useful, but inadequate on their own to understand morality as a product of biological and cultural evolution. Until philosophers understand what morality’s function is (a product of inductive logic inaccessible to deductive logic) I see no chance philosophers will be able to definitively say what its function ought to be. I have never heard of a philosopher using an inductive argument. Perhaps it is time they started?

        Sorry for the length. I appreciated your reply.

        Comment by Mark Sloan — May 13, 2014 @ 2:11 am

      • Mark,

        You said:

        I am a fan of Bernard Gert, a very sensible fellow. As I said to Keegan, “Bernard Gert defines normative morality “to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.” Science provides descriptive truth about the function of that code of conduct, to increase the benefits of cooperation. Recognizing that the (below) data set of behaviors does have a universal function would ground morality discussions in reality in a way that I expect would be highly beneficial to moral philosophy.”

        Not sure exactly what the point is that you are making here. If we have a goal, then scientists could certainly help us know how to accomplish the goal. What should we do about global warming? Scientists can tell us what some potential solutions are. We can then try to decide on which solution is best.

        “I don’t think anyone deserves punishment, but whether or not we should give negative consequences for certain behavior is an interesting issue. I mentioned punishment above as well. (I said, “What behaviors are punished in a society tells us something about what the people find to be of overriding importance, and the type of punishment I have in mind is basically just negative consequences.”)”

        “Game theory shows that punishment of moral code violators is critical for maintaining cooperative, successful groups. Otherwise, free-riders and other exploiters quickly take over and destroy the benefits of cooperation. This punishment strategy is encoded in our biology in two ways. We feel indignation when we see other people violate moral norms and are motivated to punish them, at least by social disapproval.”

        When you say someone deserves punishment, we might have something else in mind — like that the world is better off when evil people suffer or something. I agree that negative consequences can be an effective way to regulate people’s behavior. I said something to that effect in the post. Haidt has also talked about it regarding things like reputation and gossip, which I also mentioned.

        “The period at the end was an error. Perhaps it would have been more clear to describe this part of the data set as 2) The existence of biology based emotions such as empathy and loyalty that motivate unselfish acts for others, and shame, guilt, and indignation that provide internal punishment or motivate external punishment of people who violate moral norms.”

        I agree that pro-social behavior has a lot to do with ethics, and I don’t have a problem saying that these are moral emotions. However, I am wondering if you are saying that this information somehow proves that morality is descriptively about cooperation and so forth.

        Human emotions don’t seem to encourage us to help strangers very well. Haidt and other psychologists have talked about that. And yet the assumption we have is that we should be helping strangers (who need help) more and try to find ways to get people to help strangers.

        We could have very well evolved pro-social and cooperative behavior and motivations because it is a reproductive advantage, but it is not necessarily a reproductive advantage to help strangers who need help, or animals that need help, etc. So, there is a sense that I would want to say that morality in the normative sense goes beyond what we evolved. Even so, the descriptive reason we have morality could very well be something more selfish.

        Is evolution the only thing relevant to descriptive morality? No, there’s also culture and lots of other things. So, there might be a sense that descriptive morality has a variety of “functions.”

        “Arguments in the little I have read in meta-ethics were of the form premises, then logic, then conclusion. I have never seen any meta-ethical inductive arguments. If you are saying you see no reason that inductive arguments could not be a part of meta-ethics, I would be delighted.”

        Inductive arguments (and abductive arguments) also use premises and conclusions. Those conclusions are also supposed to logically follow in some sense.

        Abduction is about the best explanation for things. Does meta-ethics explain anything? I think so. It explains our moral beliefs, moral disagreement, moral progress, etc.

        Why would we be an error theorist? Because it could explain how our thinking about normativity is mistaken, so that many of the things we believe about ethics are false.

        Why be a moral realist? Because certain things implied by the alternatives (like error theory) might be things we should think are false.

        It might be that every meta-ethical theory will face various challenges, so the one a philosopher argues for is generally thought to be better than the alternatives. The philosopher then has to explain the various problems the alternative meta-ethical theories face and why they are inferior to the one that is endorsed.

        “I agree. I see a lot of the miscommunication problem being scientists talking about what morality is (uncontroversially in their view) and philosophy majors interpreting that as naïve claims about what morality ought to be. Since the philosophy major’s perhaps almost sole interest and focus for years has been what morality ought to be, this is perhaps understandable. What I have found befuddling is their inability to grasp what is being said, even after it is repeatedly pointed out to them that no normative claims are being made.”

        That could happen and the fact that people so rarely talk about the descriptive vs normative distinction could be part of the problem.

        However, I am not convinced that scientists rarely make any mistakes about this type of thing. I have no idea what the frequency is concerning scientists being confused over the normative and descriptive distinction.

        “Could you clarify if you think philosophers ought to, or at least logically could, include inductive arguments, for example to understand what the function of morality descriptively is, as part of meta-ethics?”

        I’m not sure what exactly it would mean for meta-ethics to describe the function of morality. However, utilitarians think that morality should be about the greatest good for the greatest number (perhaps because pleasure is intrinsically good and pain is intrinsically bad). That would actually fall into moral theory rather than meta-ethics as far as I am concerned. Maybe some people think it is meta-ethics for some reason. Not that there’s a huge distinction between the two.

        Well, how would a utilitarian use induction to reach the conclusion that utilitarianism is true? I think that their main point should be that utilitarianism is the best moral theory (assuming that they have a good reason to think so). They can then show that everything important in ethics is really about pleasure and pain (or happiness and suffering). They could try to show that their theory faces fewer problems than the alternatives.

        “That is, I understand that any claim about what the universal function of morality ‘is’ is a meta-ethical claim, even if that claim has no necessary innate normative power (is purely descriptive). If that is true, then there is no logical problem with descriptive science proving that meta-ethical claim.”

        I didn’t actually talk much about meta-ethics because it is a somewhat complex issue. It might be that some meta-ethics is about descriptive morality and some isn’t. However, we could just talk about normative ethics and the meta-ethics that deals with normative ethics.

        For example, what does “normative ethics” refer to? What’s it mean? That’s a meta-ethical question that just deals with normative ethics.

        The answer you suggested dealing with what rational people would agree to would be one answer to that question.

        “All conclusions from deductive arguments are inherent in their premises, so their utility is necessarily limited. Yes, they are useful, but inadequate on their own to understand morality as a product of biological and cultural evolution. Until philosophers understand what morality’s function is (a product of inductive logic inaccessible to deductive logic) I see no chance philosophers will be able to definitively say what its function ought to be. I have never heard of a philosopher using an inductive argument. Perhaps it is time they started?”

        They use inductive arguments.

        I’m not sure that deductive arguments are necessarily very limited, and I’m not sure what you have in mind concerning how meta-ethics is and should be done.

        Consider if we should have laws that protect animals. Should we only have them if they also benefit humans? I think there is something unethical about abusing animals, even if it benefits humans and does nothing harmful to humans.

        Can I prove that we ought to care about animals? Maybe not, but there are various philosophical arguments we could consider. Utilitarianism would be a reason to think animals should be cared for and there are reasons to think utilitarianism is a good moral theory. Also, utilitarian concerns seem important to pretty much every other moral theory as well. So, utilitarianism seems like it could be incomplete, but that it is relevant to ethics at the very least.

        Comment by JW Gray — May 13, 2014 @ 3:42 am

  2. Is it not that the term ‘pro-social’ is, if one accepted that there is a good basis for normative thought, a tautology? And if it is not a tautology then isn’t it loaded already with the bias of a normative account of the social as being good? Perhaps the term ‘social’ suffices for the above argument?
    That aside, Hume’s naturalist fallacy comments on just this confusing of ‘normative’ and descriptive. To say that something is natural is not also to then say it ought be.
    But how are we to genuinely accept the normative as anything other than a historical emergence out of a specific cultural history and thinking? Originally, the term ‘norm’ named nothing more than the arbitrary setting of a convention of measurement: it derives from the name of a tool, the carpenter’s square. Thus it was the utility of normativity that granted it its power. Normativity is a convention; it is just, as the speed of light is, an attempt at fixing one particular relativised positional account as immutable so as to give to other conventions a non-dynamic point of reference.
    What is lost in surrendering the idea of the normative?
    What is gained in surrendering this same idea?
    The latter question is certainly more interesting.

    Comment by Keegan Eastcott — May 12, 2014 @ 12:23 pm | Reply

    • Keegan, you said:

      “That aside, Hume’s naturalist fallacy comments on just this confusing of ‘normative’ and descriptive. To say that something is natural is not also to then say it ought be.”

      First, a quibble: the naturalistic fallacy is attributed to G. E. Moore, not Hume. Hume described the logical difficulty in deriving binding normative facts from descriptive facts of any kind, not just what is natural or in terms of natural properties such as “pleasant” or “desirable”.

      But that aside, science of the last few decades shows the cross-cultural function of moral behavior is NOT a “historical emergence out of a specific cultural history and thinking”. The universal function of ‘moral’ behavior is to increase the benefits of cooperation in groups. (Here, ‘moral’ behaviors are the data set represented by past and present cultural moral codes and motivated by our biology based ‘moral’ emotions and sense of right and wrong. See my comment above and, for example, Martin Nowak’s Evolution, Games, and God.)

      Note that the ultimate goals of that cooperation may be radically different in different cultures and science is silent about what those ultimate goals ought to be (fortunately, philosophers are not silent about ultimate goals).

      If philosophers define “morality” to be consistent with this data set, then they will be factually wrong if they argue morality has a different function. Thus a descriptive fact about morality can have normative implications in terms of morality’s function.

      If philosophers define “morality” to not be consistent with the data set, then I argue they are talking about something else than what morality actually is outside of academia.

      Bernard Gert defines normative morality “to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.” Science provides descriptive truth about the function of at least the largest portion of that code of conduct, to increase the benefits of cooperation. Recognizing that the above data set of behaviors has a universal function would ground morality discussions in reality in a way that I expect would be highly beneficial to moral philosophy.

      Comment by Mark Sloan — May 12, 2014 @ 7:06 pm | Reply

      • Sorry, I had thought it was Hume.
        I didn’t say moral behaviour was a product of history; I said normativity was.
        There is no disputing that there is a practiced form of morality; nor is there a dispute that certain acts are good for increased cohesion and social well-being.
        I agree with what you’ve said, that science, with good reason, can explain how morality plays a role, and how it functions.
        The concern is whether these moral ‘facts’ (or facts) are of an ontological kind which is distinct from just psychological occurrences. For the normative claim, that claim which wishes an ‘all’, or an ‘everyone ought agree’, is precisely that which I am claiming is the historically and culturally emergent phenomenon.
        The idea(l) of the rational person, is of the same kind.

        Comment by Keegan Eastcott — May 12, 2014 @ 9:13 pm

      • Mark,

        You said:

        Bernard Gert defines normative morality “to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.”

        I think that is too restrictive of a definition of normative ethics. That is what some philosophers think normative ethics is about, but not all of them. It is basically a broad theory of ethics. The connection between rationality and ethics is a controversial issue.

        If there are any facts about what a rational person should want morality to be, then that would be a claim of normative ethics. However, facts about virtue and intrinsic values could also be moral facts that could have little to do with rationality.

        Note that rationality itself is normative. What we should believe, which behaviors are conducive to rationality, and when a belief is justified are all concerned with normativity.

        There are normative and descriptive elements to rationality. How people reason is descriptive, but how they should reason is normative. Logicians don’t study how people actually reason, but they do tell us how they should reason. Logicians deal with normative rationality rather than descriptive rationality. That’s not to say that logicians shouldn’t study any science of rationality.

        Comment by JW Gray — May 12, 2014 @ 10:06 pm

    • Keegan,

      “Is it not that the term ‘pro-social’ is, if one accepted that there is a good basis for normative thought, a tautology? And if it is not a tautology then isn’t it loaded already with the bias of a normative account of the social as being good? Perhaps the term ‘social’ suffices for the above argument?”

      This is a good question. How do we know what the normative is about? How could we know anything normative at all?

      Why do anthropologists and other scientists say that being pro-social counts as being associated with “morality?”

      I think there are arguments about all of these things worth considering and that would go far beyond what I intended to discuss in this post.

      I would agree that being pro-social really does relate to morality. Anti-social and purely destructive behavior is not ethical, even from the normative standpoint.

      “But how are we to genuinely accept the normative as anything other than a historical emergence out of a specific cultural history and thinking? Originally, the term ‘norm’ named nothing more than the arbitrary setting of a convention of measurement: it derives from the name of a tool, the carpenter’s square. Thus it was the utility of normativity that granted it its power. Normativity is a convention; it is just, as the speed of light is, an attempt at fixing one particular relativised positional account as immutable so as to give to other conventions a non-dynamic point of reference.
      What is lost in surrendering the idea of the normative?
      What is gained in surrendering this same idea?
      The latter question is certainly more interesting.”

      I think Mark’s comment about normative morality being what rational people would agree to is actually a pretty good answer to this issue.

      It also relates to what I said above: “We should fight to free slaves when necessary, even when doing so is illegal.”

      What a culture says is right is like what is legal. It has to do with how people respond to our behavior. People can think ethical behavior is unethical and punish people who do ethical things.

      If we surrender the idea of the normative, then we seem to be giving up on the idea that we ought to believe anything about ethics, that some cultures do things wrong, that we can make moral progress, and that people can be wrong about what is ethical.

      Of course, it is really a controversial issue. Anti-realists might argue that we don’t really give up anything by rejecting the normative.

      Comment by JW Gray — May 12, 2014 @ 10:14 pm | Reply

  3. It isn’t always clear what people think the “naturalistic fallacy” refers to. I alluded to the naturalistic fallacy above: http://www.fallacyfiles.org/adnature.html

    Hume argued that we can’t know what ought to be the case from what is the case — nonmoral premises can’t lead to a moral conclusion. That’s the is-ought gap. Some people say that is the naturalistic fallacy.

    G. E. Moore had the “open question argument.” He argued that we can’t decide something means the same thing as “good” just because it is always involved with goodness. Go here for more information: http://en.wikipedia.org/wiki/Naturalistic_fallacy#Moore.27s_discussion

    Comment by JW Gray — May 12, 2014 @ 9:42 pm | Reply

  4. James, you said above:

    “Not sure exactly what the point is that you are making here (about Gert’s definition of normative). If we have a goal, then scientists could certainly help us know how to accomplish the goal. What should we do about global warming? Scientists can tell us what some potential solutions are. We can then try to decide on which solution is best.”

    Yes, descriptive science is a source of knowledge about how to achieve our goals. My point is that the “science of morality” of the last few decades provides knowledge about how to define enforced cultural norms that are most likely to achieve group goals specifically by means of increasing the benefits of cooperation. Further, this knowledge can be particularly effective in achieving increased durable well-being, because much of our experience of durable well-being appears to be a biological reward and motivation for maintaining cooperation in groups. (It’s the part of our well-being experience that motivates us to be social animals.)

    I like Gert’s definition of normativity in that “all rational persons” seem more likely to agree on science than on existing moral philosophy options. Then if they can further agree that the goal of advocating and enforcing moral codes is well-being (which seems likely), we have the starting basis of a moral theory that meets Gert’s criterion for possessing a universal normativity. (Many questions would still remain of course, such as morality regarding animals and interactions between groups, but it would be a good start.)

    “When you say someone deserves punishment, we might have something else in mind — like that the world is better off when evil people suffer or something. I agree that negative consequences can be an effective way to regulate people’s behavior. I said something to that effect in the post. Haidt has also talked about it regarding things like reputation and gossip, which I also mentioned.”

    From the science of morality perspective regarding the universal function of morality, punishment is justified (and even an obligation) if it reliably increases the benefits of cooperation in the long term. This may be the science of morality’s version of ‘makes the world better off.’

    Relevant to gossip and reputation, perhaps the most powerful cooperation strategy yet found is called “indirect reciprocity”. It can be summarized as “Do to others as you would have them do to you” plus, as game theory shows is required, “Punish those who violate this norm at minimum by social disapproval (gossip about their reputation) and refusing to cooperate with them.” (Versions of the Golden Rule do not require any admonition for punishment because our biology-based emotion of indignation can more than take care of that.)
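
    To make indirect reciprocity concrete, here is a small Python sketch of “image scoring”. It is a toy illustration only: the benefit and cost values, the reputation rule, and the population mix are simplifying assumptions of mine, not taken from Nowak or Haidt. Donors help only recipients in good standing, refusing someone in good standing costs the donor their own good reputation, and unconditional defectors are soon refused help.

```python
# A toy "image scoring" simulation of indirect reciprocity (illustrative only;
# benefit/cost values and the reputation rule are simplifying assumptions).
import random

BENEFIT, COST = 3.0, 1.0
rng = random.Random(0)

# "DISC" (discriminator) helps only partners in good standing; "DEFECT" never helps.
players = ["DISC"] * 15 + ["DEFECT"] * 5
reputation = [True] * len(players)   # everyone starts in good standing
payoff = [0.0] * len(players)

for _ in range(5000):
    donor, recipient = rng.sample(range(len(players)), 2)
    if players[donor] == "DISC" and reputation[recipient]:
        payoff[donor] -= COST
        payoff[recipient] += BENEFIT
        reputation[donor] = True                 # helping earns/keeps a good reputation
    else:
        # Refusing someone in good standing marks the donor as a defector;
        # refusing someone in bad standing is treated as justified.
        reputation[donor] = not reputation[recipient]

def average(strategy):
    scores = [p for p, s in zip(payoff, players) if s == strategy]
    return sum(scores) / len(scores)

print("average payoff, discriminators:", round(average("DISC"), 1))
print("average payoff, defectors:     ", round(average("DEFECT"), 1))
```

    Under these assumptions the discriminators end up with much higher average payoffs than the defectors, which is the qualitative point about reputation-based cooperation; the exact numbers are not meaningful.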

    “I agree that pro-social behavior has a lot to do with ethics, and I don’t have a problem saying that these are moral emotions. However, I am wondering if you are saying that this information somehow proves that morality is descriptively about cooperation and so forth.”

    Yes, morality (as defined by the above three categories of data) is descriptively about cooperation. That is, our ‘moral’ biology and virtually all past and present enforced moral codes, as empirical truth, either motivate or advocate strategies for increasing the benefits of cooperation in groups.

    Of course, moral philosophers can, and have since Socrates, defined morality differently, perhaps as answers to “How should I live?” or “What is good?”, which are important questions. But when philosophers do so, they are talking about a broader subject. It could be very helpful to moral philosophy to recognize that the separable subset (which might be called social morality – the science of morality’s subject) may have definitive answers, while the broader questions “How should I live?” or “What is good?” may be forever unanswerable.

    “Human emotions don’t seem to encourage us to help strangers very well. Haidt and other psychologists have talked about that. And yet the assumption we have is that we should be helping strangers (who need help) more and try to find ways to get people to help strangers.”

    Yes, part of defining the ultimate goals of morality (a subject science is silent on) is defining who deserves fairness (defining who is in a cooperative group and deserves full moral regard) and who is a “stranger” who can morally be ignored or even exploited.

    Let’s consider two possible answers as to how we should treat strangers.

    Peter Singer proposes everyone is in one group and we have the same moral obligation to help a child we will never meet on the other side of the earth as our own child.

    Unfortunately, Singer’s approach destroys some of the most powerful motivations people feel to act unselfishly to help other people and increase the benefits of cooperation (to act morally). These motivations are triggered by Haidt’s three empirical in-group versus out-group “moral foundations”: loyalty, respect for authority, and purity (by an in-group’s definition). Due to our long evolution in small groups where motivation for loyalty and reliable markers of membership (respect for authority and purity) meant the difference between life and death, these in-group versus out-group moral foundations can be powerful means of motivating in-group cooperation. Singer’s proposal trashes them and thus trashes much of our motivation to act morally.

    But John Rawls’ fairness provides a solution that retains the motivational advantages of in-groups such as families, friends, and communities without abusing “strangers”. I mean that if the morality of interactions between in-groups includes Rawlsian fairness and thus avoids exploitation, then we might have a useful basis for “all rational persons” defining a universally normative moral code.

    I look forward to responding to the remainder of your comment. However, I am concerned about wearing out my welcome before I hear your response to the above. So I will stop here.

    I hope you are finding at least some of this interesting.

    Comment by Mark Sloan — May 13, 2014 @ 11:50 pm | Reply

    • Mark,

      I think we are mainly in agreement. I will mainly just respond to the things you said that I am not so sure about.

      You said,

      “Singer’s proposal trashes them and thus trashes much of our motivation to act morally.”

      I don’t know that anything he says trashes our motivation to act morally. There are pragmatic concerns, but he is talking more about an ideal. The idea might be that perfectly rational persons would not be constrained by such tribalism and potentially troubling psychological quirks. Of course, we are not perfectly rational, so figuring out what is realistic for us is also important.

      Every person is different. Some people are more tribalistic than others. Another issue is how malleable our tribalism is and how much we should try to keep its influence at a minimum.

      Also note that breaking out of the whole in-group loyalty tribalism thing can be important to motivate us to help strangers and nonhuman animals. I think that should be a very important consideration.

      “But John Rawls’ fairness provides a solution that retains the motivational advantages of in-groups such as families, friends, and communities without abusing “strangers”. I mean that if the morality of interactions between in-groups includes Rawlsian fairness and thus avoids exploitation, then we might have a useful basis for “all rational persons” defining a universally normative moral code.”

      I’m not sure exactly what you have in mind when you say this. Rawls did not intend his Theory of Justice to account for all of ethics.

      You said that what philosophers talk about is quite broad. That’s exactly the point. We want to account for everything about ethics that we can. We don’t want to pretend that harming animals is okay and so forth.

      “I hope you are finding at least some of this interesting.”

      I always appreciate getting comments. I let people know on twitter and facebook about your comments (and those left by Keegan). There are many important issues raised and much of what I write is kept a bit simplistic on purpose because I don’t want to overwhelm people when a few simple points can still be helpful. These conversations help expand and clarify the issues.

      Comment by JW Gray — May 14, 2014 @ 4:56 am | Reply

      • James, you said

        “I don’t know that anything he (Peter Singer) says trashes our motivation to act morally. There are pragmatic concerns, but he is talking more about an ideal.”

        By advocating a moral standard (equal preference for everyone) that is dissonant with our relevant biology, Singer greatly reduces motivation to behave morally and thereby, if followed, would reduce overall well-being. I take the ultimate goal of morality to be increased overall well-being. Perhaps Singer does not.

        “I’m not sure exactly what you have in mind when you say this. Rawls did not intend his Theory of Justice to account for all of ethics.”

        Right, Rawls was talking about rules (justice) in political institutions.

        However, I propose his “veil of ignorance” thought experiment position is just what is needed for defining ‘fair’ rules for interactions between groups (including interactions with strangers) that I expect will be most likely to increase over-all well-being by the specific means of increasing the benefits of cooperation.

        “You said that what philosophers talk about is quite broad. That’s exactly the point. We want to account for everything about ethics that we can. We don’t want to pretend that harming animals is okay and so forth.”

        Yes, but attempts to date at moral theories typically lump multiple behavior categories together (cooperation strategies, moral treatment of animals, moral obligations to one’s self, and so forth) and then attempt to account for or justify them by a single idea (the moral theory). This sets up unsolvable problems for moral philosophy.

        Separating out cooperation strategies, the largest single moral behavior category (what essentially all cultural moral codes are about), greatly simplifies accounting for and justifying morality. Understanding that the main universal function of morality is increasing the benefits of cooperation can, on its own, be culturally very useful. Also, once morality as cooperation strategies is dealt with, other ethical issues that have nothing to do with cooperation strategies can be much more easily dealt with.

        “I always appreciate getting comments. I let people know on twitter and facebook about your comments (and those left by Keegan). There are many important issues raised and much of what I write is kept a bit simplistic on purpose because I don’t want to overwhelm people when a few simple points can still be helpful. These conversations help expand and clarify the issues.”

        Are you looking for something new to do in moral philosophy? Perhaps you would consider working on the implications for moral philosophy of the emerging science of morality (where morality is understood as cooperation strategies).

        This area of inquiry seems almost wide open, and as you might guess, I expect it to become central to moral philosophy in the next few decades.

        A recent survey book on the science is Evolution, Games, and God edited by Martin Nowak. I wrote a review of it on its Amazon page. It contains a couple of papers on philosophical implications, but, as I said in the review, I thought that was the weakest part I read.

        Comment by Mark Sloan — May 15, 2014 @ 4:43 pm

      • You said, “This sets up unsolvable problems for moral philosophy. ”

        I’m not convinced about that. We want to know how to think about morality. There are many different angles to consider. Those are important areas of interest to me and people in general.

        Moral theories might be able to make sense out of just about everything we want them for.

        Your comment about Singer seems to miss the point. Singer isn’t saying that equal consideration requires everyone to always be motivated to help strangers equally to everyone else. That is impractical and he knows it. I already explained that he is dealing with something like an ideal.

        We do want to know how to motivate people to help strangers. We don’t just give up and say to hell with strangers. Utilitarianism and Singer’s understanding of ethics could give us a reason to think helping strangers is a good thing.

        I will consider reading that book at some point, but I already have a lot of books to read.

        Comment by JW Gray — May 16, 2014 @ 12:53 am

  5. James, you said,

    “I’m not convinced about that (This sets up unsolvable problems for moral philosophy.) We want to know how to think about morality. There are many different angles to consider. Those are important areas of interest to me and people in general.
    Moral theories might be able to make sense out of just about everything we want them for.”

    We can say the problems are unsolved to date and I am not aware of any philosopher who seriously claims proof that questions such as “How am I to live?” or “What is good?” necessarily have definitive answers. Also, as I said above, it seems highly unlikely for a unitary moral theory (about one subject) to account for or justify multiple unrelated phenomena such as cooperation strategies and obligations to ourselves.

    Do you remember what Protagoras told Socrates? Protagoras told Socrates that morality was simple; people have a moral sense because it was needed to obtain the benefits of cooperation in groups. In essence, he was saying that moral behaviors are cooperation strategies. That is just what the science of the last few decades confirms. Socrates rejected this idea as too commonplace and not intellectually satisfying, as one commenter put it. In my view, that was a tragic mistake because it left morality stuck in a state of unresolvable disagreement ever since.
    From sources ranging from the pre-Socratic philosophers to the !Kung bushmen, it appears that it was once commonplace knowledge that the function of our moral sense and morality is to increase the benefits of cooperation in groups.

    Moral philosophy could make great strides if its analysis was informed by the best available empirical data.

    “ Singer isn’t saying that equal consideration requires everyone to always be motivated to help strangers equally to everyone else. That is impractical and he knows it. I already explained that he is dealing with something like an ideal.”

    I do understand that Singer proposes a moral ideal, and expects us all to fall short due to our flawed nature. My criticism of Singer’s proposal is a pragmatic one. I expect Singer’s morality is much less likely to achieve overall well-being goals than available alternatives.

    These alternatives define moral behavior as cooperation strategies. They would be unusually harmonious with our moral sense (and therefore motivating to act morally). They are bound to be harmonious most of the time because these cooperation strategies are what shaped our moral sense. Singer’s proposal would be too often dissonant with our moral sense to become the underlying principle of a cultural morality.

    “We don’t just give up and say to the hell with strangers. Utilitarianism and Singer’s understanding of ethics could give us a reason to think helping strangers is a good thing.”

    I am not suggesting we give up on correcting the way in-groups too often act immorally toward out-groups (including strangers). Quite the opposite: I am suggesting we define the morality of between-group interactions (including with strangers) using Rawls’ position behind a veil of ignorance as a standard for fairness.

    There is another, more visceral and far older, understanding of fairness we can apply to the problem of between-group morality. Fairness can be understood as how we treat moral equals in our in-groups. That is, between-group morality ‘ought’ (in the instrumental sense) to require fairness, but that does not imply equal obligations to family members, our friends, our communities, and people on the other side of the earth. This instrumental ought is justified by the ultimate goal of increased overall well-being.
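
    To make the Rawlsian test concrete, here is a minimal sketch in Python. It is only an illustration: the two candidate rules, the payoff numbers, and the maximin reading of choice behind the veil of ignorance are assumptions made for the example, not anything established above.

        def veil_choice(rules):
            # One common reading of Rawls: behind the veil you do not know which
            # position you will end up in, so pick the rule whose worst-off
            # position fares best (maximin). Payoffs below are made up.
            return max(rules, key=lambda name: min(rules[name]))

        candidate_rules = {
            # hypothetical payoffs to (in-group member, out-group stranger)
            "favor the in-group only": (10, 0),
            "fair between groups": (7, 5),
        }

        print(veil_choice(candidate_rules))  # -> "fair between groups"

    On this toy reading, a rule that is merely best for the in-group loses to one that treats between-group interactions fairly, which is the sense of “standard for fairness” intended here.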

    It seems to me inevitable that moral philosophers’ analysis will eventually be informed by the best available empirical data (see Massimo Pigliucci http://scientiasalon.wordpress.com/2014/04/20/philosophy-my-first-five-years/).

    But I expect a lot of kicking and screaming along the way. Getting back to the topic of your article, I see the biggest barriers to communication between philosophers and people working in the science of morality field being on the philosopher side, not the science side.

    Comment by Mark Sloan — May 16, 2014 @ 2:18 pm | Reply

    • Mark,

      You said,

      Moral philosophy could make great strides if its analysis was informed by the best available empirical data.

      Yes, but I don’t see how this or anything else you said refuted what I was saying. We want to understand how to reason about ethics, about which actions are unjust, about what has value, about what different kinds of value there are. Nothing you have said shows that restricting our discussion of ethics would be a good idea.

      I do understand that Singer proposes a moral ideal, and expects us all to fall short due to our flawed nature. My criticism of Singer’s proposal is a pragmatic one. I expect Singer’s morality is much less likely to achieve overall well-being goals than available alternatives.

      Singer is quite pragmatic. Read what he actually says. He not only expects us to fall short, but he would want us to have policies that appeal to our motivations in a way that would lead to more ethical behavior. Idealized ethics does not contradict being pragmatic at all. It tells us what counts as improvement.

      Let’s say that we could decide to shame people for being unethical. How do we decide if we should do it or not? Well, the fact that it helps motivate more ethical behavior is part of it. Why does it count as “more ethical?” Making people happier and helping people live better lives seems quite relevant.

      These alternatives define moral behavior as cooperation strategies. They would be unusually harmonious with our moral sense (and therefore motivating to act morally). They are bound to be harmonious most of the time because these cooperation strategies are what shaped our moral sense. Singer’s proposal would be too often dissonant with our moral sense to become the underlying principle of a cultural morality.

      That’s not an alternative at all. You seem to be drifting from normative ethics to descriptive ethics. Singer never said not to have cooperation strategies. Cooperation strategies in no way require utilitarianism to be false.

      But I expect a lot of kicking and screaming along the way. Getting back to the topic of your article, I see the biggest barriers to communication between philosophers and people working in the science of morality field being on the philosopher side, not the science side.

      You have said something to that effect before, but I still don’t know why you see it that way.

      Comment by JW Gray — May 20, 2014 @ 6:58 am | Reply

      • “We want to understand how to reason about ethics, about which actions are unjust, about what has value, about what different kinds of value there are. Restricting our understanding of ethics is not necessarily a good idea just because science has important information.”

        James, I also see moral philosophy as seeking “to understand how to reason about ethics, about which actions are unjust, about what has value, about what different kinds of value there are”. I also agree that this is its proper role, and do not think it would be useful to restrict it to just the study of cooperation strategies. Virtue ethics, for instance, is about much more than cooperation strategies and may have a lot to tell us about “How ought we live?” Cooperation strategies are just about means, not ends.

        But traditional moral philosophy has yet to be informed by the best available empirical data. That empirical data for fairness, justice, and moral values has just started to be discovered in the last few decades and is still a work in progress.

        For example, what are the origins and functions (the primary reason they exist) of our sense of fairness and values? Science is providing this knowledge.

        Wouldn’t you find it useful to know the reason (or reasons) that our intuitions are what they are about fairness, justice, and moral concern about harm, freedom, loyalty, respect for authority, and ‘purity’? Might it not be useful to understand why so much of our emotional experience of durable well-being depends on the quality of our cooperative relationships with other people?

        Part of my criticism of Singer is that he does not understand the origins and functions of our experience of durable well-being. Therefore, he proposes a morality that cannot maximize overall well-being, regardless of his good intentions.

        Of course, philosophers might argue that our sense of fairness and moral values ‘ought’ to be something other than what science finds them to be. But what chance is there that philosophers will be right about what ought to be if they are ignorant of what is?

        _____________

        “Well, the fact that it helps motivate more ethical behavior is part of it. Why does it count as “more ethical?” Making people happier and helping people live better lives seems quite relevant.”

        We agree that “Making people happier and helping people live better lives” is highly relevant to designing moral codes.

        However, I argue that Singer, and everyone else in traditional moral philosophy, has insufficient factual knowledge about the origins and functions of our sense of fairness, moral values, and experience of durable well-being to be able to come to definitive conclusions about what those moral codes ought to be. Indeed, that lack of knowledge is the chief reason moral philosophy has not come to definitive conclusions about morality and, on its own, is highly unlikely ever to.

        ______________

        “Singer never said not to have cooperation strategies. Cooperation strategies in no way require utilitarianism to be false.”

        Utilitarianism defines an ultimate goal for advocating and enforcing moral codes. Descriptive science is silent on what our ultimate goals ought to be, so no contradiction with Utilitarianism is possible.

        Descriptive science of morality has normative implications only if people decide to advocate and enforce moral codes that they expect, based on that science, will be most likely to achieve whatever their group ultimate goals are.

        ________

        “I see the biggest barriers to communication between philosophers and people working in the science of morality field being on the philosopher side, not the science side.”
        “You have said something to that effect before, but I still don’t know why you see it that way.”

        Contrast the intellectual challenges on the two sides.

        On the science side, the work in morality uses the same intellectual approach as the rest of science. Results are innately only descriptive. Of course, people can give these results normative power by instrumentally incorporating them into moral codes to be advocated and enforced, justified by being the most likely to achieve whatever their ultimate goal for enforcing moral codes is (perhaps, like Singer, something like maximizing overall durable well-being). I see little confusion in the science of morality field between descriptive and normative.

        On the philosophy side, incorporating the new science of morality results will require a major dislocation in the way moral philosophy is done. That work has hardly started.

        Worse, some of the most prominent professional philosophers who have worked on the science of morality’s normative implications (Richard Joyce, Sharon Street, and Michael Ruse) perversely claim that the science of morality debunks moral realism. (My palm slaps my forehead.) In fact, the science of morality proves a form of moral realism regarding the universal function of enforced moral codes.

        To be fair, when they say things like “Morality is an illusion!” they are referring to the illusion of an external source of normative power (which to me is a crazy idea hardly worth discussion.) I find such claims highly misleading though, because there IS a universal, external source for our past and present moral codes – cooperation strategies that are as real as the mathematics that define them, and this knowledge should be highly culturally useful.

        The philosophy side of this particular discussion is in relative intellectual chaos compared to the science side.

        As Ken Binmore, one of the grand old men in the field, recently said:

        “In my own work, I have given up trying to convert the traditional moral philosophers in the audience who label themselves as rationalists, objectivists, and realists while simultaneously denying that science has anything to contribute to their subject.”

        I have not given up trying yet. There is too much at stake in terms of cultural utility.

        Comment by Mark Sloan — May 20, 2014 @ 10:36 pm

      • Mark,

        You said,

        But traditional moral philosophy has yet to be informed by the best available empirical data. That empirical data for fairness, justice, and moral values has just started to be discovered in the last few decades and is still a work in progress.

        You could be right about that. A lot of scientific data isn’t being used that should be used. One issue is that it isn’t entirely clear what exactly philosophers should be saying about these things or whether anything they say is wrong based on the data. I mostly see science as telling us how to achieve ethical goals in a better way, and there are some examples of that. For example, making one option a default is likely to cause more people to choose that option, and there could be certain choices we should want to encourage people to make.

        When talking about things like capitalism and socialism, I would expect science to have a lot to say about those things.

        When talking about what we should do about our cognitive biases, I think science can inform us about effective ways to mitigate those effects on us.

        For example, what are the origins and functions (the primary reason they exist) of our sense of fairness and values? Science is providing this knowledge.

        Do philosophers even talk about that? It sounds like it is descriptive ethics, which philosophers just tend to not study because it is a scientific issue.

        Wouldn’t you find it useful to know the reason (or reasons) that our intuitions are what they are about fairness, justice, and moral concern about harm, freedom, loyalty, respect for authority, and ‘purity’? Might it not be useful to understand why so much of our emotional experience of durable well-being depends on the quality of our cooperative relationships with other people?

        Maybe it would. I know a little about those things and I’m not sure that it has helped me a whole lot. The question is whether or how I should apply that to my philosophical work. Like I said, it might help us achieve certain goals.

        Part of my criticism of Singer is that he does not understand the origins and functions of our experience of durable well-being. Therefore, he proposes a morality that cannot maximize overall well-being, regardless of his good intentions.

        There are two main things he proposes: One, goals we should try to achieve. Two, ways to achieve those goals. I think mainly he talks about goals we should try to achieve. Are any of those goals somehow the wrong goals to have?

        Of course, philosophers might argue that our sense of fairness and moral values ‘ought’ to be something other than what science finds them to be. But what chance is there that philosophers will be right about what ought to be if they are ignorant of what is?

        I think we know certain things about ethics and we need not study science to know those things. I don’t think science tells us which moral theory is true or what type of goals we should have. How would it do that?

        ____________

        Utilitarianism defines an ultimate goal for advocating and enforcing moral codes. Descriptive science is silent on what our ultimate goals ought to be, so no contradiction with Utilitarianism is possible.

        Then I am confused about what you are saying.

        Contrast the intellectual challenges on the two sides.

        On the science side, the work in morality uses the same intellectual approach as the rest of science. Results are innately only descriptive. Of course, people can give these results normative power by instrumentally incorporating them into moral codes to be advocated and enforced, justified by being the most likely to achieve whatever their ultimate goal for enforcing moral codes is (perhaps, like Singer, something like maximizing overall durable well-being). I see little confusion in the science of morality field between descriptive and normative.

        Sometimes scientists do philosophy. I think Haidt might have done some meta-ethics. He seems to think that his moral psychology has certain normative implications. I think he did say some things that I thought were unwarranted concerning normative ethics.

        Haidt has said obviously false or misleading things about how philosophers view reasoning as well. He said how Plato thought people reasoned well. Maybe Plato was wrong in thinking that philosophers could reason well, but it isn’t clear that he trusted the reasoning of philosophers much either.

        Normative ethics done by philosophers could also have various implications for how scientists should be discussing and studying descriptive ethics. I don’t have a lot of examples in mind, but I don’t have a lot of examples of philosophers doing things wrong based on their failure to incorporate descriptive ethics properly either.

        I agree that it might be happening, but I don’t think what scientists are doing is always done the right way. They also seem to misunderstand philosophy and might not fully grasp how philosophy should apply to what they are doing.

        On the philosophy side, incorporating the new science of morality results will require a major dislocation in the way moral philosophy is done. That work has hardly started.

        Worse, some of the most prominent professional philosophers who have worked on the science of morality’s normative implications (Richard Joyce, Sharon Street, and Michael Ruse) perversely claim that the science of morality debunks moral realism. (My palm slaps my forehead.) In fact, the science of morality proves a form of moral realism regarding the universal function of enforced moral codes.

        I will have to look into that. I did allude to that in the blog post, though. I said, “Some people might think evolution (and other scientific facts) somehow explain away normative ethics—perhaps every belief we have about right and wrong are actually false. Maybe we reason about morality and have empathy because that’s how we evolved, but there are no moral facts. That could be right, but obviously much more would need to be said. We can’t just assume that normative ethics should be rejected without argument. I think we do think we know certain things about normative ethics and it would not be appropriate to throw out those beliefs without a good reason.”

        Haidt seems to do the same thing, though.

        Tim Dean is also an anti-realist, but I don’t think he necessarily uses science to try to debunk moral realism. Of course, Tim Dean might actually be considered to be a moral realist by some as well.

        Comment by JW Gray — May 21, 2014 @ 4:37 pm

      • I wrote this blog post about what Ravi Iyer, a social psychologist, had to say against moral realism, and Haidt responded with pretty much no comment on the philosophy involved. It is not entirely clear that his argument has much of anything to do with science, though.

        Ravi Iyer’s Argument Against “Moral Absolutism”

        Comment by JW Gray — May 21, 2014 @ 4:51 pm

  6. James, one more comment about distinguishing between descriptive and normative:

    In discussions about evolution and morality, I have found that merely pointing out which of the following three topics is being discussed can clarify the descriptive/normative distinction.

    “Evolution of morality” refers to the purely descriptive science of morality about the origins and function of behaviors culturally known as moral. No oughts (normative content), even of the instrumental variety, are implied. This is the “science of morality” that philosophers too often falsely insist, even in the face of vehement protestations, is making naïve normative claims.

    “Morality of evolution” refers to the normative question “Is the process of evolution itself somehow inherently moral?” The consensus is no, because evolution can select for immoral behaviors such as greed and violence as easily as moral behaviors such as altruism.

    “Morality from evolution” refers to normative implications, given some ultimate goal, of the purely descriptive science of morality. For example, using that descriptive science to justify moral norms, such as the Golden Rule (indirect reciprocity), to be advocated and enforced as instrumental oughts based on their likelihood of, for example, increasing overall well-being in the society. This is at least an important part of what modern moral philosophy ought to be doing.
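
    To give a flavor of what the Golden Rule as “indirect reciprocity” amounts to as a cooperation strategy, here is a minimal Python sketch of a toy image-scoring model. The population sizes, payoffs, and reputation rule are assumptions chosen only for illustration; the point is simply that agents who help those in good standing can out-earn agents who never help.

        import random

        def simulate(n_discriminators=80, n_defectors=20, rounds=20000,
                     benefit=3.0, cost=1.0, seed=1):
            # Toy image-scoring model of indirect reciprocity (all parameters
            # are illustrative assumptions). Discriminators help only recipients
            # in good standing; defectors never help. Helping costs the donor
            # `cost` and gives the recipient `benefit`; refusing to help someone
            # in good standing costs the donor their own good standing.
            rng = random.Random(seed)
            n = n_discriminators + n_defectors
            is_discriminator = [True] * n_discriminators + [False] * n_defectors
            good_standing = [True] * n
            payoff = [0.0] * n
            for _ in range(rounds):
                donor, recipient = rng.sample(range(n), 2)
                if is_discriminator[donor] and good_standing[recipient]:
                    payoff[donor] -= cost
                    payoff[recipient] += benefit
                elif good_standing[recipient]:
                    good_standing[donor] = False  # refused a deserving recipient
            disc = [payoff[i] for i in range(n) if is_discriminator[i]]
            defe = [payoff[i] for i in range(n) if not is_discriminator[i]]
            return sum(disc) / len(disc), sum(defe) / len(defe)

        # Discriminators typically end up with the higher average payoff
        # whenever the benefit of being helped exceeds the cost of helping.
        print(simulate())

    Nothing normative follows from a simulation like this on its own; as the three-way distinction above says, it only becomes “morality from evolution” once a group decides, given some ultimate goal, to advocate and enforce a Golden-Rule-like norm because of results like these.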

    Comment by Mark Sloan — May 18, 2014 @ 12:49 am | Reply

  7. James, in response to my statement:

    “For example, what are the origins and functions (the primary reason they exist) of our sense of fairness and values? Science is providing this knowledge.”

    You said:

    “Do philosophers even talk about that? It sounds like it is descriptive ethics, which philosophers just tend to not study because it is a scientific issue.”

    My impression is that most moral philosophy claims about values and fairness are ultimately grounded in consistency with moral intuitions. Knowing the origins and functions of these moral intuitions enables sorting out individual and cultural variations from cross-cultural universals. Knowing the cross-cultural universals of our moral intuitions increases the cross-cultural universality of resulting moral philosophy claims.

    I agree that moral philosophers tend not to talk about, and are often not even aware of, the evolutionary origins and functions of our values and sense of fairness. This is the source of the chief problems in traditional moral philosophy.
    _________________

    “Might it not be useful to understand why so much of our emotional experience of durable well-being depends on the quality of our cooperative relationships with other people?”
    “Maybe it would. I know a little about those things and I’m not sure that it has helped me a whole lot. The question is whether or how I should apply that to my philosophical work. Like I said, it might help us achieve certain goals.”

    Assume you propose morality has a Utilitarian goal of increasing overall well-being.

    So how can the science of morality be useful in the philosophical work of defining the moral code that is most likely to achieve this goal?

    That science shows 1) the critical role of cooperation in groups in producing durable well-being, 2) the ‘moral means’ of achieving ultimate ‘moral’ goals are cooperation strategies (where ‘moral’ refers to morality as the product of evolutionary processes), and 3) the diversity, contradictions, and bizarreness of existing moral codes are primarily due to different definitions of favored in-groups and different markers of membership in those in-groups.

    With this sort of knowledge, the philosophical work of defining the moral code that is most likely to achieve a Utilitarian goal will produce a radically different result than existing versions of Utilitarianism. Standard Utilitarianism’s problems with over-demandingness, predicting consequences, aggregating well-being, ignoring justice, motivating ‘moral’ action, and even defining durable well-being either disappear or are greatly alleviated.
    ______________

    “There are two main things he (Singer) proposes: One, goals we should try to achieve. Two, ways to achieve those goals. I think mainly he talks about goals we should try to achieve. Are any of those goals somehow the wrong goals to have?”

    Yes, Singer’s idea that the best way to achieve Utilitarian goals requires acting impartially is wrong.

    Due to our biology and the nature of cooperation in our physical reality, impartiality cannot maximize overall well-being because 1) much of our experience of durable well-being is produced by cooperative relationships with family and friends who we preferentially cooperate with and support, and 2) game theory shows that preferences for in-groups are almost always necessary to prevent free-riding and exploitation, which both destroy the benefits of cooperation and overall well-being.
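
    The game-theoretic point about free-riding can be shown with a back-of-the-envelope calculation. The Python sketch below uses made-up payoff numbers and an extreme all-or-nothing in-group rule, purely to illustrate the direction of the effect:

        def expected_payoffs(p_helpers, benefit=3.0, cost=1.0, ingroup_only=False):
            # Expected per-interaction payoffs in a well-mixed population where a
            # fraction p_helpers cooperate (pay `cost` to give a partner `benefit`)
            # and the rest free-ride. All numbers are illustrative assumptions.
            if ingroup_only:
                # Helpers give and receive only among themselves, so free riders
                # are cut off from the benefits of cooperation.
                helper = p_helpers * (benefit - cost)
                free_rider = 0.0
            else:
                # Indiscriminate helpers pay the cost on every interaction, while
                # free riders collect the benefit without ever paying it back.
                helper = p_helpers * benefit - cost
                free_rider = p_helpers * benefit
            return helper, free_rider

        print(expected_payoffs(0.5, ingroup_only=False))  # (0.5, 1.5): free riders win
        print(expected_payoffs(0.5, ingroup_only=True))   # (1.0, 0.0): helpers win

    With indiscriminate helping, free riders out-earn helpers and can be expected to spread; with even a crude in-group preference, helping pays better than free-riding. That is the sense in which some partiality protects the benefits of cooperation.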

    But if the existence of in-groups such as family, friends, communities, and so forth is required for maximizing durable well-being, how can we avoid the immorality of in-groups ignoring or exploiting out-groups? We can do that, and continue to maximize overall durable well-being, by applying a requirement for Rawlsian fairness to interactions between groups.

    Singer’s emphasis on impartiality is a mistake. He should be emphasizing fairness and cooperation as the most effective means of achieving his Utilitarian goals.
    ________________

    “I think we know certain things about ethics and we need not study science to know those things. I don’t think science tells us which moral theory is true or what type of goals we should have. How would it do that?”

    Right, descriptive science can tell us how to achieve our ultimate goals, but not what those ultimate goals ought to be.

    However, as I discussed regarding Peter Singer’s proposals, his Utilitarian-goal moral code requires impartiality and apparently was produced without knowledge from science about the critical role of cooperation. Such a code would be radically different from, and much less effective than, a Utilitarian-goal moral code requiring fairness that takes full advantage of science’s knowledge about the origins and function of morality and our experience of durable well-being.
    _________________

    “Haidt has said obviously false or misleading things about how philosophers view reasoning as well. He said how Plato thought people reasoned well. Maybe Plato was wrong in thinking that philosophers could reason well, but it isn’t clear that he trusted the reasoning of philosophers much either.”

    When I said I saw little confusion on the science side regarding descriptive and normative claims, I was aware of one glaring exception – Sam Harris, who is not mainstream and to my mind is not really doing science – and one possible mainstream exception – Jonathan Haidt, who is sometimes ambiguous as to whether he is making descriptive or normative claims.

    I recently attempted to usefully synthesize Haidt’s and Steven Pinker’s views on the science of morality in a short post:

    http://www.thisviewoflife.com/index.php/magazine/articles/would-abandoning-moral-foundations-make-for-a-better-society
    _________________

    “Normative ethics done by philosophers could also have various implications for how scientists should be discussing and studying descriptive ethics. I don’t have a lot of examples in mind, but I don’t have a lot of examples of philosophers doing things wrong based on their failure to incorporate descriptive ethics properly either.”

    Could you explain how ethics done by philosophers could affect how scientists should be studying morality using the normal tools and methods of science? What could an ethics philosopher say that could affect that science in the slightest way?

    On the other hand, help from philosophers could be highly useful in how to talk about morality. But I don’t see that philosophers can be helpful in this matter unless they first understand the science.

    The following are my examples of philosophers doing things wrong based on their failure to incorporate descriptive knowledge about the origins and function of moral behavior:

    Contemporary philosophers:

    1) Peter Singer’s counterproductive focus on impartiality rather than fairness.
    2) Richard Joyce, Michael Ruse, and Sharon Street’s lack of recognition of science’s proof of a form of moral realism regarding the cross-species universal function of ‘moral’ behavior (‘moral’ meaning as encompassed by past and present enforced moral codes).

    Historical philosophers:

    3) Kant: Defining moral behavior as acting according to his categorical imperatives is contrary to morality’s universal function. As a result, Kant’s claim has essentially no cultural utility. At best, it is a flawed heuristic for increasing the benefits of cooperation.
    4) All past Utilitarians: Claiming that the morality of any act is justified by an increase in overall well-being is a factual error. Moral behavior in the form of cooperation strategies is only one means of increasing well-being. This confusion greatly reduces Utilitarianism’s cultural utility because it directly creates numerous problems with over-demandingness, predicting consequences, aggregating well-being, ignoring justice, motivating ‘moral’ action, and even defining well-being or happiness.
    5) Virtue ethicists: Ethics conceived as the answer to the question “How should I live?” is appropriate, useful, and admirable. However, virtue ethics’ cultural utility suffers because it does not distinguish between the two primary categories of ethical behavior 1) cooperation strategies (as exemplified by past and present enforced moral codes) and 2) ethical behaviors that have nothing to do with cooperation strategies, such as obligations to yourself.

    To me, essentially all of moral philosophy’s moral theories to date have been “done wrong” due to a lack of understanding of what morality descriptively ‘is’ (what its function is) as a cross-species natural phenomenon.
    _______________

    “ “Some people might think evolution (and other scientific facts) somehow explain away normative ethics—perhaps every belief we have about right and wrong are actually false. Maybe we reason about morality and have empathy because that’s how we evolved, but there are no moral facts. That could be right, but obviously much more would need to be said. We can’t just assume that normative ethics should be rejected without argument. I think we do think we know certain things about normative ethics and it would not be appropriate to throw out those beliefs without a good reason.”
    Haidt seems to do the same thing, though.
    Tim Dean is also an anti-realist, but I don’t think he necessarily uses science to try to debunk moral realism. Of course, Tim Dean might actually be considered to be a moral realist by some as well.”

    I read Richard Joyce’s, Michael Ruse’s, and Sharon Street’s use of the science of morality to “debunk morality” and did not find their arguments convincing. That is, these arguments did not touch my certainty that there is an external, cross-species universal source of morality’s function – increasing the benefits of cooperation in groups. In that sense, I am a rock solid moral realist. But I do not see a mysterious external source of innate normative power as real.

    Right, Tim did explain to me that he is a moral anti-realist regarding the innate normative power of knowledge from descriptive science or anywhere else. (This is the same sense that Joyce, Street, and Ruse are moral anti-realists.)

    My arguments to him are that this is highly misleading and he should clarify that he is a moral realist in that the function of morality is a natural phenomenon, as real as the mathematics it is based on. (This use of “should” is justified by the unstated goal of increasing cultural utility.) So far, he is unmoved.

    However, I expect he agrees that morality as cooperation strategies is a cross-species universal natural phenomenon and therefore he is a kind of moral realist regarding morality’s function, but he has not said that directly.

    I apologize again for length and some repetitiveness.

    Is there a way I can do the paragraph “indent” you use when quoting someone? I can’t figure out how to do it.

    Comment by Mark Sloan — May 22, 2014 @ 1:34 am | Reply

    • Mark,

      You said,

      With this sort of knowledge, the philosophical work of defining the moral code that is most likely to achieve a Utilitarian goal will produce a radically different result than existing versions of Utilitarianism. Standard Utilitarianism’s problems with over-demandingness, predicting consequences, aggregating well-being, ignoring justice, motivating ‘moral’ action, and even defining durable well-being either disappear or are greatly alleviated.

      I don’t really know what you have in mind. I think you might be attributing something to utilitarians that might not need to be much of an issue. Again, cooperation strategies are great and there’s nothing about utilitarianism that is against such a thing.

      Is utilitarianism too demanding? Not necessarily. What obligations do we have? That could be determined by what types of consequences will end up making us better off. Haidt talked about the importance of gossip and reputation. Well, obviously using those things can help motivate more ethical behavior. Those are the types of things that give us demands in the first place.

      The ideal of utilitarianism to maximize happiness has no perfect end result. We can try to make the universe a better place in perhaps an endless number of ways. We can then look at all the known options and decide which works best based on limited information. We then try to find the best ways to motivate behavior. No one is necessarily required to do the very best option insofar as it might not be practical. The “obligation” part could come from negative consequences that seem to work well.

      Yes, Singer’s idea that the best way to achieve Utilitarian goals requires acting impartially is wrong.

      I think his point is that utilitarianism itself is impartial. If you look at a situation where a person lets animals suffer or where they help animals suffer less, which one is better? The one where animals suffer less is the better one.

      Does that mean we should punish everyone who doesn’t help animals suffer less? Nope. We would have to know which response to our behavior works out best.

      Due to our biology and the nature of cooperation in our physical reality, impartiality cannot maximize overall well-being because 1) much of our experience of durable well-being is produced by cooperative relationships with family and friends who we preferentially cooperate with and support, and 2) game theory shows that preferences for in-groups are almost always necessary to prevent free-riding and exploitation, which both destroy the benefits of cooperation and overall well-being.

      Those are practical concerns, but they do not invalidate the fact that everyone’s interest/well being counts. It’s not like ethics is all about me and my family. How stupid would it be for Singer to tell people that his family and friends are the ones we should help the most?

      Did Singer say that people shouldn’t care for their own kids and provide their own kids with food? Did he say everyone must feed every child equally? I think not. That would be stupid. You are taking impartiality to an extreme in a way that was not necessarily intended.

      Could you explain how ethics done by philosophers could affect how scientists should be studying morality using the normal tools and methods of science? What could an ethics philosopher say that could affect that science in the slightest way?

      Scientists say that they are studying morality. We could wonder what exactly they mean by that and so forth. Philosophers have worked on trying to clarify various concepts for quite some time and might have something to contribute.

      Not sure how many scientists work on things like the Trolley problem, but I wonder if they really understand deontology properly and so forth.

      The following are my examples of philosophers doing things wrong based on their failure to incorporate descriptive knowledge about the origins and function of moral behavior:

      Contemporary philosophers:

      1) Peter Singer’s counterproductive focus on impartiality rather than fairness.
      2) Richard Joyce, Michael Ruse, and Sharon Street’s lack of recognition of science’s proof of a form of moral realism regarding the cross-species universal function of ‘moral’ behavior (‘moral’ meaning as encompassed by past and present enforced moral codes).

      Historical philosophers:

      3) Kant: Defining moral behavior as acting according to his categorical imperatives is contrary to morality’s universal function. As a result, Kant’s claim has essentially no cultural utility. At best, it is a flawed heuristic for increasing the benefits of cooperation.
      4) All past Utilitarians: Claiming that the morality of any act is justified by an increase in overall well-being is a factual error. Moral behavior in the form of cooperation strategies is only one means of increasing well-being. This confusion greatly reduces Utilitarianism’s cultural utility because it directly creates numerous problems with over-demandingness, predicting consequences, aggregating well-being, ignoring justice, motivating ‘moral’ action, and even defining well-being or happiness.
      5) Virtue ethicists: Ethics conceived as the answer to the question “How should I live?” is appropriate, useful, and admirable. However, virtue ethics’ cultural utility suffers because it does not distinguish between the two primary categories of ethical behavior 1) cooperation strategies (as exemplified by past and present enforced moral codes) and 2) ethical behaviors that have nothing to do with cooperation strategies, such as obligations to yourself.

      Not sure I agree with many of these examples, but it would be hard to talk about them all in detail. I did say a little about my view of utilitarianism and Singer.

      I am also a bit confused about this proof of moral realism. I would think moral realism would actually be on the normative ethics side of things because it is about facts in the normative category.

      It could be that the normative could reduce to the descriptive in some way, though.

      My arguments to him are that this is highly misleading and he should clarify that he is a moral realist in that the function of morality is a natural phenomenon, as real as the mathematics it is based on. (This use of “should” is justified by the unstated goal of increasing cultural utility.) So far, he is unmoved.

      He talks about applied ethics like he is a realist. He admits that rational people are better off agreeing to live by certain standards (and have some motivation from empathy).

      One question that I think we should try to pose to Tim and those who see things his way is what exactly “moral realism” is supposed to be. There are different ways people think about it and there is some gray area.

      I suspect Tim is also an anti-realist about rationality (another normative domain), so he might not be impressed with the idea that moral facts could be based on facts of rationality.

      To get the paragraph indent, wrap the quoted text in blockquote tags: <blockquote> before the quote and </blockquote> after it. It is basically HTML, I think.

      Thanks for all the thoughtful comments. I hope you continue to add conversation to my blog posts (and can certainly continue this discussion as well).

      Comment by JW Gray — May 22, 2014 @ 5:53 am | Reply

    • Mark, we did talk a little with Tim Dean about his anti-realism before; the discussion can be found here: http://ockhamsbeard.wordpress.com/2011/12/22/the-basis-for-morality/

      This is an answer he gave, which is pretty interesting:

      James, perhaps I’m mistaken in thinking you once mentioned that my view might be compatible with a form of realism. And I agree wholeheartedly that the realism/anti-realism distinction is by no means clear cut.

      My objection to moral realism is the notion of uniquely prescriptive ‘moral facts’ which are distinct from non-moral descriptive facts. I think there are only non-moral facts – including facts about cooperation, psychology and human interests – and our contingent desire to pursue those interests.

      If one wanted to call facts about cooperation, or facts about how to behave socially in order to further our interests as social creatures, as ‘moral’ facts – which would constitute a weaker sense of ‘moral fact’ – then perhaps I could be a realist. But I don’t believe there are any objective facts that are intrinsically prescriptive or binding or motivating.

      Comment by JW Gray — May 22, 2014 @ 5:58 am | Reply

