Ethical Realism

November 25, 2010

Can Morality Be Known Through Science?

Can science lead to moral knowledge? If so, moral naturalism is true.

Naturalism is philosophers’ jargon for “based on natural science.” “Moral naturalism” is the view that morality is part of the reality studied by science (physical reality) and can be known by science; more specifically, it has become jargon for the view that there are moral facts and that they can all be studied by science.1 “Empirical” knowledge (or justification) is knowledge attained through observation and experimentation (the scientific method).2 Naturalism is almost synonymous with “empiricism,” the view that we can know everything from observation—and “moral empiricists” would think that all moral knowledge is attained through observation and experimentation.

If moral naturalism is true, then there are moral facts—morality exists as part of the world independently of our opinions. Morality wouldn’t merely be a human fabrication, social contract, or “cultural custom.” Additionally, we would have a way to know about moral facts, and it is quite a reliable method based on our experience with scientific progress so far.

If moral naturalism is true, then there are various questions that need to be answered, which are often taken to be objections to naturalism:

  1. How can we know that a moral fact is identical to a nonmoral fact? Whenever we reduce moral facts to nonmoral facts, we can still ask, “But is it good?” This is known as the open question argument.
  2. How can science give us moral knowledge? Science can only give us facts about objects rather than values—and these are two totally different things. “You can’t get an ‘ought’ from what ‘is.’”
  3. How can we observe moral facts? Perhaps our observations can be explained without appealing to moral facts.
  4. Do moral facts explain anything or cause anything to happen? If moral facts are natural, would everything in the natural world be exactly the same even if they didn’t exist? If so, moral facts might be useless; morality itself would have no influence or power over us.

I will discuss two sorts of moral naturalism, the above questions, and briefly how moral naturalism might relate to Sam Harris’s The Moral Landscape.

Two Sorts of Moral Naturalism

There are two sorts of moral naturalism: Reductionism and non-reductionism.

Reductionism

Reductionism states that moral facts and properties are nothing other than certain nonmoral facts and properties. For example, “good” might mean “pleasure,” and “right” might refer to “whatever produces the most pleasure and the least pain.”

Given reductionism, we can know that an act is right or wrong based on our observations. For example, we could see that torturing a child is wrong because it causes severe pain when better alternate courses of action are possible.

Reductionism has been greatly successful in science so far, so we could suspect that it could be successful for morality for that same reason. For example, H2O is water. We found out that what we call “water” was really H2O.

“Moral naturalism” is often equated with “reductionism” because that has been the historical position within philosophy, but they are not equivalent. Consider that thoughts don’t seem to be equivalent to brain states. Nonetheless, thoughts are “natural” and we study them within natural science (psychology).

Non-reductionism

Non-reductionism or “emergentism” is the view that moral facts and properties depend on non-moral facts but are not identical to them. It could be that “badness” exists (in part) because pain exists. Whenever there is pain, something bad is happening to someone. However, “bad” doesn’t refer to pain itself. This is much like how thoughts seem to exist (in part) because of brain states, but the word “thought” doesn’t refer to brain states.

Non-reductionism is a view about reality rather than knowledge. Moral non-reductionism is the view that moral facts are “more than the sum of their parts,” and are irreducible to nonmoral facts. Moral non-reductionism doesn’t tell us how we can know about moral facts or justify them.

Some people equate “non-reductionism” with “intuitionism” and “non-naturalism” because all three views have been unified in the past in philosophy. However, these are all different positions.

Intuitionism is the view that we can know moral facts via “intuition” or “noninferential reasoning” or “reflection.” Intuitionism doesn’t tell us whether moral facts are natural (physical) parts of the world, and it does not tell us that moral facts can’t be known through observation.

Non-naturalism takes two forms: one view denies that moral facts are part of the reality studied by science, and another denies that moral facts are knowable by using science.

What I call “non-reductionism” here is actually “naturalistic non-reductionism” because I am only concerned with naturalistic knowledge here—the view that we can know moral facts through observation. Naturalism, non-reductionism, and intuitionism could all be compatible.

Questions

1. How can we know that a moral fact is identical to a nonmoral fact?

It’s not entirely clear how we know that “badness” is nothing but “pain” or the other way around. How could we know such a thing? I don’t have a good answer for this question, but there must be some way that we can know that H2O is nothing but water. Is it even possible that “badness” is nothing but “pain?” I don’t think we can rule it out.

G.E. Moore’s open question argument illustrates how problematic naturalistic reductions (or identity relations) are by saying that whenever someone gives an identity relation X (of goodness, badness, rightness, etc.), it makes sense to ask, “This is X, but is it good?” For example, someone could say that “badness is pain.” In that case we could ask after being burned severely, “This burning feeling is pain, but is it bad?”3

Moore’s open question argument can illustrate our uncertainty—it’s not obvious that “pain” is identical to “bad,” but many people seem to think that the open question argument proves such identity relations to be impossible. This is absurd because the “argument” would even apply to ordinary naturalistic reductions, such as “H2O is water.” Someone could ask, “Yes, this is water in my glass, but is it H2O?” We would simply think that the person knows very little about water for asking such a question.

I agree that we simply don’t know that “badness” is nothing but “pain” and I suspect that there is no good identity relation for moral facts. However, that doesn’t mean that naturalism is probably false because non-reductionism is another option.

2. How can science give us moral knowledge?

In particular, isn’t it impossible for science to tell us what ought to be the case because it can only tell us what is the case? Aren’t what ought to be and what is two totally different things—facts and values? Isn’t it true that you can’t get “ought” from “is?”

One simple answer is that there are moral facts. Facts and values aren’t two different things. Values are a subclass of fact. In particular, pain does seem to be intrinsically bad (bad just for existing). It is a fact that pain exists, so it’s a fact that something bad exists. Therefore, facts about what is the case can tell us something about morality.

Moreover, what ought to be the case seems to relate to intrinsic values. All else equal, it is right to help people or produce something good (such as pleasure), and it is wrong to hurt people or produce something bad (such as pain).

3. How can we observe moral facts?

We can see that someone is doing something wrong when they torture a child, but perhaps that “observation” is nothing other than our own prejudice, cultural indoctrination, emotional response, etc. In that case our observations would be exactly the same even if there are no moral facts. Morality could be nothing other than a cultural tradition.

Although it is possible that moral observation is delusional, it is also possible that nonmoral observation is delusional. We could be dreaming right now, or our minds might be controlled by machines, etc. This sort of skepticism doesn’t seem plausible concerning nonmoral observations, but do we have any reason to doubt that moral observation is based on moral reality?

I think that there is at least one case where moral observation is very plausible—when we experience intense pain. We observe that such experiences are bad. Such an observation seems like it can’t be delusional because even a hallucination of pain would be bad. Delusions and hallucinations are deceptive, but pain can’t be deceptive. We directly experience the reality of pain, but deceptive experiences trick us into thinking something is real or true. (For example, I can dream that I see my hand. I wouldn’t really be seeing my hand, but I would be tricked into thinking I am.)

4. Do moral facts explain anything?

If moral facts are identical with nonmoral facts, then they would explain something because the nonmoral facts explain something. For example, people would want to avoid pain—and “bad” might be nothing but “pain”—so the fact that something is “bad” would explain why we want to avoid it insofar as we want to avoid pain. (The moral property “bad” would cause us to want to avoid touching fire insofar as “bad” is “pain.”) Still, we might wonder if saying pain is “bad” adds anything to the universe. Would everything be the same if “bad” wasn’t “pain?” Does the fact that something is bad explain anything insofar as it is bad (apart from the fact that pain is involved)?

I find these questions troubling for the reductionist, but not for the non-reductionist. The non-reductionist can make it clear that the “badness” of pain explains a lot—we wouldn’t care about pain if it wasn’t bad. In fact, if nothing is bad, then pain couldn’t exist. Therefore a world with no moral facts would be devoid of pain and things would be quite different. (Badness causes things to happen insofar as it is part of the meaning of “pain” and we want to avoid pain precisely because it feels bad.)

What about Sam Harris’s The Moral Landscape?

Sam Harris claims that we can attain moral knowledge from science—but he defines “science” as “rational study.”4 Therefore, Harris isn’t saying that moral naturalism is true. There might be ways other than observation for knowing about certain moral facts. However, moral naturalism, if true, would vindicate Harris’s book. It would not only be possible to know moral facts through reason, but it would also be possible for there to be a moral science similar to psychology and other social sciences.

Conclusion

Non-reductionism is not only compatible with naturalism; it is also more intuitive and initially plausible than reductionism. The questions that are supposed to be troubling for a naturalist are actually only troubling for the reductionist. Naturalistic non-reductionism is plausible given our experiences and the powerful replies it can offer to the so-called objections presented against naturalism.

Many people want there to be “evidence” of moral facts and argue that moral facts probably don’t exist given the lack of positive evidence. However, there is evidence of moral facts. For example, we experience pain as bad. The potential fact that “pain is bad” is a highly plausible truism that few people would disagree with. Those who deny that moral facts exist still refuse to be tortured to prove their point. Instead, they often try to explain why they “hate pain” without there being any “moral facts.” I don’t find any such position to be plausible. I have considered such alternatives in my essay, An Argument for Moral Realism.

Notes

1 The belief in moral facts—that morality is part of reality itself and not merely a human fabrication—is also known as moral realism. “Moral naturalism” usually refers to a sort of moral realism, but it is possible to be a naturalist and reject moral realism.

2 I will often refer to “knowledge” when the word “justification” might be more appropriate. It is possible to have justified moral beliefs without having “moral knowledge” because justified beliefs—though reasonable—could be false.

3 In this case we might actually think it’s impossible to think that pain isn’t “bad” despite the fact that we do have a good reason to reject that “bad” is nothing but “pain.”

4 “Some of my critics got off the train before it even left the station, by defining “science” in exceedingly narrow terms. Many think that science is synonymous with mathematical modeling, or with immediate access to experimental data. However, this is to mistake science for a few of its tools. Science simply represents our best effort to understand what is going on in this universe, and the boundary between it and the rest of rational thought cannot always be drawn” (Moral Confusion in the Name of “Science”, March 29, 2010.)

48 Comments

  1. Hi James (if I may),

    I am surprised by what you write about a general definition of moral naturalism as a kind of moral realism. You write:

    Moral naturalism” is the view that morality is part of the reality studied by science (physical reality) and can be known by science

    I take it that moral facts are, ultimately, studied by scientific theories, not by moral theories. But many naturalistic moral realists would not take such a view. For instance, some reductionists, such as Peter Railton, would not say that all moral facts are investigated by non-moral branches of science (he says that he is not committed to scientism). Non-reductivists, such as Boyd, would say that only moral theories can tell us what morally good states of affairs are (see his analogy between moral theories and engineering theories).

    This affects the second worry you raised in your post. Naturalistic moral realists would just say that non-moral science does not tell us anything about morality though moral theories might get some support from it.

    Am I missing some of your points?

    Ryo

    Comment by Ryo — November 25, 2010 @ 2:38 pm

    • Ryo, Thanks for the comments. I think you misunderstand what I am saying.

      Moral naturalism states that morality can be known by the scientific method. It’s observable/empirical. I never said anything about getting morality from non-moral scientific theories.

      Non-moral branches of science might not be able to study morality. Morality could require its own branch of science just like psychology has to have its own branch of science. Also, naturalists might think everything can be known by the scientific method, but they might think that philosophy itself is part of natural science.

      The open question argument (worry 2) is still relevant because knowledge of identity relations is being questioned, which is relevant to the scientific method (can the scientific method even give us identity relations?) and it could be used to question the possibility of identity relations entirely. I discuss the issue more in the comments below.

      People like Boyd who reject reductionism and say that morality is “irreducible” have avoided the open question argument because they don’t require identity relations.

      Comment by James Gray — November 25, 2010 @ 7:12 pm

  2. I agree that we simply don’t know that “badness” is nothing but “pain” and I suspect that there is no good identity relation for moral facts.

    Why do you suspect that?

    Comment by josefjohann — November 25, 2010 @ 3:47 pm

    • josefjohann,

      Thank you for the question. Why do I suspect that “bad” doesn’t mean “pain”?

      First, they seem to mean two different things to me. I could know what “bad” means without knowing what “pain” means.

      Second, it’s not clear that everything we think of as “pain” is the same thing. Emotional and physical pain might be two different things and they are both bad.

      Third, if there is an identity relation for “bad” then it will probably be a conjunction of various bad things. Pain isn’t necessarily the only bad thing. We would define “bad” as “physical pain” and “emotional distress” etc. I am not convinced that a conjunction can be identical to something else in physical reality.

      I can’t say that “bad” isn’t “pain.” It’s possible. I just don’t think it is at this point in time based on the current information available.

      Comment by James Gray — November 25, 2010 @ 7:02 pm

      • James,

        In reply to your third point, couldn’t one make similar comments about motion? Yes, clapping my hands isn’t necessarily the only kind of motion. Many different things count as motion. We could make a conjunction of all the different possible things which count as motion.

        In both the cases of “motion” and “bad,” I would say it is better to find something common to all the different things we would call bad (including pain but also other things). We could find that “bad” is naturalistic, embodied in physical things in various ways, and also that badness can be instantiated at the physical level without needing to be emergent.

        Comment by josef johann — November 25, 2010 @ 7:25 pm

  3. josef johann,

    I don’t think motion is “clapping hands” and “jumping” and…

    Yes, there can be something underlying all “bad” things and pain is a candidate, but what about “good”? I think that existence is a plausible candidate for intrinsic value as well as pleasure.

    It’s implausible that we can find out that two totally different things can be intrinsically good, but badness could only mean one thing.

    Robert Audi has also strongly argued against the view that all bad things are “pain.” He argues that pain is “aspectually bad” (a bad element of an experience), but an experience that has pain and no pleasure can be “episodically good” (good overall), or episodically bad with pleasure and no pain. For example, a criminal who committed horrible crimes and is never “brought to justice” could live a wonderful life of luxury and happiness. That criminal seems to be living in an intrinsically bad situation. This is found in Audi’s The Good in the Right. My review of his book can be found here: http://ethicalrealism.wordpress.com/2010/10/19/review-of-robert-audis-the-good-in-the-right/

    Comment by James Gray — November 25, 2010 @ 7:56 pm

  4. James,

    Briefly, I don’t want to argue that “bad” is “pain” necessarily, but that whatever “bad” is can be embodied by the physical, and “pain” which we have some physiological understanding of, serves as the type of thing I am ready to call “bad” (or perhaps an instance of bad).

    I also don’t think motion is a concatenation of clapping and jumping… though I was unclear about this. I meant that we could similarly argue against motion by trying to show that concatenating various instances of motion is a bad way to show what motion is. It seems obviously true that motion is real, and it is about physical things, so the problem of instantiating it in physical things is, I think, surmountable.

    I think, at least when definitions are strict, that wherever there are various things under a common label, there must be something they hold in common. Whatever that commonality is may be isolated as the one thing essential to the definition.

    So I would give the same treatment to the “good” (or intrinsically valuable) that I just gave to the bad, and say there is one good after all. Alternatively I could say there are various goods and various bads because they can embody the one thing that good (or bad) is in various manners. It looks to me like this can be happily done without emergentism.

    I will read your review of Audi’s book when I can.

    Comment by josef johann — November 25, 2010 @ 8:28 pm

    • You admit that “pain” and “bad” might refer to two different things (and you might therefore accept that “bad” is emergent and irreducible), but you resist the idea that different things can be “good” without an identical nonmoral property. I don’t know why you think this.

      I find it likely that there are emergent properties. Thoughts are one good example of this. It is likely that some emergent properties are multiply-realizable. There is no strict one way street — two different things can cause the same property or event. That means that one animal’s thoughts can be caused by somewhat different conditions than another’s. If that is true about “goodness,” then it can be that different things can be “good” without a single underlying nonmoral property.

      Also consider that physical reality has mathematical properties. One person can be twice as tall as another. It’s not likely that these properties are emergent or reducible. It’s also unlikely that mathematics requires one underlying physical state. It is very plausible to think that quite different physical states can have the same mathematical property without there being an underlying identical property. There can be two people or two electrons, and so on.

      Mathematics is an example of properties that are physically instantiated without an underlying identical non-mathematical property needed. I don’t know that morality and mathematics are alike in every way, but I find it likely that they can be alike in the sense of one property being instantiated by two different states of affairs (that lack a nonmoral/nonmathematical property).

      Comment by James Gray — November 25, 2010 @ 8:48 pm

      • You admit that “pain” and “bad” might refer to two different things (and you might therefore accept that “bad” is emergent and irreducible), but you resist the idea that different things can be “good” without an identical nonmoral property. I don’t know why you think this.

        I intended to say that pain might be an instance of bad rather than being synonymous with bad.

        Comment by josef johann — November 25, 2010 @ 8:54 pm

  5. Hmm… That wasn’t brief at all.

    Comment by josef johann — November 25, 2010 @ 8:33 pm

    • Depends how you define “brief.” I don’t mind a lengthy discussion, but I sometimes have to find the time to respond.

      Comment by James Gray — November 25, 2010 @ 8:50 pm

  6. josef johann,

    I intended to say that pain might be an instance of bad rather than being synonymous with bad.

    “Evening star” and “morning star” might not be synonymous, but they are identical. Are you saying that about “bad” and “pain?”

    If so, then I already made it clear why I currently reject this sort of identity relation.

    You said, “I think, at least when definitions are strict, that wherever there are various things under a common label, there must be something they hold in common.” I let you know why I don’t agree with this statement and I find it puzzling.

    Also, I slightly edited my reply after you read it because you got to it so quickly and I wanted to clarify myself.

    Comment by James Gray — November 25, 2010 @ 8:59 pm

    • A nickel is an instance of money without being synonymous with money. I think pain might similarly be an instance of bad without being synonymous with bad.

      All I have time for today!

      Comment by josef johann — November 25, 2010 @ 9:40 pm

      • I agree that pain is an instance of something “bad” without being synonymous (or identical) with “bad.” There are potentially other instances of bad (emotional distress and physical pain might be two different instances of something “bad”). That’s pretty much emergentism (the view that moral facts or properties are irreducible).

        Comment by James Gray — November 26, 2010 @ 5:17 am

      • (in reply to this comment)

        James,

        My understanding of an emergent property is that it has this underlying material stuff, and when that stuff is configured the right way, the emergent property comes into being, and when configured the wrong way the emergent property goes out of being. But the emergent property is not reducible to any particular aspect of its underlying configuration.

        I believe there are things that can be instantiated in a wide variety of ways without necessarily being emergent. I used motion and money as examples before, but maybe a Turing machine might be a better example.

        There are a wide variety of ways to create something that does what a turing machine does (laptop, iphone, elaborately structured k’nex set), yet “turing machine” has a rigorous definition, and the explanation of why all these different things are Turing machines is that they do something we can capture in formal description.

        And the property of “being a Turing machine” is reducible too, i.e. it has an anatomy: a set of states, a specified start state, an alphabet, and a table of transition functions.

        So long as I can produce some sort of example of a non-emergent thing that is nonetheless instantiated in a variety of ways (i.e. so long as it’s possible that “bad” could have a formal description that captures why seemingly different things are bad), I think that gives good reason to believe that “bad” itself could be non-emergent yet instantiated in a variety of ways.

        Comment by josef johann — November 28, 2010 @ 9:50 pm

      • kind of repeated myself at the end there…

        Comment by josef johann — November 28, 2010 @ 9:53 pm

  7. What is your definition for “pain” and “bad”?

    I see pain as being more specific than bad. Bad is very ambiguous. A tree falling over in the wind and crushing your car is an example of something that is bad. A giant chemical sludge spill that contaminates a huge river’s ecosystem is also bad. A train wreck killing 200 people is bad. The stock market crashing is bad. Getting poor grades is bad.

    Bad is just an adjective to describe something.

    Pain, it seems, is in all cases a hurt. It could be mental and/or physical hurt.

    Pain has

    Comment by Mike Krueger — November 27, 2010 @ 11:54 am

    • I tend to use the word “pain” to refer to any sort of suffering or negative experience, but people commonly use it to refer to “physical hurt.” The word “suffering” might be better. Pain is an experience, so it is related to our minds. Pain with no thoughts or consciousness wouldn’t be pain. A damaged leg of a brain dead person wouldn’t cause pain.

      I don’t know that it’s possible to define the word “bad” in a satisfying way. The use of “bad” here tends to refer to “intrinsically bad.” The idea is that there are things we should try to avoid. If something is bad, then it would be better for it not to exist. It “makes sense” to try to avoid bringing about something if it would be bad.

      One way we can discuss what it means by being “intrinsically bad” is by examples. One example is physical pain. Pain is something that seems to make our lives worse, make the world worse, would be bad just for existing, etc. Emotional distress seems to fit into here as well.

      I don’t know that definitions of pain (or thoughts, or color experiences, etc.) need to be given in any way other than by examples. We experience that pain is bad, that we have thoughts, and so on. A poet might do a better job at describing these things than anyone else. It might be impossible to define pain in a way that would be understood by a person who never actually felt it.

      Comment by James Gray — November 28, 2010 @ 5:53 am

      • I was reading this and the comments. I was feeling like you both had different definitions of bad, or maybe I did and wanted it clarified. Thanks.

        Comment by Mike Krueger — November 28, 2010 @ 11:58 am

  8. I liked this and agree with you. I hope that universities start programs for us to learn more. I also agree with your stance on Sam Harris’s The Moral Landscape.

    Comment by Mike Krueger — November 28, 2010 @ 1:37 pm

  9. I think of the scientific method as depending on things that are in some way observable. It at least includes a process whereby a hypothesis is formed and then tested, and the results of the test compared with the predictions of the hypothesis; if there is disagreement, the hypothesis is revised or rejected.

    It seems that for there to be a moral science, it would need to include the identification of a set of observable things that are considered to be indicators of good and indicators of bad, and to develop and carry out tests of existing moral theories (or develop new ones). It doesn’t seem likely that such a science would be particularly successful if these ideas do not include concrete measurables that exist and can be compared without respect to individuals’ opinions of them.

    I expect some of the most significant challenges to potential practitioners would be in establishing these observables, and in finding a way to carry out tests whose results could be trusted.

    What is your view on this, and do you think that such an approach to moral questions is a) feasible and b) would lead to new understanding?

    My own view is that if the proposed discipline did not include observables, then while you could debate whether or not it’s rightly considered a science, I don’t believe there could be anything new to be expected from it that we don’t already have from philosophy.

    Comment by James — November 28, 2010 @ 3:58 pm

    • I discuss how moral facts seem observable above in the section “How can we observe moral facts?” I don’t think that we can measure moral facts using numbers, but we can’t measure thoughts or experiences in numbers either. That didn’t stop psychology from being a branch of science. One highly scientific part of psychology is the relationship the mind has to the brain. We can study the brain to learn some things about the mind. This might work for morality as well. For example, insofar as pain is bad and can be studied through the brain, we can study bad using science.

      I can’t say what we might find out by studying morality “in the lab,” but moral philosophy itself might be scientific. We can learn a lot about morality through our experiences and discussing those experiences with others. This is similar to how psychologists often rely on reports of experiences.

      Comment by James Gray — November 28, 2010 @ 8:48 pm

  10. Mike Krueger,

    We both are talking about intrinsic value, but we are trying to figure out how intrinsic value relates to the physical world. We might be talking past each other to some extent. I don’t fully understand his position.

    Comment by James Gray — November 28, 2010 @ 8:12 pm

    • josef johann,

      Yes, if it’s not emergent, then it’s reductionistic. It is possible for “bad” to be reductionistic. That’s what I reject.

      You seem to want to reject that “badness” is emergent and that “badness” is an identity relation. I don’t know what that means. If water is (nothing but) H2O, then the two things are identical. If a cell phone and a laptop are both “Turing machines,” then they are “nothing but” symbol manipulation insofar as they are both Turing machines. That sounds like an identity relation to me.

      Comment by James Gray — November 28, 2010 @ 11:46 pm

      • James,

        It is possible for “bad” to be reductionistic. That’s what I reject.

        If you have any posts where you elaborate on this, or would make some in the future, they would interest me.

        You seem to want to reject that “badness” is emergent and that “badness” is an identity relation.

        I certainly want to reject the former. It may be that there is a formal description of what bad is (it might, for instance, be a formal description of a brain state, which can then be instantiated in any number of ways), and it may be that bad has an identity relation to that. Though I am not well enough versed in philosophical vocabulary to know whether a “formal description” is real enough that bad can be real by having an identity relation to it.

        I think bad is real when there are actual instances of it. And bad remains meaningful (as descriptions are meaningful) even when no real instances of it exist.

        Comment by josef johann — November 30, 2010 @ 4:18 am

      • Me: It is possible for “bad” to be reductionistic. That’s what I reject.

        If you have any posts where you elaborate on this, or would make some in the future, they would interest me.

        Reductionism vs non-reductionism is pretty much what we’ve been discussing here. I don’t think there is a knock-down argument against reductionism, but I don’t see how it can work. I think that pain is intrinsically bad, but if “intrinsically bad” means nothing more than “pain,” then it’s not clear why it’s rational to want to avoid pain and help other people avoid pain.

        It’s not clear to me how realist reductionism is so different from the anti-realist variety. It’s not clear how reductionism and “eliminative reductionism” can be different. When I find out that water is H2O, the natural response seems to be, “So, water doesn’t really exist. It was just H2O all along!” Our experience of water seems to be “deceptive” or little more than a product of our biology. It doesn’t show us the “reality” of water. If badness is like that, then it doesn’t really exist either.

        I did argue that the badness of pain is irreducible at http://ethicalrealism.wordpress.com/2009/10/07/an-argument-for-moral-realism/ and the argument relies on our understanding of the qualia (feel or first person experience) of pain.

        Comment by James Gray — November 30, 2010 @ 5:57 am

  11. josef johann,

    You said,

    So long as I can produce some sort of example of a non-emergent thing that is nonetheless instantiated in a variety of ways (i.e. so long as it’s possible that “bad” could have a formal description that captures why seemingly different things are bad), I think that gives good reason to believe that “bad” itself could be non-emergent yet instantiated in a variety of ways.

    This might be right, but I don’t see how a reductionistic or non-emergent understanding of “bad” (or “good”) can be satisfying in this way.

    Also, keep in mind that we are dealing with “moral realism” and “intrinsically bad.” It is also possible that our word “bad” is nothing more than a convention of language, a human fabrication, etc. That sort of reductionism is even less plausible (to me) than the identity relations of moral naturalism.

    Comment by James Gray — November 29, 2010 @ 12:42 am | Reply

  12. >>What is your view on this, and do you think that such an approach to moral questions is a) feasible and b) would lead to new understanding?

    I do think it is feasible, which is one of the reasons I have supported Sam’s book so much. I also think it will, for most people, lead to a new understanding of what is good for people and society. Think of when we start having thousands of studies done on the relationship of “X” to the well-being of an individual, kind of like the studies done on child abuse. Yes, it is bad for the individual. It is also bad for society, since the victims tend to create more victims. It may turn up things that are very much in a gray area. Those things that fall toward the bad end of a curve could help us allocate funds for education or create policies. At the very least, we would have a society with more information to help us be more rational than before. Even if very little falls into the “bad” category, we as humans could be making better-informed choices.

    I can’t think of a defining example, so I will pick this: what if they study tattoos? The science wouldn’t necessarily come back and say tattoos are good or bad. It might come back with how the number of tattoos, or the degree of coverage, affects happiness, friends, close friends, family, earning potential (overall and by field), marriage, infidelity, health, and mental health, compared with not getting a tattoo, etc.

    So when you do consider getting a tattoo, you will be better informed about your choice and its consequences.

    Comment by Mike Krueger — November 29, 2010 @ 2:51 pm | Reply

    • Then I guess the gray areas and points of definite negative impact would be identified, so that we could have campaigns to educate children and the public to think about their choices.

      Also, yes, I would expect this science to have facts that are observable.

      Comment by Mike Krueger — November 29, 2010 @ 2:56 pm | Reply

  13. James,

    As you may remember from previous conversations, my position is that science can tell us quite a lot about what the function of moral behavior ‘is’ (why it exists in human societies). Further, this function represents a descriptive moral fact, a moral principle that I think has potential as the basis of a workable secular morality.

    I’ve posted Part 1 (of 2) of my summary of my position (in an informal style) on an open philosophy forum where I have gotten some useful comments. I am pretty well satisfied with it.

    If you had a few minutes, you might give it a read. Any comments would be appreciated.

    Thanks,

    Mark

    http://forums.philosophyforums.com/threads/part-outline-of-a-proposed-workable-secular-moral-system-44410.html

    Comment by Mark Sloan — November 30, 2010 @ 3:00 am | Reply

    • Mark,

      I am assuming that you are hoping to describe morality in an anti-realist way, so your view would be a rejection of “moral naturalism” as the term is used above. You define morality in terms of cooperation because morality seems to exist in order to bring about mutual benefit.

      Here are some of my thoughts:

      One, although you say that game theory seems to help us understand “pagan virtues,” I’m not sure that I agree. Nietzsche’s Beyond Good and Evil and Genealogy of Morality describe pagan (Greek) morality as an individualistic and selfish morality. It would not, then, exist for the benefits of cooperation and so on. Nietzsche would probably think you are trying to reinterpret master morality in terms of slave morality.

      Two, your understanding of morality seems to be based on utilitarianism. You claim that morality exists for “results” rather than respect. If you want to “discover” what morality is all about, then it might not be a good idea to assume utilitarianism is correct. If you want to defend utilitarianism, then you are free to do so.

      Three, you speak of benefits and harms, but those words seem to be based on moral assumptions. (What counts as benefit or harm will determine our moral system.) If we accept a moral realist understanding of benefits and harms, then there is more to morality than what you describe.

      Comment by James Gray — November 30, 2010 @ 5:37 am | Reply

      • James, my aim is to show, as a matter of science, what the function of virtually all cultural morality ‘is’ (the primary reason cultural moralities exist). I claim to be able to show that there is such a primary reason. This moral principle has been the primary force shaping both our biological moral hardware (even before the emergence of culture) and cultural moral standards. This principle is put forward as a descriptive moral fact, a provisionally ‘true’ part of science that describes only what ‘is’, not what ‘ought’ to be.

        It seems to me I am working on the ultimate possible “moral naturalism” and “moral realism”. However, I understand that “moral naturalism” and “moral realism” are usually taken to be about prescriptive moral facts – what ‘ought’ to be as Hume famously used the word, not the descriptive moral fact I am proposing, Hume’s ‘is’.

        I am interested in a secular moral system based on my claimed principle because it appears likely to be the “considered rational choice” for many people to adopt and practice. When I say “practicing” I mean that accepting its burdens will almost always be their “considered rational choice” even when, in the moment a choice is made, they expect accepting the burdens of acting morally will likely be against their best interests. Such a moral system would need no source of justificatory force, beyond ‘rational choice’, for accepting its burdens.

        Nietzsche misunderstood (understandably based on what was known then) what the function of ‘pagan’ (Greek) virtues and ‘Christian’ virtues were in their cultures. Their function is much more plausibly explained, consistent with the claimed underlying moral principle of all cultural morality, as increasing the benefits of cooperation in hierarchical groups including armies, governments, businesses, and even non-egalitarian societies in general.

        ‘Pagan’ moral virtues (beauty, strength, courage, magnanimity, and leadership) are apt virtues for effective leaders in a hierarchy. Magnanimity, courage, and leadership are NOT selfish virtues (as you say Nietzsche claimed?). Even “beauty” can be understood as a useful attribute of a leader who is vigorous and attractive. ‘Christian’ moral virtues (humility, meekness, quietude, asceticism, and obedience) are apt virtues for effective followers in a hierarchy. So ‘pagan’ moral virtues and ‘Christian’ moral virtues are two sides of the same coin.

        Members of hierarchies typically are BOTH leaders of those below them in rank and followers of those above them in rank. ‘Pagan’ moral virtues accommodate this duality by leaving the ‘virtues’ of followers up to the individual’s instincts, coercion by leaders, and good sense. ‘Christian’ moral virtues leave the virtues of leadership up to our instincts and good sense. The choice of which side of the coin to culturally reinforce may reflect the relative power positions in the two cultures of the people defining ‘virtues’. This dichotomy in virtues was forced on these cultures. Telling people “Act according to pagan virtues when you are acting as a leader in a hierarchy and Christian virtues when you are acting as a follower” was too complicated to be an effective moral standard for cultures that had little to no idea what the ultimate function of moral behavior was. In any event, our instinctual understanding of cooperation in hierarchies is in both cases more than up to the task of filling in the other side of the coin.

        No, my proposed moral principle is not based on Utilitarianism. Utilitarianism is a misconceived child of the claimed moral principle. Utilitarianism is ineffective as the basis of a culturally useful moral system because of 1) the impossibility of defining happiness or good and 2) the lack of any motivating source of justificatory force for accepting its burdens. A moral system based on the claimed moral principle has neither of these two fatal flaws.

        Finally, “Benefits” are whatever a cooperating group (perhaps just two people) decide will best meet their needs and preferences. (Sounds like some marriages.) No moral assumptions are required to define them. Note that people commonly have needs and preferences such as an urge to dominate through violence that will not increase the long term benefits of cooperation. Attempting to somehow choose ‘benefits’ that reduce the overall benefits of cooperation would be immoral by the claimed principle.

        Sorry about the length of my response, but it commonly takes more space to answer good questions than to ask them.

        Comment by Mark Sloan — November 30, 2010 @ 9:57 pm

      • Nietzsche thought that the underlying reason for morality was “will to power” rather than cooperation. He would say that we don’t want cooperation unless it helps promote will to power. Nietzsche would say that leaders (who are followers of master morality) want to be leaders for themselves, not to benefit others.

        No, my proposed moral principle is not based on Utilitarianism. Utilitarianism is a misconceived child of the claimed moral principle. Utilitarianism is ineffective as the basis of a culturally useful moral system because of 1) the impossibility of defining happiness or good and 2) the lack of any motivating source of justificatory force for accepting its burdens. A moral system based on the claimed moral principle has neither of these two fatal flaws.

        I’m not sure what you are saying here. Utilitarianism doesn’t require any beliefs about happiness, but certainly we are interested in what is “good.” You talk about “benefits.” If nothing is good, then nothing is a benefit.

        You also say that you are a moral realist, but if there are no “oughts” and there is no “good” or “values,” then what is morality? What you are talking about doesn’t sound like moral realism at all.

        You say that morality must be able to be rationally overriding, but why would anyone think that unless morality is about something that “really matters” and is “more important” than our self-interest?

        Finally, “Benefits” are whatever a cooperating group (perhaps just two people) decide will best meet their needs and preferences. (Sounds like some marriages.) No moral assumptions are required to define them. Note that people commonly have needs and preferences such as an urge to dominate through violence that will not increase the long term benefits of cooperation. Attempting to somehow choose ‘benefits’ that reduce the overall benefits of cooperation would be immoral by the claimed principle.

        That is nothing but a social contract. Anti-realists talk about the same thing you are talking about. Of course, anti-realists should probably admit that the social contract is not overriding and you might as well break the rules when it would benefit you to do so.

        Additionally, happiness and suffering are both important to people who live in a social contract. Happiness is a benefit and suffering is a harm. The fact that these are difficult to define is not a good reason to ignore them.

        Comment by James Gray — November 30, 2010 @ 11:41 pm

  14. James, I do not doubt “that leaders (who are followers of master morality) want to be leaders for themselves, not to benefit others”. I’ll assume initially that “master morality” is what Nietzsche thought moral behavior ‘ought’ to be (in the sense of an imperative ought, not just a ‘rational choice’ ought that I will get to in a moment).

    My study is not about wondering what moral behavior ‘ought’ to be (imperative ought). In my opinion, it is likely that no such question has a sensible answer. My study is about what the function of cultural moralities ‘is’ and the moral principle(s) implied by that function.

    If Nietzsche was searching for what moral behavior ‘is’, he completely missed finding it. If he was searching for what moral behavior ‘ought’ to be (imperative ought), I think he also failed because, as I said, that is a question I think the universe is unable to answer (and at least has not done so to date). The best he could do is define a morality that an individual might decide they ‘ought’ to adopt and practice (‘rational choice ought’) because they expected doing so would best meet their needs and preferences in the long term.

    I expect there will always be people who expect “master morality” will be their ‘rational choice’ for adoption and practice. They had better watch out. Some of the rest of us may be figuring out what morality really ‘is’ and will be coming after them to punish them for any harm they have done.

    Lots of things are good, lots of things are benefits, lots of things make us happy or unhappy. My point was that while we are all interested in ‘good’ and ‘happiness’, we cannot define these terms in ways that enable sensible answers to questions like “How do I judge if an act is moral if it produces a large penalty for an unwilling few but a small gain for many?”. That is what I mean by “the impossibility of defining happiness or good” being the first fatal flaw of Utilitarianism.

    Regarding moral realism, I would say there are no imperative oughts of the kind Hume warned us about. I can’t be certain of that, but it seems unlikely. I am a moral realist only in the sense that there are descriptive moral facts about what the function of all cultural moralities ‘are’ (the primary reason they exist in cultures).

    Morality is about something that is not just important but is critical to human happiness (isn’t that the eudemonia idea?). It is critical to human happiness because it is the best wisdom we have for increasing the benefits of living in families, among friends, in communities, and in the world.

    I am proposing a kind of social contract between the people who are cooperating to obtain benefits. That group might be a family, a group of friends, an ad-hoc group, or a billion member society.

    The chief attraction of the proposed “workable cultural moral system” is that acting morally according to it appears to be the ‘considered rational choice’ (the ‘rational choice’ based on a priori reflection and likely past experience concerning long term best interests) even when you expect, in the moment of decision, that acting morally will not benefit you. The utility or lack of utility of my proposed secular moral system largely turns on whether this assertion is ‘true’ about acting morally actually being the ‘considered rational choice’.

    Part 2 of this “Outline of a proposed secular moral system” thread (I’ll post in a few days) is largely devoted to explaining why acting morally according to it will likely almost always be in our long term best interests.

    I am definitely not ignoring happiness and suffering. I include them prominently in people’s needs and preferences. I am just leaving defining them up to the people who are choosing what benefits they will try for from their cooperation.

    Comment by Mark Sloan — December 1, 2010 @ 1:27 am | Reply

  15. I’ll assume initially that “master morality” is what Nietzsche thought moral behavior ‘ought’ to be (in the sense of an imperative ought, not just a ‘rational choice’ ought that I will get to in a moment).

    He didn’t. First, he thought herd morality was fine for the herd, but not for the masters. Second, his own morality was supposed to be superior to master morality.

    My point was that while we are all interested in ‘good’ and ‘happiness’, we cannot define these terms in ways that enable sensible answers to questions like “How do I judge if an act is moral if it produces a large penalty for an unwilling few but a small gain for many?”. That is what I mean by “the impossibility of defining happiness or good” being the first fatal flaw of Utilitarianism.

    If we can’t decide that one course of action is “better” than another, then we might simply be unable to judge whether or not it is immoral. What do you see as the alternative?

    Regarding moral realism, I would say there are no imperative oughts of the kind Hume warned us about. I can’t be certain of that, but it seems unlikely. I am a moral realist only in the sense that there are descriptive moral facts about what the function of all cultural moralities ‘are’ (the primary reason they exist in cultures).

    Hume said nothing threatening to moral realism (or the belief in oughts in general). The is-ought gap is not a good objection. In fact, I discussed it briefly above and you gave no objections to what I said.

    The fact that cultural moralities exist is not up for debate. Everyone agrees that they exist–both realists and anti-realists. If morality is nothing but cultural tradition, customs, etc., then morality is just a subcategory of anthropology. Of course we can study the customs and moral traditions of cultures, and we do. Most scientists simply don’t think that this is all morality is (especially if they are moral realists).

    My own argument for moral realism has less to do with what ought to be than with what has value. When I say something is “good” I don’t think that means anyone “agrees” that it’s good. It means some things are worthy of promoting even if no one agrees that they are good.

    Morality is about something that is not just important but is critical to human happiness (isn’t that the eudemonia idea?). It is critical to human happiness because it is the best wisdom we have for increasing the benefits of living in families, among friends, in communities, and in the world.

    That sounds like a utilitarian conclusion. Is it rational to promote happiness? Is it wrong not to? Ought we try to promote happiness? (The word “ought” is closely related to the thought that some behavior is rational. If it’s rational, then it’s not wrong; it would be false to say we ought not do it.)

    I am proposing a kind of social contract between the people who are cooperating to obtain benefits. That group might be a family, a group of friends, an ad-hoc group, or a billion member society.

    I have no problem with social contracts, and you can see my replies to Tim Dean’s latest posts. An understanding of social contracts can tell us how to benefit better through cooperation, but it doesn’t tell me to follow the rules when doing so is against my personal self-interest (unless we are moral realists). It seems rational to break the rules, and irrational not to, when doing so would benefit me. The moral realist can better encourage us to help each other at the expense of our own self-interest than anti-realism can.

    The chief attraction of the proposed “workable cultural moral system” is that acting morally according to it appears to be the ‘considered rational choice’ (the ‘rational choice’ based on a priori reflection and likely past experience concerning long term best interests) even when you expect, in the moment of decision, that acting morally will not benefit you. The utility or lack of utility of my proposed secular moral system largely turns on whether this assertion is ‘true’ about acting morally actually being the ‘considered rational choice’.

    Again, “rationality” seems to imply oughts, and I don’t quite know what you mean by the word. If we “agree” that food is “good” and learn to get more food together, then you seem to say that it is rational to do so even at my personal expense. I have no idea why you think that.

    One sort of rationality is instrumental self-interest. That is how Hume sees it.

    Part 2 of this “Outline of a proposed secular moral system” thread (I’ll post in a few days) is largely devoted to explaining why acting morally according to it will likely almost always be in our long term best interests.

    Yes, being moral is almost always in one’s personal self-interest, but the most intelligent evil people know that and they are the worst sorts of people because they break the rules and harm lots of people only when they can get away with it and benefit from it.
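    The worry in the paragraph above, that rule-breakers who cheat only when they can get away with it come out ahead, can be made concrete with a toy iterated prisoner's dilemma. This sketch is purely illustrative: the payoff values, the detection probability, and the "grim trigger" partner are my assumptions, not a model anyone in this thread has proposed.

```python
# Toy iterated prisoner's dilemma: a "strategic defector" who cheats
# only when unobserved can out-earn an unconditional cooperator.
# Payoffs and the detection mechanism are illustrative assumptions.
import random

# Standard PD payoffs for (my_move, partner_move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strategy, rounds=10_000, p_observed=0.5, seed=0):
    """Average payoff per round. The partner cooperates until it
    observes a defection, then defects forever ('grim trigger')."""
    rng = random.Random(seed)
    total, caught = 0, False
    for _ in range(rounds):
        partner = "D" if caught else "C"
        observed = rng.random() < p_observed
        move = strategy(observed)
        total += PAYOFF[(move, partner)]
        if move == "D" and observed:
            caught = True
    return total / rounds

honest = play(lambda observed: "C")                        # always cooperate
cunning = play(lambda observed: "C" if observed else "D")  # defect only in the dark

print(f"honest cooperator:  {honest:.2f} per round")
print(f"strategic defector: {cunning:.2f} per round")
```

    Because the strategic defector is never caught, the partner keeps cooperating, and the defector's average payoff beats the honest cooperator's. Grim-trigger punishment only deters cheating that can be observed, which is exactly the gap a purely self-interested "considered rational choice" leaves open.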

    I am definitely not ignoring happiness and suffering. I include them prominently in people’s needs and preferences. I am just leaving defining them up to the people who are choosing what benefits they will try for from their cooperation.

    Utilitarians could do the same thing, couldn’t they?

    Comment by James Gray — December 1, 2010 @ 2:04 am | Reply

    • James,

      Part 2 of my “Outline of a culturally useful secular moral system” based in science thread has been posted. It is largely devoted to explaining why acting morally according to it will likely almost always be in our long term best interests.

      http://forums.philosophyforums.com/comments.php?id=44465&findpost=732071#post732071

      My preference is that you comment on Part 2, which is independent of Part 1 in the sense that Part 2 describes how a mere descriptive fact about morality might come to have the normative force required for a culturally useful morality.

      I am happy to continue our Part 1 discussion, but my general impression is that you are, in the main, objecting to implications that I don’t believe really exist. Therefore, I am more interested in your comments on Part 2.

      If you have time and inclination to consider them, below is my response to your above comments on Part 1.

      My point was that while we are all interested in ‘good’ and ‘happiness’, we cannot define these terms in ways that enable sensible answers to questions like “How do I judge if an act is moral if it produces a large penalty for an unwilling few but a small gain for many?”. That is what I mean by “the impossibility of defining happiness or good” being the first fatal flaw of Utilitarianism.

      I am definitely not ignoring happiness and suffering. I include them prominently in people’s needs and preferences. I am just leaving defining them up to the people who are choosing what benefits will be the goal of their cooperation in different groups: families, friends, communities, and so forth.

      Utilitarians certainly could conclude that the best way to maximize good and happiness is to advocate behaviors (moral behaviors) that are unselfish in the short term and increase the benefits of cooperation, where the benefits of cooperation are whatever people think will best meet their needs and preferences in the long term. But then it would no longer be Utilitarianism; it would be the Cooperation Benefits Morality I am proposing.

      Comment by Mark Sloan — December 2, 2010 @ 10:18 pm | Reply

      • Thanks for letting me know about part 2, but I think we should continue with part 1 as well.

        I want you to say more about these “implications that don’t exist.” For example:

        1. You don’t think that evil people can and do act morally for the most part precisely because it is beneficial to their self interest? I think you might want to read my conversation with Tim about this idea.

        2. You don’t think rationality has anything to do with what we ought to do? You disagree that I “ought” to make use of logic, for example?

        3. You think oughts can’t exist because they aren’t “descriptive” in any way despite the fact that we have little reason to think that?

        I also don’t see how a utilitarian who wants to use cooperation to attain benefits would no longer be a utilitarian. A utilitarian says that what matters is consequences. If you disagree, then you are a deontologist. Although you aren’t interested in “right and wrong,” you are basically defining “right” as “leads to benefits” and “wrong” as “leads to harms.” That’s pretty much what utilitarians do.

        Let’s take a look at a paragraph of part 2:

        I will argue there are sufficient reasons, applicable only to this moral principle, that people are likely to conclude they ought to commit to it as their ‘rational choice’ and almost always follow it, even when, in the moment of decision, they expect doing so will not be in their best interests. (A ‘rational choice’ in single quotes will refer to the choice expected to best meet needs and preferences – terminology from Rational Choice Theory.)

        The words in bold are words that seem to be more than merely “descriptive facts.” I don’t understand how you can use these words after rejecting prescriptive facts.

        I will now take a look at one of your arguments from part 2:

        What if I have committed to a moral principle that I expect, based on reflection and experience, will almost always better meet my needs and preferences in the long term, even when, in the moment of decision, I expect it will not? Our predictions about the long term future, even those based on serious reflection, are often wrong. Our predictions about moral behaviors are particularly flawed because they often must be made almost instantly and moral behaviors are in general unselfish, and in the moment of decision, our ‘base animal instincts’ motivate us to avoid unselfish actions. Our almost instant choices for action may even favor immoral choices (as defined by the proposed moral principle) that can be expected to often not best meet our needs and preferences in the long term due to foregoing the synergistic benefits of cooperation.

        The fact is that intelligent people can and do make selfish decisions knowing full well that it will benefit them. The decisions we make when we “know we won’t get caught” can often benefit us despite being immoral.

        You might want to consider Aristotle’s “practical wisdom,” which makes someone sensitive to the particular, unique elements of a situation. This is necessary to be fully moral, but it is also necessary to be ideally self-serving.

        Comment by James Gray — December 3, 2010 @ 2:52 am

  16. James, I am happy to continue our discussion of Part 1 either here or on the other forum. In a response to a comment today in the Part 2 thread on the other forum, I responded as follows. I will include it here because it may illuminate my responses (that follow) to your above points.

    Addressing the other commenter: “You may have the expectation that all arguments about morality must begin with assertions about foundational concepts used in contemporary moral philosophy, like the ideas of ‘good states of affairs’ or rights and obligations. (See my comment #11 addressed to Oldandrew.) I was motivated to look for another route to understand morality because, in my view, this mainstream Moral Philosophy approach has been largely unproductive for the last 200 years. Much to my surprise, within a year I had found another route, two areas of science which provided the peer reviewed grounding for this other route, and I had a rough concept of my proposed moral principle:

    ‘Behaviors that increase, on average, the benefits of cooperation in groups by acts that are unselfish at least in the short term are moral behaviors.’

    The moral principle I am proposing is empirically true or false based on data-dependent arguments. Data-independent arguments about the ‘end’ of moral behavior, such as are standard in mainstream Moral Philosophy, are not, so far as I know, empirically true or false. This is a huge difference.

    I can defend the idea that ALL the “foundational concepts used in contemporary moral philosophy, like the ideas of ‘good states of affairs’ or rights and obligations” are actually consequences of my proposed moral principle based in the fundamental nature of the universe.

    My moral principle is not a kind of Utilitarianism. Utilitarianism is one flawed approximation of this proposed moral principle.”

    To your specific points:

    1. You don’t think that evil people can and do act morally for the most part precisely because it is beneficial to their self interest? I think you might want to read my conversation with Tim about this idea.
    2. You don’t think rationality has anything to do with what we ought to do? You disagree that I “ought” to make use of logic, for example?
    3. You think oughts can’t exist because they aren’t “descriptive” in any way despite the fact that we have little reason to think that?

    Regarding your first point, since ‘evil people’ is a little vague, how about we talk about the behavior of rational psychopaths defined as people who are incapable of feeling empathy or guilt and have no conscience? My understanding is that most people who would meet the clinical criteria for rational psychopathy (minimum 1% of the population in the US) are never convicted of any crime and can be very successful in business, government, and religious organizations. They can be very successful because they can be utterly ruthless (often useful in businesses and government) and because they feel no guilt, they are happy to tell other people whatever will make the other people do what the psychopath wants. As long as you are doing what a psychopath wants, they can be the most charming person you have ever met.

    I DO think that rational psychopaths can and do act morally for the most part precisely because it is beneficial to their self interest. The difference between the rest of us and rational psychopaths is that our self interests include considerations of empathy, guilt, conscience, and pleasure in the cooperative company of friends and family (which I understand rational psychopaths also do not experience). I agree that a rational psychopath pursuing their self interests will act differently from the rest of us. I disagree that that is a problem for my proposed cultural moral system. The moral principle is not “do what you think is in your best interest”, it is “do what will increase the benefits of cooperation within and between the groups you are members of”.

    I DO think rationality has a great deal to do with what we ought to do. Data-dependent rational arguments are the source of my proposed moral principle. That moral principle is either empirically true or false. Ideas that are empirically true or false are rational (if they are ‘true’) in a sense that data-independent arguments cannot be, since it may not be possible to determine whether their premises are rational. Further, the only motivation for acting according to the proposed moral principle when, in the moment of decision, you expect doing so will be against your best interests, is based on a prior rational argument that concluded that it will almost always be in your best long-term interests to behave morally.

    I think it is highly unlikely that imperative oughts exist. I am certain that ‘considered rational choice’ oughts do exist. I am confident that ‘considered rational choice’ oughts provide adequate motivation for acting morally, making the proposed moral principle a more ‘rational choice’ than any available alternative. I am also confident that this is our best course of action while we are waiting for Moral Philosophy to produce its first convincing imperative ought, if it ever does.

    This is getting too long. I can address your remaining points later if you wish.

    Comment by Mark Sloan — December 3, 2010 @ 11:31 pm | Reply

    • The moral principle I am proposing is empirically true or false based on data-dependent arguments. Data-independent arguments about the ‘end’ of moral behavior, as are standard in mainstream Moral Philosophy, are not, so far as I know, empirically true or false. This is a huge difference.

      I already explained how morality is empirical from my perspective. You haven’t provided any objections to that view. Empirical doesn’t mean “everyone agrees.”

      I can defend the idea that ALL the “foundational concepts used in contemporary moral philosophy, like the ideas of ‘good states of affairs’ or rights and obligations” are actually consequences of my proposed moral principle based in the fundamental nature of the universe.

      Then I ask you to defend it from my objections. I don’t think agreement about what is “good” is the starting point. I think we can know something is good prior to agreement.

      My moral principle is not a kind of Utilitarianism. Utilitarianism is one flawed approximation of this proposed moral principle.

      You admitted you are a consequentialist in the forum. The words “consequentialist” and “utilitarian” are often used as roughly equivalent terms. There is no one theory for either perspective. There is a great diversity of views.

      Regarding your first point, since ‘evil people’ is a little vague, how about we talk about the behavior of rational psychopaths, defined as people who are incapable of feeling empathy or guilt and have no conscience? My understanding is that most people who would meet the clinical criteria for rational psychopathy (at minimum 1% of the population in the US) are never convicted of any crime and can be very successful in business, government, and religious organizations. They can be very successful because they can be utterly ruthless (often useful in business and government) and, because they feel no guilt, they are happy to tell other people whatever will make those people do what the psychopath wants. As long as you are doing what a psychopath wants, they can be the most charming person you have ever met.

      I don’t think the most evil people are necessarily “psychopaths.” However, we can consider successful people without guilt, etc.

      I DO think that rational psychopaths can and do act morally for the most part precisely because it is beneficial to their self interest. The difference between the rest of us and rational psychopaths is that our self interests include considerations of empathy, guilt, conscience, and pleasure in the cooperative company of friends and family (which I understand rational psychopaths also do not experience).

      Your understanding might be wrong, and in any case it is irrelevant. Lots of evil people have rich family lives and enjoy being with their family and friends. Treating empathy as something we simply have or lack, and as what determines whether you should be good, is also beside the point. A person has some control over their empathy. We could have less empathy rather than more. Why should I want to have more rather than less?

      In other words, declaring that empathy determines whether I should want to be good begs the question, because it doesn’t explain why I should want empathy.

      I agree that a rational psychopath pursuing their self interests will act differently from the rest of us. I disagree that that is a problem for my proposed cultural moral system. The moral principle is not “do what you think is in your best interest”, it is “do what will increase the benefits of cooperation within and between the groups you are members of”.

      You said it is rational to be moral. Hume’s definition of practical reason is whatever is smart considering our self-interest. If you want a new definition of “rationality” then I need to know what it is and why I should agree with it.

      I DO think rationality has a great deal to do with what we ought to do. Data-dependent rational arguments are the source of my proposed moral principle. That moral principle is either empirically true or false. Ideas that are empirically true or false are rational (if they are ‘true’) in a sense that data-independent arguments cannot be, since it may not be possible to determine whether their premises are rational. Further, the only motivation for acting according to the proposed moral principle when, in the moment of decision, you expect doing so will be against your best interests, is based on a prior rational argument that concluded that it will almost always be in your best long-term interests to behave morally.

      I already told you why I disagree with the “prior argument.” Again, you are assuming that rationality involves doing what is in our self-interest.

      I think it is highly unlikely that imperative oughts exist. I am certain that ‘considered rational choice’ oughts do exist. I am confident that ‘considered rational choice’ oughts provide adequate motivation for acting morally, making the proposed moral principle a more ‘rational choice’ than any available alternative. I am also confident that this is our best course of action while we are waiting for Moral Philosophy to produce its first convincing imperative ought, if it ever does.

      This is what you think, but I don’t know why. I need to see arguments.

      I disagree with you. I think that people’s lives have value. Their pain is bad just like my pain. I have a reason to give a stranger (drifter) an aspirin even though it would be totally absurd to expect any relevant self-interested reason for doing so. I want to do what I think “really matters” so I want to help people because I think they really matter. I don’t think I am irrational just because I am concerned with other people and not merely myself. However, if you are right that other people don’t really matter, then my reasoning is flawed to the point that I could be irrational after all.

      I think finding my behavior and beliefs to be “irrational” is counterintuitive (perhaps to the point of absurdity) because there are no implausible beliefs or goals involved with my behavior. The fact that many people agree to have various goals is what you call “good.” However, such goals could be inappropriate and fail to really be “good.”

      This is getting too long. I can address your remaining points later if you wish.

      Yes, I would like to hear more.

      Comment by James Gray — December 4, 2010 @ 12:33 am | Reply

      • “I already explained how morality is empirical from my perspective. You haven’t provided any objections to that view. Empirical doesn’t mean “everyone agrees.””

        I was using empirical in the sense of “capable of being confirmed, verified, or disproved by observation or experiment”. I can see you can argue that a data-independent argument produces a true or false result (and in that sense is empirical), but data-dependent arguments are “capable of being confirmed, verified, or disproved by observation or experiment”. This is a big difference.

        “Then I ask you to defend it from my objections. I don’t think agreement about what is “good” is the starting point. I think we can know something is good prior to agreement.”

        I have lost track of what your objections are. I agree with you that “agreement about what is ‘good’” is NOT the starting point. The starting point is identifying the underlying function or functions of cultural moralities. My claimed single function (the primary reason cultural moralities exist) is what has created and molded what we consider ‘good’ and our needs and preferences concerning living in social groups.

        “You admitted you are a consequentialist in the forum. The word “consequentialist” and “utilitarian” are roughly used as equivalent terms. There is no one theory for either perspective. There is a great diversity of views.”

        The moral principle is largely consequentialist in that for a behavior to be moral, it must increase, on average, the benefits of cooperation in groups. However, to be moral, a behavior must also be unselfish at least in the short term, which brings in the intent of the person doing it. Therefore, the moral principle is not purely consequentialist.

        The principle itself provides the criterion for the rules for determining if an act is moral or immoral based on intent and consequences. That criterion is “The rules for classifying an act as moral or immoral with regard to intent and consequences are whatever rules are most likely to increase the benefits of cooperation”.

        “I don’t think the most evil people are necessarily “psychopaths.” However, we can consider successful people without guilt, etc.”

        Neither do I. I suggested discussing psychopaths because doing so clarifies (at least for me) the issue I thought you were interested in.

        “In other words to declare that empathy determines if I should want to be good begs the question because it doesn’t explain why I should want empathy.”

        I would say you (‘rational choice’) ought to want empathy if you expect that will best meet your needs and preferences which include needs and preferences concerning living in cooperative groups.

        “You said it is rational to be moral. Hume’s definition of practical reason is whatever is smart considering our self-interest. If you want a new definition of “rationality” then I need to know what it is and why I should agree with it.”

        As I have stated, a ‘rational choice’ in my arguments (always in single quotes to reduce confusion with other kinds of rational choices) is the choice expected to best meet your needs and preferences, but where those needs, preferences, and expectations are not required to be rational.

        This is a standard definition from Rational Choice Theory, which is most commonly used in economic studies. This was just the right definition I needed for what kind of choice justifies accepting the burdens of the proposed moral principle. I also use the phrase ‘considered rational choice’ to indicate prior consideration as to what choice is most likely to actually best meet one’s needs and preferences in the long term. (Our expectations in the moment of decision about what action will best meet one’s needs and preferences in the long term are highly unreliable.)

        “I already told you why I disagree with the “prior argument.” Again, you are assuming that rationality involves doing what is in our self-interest.”

        No, I am not assuming that rationality involves doing what is in our self-interests. I am using the definition of ‘rational choice’ from Rational Choice Theory as the choice expected to best meet our needs and preferences. The needs and preferences of mentally normal people commonly include preferences for accepting a burden in order to benefit someone else.

        For me to formulate arguments showing that imperative oughts do not exist would be similar to formulating arguments ‘proving’ that the supernatural does not exist. That is, pretty much a waste of time.

        I do not understand why you wrote: “I disagree with you. I think that people’s lives have value. Their pain is bad just like my pain. I have a reason to give a stranger (drifter) an aspirin even though it would be totally absurd to expect any relevant self-interested reason for doing so. I want to do what I think “really matters” so I want to help people because I think they really matter. I don’t think I am irrational just because I am concerned with other people and not merely myself. However, if you are right that other people don’t really matter, then my reasoning is flawed to the point that I could be irrational after all.”

        My proposed moral principle advocates unselfish behaviors that increase the benefits of cooperation in groups. Why are you thinking I would disagree with the obviously moral acts you mention?

        Remember, the moral principle is NOT “Do whatever will best meet your needs and preferences”. It is “Act unselfishly to increase the benefits of cooperation in the groups you belong to”.
        The bit about ‘rational choices’ comes into play only as MOTIVATION (or justification) to act morally according to the principle.

        Comment by Mark Sloan — December 5, 2010 @ 7:00 pm

      • I’m replying to this comment by Mark Sloan.

        I’m kind of dropping in to this discussion from nowhere, so I hope I don’t misunderstand what I am about to respond to.

        but data-dependent arguments are “capable of being confirmed, verified, or disproved by observation or experiment”. This is a big difference.

        In acoustics, we can do science about “pitch” even though we have to rely on subjective first-person reports. We have discovered that pitch correlates with the frequency of a sound wave, and that perceived pitch goes up or down linearly as frequency goes up or down logarithmically.
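        The pitch/frequency relationship can be made concrete with a short sketch. The function below is my illustration, not part of the original comment; it assumes the standard equal-tempered MIDI convention (A4 = 440 Hz = pitch number 69), under which pitch is a linear function of log-frequency:

```python
import math

def midi_pitch(freq_hz: float) -> float:
    """Equal-tempered pitch number for a frequency (A4 = 440 Hz = MIDI 69).

    Pitch is linear in the logarithm of frequency: each doubling of
    frequency (one octave) adds exactly 12 semitones.
    """
    return 69 + 12 * math.log2(freq_hz / 440.0)

# Each doubling of frequency raises pitch by the same 12-semitone step,
# even though the frequency *difference* between steps keeps growing:
for f in (110, 220, 440, 880):
    print(f, round(midi_pitch(f), 1))  # prints 45.0, 57.0, 69.0, 81.0
```

        This is exactly the kind of regularity josef describes: the subjective report (equal pitch steps) converges on an objective correlate (equal frequency ratios).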

        In the late 1700s, the physicist Henry Cavendish had no instruments for precise measurement of electric current. So he subjected himself to electric shocks and judged the strength of the current by the severity of the shock as best he could. And this itself was enough to do legitimate science; Cavendish discovered many laws of electricity and may have been the original discoverer of Ohm’s law.

        It makes sense that the same kind of empirical study can be based on first-person reports of experiences. Gather enough reports, look for regularities and objective correlates out in the world, and they may start to converge.

        In fact, lots of empiricism is done based on first-person reports of experiences. People experience “good” too, and judge things to be good.

        Comment by josef johann — December 5, 2010 @ 7:30 pm

  17. Mark,

    I was using empirical in the sense of “capable of being confirmed, verified, or disproved by observation or experiment”. I can see you can argue that a data-independent argument produces a true or false result (and in that sense is empirical), but data-dependent arguments are “capable of being confirmed, verified, or disproved by observation or experiment”. This is a big difference.

    I don’t know that my definition of “observation” is so different from yours. I think Josef gave a pretty good reply already.

    I have lost track of what your objections are. I agree with you that “agreement about what is ‘good’” is NOT the starting point. The starting point is identifying the underlying function or functions of cultural moralities. My claimed single function (the primary reason cultural moralities exist) is what has created and molded what we consider ‘good’ and our needs and preferences concerning living in social groups.

    There can be biases in determining what has intrinsic value, but once people figure out what “intrinsic value” means and learn enough about ethics, I think we can realize what does and doesn’t have intrinsic value.

    It sounded before like you are an anti-realist who rejects intrinsic value, but now I’m not sure anymore. I think saying that “good” is whatever people agree is “good” is giving up too soon and relying on prejudice rather than observation or self-evidence. Certainly “good” doesn’t merely mean “whatever common goals people share in a culture.”

    The moral principle is largely consequentialist in that for a behavior to be moral, it must increase, on average, the benefits of cooperation in groups. However, to be moral, a behavior must also be unselfish at least in the short term, which brings in the intent of the person doing it. Therefore, the moral principle is not purely consequentialist.

    What is your definition of “unselfish”? If it means “mutually beneficial behavior,” then it is compatible with psychological egoism, and whatever happens in our minds when we act seems irrelevant.

    If you think people “need to help people out of pure altruism” then we can judge such an “intention” as good using consequentialism given that such an intention will promote “benefits.” (Beliefs, desires, and other mental states can have a consequentialist status.)

    Of course, we might wonder if “pure altruism” is rational because it seems to be based on the idea that we can value other people “for their own sake,” but it’s not clear that people really are valuable in this way. It might be that people have intrinsic value, so altruism would be rational — but if they actually have no value, then why treat them as if they did?

    I would say you (‘rational choice’) ought to want empathy if you expect that will best meet your needs and preferences which include needs and preferences concerning living in cooperative groups.

    Wrong. Assuming there are no intrinsic values, empathy is pretty much the only reason we “prefer” to meet the needs of others. Without empathy we wouldn’t have to prefer such a thing. That is exactly my point here. Why should I want to help strangers?

    It is true that family and friends might be people worth helping because we depend on them, but it is false that we need to help strangers for our personal well being.

    As I have stated, a ‘rational choice’ in my arguments (always in single quotes to reduce confusion with other kinds of rational choices) is the choice expected to best meet your needs and preferences, but where those needs, preferences, and expectations are not required to be rational.

    This is a standard definition from Rational Choice Theory, which is most commonly used in economic studies. This was just the right definition I needed for what kind of choice justifies accepting the burdens of the proposed moral principle. I also use the phrase ‘considered rational choice’ to indicate prior consideration as to what choice is most likely to actually best meet one’s needs and preferences in the long term. (Our expectations in the moment of decision about what action will best meet one’s needs and preferences in the long term are highly unreliable.)

    Nothing wrong with that, but you are excluding the possibility of moral reason. This seems to assume that moral reason is impossible. We can do away with morality entirely and just try to figure out how to fulfill our own desires.

    No, I am not assuming that rationality involves doing what is in our self-interests. I am using the definition of ‘rational choice’ from Rational Choice Theory as the choice expected to best meet our needs and preferences. The needs and preferences of mentally normal people commonly include preferences for accepting a burden in order to benefit someone else.

    I don’t see how you are disagreeing here. Sounds like agreement to me.

    For me to formulate arguments showing that imperative oughts do not exist would be similar to formulating arguments ‘proving’ that the supernatural does not exist. That is, pretty much a waste of time.

    Again, you sound like an anti-realist. Why not think that health doesn’t exist as well? Why not think that rationality doesn’t exist as well? Why not think that minds don’t exist as well? We can’t prove that such things don’t exist, so it would be a waste of time to do so as well.

    What “ought” to be isn’t observable. That’s true for any kind of rationality. The only reason that rationality seems to give us oughts is because we know our desires and we know that some actions will fulfill them better than others. The only way we know that torture is wrong is because we know what pain is like and we know that some actions produce a lot more pain than the alternatives. The only way we know that a person is healthy is that we know they could be less well off given an alternative state.

    I do not understand why you wrote: “I disagree with you. I think that people’s lives have value. Their pain is bad just like my pain. I have a reason to give a stranger (drifter) an aspirin even though it would be totally absurd to expect any relevant self-interested reason for doing so. I want to do what I think “really matters” so I want to help people because I think they really matter. I don’t think I am irrational just because I am concerned with other people and not merely myself. However, if you are right that other people don’t really matter, then my reasoning is flawed to the point that I could be irrational after all.”

    My proposed moral principle advocates unselfish behaviors that increase the benefits of cooperation in groups. Why are you thinking I would disagree with the obviously moral acts you mention?

    Because it’s not clear why I should treat people as good just for existing when it would not always benefit me personally to do so. A smart person can hurt other people to benefit herself and get away with it.

    Remember, the moral principle is NOT “Do whatever will best meet your needs and preferences”. It is “Act unselfishly to increase the benefits of cooperation in the groups you belong to”.

    First, my point is that I want to help people with no chance of being rewarded for it. That contradicts what you consider to be “rational.”

    Second, it’s not clear that a “rational” person should want to be moral. Your argument was unconvincing to me. Maybe only idiots are always moral and smart people know how to be evil and benefit from their immorality. I already mentioned this fact and responded to what you had to say about it. Then you ignored it entirely.

    The bit about ‘rational choices’ comes into play only as MOTIVATION (or justification) to act morally according to the principle.

    And I don’t think motivation would always lead to morality based on what you consider to be “rational.”

    What are my objections? We can start with the following:

    1. Choosing to be immoral can be fully “rational.” Choosing to always be moral is not fully “rational.” We “ought not” be moral given the assumption that only self-interest oughts exist.

    2. We have good reason to believe in intrinsic values, which don’t require “agreement” and seem sufficient to have moral rationality beyond the rationality of self-interest. Additionally, “moral oughts” are a product of moral rationality — how to promote intrinsic values best.

    3. We seem to observe some intrinsic values. We know they exist similar to how we know pain exists.

    Comment by James Gray — December 5, 2010 @ 11:06 pm | Reply

  18. James, I can answer all of your above points, but I am thinking the conversation might best be moved forward if we focused on some key elements. Your three objections look like a promising place to start, since I am in substantial agreement with them. If we can mutually understand why you think they are significant objections and I do not (and even for the most part agree with them), we may be able to move forward.

    First, just to clarify, when I use ‘rational choice’ with single quotes I am using the definition from Rational Choice Theory in which there is no requirement that expectations, needs, or preferences are rational in the normal sense of the word. I will assume that if you use rational choice without single quotes you are referring to some other definition of rational choice, which it would be good if you could define.

    1. Choosing to be immoral can be fully rational. Choosing to always be moral is not fully rational. We “ought not” be moral given the assumption that only self-interest oughts exist.

    Yes, yes, and NO. It can be a ‘rational choice’ to act immorally. Choosing to ALWAYS be moral is not a ‘rational choice’ if cases arise when you are CERTAIN that acting morally in a particular instance will not best meet your needs and preferences in the long term.

    However, we have difficulty predicting if a particular act will actually turn out to be in our best interests or not. We don’t know all relevant information about the present and lack the computational ability to predict the future accurately if we did. Moral choices are often made almost instantly and, in the heat of the moment, our ‘base animal instincts’ motivate selfishness (immorality). Suppose that based on reflection and experience prior to making a moral choice, an individual concludes that acting morally (almost always) will be more likely to actually meet their needs and preferences in the long term than paying attention to their unreliable expectations about their long term best interests in the moment of decision. In this case, it would be a “considered ‘rational choice’” to commit to acting morally (almost always) even when they expect, in the moment of decision, that doing so will not be in their best long term interest.

    If a proposed moral principle has the right characteristics (as I claim mine does), we “ought” to act morally (almost always) according to it even given the assumption that only self-interest oughts exist.

    2. We have good reason to believe in intrinsic values, which don’t require “agreement” and seem sufficient to have moral rationality beyond the rationality of self-interest. Additionally, “moral oughts” are a product of moral rationality — how to promote intrinsic values best.

    I agree.

    I also believe I know the origins of all these intrinsic values that we experience as having moral force (for example: it was just the right thing to do, I am motivated by my emotions to act unselfishly, I am motivated to punish people who violate these intrinsic values, and so forth). Those intrinsic values that, cross culturally, require no agreement are likely biological in nature. Within a culture, cultural intrinsic values also do not generally require ‘agreement’ in the sense that they are already shared intrinsic values.

    I can argue that all of these intrinsic values have a common primary function (primary reason they exist). That function is to motivate unselfish acts that increase the benefits of cooperation in groups.

    3. We seem to observe some intrinsic (moral) values. We know they exist similar to how we know pain exists.

    I agree.

    I am a little concerned that sometimes (such as when you mentioned pain just above) you appear to be thinking about purely biological intrinsic moral values (empathy, guilt, righteous indignation, willingness to risk injury and death to defend family and friends, and so forth). However, other times you seem to be including culturally based intrinsic moral values like equal rights for all people and that sort of thing. I am not sure whether that is a problem for you or not, but the difference between purely biologically based moral behavior and cultural-standard-based moral behavior is a BIG deal when talking about their common underlying function (to motivate unselfish behaviors that increase the benefits of cooperation). It is a big deal because the relevant benefits of cooperation in the two cases are very different.

    We agree there are moral facts. Are you thinking of them as the intrinsic moral values discussed above? I am thinking there is really just one moral fact that is really important: the underlying function of all intrinsic moral values – my proposed moral principle. However, while my proposed moral principle comes with no imperative oughts, it does come with ‘rational choice’ oughts that appear adequate for a culturally useful morality.

    Comment by Mark Sloan — December 6, 2010 @ 4:58 am | Reply

  19. Mark,

    Thanks for the reply.

    1. Choosing to be immoral can be fully rational. Choosing to always be moral is not fully rational. We “ought not” be moral given the assumption that only self-interest oughts exist.

    Yes, yes, and NO. It can be a ‘rational choice’ to act immorally. Choosing to ALWAYS be moral is not a ‘rational choice’ if cases arise when you are CERTAIN that acting morally in a particular instance will not best meet your needs and preferences in the long term.

    However, we have difficulty predicting if a particular act will actually turn out to be in our best interests or not. We don’t know all relevant information about the present and lack the computational ability to predict the future accurately if we did. Moral choices are often made almost instantly and, in the heat of the moment, our ‘base animal instincts’ motivate selfishness (immorality). Suppose that based on reflection and experience prior to making a moral choice, an individual concludes that acting morally (almost always) will be more likely to actually meet their needs and preferences in the long term than paying attention to their unreliable expectations about their long term best interests in the moment of decision. In this case, it would be a “considered ‘rational choice’” to commit to acting morally (almost always) even when they expect, in the moment of decision, that doing so will not be in their best long term interest.

    That means morality isn’t really overriding after all.

    If a proposed moral principle has the right characteristics (as I claim mine does), we “ought” to act morally (almost always) according to it even given the assumption that only self-interest oughts exist.

    I’m not that impressed with saying we should “almost always” be moral because the most successful criminals and evil people know that already. The most intelligent, dangerous, and powerful evil people are pretty good at deciding when to be immoral within their own self-interest, but such immorality is some of the most damaging behavior we could face.

    Additionally, the choice to do something above the call of duty is not something that I expect to see happen as often if we only concern ourselves with ‘rational choice.’

    A moral realist gives us a reason to be good (above the call of duty and refuse to be immoral) that the anti-realist can’t offer. The realist position is highly intuitive, uncontroversial, and plausible. (More needs to be said in its defense to fully appreciate this fact.)

    2. We have good reason to believe in intrinsic values, which don’t require “agreement” and seem sufficient to have moral rationality beyond the rationality of self-interest. Additionally, “moral oughts” are a product of moral rationality — how to promote intrinsic values best.

    I agree.

    I also believe I know the origins of all these intrinsic values that we experience as having moral force (for example: it was just the right thing to do, I am motivated by my emotions to act unselfishly, I am motivated to punish people who violate these intrinsic values, and so forth). Those intrinsic values that, cross culturally, require no agreement are likely biological in nature. Within a culture, cultural intrinsic values also do not generally require ‘agreement’ in the sense that they are already shared intrinsic values.

    You have misunderstood the term “intrinsic value.” What people desire biologically is not necessarily intrinsically good. That is what Tim Dean calls “intrinsic desire.” Instead, what is intrinsically good is what is morally rational for us to desire.

    I discuss intrinsic values in greater detail here: http://ethicalrealism.wordpress.com/2009/12/29/is-there-a-meaning-of-life/

    If intrinsic value beliefs were merely caused by our biology, then we would suspect that they were delusional. However, we have good reason to think they couldn’t possibly be delusional. I have argued that position in detail elsewhere.

    Consider that we all have a biology that causes us to have certain beliefs based on our sight. I see my hand and know I have a hand. I don’t merely see color blotches free to interpret however I choose. That is a biologically influenced method of knowledge we call “observation,” but we believe it’s a reliable form of justification. I think morality can be justified in a similar way.

    3. We seem to observe some intrinsic (moral) values. We know they exist similar to how we know pain exists.

    I agree.

    I am a little concerned that sometimes (such as when you mentioned pain just above) you appear to be thinking about purely biological intrinsic moral values (empathy, guilt, righteous indignation, willingness to risk injury and death to defend family and friends, and so forth). However, other times you seem to be including culturally based intrinsic moral values like equal rights for all people and that sort of thing. I am not sure whether that is a problem for you, but the difference between purely biologically based moral behavior and cultural-standard-based moral behavior is a BIG deal when talking about their common underlying function (to motivate unselfish behaviors that increase the benefits of cooperation). It is a big deal because the relevant benefits of cooperation in the two cases are very different.

    I don’t know that equal rights has intrinsic value, but equal rights (insofar as we ought to have them) should be justified in terms of moral values. They should “promote” or “express” intrinsic values.

    I don’t know what it means to say that an intrinsic value has to be biological. I don’t think intrinsic values have to be instinctual any more than knowing I have a hand is instinctual.

    We agree there are moral facts. You are thinking of them as the intrinsic moral values discussed above? I am thinking there is really just one moral fact that is really important: the underlying function of all intrinsic moral values – my proposed moral principle. However, while my proposed moral principle comes with no imperative oughts, it does come with ‘rational choice’ oughts that appear adequate for a culturally useful morality.

    I don’t know what you are saying here. Intrinsic values can be relevant to personal choices because they tell us what “benefits” are important. Morality can be more than cooperation because I can choose to be healthy, go back to school, or be self-destructive. My personal choices are relevant to the most important things in life. Cooperation is important precisely because it helps us achieve intrinsic values. The fact that cooperation is greatly justified through self-interest is of lesser importance because my self-interest only has as much intrinsic value as I personally have, and the same goes for everyone else.

    It is true that we would care about self-interest even if there were no intrinsic values, but then nothing would really be important. There would merely be people who treat something as important.

    Comment by James Gray — December 6, 2010 @ 6:51 am | Reply

  20. “I’m not that impressed with saying we should “almost always” be moral because the most successful criminals and evil people know that already. The most intelligent, dangerous, and powerful evil people are pretty good at deciding when to be immoral within their own self-interest, but such immorality is some of the most damaging behavior we could face.”
    “Additionally, the choice to do something above the call of duty is not something that I expect to see happen as often if we only concern ourselves with ‘rational choice.’”

    James, we may have very different goals in studying morality. My goal is limited to finding (if any exist) moral principles that could be the basis of a more culturally useful secular moral system than could be constructed based on any available alternative moral principles.

    I am not disappointed that my proposed moral principle “is NOT always overriding” in the sense of Kant’s categorical imperatives. (Actually, it has never occurred to me that such a moral principle even could exist.) If my proposed moral principle actually is more culturally useful than any alternative secular moral principle, I will be well satisfied. If you come up with a moral principle (or other argument) that can be the basis of a more culturally useful secular moral system, I will be happy to promote that moral principle or other argument.

    My goal is just utility, not perfection. There is no logical error in adopting and practicing a more culturally useful secular morality while we are waiting for moral philosophy, perhaps in the form of your ideas, to finally disclose what moral behavior ‘ought’ to be.

    “You have misunderstood the term “intrinsic value.” What people desire biologically is not necessarily intrinsically good. That is what Tim Dean calls “intrinsic desire.” Instead, what is intrinsically good is what is morally rational for us to desire.”

    I don’t understand your comment. It would be silly to say “What people desire biologically is intrinsically good”. Our ‘base animal instincts’ are just as much a part of our biology as our moral emotions and moral intuitions (of course moral intuitions are shaped by culture but their motivating power is biological). I was addressing only our sense of “intrinsic value” when I gave the examples of unselfish behavior motivated by “it was just the right thing to do, I am motivated by my emotions to act unselfishly, I am motivated to punish people who violate these intrinsic values, and so forth”.

    “If intrinsic value beliefs were merely caused by our biology, then we would suspect that they were delusional. However, we have good reason to think they couldn’t possibly be delusional. I have argued that position in detail elsewhere.”

    Intrinsic value beliefs regarding relations between people in a society have origins both in our biology and in our cultures. My claim is that these intrinsic value beliefs, whether from our biology or culture or some inextricably intertwined combination of the two (the most common case), exist because they motivated behaviors that were effective at exploiting the benefits of cooperation in groups.

    In saying: “Morality can be more than cooperation because I can choose to be healthy, go back to school, or be self-destructive”, you are expanding the domain of morality way beyond the domain of past and present cultural moral standards. You seem to be moving over into what is prudent rather than what is moral.

    Surely you are not saying that if you have unhealthy habits, do or do not go back to school, or are self-destructive, then you DESERVE punishment motivated by righteous indignation? A necessary characteristic of all past and present moral standards I am aware of is the consensus in that culture that violators deserve punishment motivated by righteous indignation. If you are going to expand the meaning of moral to this extent, perhaps you should start calling it something else.

    All past and present moral standards I am aware of are norms concerning relations between people in social groups. Even prohibitions against suicide appear aimed at the well-being of the cooperative group.

    Comment by Mark Sloan — December 6, 2010 @ 5:53 pm | Reply

    • Mark,

      James, we may have very different goals in studying morality. My goal is limited to finding (if any exist) moral principles that could be the basis of a more culturally useful secular moral system than could be constructed based on any available alternative moral principles.

      I am not disappointed that my proposed moral principle “is NOT always overriding” in the sense of Kant’s categorical imperatives. (Actually, it has never occurred to me that such a moral principle even could exist.) If my proposed moral principle actually is more culturally useful than any alternative secular moral principle, I will be well satisfied. If you come up with a moral principle (or other argument) that can be the basis of a more culturally useful secular moral system, I will be happy to promote that moral principle or other argument.

      I don’t see it as very useful because I think people would behave the same way without morality (given genuine and sincere anti-realism). We would come up with laws and social contracts and never have the need to call such things “moral.” Additionally, we can learn ways to mutually benefit ourselves even without morality. This is something an anti-realist can do and I think it is worth doing.

      I think that I am working on a “secular” morality and lots of other moral realist philosophers are as well. Your use of the words “secular” and “supernatural” implies that you see some plausible connection between God and morality. I see no plausible connection there. Such a connection has been thoroughly demolished by philosophers and should be considered to be a “myth” that has already been dispelled.

      I discussed the irrelevance of God to intrinsic values here: http://ethicalrealism.wordpress.com/2009/12/21/does-morality-require-god/

      My goal is just utility, not perfection. There is no logical error in adopting and practicing a more culturally useful secular morality while we are waiting for moral philosophy, perhaps in the form of your ideas, to finally disclose what moral behavior ‘ought’ to be.

      It sounds like you are saying we can learn how to benefit from cooperation even without moral realism. That is true. However, I’m not sure I would call that a “cultural morality.”

      Intrinsic value beliefs regarding relations between people in a society have origins both in our biology and in our cultures. My claim is that these intrinsic value beliefs, whether from our biology or culture or some inextricably intertwined combination of the two (the most common case), exist because they motivated behaviors that were effective at exploiting the benefits of cooperation in groups.

      It sounds to me like you are saying that intrinsic value beliefs are nothing more than prejudice, but I just made it clear why I disagree with that idea. I think we are repeating ourselves and that you didn’t respond properly to my concerns here.

      There can be a reality to intrinsic values that can be discovered. If there is such a reality, it’s not just based on our biology. I already made this clear with my example of how we know we have hands. A better example might be that we know our hands are “solid” because solidity and intrinsic values are both properties.

      In saying: “Morality can be more than cooperation because I can choose to be healthy, go back to school, or be self-destructive”, you are expanding the domain of morality way beyond the domain of past and present cultural moral standards. You seem to be moving over into what is prudent rather than what is moral.

      Surely you are not saying that if you have unhealthy habits, do or do not go back to school, or are self-destructive, then you DESERVE punishment motivated by righteous indignation? A necessary characteristic of all past and present moral standards I am aware of is the consensus in that culture that violators deserve punishment motivated by righteous indignation. If you are going to expand the meaning of moral to this extent, perhaps you should start calling it something else.

      I certainly agree that deserving punishment and “righteous indignation” are not relevant to what I just said. Morality is not reducible to such things. There is a lot of nuance involved in morality. Some behavior is good, some is bad, some is unacceptable, some is obligatory, and so on. I think we are allowed to hurt other people to a certain extent even though it is “wrong” to do so. It would place unrealistically high demands on people as fallible as we are to think we could be wise enough to know how to never hurt other people. We often excuse behavior because of ignorance and so on.

      I think it is plausible that “righteous indignation” is appropriate because people can really do terrible things that “really matter.” That only makes sense if intrinsic values exist.

      I don’t know that punishment is ever “deserved.” That sounds like an endorsement of vengeance. I would prefer if criminals are educated and given therapy. We shouldn’t demonize or dehumanize criminals and act like the world is a better place when they are in pain.

      Comment by James Gray — December 6, 2010 @ 9:09 pm | Reply

      • James’ comment: “I don’t see it as very useful because I think people would behave the same way without morality (given genuine and sincere anti-realism). We would come up with laws and social contracts and never have the need to call such things “moral.” “

        James, without our moral emotions and moral intuitions, but with our ‘base animal instincts’ intact, I agree it is possible we might come up with rule of law. However, I think it highly unlikely. The reason it would be highly unlikely is that the connection between rule of law (when there isn’t any) and long-term advantages to the individual is not clear. What is always clear, and is motivated by our base animal instincts, are the actions that produce short-term advantages when there is no rule of law. Social contracts have the same problem. What I am describing here is a society of rational psychopaths, or “moral idiots” as they were previously known, where morality as we know it does not exist. I am very glad we have our moral emotions and moral intuitions.

        “I think that I am working on a “secular” morality and lots of other moral realist philosophers are as well. Your use of the words “secular” and “supernatural” implies that you see some plausible connection between God and morality. I see no plausible connection there. Such a connection has been thoroughly demolished by philosophers and should be considered to be a “myth” that has already been dispelled.”

        You really misunderstood what I was saying. Some sincerely religious people will always believe their ‘supernatural’ moral system will be “culturally more useful” than any secular system. For people who are sincerely religious, I think many of them are right on this point. They may be better off due to the emotional support provided by their community of believers who sincerely think the creator of the universe has commanded them to be kind to and to look after each other.

        I no more believe ‘supernatural’ sources are good sources of moral wisdom than you do.

        “It sounds to me like you are saying that intrinsic value beliefs are nothing more than prejudice, but I just made it clear why I disagree with that idea. I think we are repeating ourselves and that you didn’t respond properly to my concerns here.”

        I normally use prejudice in the sense of an adverse opinion or leaning formed without just grounds or before sufficient knowledge. In no way am I claiming our intrinsic value beliefs are adverse or based on unjust or insufficient grounds. Just the opposite is the case. Intrinsic value beliefs exist because of empirical experiences of millions of people over thousands of generations. Both our biology and our cultures have been shaped by the increased benefits of cooperation in groups caused by the behaviors motivated by our intrinsic value beliefs. What I classify as our intrinsic value beliefs are not in any way “prejudice”, they are the tried and true moral wisdom of the ages.

        “I don’t know that punishment is ever “deserved.” That sounds like an endorsement of vengeance. I would prefer if criminals are educated and given therapy. We shouldn’t demonize or dehumanize criminals and act like the world is a better place when they are in pain.”

        Without ‘punishment’ of some kind, a moral society is impossible. Just as Rule of Law would be culturally useless without punishment for violators, moral standards would be culturally useless without ‘punishment’ (perhaps just public disapproval) of violators. Our moral emotion “righteous indignation” exists just to motivate such punishment, even at a cost to the actor, simply because such ‘punishment’ (or perhaps just correction for immature violators) is required to sustain a morality.

        ‘Punishment’ of immoral behavior is not vengeance. ‘Punishment’ of immoral behavior can be a moral act only when it is the kind of punishment that is likely to increase, on average, the benefits of cooperation in groups.

        What this kind of moral ‘punishment’ should be is a hot topic in game theory. So far as I know, moral philosophy lacks the tools needed to make useful contributions to what is one of the main problems to be resolved for a maximally useful cultural morality.
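        The kind of game-theoretic result I have in mind can be sketched in a few lines of Python: a public goods game in which costly punishment of free riders flips the incentive to defect. All the numbers here (endowment, multiplier, fine sizes) are illustrative assumptions on my part, not values from any particular study.

```python
# Minimal public goods game with optional punishment of free riders.
# All parameter values are illustrative assumptions, not values
# drawn from any particular study.

def public_goods_round(contributions, multiplier=1.6, punish=False,
                       punish_cost=1.0, fine=4.0):
    """Payoffs for one round. Each player starts with an endowment of
    10 and contributes some amount to a common pool; the pool is
    multiplied and shared equally among all players. If punish is
    True, each contributor pays punish_cost per free rider in order
    to impose a fine on every player who contributed nothing."""
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    payoffs = [10 - c + share for c in contributions]
    if punish:
        free_riders = [i for i, c in enumerate(contributions) if c == 0]
        punishers = [i for i, c in enumerate(contributions) if c > 0]
        for i in punishers:
            payoffs[i] -= punish_cost * len(free_riders)
        for i in free_riders:
            payoffs[i] -= fine * len(punishers)
    return payoffs

# Three cooperators and one free rider.
no_punish = public_goods_round([10, 10, 10, 0])
with_punish = public_goods_round([10, 10, 10, 0], punish=True)
```

        With these illustrative numbers, the lone free rider out-earns the cooperators until punishment is switched on, at which point free riding earns less than cooperating. That is the standard game-theoretic rationale for why something like “righteous indignation” can stabilize cooperation.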

        I expect we are still in a state of mutual misunderstanding. I am working on a post entitled “An Outsider’s View of the Origins of Moral Foundations” that might be useful and will drop you a note when it is posted in case you have any interest.

        Aside from that, I will sign off unless you reply with something that I can’t resist replying to. Good luck in your studies.

        Comment by Mark Sloan — December 7, 2010 @ 12:37 am

  21. James, without our moral emotions and moral intuitions, but with our ‘base animal instincts’ intact, I agree it is possible we might come up with rule of law. However, I think it highly unlikely. The reason it would be highly unlikely is that the connection between rule of law (when there isn’t any) and long-term advantages to the individual is not clear. What is always clear, and is motivated by our base animal instincts, are the actions that produce short-term advantages when there is no rule of law. Social contracts have the same problem. What I am describing here is a society of rational psychopaths, or “moral idiots” as they were previously known, where morality as we know it does not exist. I am very glad we have our moral emotions and moral intuitions.

    We can have social emotions and intuitions without moral realism, but it’s not clear how rational they are. I don’t think moral realism has to be endorsed just because we have social instincts and emotions.

    You really misunderstood what I was saying. Some sincerely religious people will always believe their ‘supernatural’ moral system will be “culturally more useful” than any secular system. For people who are sincerely religious, I think many of them are right on this point. They may be better off due to the emotional support provided by their community of believers who sincerely think the creator of the universe has commanded them to be kind to and to look after each other.

    I no more believe ‘supernatural’ sources are good sources of moral wisdom than you do.

    You seemed to imply that intrinsic values might require the supernatural. That might not mean that morality as you see it requires such things, but you seemed to think that morality as I see it does. If not, then I’m not sure what you were arguing about.

    I normally use prejudice in the sense of an adverse opinion or leaning formed without just grounds or before sufficient knowledge. In no way am I claiming our intrinsic value beliefs are adverse or based on unjust or insufficient grounds. Just the opposite is the case. Intrinsic value beliefs exist because of empirical experiences of millions of people over thousands of generations. Both our biology and our cultures have been shaped by the increased benefits of cooperation in groups caused by the behaviors motivated by our intrinsic value beliefs. What I classify as our intrinsic value beliefs are not in any way “prejudice”, they are the tried and true moral wisdom of the ages.

    Then we are in agreement here.

    Without ‘punishment’ of some kind, a moral society is impossible. Just as Rule of Law would be culturally useless without punishment for violators, moral standards would be culturally useless without ‘punishment’ (perhaps just public disapproval) of violators. Our moral emotion “righteous indignation” exists just to motivate such punishment, even at a cost to the actor, simply because such ‘punishment’ (or perhaps just correction for immature violators) is required to sustain a morality.

    To say that punishment is necessary doesn’t mean someone “deserves it.” The word “deserve” is ambiguous and I made it clear in what sense I don’t think people deserve punishment. That said, I don’t know how necessary punishment is. We can put people in a safe place to protect us and to educate dangerous people we don’t trust. However, this punishment business seems irrelevant to our conversation and it might be a good idea not to worry about it for now.

    I expect we are still in a state of mutual misunderstanding. I am working on a post entitled “An Outsider’s View of the Origins of Moral Foundations” that might be useful and will drop you a note when it is posted in case you have any interest.

    Aside from that, I will sign off unless you reply with something that I can’t resist replying to. Good luck in your studies.

    Yes, we are in a state of misunderstanding. Are you only interested in what needs to be done for us to gain from cooperation, or are you trying to do something more than that? I am not entirely sure what you are trying to do.

    Even so, I think moral realism is relevant to benefiting from cooperation. We need to know what counts as a “benefit.” When we punish people and coerce people into cooperation, that punishment and coercion might be wrong even if utility is promoted. When we encourage people to have more empathy rather than less, we might be encouraging them to be irrational for our own benefit. When we try to get other people to be moral the same problem crops up — it is possible that morality is irrational.

    It might be possible to have a moral theory that doesn’t require us to know a lot about intrinsic values. What you are doing might not contradict moral reality and it might very well be a morally praiseworthy goal to have. I have attempted to develop a theory that requires little understanding of intrinsic values, which I called neo-Aristonianism. The idea was that we should try to be virtuous, and the goals considered to be “good” would be those that are necessary to any conceivable and non-self-defeating sort of virtue. That was part of my master’s thesis and can be found here: http://ethicalrealism.wordpress.com/2010/02/08/two-new-stoic-ethical-theories-free-ebook/

    You suggested that we might need some practical moral life without having to know a lot about meta-ethics and so on. That’s basically what neo-Aristonianism is.

    Although it might be possible for an anti-realist to be a neo-Aristonian, I think it would ultimately make more sense to be a moral realist. The same might be true of your project.

    Comment by James Gray — December 7, 2010 @ 1:02 am | Reply

