Ethical Realism

September 23, 2010

Ravi Iyer’s Argument Against “Moral Absolutism”

Filed under: ethics, metaethics, philosophy — JW Gray @ 1:45 am

I just read “Sam Harris’ TED video and the danger of liberal atheist moral absolutism” by Ravi Iyer. He argues that moral absolutism – the belief that there are right and wrong actions – is dangerous based on some data that was collected.1 He uses this argument as a response to Sam Harris’s suggestion that science can help us find answers to moral questions. I find several problems with Iyer’s essay and argument. For example, Iyer seems to think that there are no true moral beliefs, which would imply that he disagrees that “torturing babies is wrong.”

Iyer starts off his essay with some agreement with Harris:

Harris correctly observes that “the only people who seem to generally agree with me (Harris) and who think that there are right or wrong answers to moral questions are religious demagogues, of one form or another, and of course they think there are right and wrong answers to moral questions because they got these answers from a voice in a whirlwind, not because they made an intelligent analysis of the conditions of human and animal well-being.” [My emphasis.]

The problem is that Harris is clearly wrong to suggest that only religious demagogues think there are right and wrong answers to moral questions. Who doesn’t think torturing babies is wrong? Just about everyone thinks it’s wrong. It’s not just religious people, and it’s not controversial in the least. Only someone completely detached from reality would believe that torturing babies isn’t wrong.

Iyer then says something quite puzzling:

His conception of morality is remarkably close to the construct of moral absolutism vs. moral relativism, measured on the YourMorals.org site using agreement to statements like “Different types of moralities cannot be compared as to ‘rightness’” with agreement indicating more absolutism and disagreement indicating relativism.

We aren’t told how exactly YourMorals defined “moral absolutism,” nor are we told how we can identify a person as a moral absolutist. The “statement” mentioned, “Different types of moralities cannot be compared as to ‘rightness,’” doesn’t even make sense. If that statement was used for the research, then I don’t know how the data could prove anything about what people believe. Moral absolutism was implied earlier to mean “there are right or wrong answers to moral questions,” and there is no way that the data proved that people reject this belief.

Ask people to rate their agreement with the statement “Torturing babies is wrong” on a scale of 1 to 10, with 10 being absolute agreement, and they will tend to answer 9 or 10. That would just as easily “prove” that people are moral absolutists!

Iyer then argues that people shouldn’t want others to agree with them about morality. He says, “I do not believe that my values should be the values of other people as well” (my emphasis). He wants to prove that moral agreement is a bad thing. (I suppose he might define moral absolutism as the view that moral agreement is a good thing.) He offers us two reasons to reject moral agreement:

Iyer’s objections to moral agreement

Objection 1

Even the most liberal person can be made to consider ideas of morality outside of the idea of the greatest well-being possible. For example, liberals believe in equity too, such that some people deserve more well-being than others. Jon Haidt’s brother-sister incest dilemma confounds both liberals and conservatives meaning that there is a universal ability to moralize disgust, even if it is less developed in some than others. Harm and well-being are not the only considerations.

Iyer argues that somehow incest proves that people shouldn’t agree with each other about morality, but his argument makes no sense. I will offer three objections to Iyer’s argument:

First, it’s not clear what incest has to do with wanting people to agree with our values. We should want them to agree with our values when those values correctly relate to the world. If incest is wrong, then we have good reason not to want other people to do it. In that case we should want people to agree with the truth.

I suggest that Iyer wants to argue the following:

  1. People disagree about morality.
  2. Therefore, people should disagree about morality.

He wants to suggest that people disagree about the moral status of incest (some people think that incest is wrong, and others think it isn’t); therefore, people should disagree about the moral status of incest. However, this argument is clearly fallacious. The fact that people do disagree doesn’t mean that they should. Some people think evolution is false, but scientists have found that evolution is probably true, so those people should agree with the scientists.

Second, Iyer suggests that those who believe incest is wrong must believe this despite the fact that it isn’t harmful. However, incest is “dangerous” for emotional reasons, to say the least. An action needn’t actually cause harm on a given occasion to be wrong; the risk of harm can be enough. If your brother or sister asked you to have sex, that alone could have seriously damaging effects on your relationship.

Third, disgust is not a good criterion for morality. No philosopher has ever claimed that it is.

Objection 2

His second reason for thinking we shouldn’t want people to agree with our moral beliefs is:

Moral absolutism generally leads to more human suffering, not less, as people fight great wars to enforce their vision of morality on others. Consider the below 2 graphs of your morals data relating moral relativism, the opposite of absolutism, and attitudes toward war.

It is here that Iyer comes to his main argument that tries to prove that moral absolutism is dangerous. I will offer six objections to this argument:

First, he didn’t explicitly define “moral absolutism,” but he implied earlier that it means “there are right and wrong answers to moral questions.” However, even anti-realists almost unanimously agree that torturing babies is wrong. So, under that definition nearly everyone is an absolutist, and the data doesn’t prove anything because there’s no way a study found people who thought that torturing babies isn’t wrong.

At one point Iyer implies that “moral absolutist” refers to a person who thinks, “I want other people to agree with my personal moral beliefs.” Given this definition, I would want everyone to agree with my moral beliefs, and you would want everyone to agree with yours (assuming we are both moral absolutists). Even so, it makes sense to want other people to agree with our moral knowledge. For example, we want them to agree that torturing babies is wrong because we know it’s wrong.

I suppose that Iyer might want to argue that moral realism is dangerous. He might equate “moral absolutism” with “moral realism.” However, there have been no psychological studies to find out how many people are moral realists and whether or not they support war, and even if there were such studies, I doubt they would give useful data because almost no one knows what “moral realism” means. If you asked someone whether their family members have real value, they would probably answer, “Yes,” even if they hold many anti-realist beliefs. People’s beliefs are not necessarily consistent.

A moral realist could be expected to want everyone to believe in moral facts—people should know the truth about morality. If that is his definition of moral absolutism, then there’s nothing wrong with moral absolutism at all. It would be great if we all knew the truth about morality.

Second, moral absolutism doesn’t lead to more suffering on any of these definitions. Believing that torturing babies is wrong doesn’t hurt society. Sure, we throw people who torture babies into prison, but that is a price we should be willing to pay to stop such insane people from hurting others. We should force people to refrain from torturing babies even when they don’t think it’s wrong.

Third, even if the statistics accurately measured someone’s adherence to “moral absolutism,” that wouldn’t prove that such beliefs cause suffering. Why not? Because inferring causation from a mere correlation is a fallacy: “Mistakes about type-level causation are the result of confusing correlation with causation” (Fallacy Files).

Correlation does not indicate causation. If the people who count as moral absolutists also tend to be unreasonable, arrogant, or fanatical, then the unreasonableness, arrogance, and fanaticism might be what actually causes the willingness to harm others. I find it hard to believe that many philosophers would be so unreasonable and willing to hurt others, but that’s not what the data represents. At best, it represents masses of people who know little to nothing about reasonableness or moral philosophy and therefore arrogantly think they know right from wrong better than others.
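
To make the confounding worry concrete, here is a toy simulation of my own (illustrative numbers only, not anything drawn from the YourMorals data). Suppose a hidden trait such as fanaticism independently inflates both an “absolutism” score and a “pro-war” score. The two scores will then correlate even though, by construction, neither causes the other.

```python
import random

random.seed(0)

# Purely hypothetical scores: a hidden trait ("fanaticism") feeds into
# BOTH the absolutism score and the pro-war score, with independent noise.
n = 10_000
fanaticism = [random.gauss(0, 1) for _ in range(n)]
absolutism = [f + random.gauss(0, 1) for f in fanaticism]  # driven by fanaticism
pro_war = [f + random.gauss(0, 1) for f in fanaticism]     # also driven by fanaticism

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Prints roughly 0.5 even though neither score has any causal effect on the other.
print(round(correlation(absolutism, pro_war), 2))
```

The correlation comes out around 0.5 even though the only causal arrows in the simulation run from the hidden trait to each score. A survey correlation between “absolutism” and pro-war attitudes is perfectly compatible with exactly this kind of structure.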

Fourth, the data is about being pro-war, but what was the criterion for counting someone as pro-war? Is a war used to stop genocide so terrible? Is a person who wants war to be waged when necessary to stop genocide more dangerous than usual? No.

Fifth, even if moral absolutism causes dangerous behavior, that wouldn’t prove that relativism (non-absolutism) doesn’t also cause dangerous behavior. The argument commits a suppressed evidence fallacy insofar as it doesn’t even consider evidence that relativism can be dangerous. People who don’t believe in right and wrong might be more willing to hurt people because “it’s not wrong.” What I find very likely is that fanaticism and unreasonableness are dangerous, rather than someone’s adherence to absolutism.

Sixth, in the comments section David Flint already pointed out that moral absolutism can’t be shown to be false just because it’s dangerous. Such an inference is obviously fallacious (an appeal to consequences). So, even if moral absolutism is dangerous and relativism isn’t, moral absolutism could still be true.

Iyer’s conclusion

Iyer concludes:

Moral absolutism, believing that you are more right about morality than others, can be thought of as the first step toward hypermoralism, harming others in support of your moral principles. Human beings are already good at believing that our moral system is superior, with war sometimes as the consequence…. Instead of narrowing our conceptions of morality, we should be working to expand our moral imaginations.

It is here that Iyer implies that “moral absolutist” denotes a person who believes, “I know more about morality than everyone else,” but that’s not what Harris wanted to say. He wanted to say that he still has a lot to learn about morality from future scientific data. Iyer has therefore changed Harris’s argument and committed a straw man fallacy: he attributed to Harris a position Harris never took.

Additionally, it’s not surprising that moral arrogance is dangerous, because it’s unreasonable. People think they know more about morality than they really do—they think they know more than philosophers who spend their whole lives trying to answer these difficult questions. Yes, believing you know things you don’t know is stupid. But that doesn’t mean no one should ever think “my moral opinion is superior to someone else’s.” The moral opinion of a philosopher could be better than that of someone else, and an idiot who thinks torturing babies isn’t wrong clearly holds an opinion inferior to that of the vast majority.

Finally, Iyer said something puzzling in the comments section: “[I]t’s interesting that being truly rational requires one to accept the limits of rationality. In this way, I think support for rationality can almost be considered religious.” I agree with everything here except the claim that supporting rationality is somehow religious. Is he kidding? Science and philosophy require evidence and justification – rationality – to a much higher degree than religion. Religious authority has no intellectual standards and doesn’t always have educational requirements. There is no peer review, and there is no worry that their ideas will be “unpublishable.”

Conclusion

Ravi Iyer makes little sense in his argument against moral absolutism. He doesn’t make it clear what exactly his argument is or what it proves. Whatever the argument is meant to be, I can’t imagine it succeeding. “Moral absolutism” can mean any of the following things:

  1. Moral realism.
  2. The belief that there are true moral beliefs.
  3. The belief that moral agreement is a good thing.
  4. The desire for others to agree with our personal moral beliefs.
  5. The belief that our personal moral beliefs are better than other people’s.

His arguments fail on every one of these definitions of moral absolutism: he in no way proved that any of these five forms of moral absolutism is dangerous.

What is disconcerting is that Ravi’s muddled thoughts about morality seem to endorse a juvenile form of moral relativism. He seems to think that no one should agree with other people concerning moral beliefs, but such agreement can make a lot of sense. Worse, he seems to think that there are no true moral beliefs, which would imply that torturing babies isn’t wrong after all.

What is even more disconcerting is that no one seemed to question Iyer’s absurd beliefs in the comments section, which suggests that many people are seriously confused about the nature of morality, and meta-ethics in particular.

Iyer is not the only person arguing that moral realism is dangerous. I discuss this concern in more detail here.

Notes

1 I don’t think that “moral absolutism” means what Iyer thinks it means, but for the sake of argument I will use the term in whatever way I infer Iyer to be using it.

5 Comments

  1. Hey, I just discovered your blog via a link from Common Sense Atheism, and I think it’s pretty great. Could you make the RSS feed show the full post? It’s my preferred way of reading blogs.

    Comment by josefjohann — September 28, 2010 @ 5:11 pm

  2. I’m glad you enjoyed reading some of my ideas or reviews. I don’t want the RSS to show too much at this moment because of the dangers of “duplicate content penalties” used by search engines.

    Comment by James Gray — September 28, 2010 @ 10:31 pm

  3. If you believed there wasn’t a penalty, or that there were other benefits that overrode the risk of such a penalty, would you change your mind?

    I did a Google search for “atheism blog” (without quotes) and checked whether the top 10 had full RSS feeds.

    #1 Atheist Revolution, has a full content rss feed

    #2 No God Blog, has a full rss feed

    #3 Austin’s Atheism Blog, does not have a full rss feed

    #4 Planet Atheism has a full rss feed

    #5 Atheist Media Blog has a full rss feed

    #6 Friendly atheist has full rss

    #7 The Atheist Blog has full rss

    #8 Atheist Central has full rss

    #9 The Raving Atheist seems not to. Hasn’t been updated in a while.

    #10 The Atheist Blogger has full rss

    Plus, the high-traffic Common Sense Atheism has full RSS. Also, Luke Muelhauser of CSA has compiled a list of the “top 20” (actually 16) Atheist blogs based on Alexa traffic.

    By my check, 12 out of the 16 on his list have full RSS feeds.

    That’s hardly a scientific survey, and maybe I miscounted. It’s possible they have encountered “duplicate content” penalties but have overcome them, or that they are still exposed to such penalties.

    In any case, it seems that blogs with full content RSS feeds can do well relative to other blogs, even with such possible penalties.

    My subjective sense is that Google and Bing are clever companies, and they realize that RSS is a mainstream form of content distribution. I presume they have made some effort to distinguish between abuses and valid uses of the medium.

    The last thing I would say is that at least 16 of your readers subscribe via RSS (that’s the number listed through Google Reader). I would wager that they are highly likely to prefer full content RSS to abbreviated summaries. It’s my preference at least.

    Comment by josefjohann — September 29, 2010 @ 11:17 pm

  4. I submitted a comment here about RSS feeds and search engine penalties. It had some html links. I hope it gets through the filters ok.

    Comment by josefjohann — September 29, 2010 @ 11:44 pm

    • It was stuck in the spam area, but I saved it for you.

      Google tends to be good at knowing what to count as duplicate content, but not always. Sometimes I have problems finding my own articles because the search results point to the wrong part of the website. I’ve even had problems where Google decided to return results for category tags on one of my websites rather than more appropriate results.

      I will try the “full feed” option and see what happens.

      Comment by James Gray — September 30, 2010 @ 12:06 am

