In A New Stoicism, Lawrence C. Becker attempts to develop a new form of Stoicism compatible with current scientific assumptions about reality, stripped of the ancient Stoics' metaphysical and psychological assumptions (such as the existence of a deity). Becker argues that his new Stoicism will still agree that virtue is the greatest good and that all virtuous people are happy. Becker does not spell out his new Stoicism's moral psychology in detail, but he does describe its understanding of virtue as "ideal agency." I will discuss his understanding of virtue and offer my objection to it. In particular, I find this understanding of virtue to be impractical.
What is ideal agency? An ideal agent is someone who has maximized the number of goals she successfully accomplishes, and that requires her to have perfectly coherent beliefs and goals (New 81). Our beliefs are coherent when none of them imply a contradiction. Our goals are coherent when none of them conflict (New 50). An ideal agent would succeed in accomplishing all of her goals, or at least as many of them as possible.
It might not even be possible to become an ideal agent, and Becker argues that progress towards ideal agency is itself increasingly dangerous:
Inferences are either valid or not; this property is not a matter of degree. Neither is soundness. One false proposition in an argument makes the inference unsound, period. Below the level of ideal agency, invalid or otherwise unsound inferences may have merely local effects. Imperfect agency may be incompletely integrated, for example, and less than comprehensively controlling. If so, and if we do not exercise agency properly or at all in some areas, then we may not notice conflicts between our endeavors, or attempt to generalize from one to another. This has obvious disadvantages for learning, of course, but it also has the peculiar advantage that the effects of our errors are limited simply because we fail to apply those errors widely. Thus, short of achieving perfection itself, the closer we get to ideal agency with respect to integrating all our endeavors and controlling them all with practical intelligence, the more likely it is that errors in anything we do will invalidate everything we do. (New 119).
Why do many of our false beliefs remain isolated rather than "comprehensively controlling"? We are uncertain about many things, and we don't usually take such uncertain beliefs too seriously. Our beliefs are likely to have an impact on our goals and behavior, but we usually don't rely on risky beliefs. For example, I might disbelieve in global warming while remaining uncertain about that disbelief. My uncertainty would then be a good reason to pump less rather than more carbon dioxide into the air, even when pumping more would be profitable.
Becker argues that people approaching ideal agency would be more coherent, and their uncertain beliefs would be more "comprehensively controlling" as a result. If a nearly-ideal agent believes that global warming is not happening, then she will likely pump more carbon dioxide into the air (so long as it's profitable).
I disagree with Becker that virtuous people would allow potentially dangerous uncertain beliefs to become "comprehensively controlling" in this way, for at least one reason: they would realize which of their beliefs could be false and dangerous, and they would not allow those beliefs to play a central role in their decision-making process. (These are much the same reasons Becker himself gives for thinking that "moral progress" is dangerous.) We should make sure that potentially dangerous uncertain beliefs remain relatively isolated and don't become "comprehensively controlling" because (a) that is what the virtue of modesty requires and (b) we know it's wrong to needlessly endanger people's well-being. To do otherwise would be arrogant and foolhardy.
Moreover, we must not reject highly plausible beliefs merely for the sake of coherence. No one should reject the highly plausible statement "murder is wrong" in favor of an implausible one like "genocide is good." At the very least, Becker (a) misunderstands "moral progress," (b) misunderstands virtue as "ideal agency," or (c) misunderstands "ideal agency" itself, or so I argue.
Examples of how virtuous people should treat their incompatible beliefs can be found in science and philosophy. For example, utilitarians are likely to admit that certain counterexamples against utilitarianism are plausible, but they don't always reject utilitarianism or the counterexample; it might be unreasonable to reject either. In a similar fashion, almost all scientific theories are potentially falsified by anomalies (such as the strange activity in space that motivated the hypothesis known as "dark matter"). Nonetheless, scientists do not usually think anomalies disprove their theories because (a) there are often background assumptions that could be falsified instead and (b) there might not be a better theory available.
Becker admits that the ideal agent will realize "that he is fallible and possibly mistaken about what virtue requires in particular cases" (New 132), but Becker doesn't consider that the nearly-virtuous are going to be cautious for the same reason, and this undermines his understanding of moral progress towards ideal agency.
What I endorse instead of ideal agency
I find "ideal agency" to be too abstract and impractical a way to understand virtue. My alternative is this: "A virtuous person has reasonable beliefs and goals based on appropriate justification, modesty, and caution." A belief is appropriately justified when we have more reason to accept it than to reject it, given our understanding of the world. We show modesty with our beliefs when we hold a belief tentatively, knowing that we aren't certain it's true. We show caution with our beliefs when we make sure potentially dangerous uncertain beliefs stay relatively isolated and don't play a central role in our decision-making process. Our goals, in turn, should rest on our beliefs; they embody appropriate justification, modesty, and caution when the beliefs they rest on do.
We know that some beliefs are more certain than others, and that some are more potentially dangerous than others. We know that some beliefs are highly plausible. We must not reject any highly plausible belief in favor of an uncertain one, and we are often unable to reasonably reject a highly plausible belief even when it contradicts other highly plausible beliefs. When that happens we have no choice but to risk being incoherent, and we then have a good reason to be less confident that our beliefs are true.
Although I don't agree with everything in Lawrence Becker's A New Stoicism, I still found it to be a good read, and he seems aware of the major objection that I discussed here. Becker's book is an incredibly ambitious attempt at capturing an entire ethical worldview; he even tries to naturalize his ethics and argue that you can get an 'ought' from an 'is.' Although I find his understanding of virtue impractical, perhaps Becker does not. If so, I would like to know how exactly he applies his own theory.