The greatest moral challenge of our time? It’s how we think about morality itself


Tim Dean, Honorary Associate in Philosophy, University of Sydney


In this series, we have invited philosophers to write about what they consider to be the greatest moral challenge of our time, and how we should address it.

It would be easy to conclude that there’s a deficit of morality in the world today. That if only people were more motivated to behave ethically, if only they made morality more prominent in their thinking, then the world would be a better place.

But when it comes to pinning down a single greatest moral challenge of our time, I’d argue that there’s not a lack of morality in the world; there’s too much.

In fact, I believe the greatest moral challenge of our time is our flawed conception of morality itself. The way we tend to think and talk about morality stifles our ability to engage with views other than our own, makes managing diversity and disagreement harder, and locks us into patterns of thinking that create more suffering and unrest than they resolve.

Right, wrong, black, white

Murder is wrong. This is not just a matter of subjective personal preference; it’s an objective fact. That means if it’s true for me, then it’s true for you and for everyone else too. And if someone claims that murder is OK, then they’re mistaken.

This is the way many of us tend to think and talk about many moral issues, not just murder. We refer to moral facts. And we prove our moral stance is the correct one by appealing to these facts.

Some of us justify these facts by appealing to commandments delivered to us by some divine being. Others justify them by appealing to natural rights, or to fundamental facts about human nature, such as that suffering is intrinsically bad, so we should prevent it wherever possible.

Many of us see morality as being like a science, in which we can learn new moral facts about the world, as when we discovered that slavery was wrong or that women ought to have the same rights as men, and updated our moral attitudes accordingly.

Three problems

I believe there are three major problems with this commonsense view of morality.

First: it’s wrong.

I’m not convinced there is any objective source of morality. I’ve spent a lot of time looking for one but am yet to find anything that isn’t deeply unconvincing.

Even if you believe there is a divine moral source that can dictate absolute right from wrong, it’s still down to us mere mortals to figure out the correct interpretation of its will. And history has shown that disagreements over rival interpretations of divine goodness can cause untold suffering, and still do today when dogmatists attempt to force their version of morality on the unwilling.

The second problem is that the idea of there being One True Morality is fundamentally at odds with the vast amount of moral diversity we see around the world. For example, there is widespread disagreement over whether the state should be able to execute criminals, whether terminally ill people have a right to die, and how sexuality can be expressed and practised in private and public.

If you believe that morality is a matter of objective truth, then this diversity means that most (if not all) people throughout the world are just plain wrong about their most deeply held moral convictions. If that’s the case, then it speaks poorly of our collective ability to understand what morality is at all.


The third problem is that this view of morality steers us towards thinking in black and white terms. It directs moral discourse towards proving other people wrong, or bending them to our moral views. It makes it much harder, if not impossible, for people to take other moral viewpoints seriously and engage in ethical negotiation or compromise.

This is one of the major reasons that discourse on social media, not to mention around the dinner table, is in such a terrible state right now. Those on one side simply write off their opponents as morally perverse, which shuts down any possibility of positive engagement or bipartisan cooperation.

Moral reform

So to respond to the greatest moral challenge of our time, we need to seriously rethink morality itself.

The best way to think about morality is as a cultural tool that we humans invented to help us live and work together in social situations. After all, we each have our interests that we want to pursue. They vary from individual to individual, but generally include things like being able to provide for ourselves and our loved ones, avoiding suffering and hardship, and pursuing pleasurable and fulfilling experiences.


The best way to satisfy these interests is to live socially, interacting and cooperating with others. But often our interests, or means of satisfying them, conflict with others. And that conflict can end up being bad for everyone.

So morality is the set of rules we live by that seek to reduce harm and help us live together effectively. We didn’t just discover it. It wasn’t handed to us from above. We had to figure it out for ourselves.

Of course, we haven’t always thought about morality in these terms, so we’ve justified it in any number of ways, often by appealing to religion or tradition. But we haven’t updated our thinking about morality to purge it of the baggage that came with religion and the rigid cultural conformity of the past.

We now know there are many ways of pursuing a fulfilling life, and the rules that promote one version might conflict with the ways of another. So moral rules that encourage strong communal bonds, for example, might conflict with the rules that enable people to choose their own life path.

Also, the problems that morality is trying to solve vary from one place to the next. People living in a small community in a resource-limited area like the Arctic tundra have different problems to solve than people living in a modern metropolis like Sydney or Melbourne, surrounded by abundance. If we apply the morality of the former to the latter environment, we can exacerbate conflict rather than resolve it.

All this means that morality should be less about “proving” your view and more about tolerance and negotiation. We need to learn to understand that different people - and different communities and cultures - have different conceptions of the good life. And we need to understand that the problems of social living, and their solutions, don’t apply equally well in every community.

It also means we must learn to become less morally dogmatic and more morally adaptable. Above all, we need to abandon the idea that morality is about objective facts that apply to all people at all times.

This doesn’t mean morality becomes an “anything goes” form of relativism. There is a way to judge the usefulness of a particular moral norm, namely: does it actually help solve the problems of social living for the people using it? Many norms don’t, and so deserve to be challenged or reformed.

In an increasingly interconnected, diverse and multicultural world, it is more important than ever that we reform the way we think and talk about morality itself. If we don’t, no matter what other moral challenge you think we face, it will only become harder to solve.

Later articles in this series include “Looking for truth in the Facebook age? Seek out views you aren’t going to ‘like’” and “We need to become global citizens to rebuild trust in our globalised world”.



What Is Morality?

Societal underpinnings of "right" and "wrong"


Morality refers to the set of standards that enable people to live cooperatively in groups. It’s what societies determine to be “right” and “acceptable.”

Sometimes, acting in a moral manner means individuals must sacrifice their own short-term interests to benefit society. Individuals who go against these standards may be considered immoral.

It may be helpful to differentiate between related terms, such as immoral, nonmoral, and amoral. Each has a slightly different meaning:

  • Immoral : Describes someone who purposely commits an offensive act, even though they know the difference between what is right and wrong
  • Nonmoral : Describes situations in which morality is not a concern
  • Amoral : Describes someone who acknowledges the difference between right and wrong, but who is not concerned with morality

Morality isn’t fixed. What’s considered acceptable in your culture might not be acceptable in another culture. Geographical regions, religion, family, and life experiences all influence morals. 

How Morals Are Established

Scholars don’t agree on exactly how morals are developed. However, there are several theories that have gained attention over the years:

  • Freud’s morality and the superego: Sigmund Freud suggested that moral development occurs as a person learns to set aside selfish needs (the id) in favor of the values of important socializing agents, such as parents, teachers, and institutions (the superego).
  • Piaget’s theory of moral development: Jean Piaget focused on the social-cognitive perspective of moral development. He theorized that moral development unfolds over time, alongside the progressing stages of cognitive development. Early on, children learn to adopt certain moral behaviors for their own sake (it makes them feel good) rather than just abiding by moral codes because they don’t want to get into trouble. By adolescence, individuals can think more abstractly and begin to make moral decisions based on higher universal principles and the greater good of society.
  • B.F. Skinner’s behavioral theory: B.F. Skinner focused on the power of external forces in shaping an individual’s development. For example, a child who receives praise for being kind may treat someone with kindness again out of a desire to receive more positive attention in the future.
  • Kohlberg’s moral reasoning: Lawrence Kohlberg proposed six stages of moral development that went beyond Piaget’s theory. By posing a series of questions or moral dilemmas, Kohlberg suggested that an adult’s stage of moral reasoning could be identified.
  • Gilligan’s perspective on gender differences in moral reasoning: Carol Gilligan criticized Kohlberg’s theory of moral development as male-centric. She argued that men are more justice-oriented in their moral reasoning, whereas women are more care-oriented. Within that context, moral dilemmas will have different solutions depending on which gender is doing the reasoning.

What Is the Basis of Morality?

There are different theories as to how morals are developed. However, most theories acknowledge the external factors (parents, community, etc.) that contribute to a child's moral development. These morals are intended to benefit the group that has created them.

Morals That Transcend Time and Culture

Most morals aren’t fixed. They usually shift and change over time.

Ideas about whether certain behaviors are moral—such as engaging in pre-marital sex, entering into same-sex relationships, and using cannabis—have shifted over time. While the bulk of the population once viewed these behaviors as “wrong,” the vast majority of the population now finds these activities to be “acceptable.”

In some regions, cultures, and religions, using contraception is considered immoral. In other parts of the world, some people consider contraception the moral thing to do, as it reduces unplanned pregnancy, manages the population, and reduces the risk of sexually transmitted illnesses.

7 Universal Morals

Some morals, however, seem to hold across the globe and across time. Researchers have discovered that these seven morals seem to be somewhat universal:

  • Defer to authority
  • Help your group
  • Love your family
  • Return favors
  • Respect others’ property
  • Be brave
  • Divide resources fairly

Examples of Morals

The following are common examples of morals that you may have been taught growing up, and may have even passed on to younger generations:

  • Have empathy
  • Don't steal
  • Tell the truth
  • Treat others as you want to be treated

People might adhere to these principles by:

  • Being an upstanding citizen
  • Doing volunteer work
  • Donating money to charity
  • Forgiving someone
  • Not gossiping about others
  • Offering their time and help to others

To get a sense of the types of morality you were raised with, think about what your parents, community and/or religious leaders told you that you "should" or "ought" to do.

Morality vs. Ethics

Some scholars don’t distinguish between morals and ethics. Both have to do with “right and wrong.”

However, some people believe morality is personal while ethics refer to the standards of a community.

For example, your community may not view premarital sex as a problem. But on a personal level, you might consider it immoral. By this definition, your morality would contradict the ethics of your community.

Morality and Laws

Both laws and morals are meant to regulate behavior in a community to allow people to live in harmony. Both have firm foundations in the concept that everyone should have autonomy and show respect to one another.

Legal thinkers interpret the relationship between laws and morality differently. Some argue that laws and morality are independent. This means that laws can’t be disregarded simply because they’re morally indefensible.

Others believe law and morality are interdependent. These thinkers believe that laws that claim to regulate behavioral expectations must be in harmony with moral norms. Therefore, all laws must secure the welfare of the individual and be in place for the good of the community.

Something like adultery may be considered immoral by some, but it’s legal in most states. Additionally, it’s illegal to drive slightly over the speed limit but it isn’t necessarily considered immoral to do so.

There may be times when some people argue that breaking the law is the “moral” thing to do. Stealing food to feed a starving person, for example, might be illegal but it also might be considered the “right thing” to do if it’s the only way to prevent someone from suffering or dying.

Think About It

It can be helpful to spend some time thinking about the morals that guide your decisions about things like friendship, money, education, and family. Understanding what’s really important to you can help you understand yourself better and it may make difficult decisions easier.

Sources

Merriam-Webster. A lesson on 'unmoral,' 'immoral,' 'nonmoral,' and 'amoral.'

Ellemers N, van der Toorn J, Paunov Y, van Leeuwen T. The psychology of morality: A review and analysis of empirical studies published from 1940 through 2017. Pers Soc Psychol Rev. 2019;23(4):332-366. doi:10.1177/1088868318811759

Curry OS, Mullins DA, Whitehouse H. Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology. 2019;60(1):47-69. doi:10.1086/701478

Encyclopædia Britannica. What's the difference between morality and ethics?

Moka-Mubelo W. Law and morality. In: Reconciling Law and Morality in Human Rights Discourse. Vol 3. Springer International Publishing; 2017:51-88. doi:10.1007/978-3-319-49496-8_3

By Amy Morin, LCSW Amy Morin, LCSW, is a psychotherapist and international bestselling author. Her books, including "13 Things Mentally Strong People Don't Do," have been translated into more than 40 languages. Her TEDx talk,  "The Secret of Becoming Mentally Strong," is one of the most viewed talks of all time.


Moral Reasoning

While moral reasoning can be undertaken on another’s behalf, it is paradigmatically an agent’s first-personal (individual or collective) practical reasoning about what, morally, they ought to do. Philosophical examination of moral reasoning faces both distinctive puzzles – about how we recognize moral considerations and cope with conflicts among them and about how they move us to act – and distinctive opportunities for gleaning insight about what we ought to do from how we reason about what we ought to do.

Part I of this article characterizes moral reasoning more fully, situates it in relation both to first-order accounts of what morality requires of us and to philosophical accounts of the metaphysics of morality, and explains the interest of the topic. Part II then takes up a series of philosophical questions about moral reasoning, so understood and so situated.

1. The Philosophical Importance of Moral Reasoning

1.1 Defining “Moral Reasoning”

This article takes up moral reasoning as a species of practical reasoning – that is, as a type of reasoning directed towards deciding what to do and, when successful, issuing in an intention (see entry on practical reason). Of course, we also reason theoretically about what morality requires of us; but the nature of purely theoretical reasoning about ethics is adequately addressed in the various articles on ethics. It is also true that, on some understandings, moral reasoning directed towards deciding what to do involves forming judgments about what one ought, morally, to do. On these understandings, asking what one ought (morally) to do can be a practical question, a certain way of asking about what to do. (See section 1.5 on the question of whether this is a distinctive practical question.) In order to do justice to the full range of philosophical views about moral reasoning, we will need to have a capacious understanding of what counts as a moral question. For instance, since a prominent position about moral reasoning is that the relevant considerations are not codifiable, we would beg a central question if we here defined “morality” as involving codifiable principles or rules. For present purposes, we may understand issues about what is right or wrong, or virtuous or vicious, as raising moral questions.

Even when moral questions explicitly arise in daily life, just as when we are faced with child-rearing, agricultural, and business questions, sometimes we act impulsively or instinctively rather than pausing to reason, not just about what to do, but about what we ought to do. Jean-Paul Sartre described a case of one of his students who came to him in occupied Paris during World War II, asking advice about whether to stay by his mother, who otherwise would have been left alone, or rather to go join the forces of the Free French, then massing in England (Sartre 1975). In the capacious sense just described, this is probably a moral question; and the young man paused long enough to ask Sartre’s advice. Does that mean that this young man was reasoning about his practical question? Not necessarily. Indeed, Sartre used the case to expound his skepticism about the possibility of addressing such a practical question by reasoning. But what is reasoning?

Reasoning, of the sort discussed here, is active or explicit thinking, in which the reasoner, responsibly guided by her assessments of her reasons (Kolodny 2005) and of any applicable requirements of rationality (Broome 2009, 2013), attempts to reach a well-supported answer to a well-defined question (Hieronymi 2013). For Sartre’s student, at least such a question had arisen. Indeed, the question was relatively definite, implying that the student had already engaged in some reflection about the various alternatives available to him – a process that has well been described as an important phase of practical reasoning, one that aptly precedes the effort to make up one’s mind (Harman 1986, 2).

Characterizing reasoning as responsibly conducted thinking of course does not suffice to analyze the notion. For one thing, it fails to address the fraught question of reasoning’s relation to inference (Harman 1986, Broome 2009). In addition, it does not settle whether formulating an intention about what to do suffices to conclude practical reasoning or whether such intentions cannot be adequately worked out except by starting to act. Perhaps one cannot adequately reason about how to repair a stone wall or how to make an omelet with the available ingredients without actually starting to repair or to cook (cf. Fernandez 2016). Still, it will do for present purposes. It suffices to make clear that the idea of reasoning involves norms of thinking. These norms of aptness or correctness in practical thinking surely do not require us to think along a single prescribed pathway, but rather permit only certain pathways and not others (Broome 2013, 219). Even so, we doubtless often fail to live up to them.

Our thinking, including our moral thinking, is often not explicit. We could say that we also reason tacitly, thinking in much the same way as during explicit reasoning, but without any explicit attempt to reach well-supported answers. In some situations, even moral ones, we might be ill-advised to attempt to answer our practical questions by explicit reasoning. In others, it might even be a mistake to reason tacitly – because, say, we face a pressing emergency. “Sometimes we should not deliberate about what to do, and just drive” (Arpaly and Schroeder 2014, 50). Yet even if we are not called upon to think through our options in all situations, and even if sometimes it would be positively better if we did not, still, if we are called upon to do so, then we should conduct our thinking responsibly: we should reason.

1.2 Empirical Challenges to Moral Reasoning

Recent work in empirical ethics has indicated that even when we are called upon to reason morally, we often do so badly. When asked to give reasons for our moral intuitions, we are often “dumbfounded,” finding nothing to say in their defense (Haidt 2001). Our thinking about hypothetical moral scenarios has been shown to be highly sensitive to arbitrary variations, such as in the order of presentation. Even professional philosophers have been found to be prone to such lapses of clear thinking (e.g., Schwitzgebel & Cushman 2012). Some of our dumbfounding and confusion has been laid at the feet of our having both a fast, more emotional way of processing moral stimuli and a slow, more cognitive way (e.g., Greene 2014). An alternative explanation of moral dumbfounding looks to social norms of moral reasoning (Sneddon 2007). And a more optimistic reaction to our confusion sees our established patterns of “moral consistency reasoning” as being well-suited to cope with the clashing input generated by our fast and slow systems (Campbell & Kumar 2012) or as constituting “a flexible learning system that generates and updates a multidimensional evaluative landscape to guide decision and action” (Railton, 2014, 813).

Eventually, such empirical work on our moral reasoning may yield revisions in our norms of moral reasoning. This has not yet happened. This article is principally concerned with philosophical issues posed by our current norms of moral reasoning. For example, given those norms and assuming that they are more or less followed, how do moral considerations enter into moral reasoning, get sorted out by it when they clash, and lead to action? And what do those norms indicate about what we ought to do?

1.3 Situating Moral Reasoning

The topic of moral reasoning lies in between two other commonly addressed topics in moral philosophy. On the one side, there is the first-order question of what moral truths there are, if any. For instance, are there any true general principles of morality, and if so, what are they? At this level utilitarianism competes with Kantianism, for instance, and both compete with anti-theorists of various stripes, who recognize only particular truths about morality (Clarke & Simpson 1989). On the other side, a quite different sort of question arises from seeking to give a metaphysical grounding for moral truths or for the claim that there are none. Supposing there are some moral truths, what makes them true? What account can be given of the truth-conditions of moral statements? Here arise familiar questions of moral skepticism and moral relativism ; here, the idea of “a reason” is wielded by many hoping to defend a non-skeptical moral metaphysics (e.g., Smith 2013). The topic of moral reasoning lies in between these two other familiar topics in the following simple sense: moral reasoners operate with what they take to be morally true but, instead of asking what makes their moral beliefs true, they proceed responsibly to attempt to figure out what to do in light of those considerations. The philosophical study of moral reasoning concerns itself with the nature of these attempts.

These three topics clearly interrelate. Conceivably, the relations between them would be so tight as to rule out any independent interest in the topic of moral reasoning. For instance, if all that could usefully be said about moral reasoning were that it is a matter of attending to the moral facts, then all interest would devolve upon the question of what those facts are – with some residual focus on the idea of moral attention (McNaughton 1988). Alternatively, it might be thought that moral reasoning is simply a matter of applying the correct moral theory via ordinary modes of deductive and empirical reasoning. Again, if that were true, one’s sufficient goal would be to find that theory and get the non-moral facts right. Neither of these reductive extremes seems plausible, however. Take the potential reduction to getting the facts right, first.

Contemporary advocates of the importance of correctly perceiving the morally relevant facts tend to focus on facts that we can perceive using our ordinary sense faculties and our ordinary capacities of recognition, such as that this person has an infection or that this person needs my medical help. On such a footing, it is possible to launch powerful arguments against the claim that moral principles undergird every moral truth (Dancy 1993) and for the claim that we can sometimes perfectly well decide what to do by acting on the reasons we perceive instinctively – or as we have been trained – without engaging in any moral reasoning. Yet this is not a sound footing for arguing that moral reasoning, beyond simply attending to the moral facts, is always unnecessary. On the contrary, we often find ourselves facing novel perplexities and moral conflicts in which our moral perception is an inadequate guide. In addressing the moral questions surrounding whether society ought to enforce surrogate-motherhood contracts, for instance, the scientific and technological novelties involved make our moral perceptions unreliable and shaky guides. When a medical researcher who has noted an individual’s illness also notes the fact that diverting resources to caring, clinically, for this individual would inhibit the progress of my research, thus harming the long-term health chances of future sufferers of this illness, he or she comes face to face with conflicting moral considerations. At this juncture, it is far less plausible or satisfying simply to say that, employing one’s ordinary sensory and recognitional capacities, one sees what is to be done, both things considered. To posit a special faculty of moral intuition that generates such overall judgments in the face of conflicting considerations is to wheel in a deus ex machina. It cuts inquiry short in a way that serves the purposes of fiction better than it serves the purposes of understanding. It is plausible instead to suppose that moral reasoning comes in at this point (Campbell & Kumar 2012).

For present purposes, it is worth noting, David Hume and the moral sense theorists do not count as short-circuiting our understanding of moral reasoning in this way. It is true that Hume presents himself, especially in the Treatise of Human Nature , as a disbeliever in any specifically practical or moral reasoning. In doing so, however, he employs an exceedingly narrow definition of “reasoning” (Hume 2000, Book I, Part iii, sect. ii). For present purposes, by contrast, we are using a broader working gloss of “reasoning,” one not controlled by an ambition to parse out the relative contributions of (the faculty of) reason and of the passions. And about moral reasoning in this broader sense, as responsible thinking about what one ought to do, Hume has many interesting things to say, starting with the thought that moral reasoning must involve a double correction of perspective (see section 2.4 ) adequately to account for the claims of other people and of the farther future, a double correction that is accomplished with the aid of the so-called “calm passions.”

If we turn from the possibility that perceiving the facts aright will displace moral reasoning to the possibility that applying the correct moral theory will displace – or exhaust – moral reasoning, there are again reasons to be skeptical. One reason is that moral theories do not arise in a vacuum; instead, they develop against a broad backdrop of moral convictions. Insofar as the first potentially reductive strand, emphasizing the importance of perceiving moral facts, has force – and it does have some – it also tends to show that moral theories need to gain support by systematizing or accounting for a wide range of moral facts (Sidgwick 1981). As in most other arenas in which theoretical explanation is called for, the degree of explanatory success will remain partial and open to improvement via revisions in the theory (see section 2.6 ). Unlike the natural sciences, however, moral theory is an endeavor that, as John Rawls once put it, is “Socratic” in that it is a subject pertaining to actions “shaped by self-examination” (Rawls 1971, 48f.). If this observation is correct, it suggests that the moral questions we set out to answer arise from our reflections about what matters. By the same token – and this is the present point – a moral theory is subject to being overturned because it generates concrete implications that do not sit well with us on due reflection. This being so, and granting the great complexity of the moral terrain, it seems highly unlikely that we will ever generate a moral theory on the basis of which we can serenely and confidently proceed in a deductive way to generate answers to what we ought to do in all concrete cases. This conclusion is reinforced by a second consideration, namely that insofar as a moral theory is faithful to the complexity of the moral phenomena, it will contain within it many possibilities for conflicts among its own elements. Even if it does deploy some priority rules, these are unlikely to be able to cover all contingencies. Hence, some moral reasoning that goes beyond the deductive application of the correct theory is bound to be needed.

In short, a sound understanding of moral reasoning will not take the form of reducing it to one of the other two levels of moral philosophy identified above. Neither the demand to attend to the moral facts nor the directive to apply the correct moral theory exhausts or sufficiently describes moral reasoning.

1.4 Gaining Moral Insight from Studying Moral Reasoning

In addition to posing philosophical problems in its own right, moral reasoning is of interest on account of its implications for moral facts and moral theories. Accordingly, attending to moral reasoning will often be useful to those whose real interest is in determining the right answer to some concrete moral problem or in arguing for or against some moral theory. The characteristic ways we attempt to work through a given sort of moral quandary can be just as revealing about our considered approaches to these matters as are any bottom-line judgments we may characteristically come to. Further, we may have firm, reflective convictions about how a given class of problems is best tackled, deliberatively, even when we remain in doubt about what should be done. In such cases, attending to the modes of moral reasoning that we characteristically accept can usefully expand the set of moral information from which we start, suggesting ways to structure the competing considerations.

Facts about the nature of moral inference and moral reasoning may have important direct implications for moral theory. For instance, it might be taken to be a condition of adequacy of any moral theory that it play a practically useful role in our efforts at self-understanding and deliberation. It should be deliberation-guiding (Richardson 2018, §1.2). If this condition is accepted, then any moral theory that would require agents to engage in abstruse or difficult reasoning may be inadequate for that reason, as would be any theory that assumes that ordinary individuals are generally unable to reason in the ways that the theory calls for. J.S. Mill (1979) conceded that we are generally unable to do the calculations called for by utilitarianism, as he understood it, and argued that we should be consoled by the fact that, over the course of history, experience has generated secondary principles that guide us well enough. Rather more dramatically, R. M. Hare defended utilitarianism as well capturing the reasoning of ideally informed and rational “archangels” (1981). Taking seriously a deliberation-guidance desideratum for moral theory would favor, instead, theories that more directly inform efforts at moral reasoning by us “proletarians,” to use Hare’s contrasting term.

Accordingly, the close relations between moral reasoning, the moral facts, and moral theory do not eliminate moral reasoning as a topic of interest. To the contrary, because moral reasoning has important implications about moral facts and moral theories, these close relations lend additional interest to the topic of moral reasoning.

1.5 How Distinct Is Moral Reasoning from Practical Reasoning in General?

The final threshold question is whether moral reasoning is truly distinct from practical reasoning more generally understood. (The question of whether moral reasoning, even if practical, is structurally distinct from theoretical reasoning that simply proceeds from a proper recognition of the moral facts has already been implicitly addressed and answered, for the purposes of the present discussion, in the affirmative.) In addressing this final question, it is difficult to overlook the way different moral theories project quite different models of moral reasoning – again a link that might be pursued by the moral philosopher seeking leverage in either direction. For instance, Aristotle’s views might be as follows: a quite general account can be given of practical reasoning, which includes selecting means to ends and determining the constituents of a desired activity. The reasoning of a vicious person differs from that of a virtuous person not at all in its structure, but only in its content, for the virtuous person pursues true goods, whereas the vicious person simply gets side-tracked by apparent ones. To be sure, the virtuous person may be able to achieve a greater integration of his or her ends via practical reasoning (because of the way the various virtues cohere), but this is a difference in the result of practical reasoning and not in its structure. At an opposite extreme, Kant’s categorical imperative has been taken to generate an approach to practical reasoning (via a “typic of practical judgment”) that is distinctive from other practical reasoning both in the range of considerations it addresses and its structure (Nell 1975). Whereas prudential practical reasoning, on Kant’s view, aims to maximize one’s happiness, moral reasoning addresses the potential universalizability of the maxims – roughly, the intentions – on which one acts. Views intermediate between Aristotle’s and Kant’s in this respect include Hare’s utilitarian view and Aquinas’ natural-law view. On Hare’s view, just as an ideal prudential agent applies maximizing rationality to his or her own preferences, an ideal moral agent’s reasoning applies maximizing rationality to the set of everyone’s preferences that its archangelic capacity for sympathy has enabled it to internalize (Hare 1981). Thomistic, natural-law views share the Aristotelian view about the general unity of practical reasoning in pursuit of the good, rightly or wrongly conceived, but add that practical reason, in addition to demanding that we pursue the fundamental human goods, also, and distinctly, demands that we not attack these goods. In this way, natural-law views incorporate some distinctively moral structuring – such as the distinctions between doing and allowing and the so-called doctrine of double effect’s distinction between intending as a means and accepting as a by-product – within a unified account of practical reasoning (see entry on the natural law tradition in ethics). In light of this diversity of views about the relation between moral reasoning and practical or prudential reasoning, a general account of moral reasoning that does not want to presume the correctness of a definite moral theory will do well to remain agnostic on the question of how moral reasoning relates to non-moral practical reasoning.

2. General Philosophical Questions about Moral Reasoning

To be sure, most great philosophers who have addressed the nature of moral reasoning were far from agnostic about the content of the correct moral theory, and developed their reflections about moral reasoning in support of or in derivation from their moral theory. Nonetheless, contemporary discussions that are somewhat agnostic about the content of moral theory have arisen around important and controversial aspects of moral reasoning. We may group these around the following seven questions:

  • How do relevant considerations get taken up in moral reasoning?
  • Is it essential to moral reasoning for the considerations it takes up to be crystallized into, or ranged under, principles?
  • How do we sort out which moral considerations are most relevant?
  • In what ways do motivational elements shape moral reasoning?
  • What is the best way to model the kinds of conflicts among considerations that arise in moral reasoning?
  • Does moral reasoning include learning from experience and changing one’s mind?
  • How can we reason, morally, with one another?

The remainder of this article takes up these seven questions in turn.

2.1 Moral Uptake

One advantage to defining “reasoning” capaciously, as here, is that it helps one recognize that the processes whereby we come to be concretely aware of moral issues are integral to moral reasoning as it might more narrowly be understood. Recognizing moral issues when they arise requires a highly trained set of capacities and a broad range of emotional attunements. Philosophers of the moral sense school of the 17th and 18th centuries stressed innate emotional propensities, such as sympathy with other humans. Classically influenced virtue theorists, by contrast, give more importance to the training of perception and the emotional growth that must accompany it. Among contemporary philosophers working in empirical ethics there is a similar divide, with some arguing that we process situations using an innate moral grammar (Mikhail 2011) and some emphasizing the role of emotions in that processing (Haidt 2001, Prinz 2007, Greene 2014). For the moral reasoner, a crucial task for our capacities of moral recognition is to mark out certain features of a situation as being morally salient. Sartre’s student, for instance, focused on the competing claims of his mother and the Free French, giving them each an importance to his situation that he did not give to eating French cheese or wearing a uniform. To say that certain features are marked out as morally salient is not to imply that the features thus singled out answer to the terms of some general principle or other: we will come to the question of particularism, below. Rather, it is simply to say that recognitional attention must have a selective focus.

What will be counted as a moral issue or difficulty, in the sense requiring moral agents’ recognition, will again vary by moral theory. Not all moral theories would count filial loyalty and patriotism as moral duties. It is only at great cost, however, that any moral theory could claim to do without a layer of moral thinking involving situation-recognition. A calculative sort of utilitarianism, perhaps, might be imagined according to which there is no need to spot a moral issue or difficulty, as every choice node in life presents the agent with the same, utility-maximizing task. Perhaps Jeremy Bentham held a utilitarianism of this sort. For the more plausible utilitarianisms mentioned above, however, such as Mill’s and Hare’s, agents need not always calculate afresh, but must instead be alive to the possibility that because the ordinary “landmarks and direction posts” lead one astray in the situation at hand, they must make recourse to a more direct and critical mode of moral reasoning. Recognizing whether one is in one of those situations thus becomes the principal recognitional task for the utilitarian agent. (Whether this task can be suitably confined, of course, has long been one of the crucial questions about whether such indirect forms of utilitarianism, attractive on other grounds, can prevent themselves from collapsing into a more Benthamite, direct form: cf. Brandt 1979.)

Note that, as we have been describing moral uptake, we have not implied that what is perceived is ever a moral fact. Rather, it might be that what is perceived is some ordinary, descriptive feature of a situation that is, for whatever reason, morally relevant. An account of moral uptake will interestingly impinge upon the metaphysics of moral facts, however, if it holds that moral facts can be perceived. Importantly intermediate, in this respect, is the set of judgments involving so-called “thick” evaluative concepts – for example, that someone is callous, boorish, just, or brave (see the entry on thick ethical concepts ). These do not invoke the supposedly “thinner” terms of overall moral assessment, “good,” or “right.” Yet they are not innocent of normative content, either. Plainly, we do recognize callousness when we see clear cases of it. Plainly, too – whatever the metaphysical implications of the last fact – our ability to describe our situations in these thick normative terms is crucial to our ability to reason morally.

It is debated how closely our abilities of moral discernment are tied to our moral motivations. For Aristotle and many of his ancient successors, the two are closely linked, in that someone not brought up into virtuous motivations will not see things correctly. For instance, cowards will overestimate dangers, the rash will underestimate them, and the virtuous will perceive them correctly ( Eudemian Ethics 1229b23–27). By the Stoics, too, having the right motivations was regarded as intimately tied to perceiving the world correctly; but whereas Aristotle saw the emotions as allies to enlist in support of sound moral discernment, the Stoics saw them as inimical to clear perception of the truth (cf. Nussbaum 2001).

2.2 Moral Principles

That one discerns features and qualities of some situation that are relevant to sizing it up morally does not yet imply that one explicitly or even implicitly employs any general claims in describing it. Perhaps all that one perceives are particularly embedded features and qualities, without saliently perceiving them as instantiations of any types. Sartre’s student may be focused on his mother and on the particular plights of several of his fellow Frenchmen under Nazi occupation, rather than on any purported requirements of filial duty or patriotism. Having become aware of some moral issue in such relatively particular terms, he might proceed directly to sorting out the conflict between them. Another possibility, however, and one that we frequently seem to exploit, is to formulate the issue in general terms: “An only child should stick by an otherwise isolated parent,” for instance, or “one should help those in dire need if one can do so without significant personal sacrifice.” Such general statements would be examples of “moral principles,” in a broad sense. (We do not here distinguish between principles and rules. Those who do include Dworkin 1978 and Gert 1998.)

We must be careful, here, to distinguish the issue of whether principles commonly play an implicit or explicit role in moral reasoning, including well-conducted moral reasoning, from the issue of whether principles necessarily figure as part of the basis of moral truth. The latter issue is best understood as a metaphysical question about the nature and basis of moral facts. What is currently known as moral particularism is the view that there are no defensible moral principles and that moral reasons, or well-grounded moral facts, can exist independently of any basis in a general principle. A contrary view holds that moral reasons are necessarily general, whether because the sources of their justification are all general or because a moral claim is ill-formed if it contains particularities. But whether principles play a useful role in moral reasoning is certainly a different question from whether principles play a necessary role in accounting for the ultimate truth-conditions of moral statements. Moral particularism, as just defined, denies their latter role. Some moral particularists seem also to believe that moral particularism implies that moral principles cannot soundly play a useful role in reasoning. This claim is disputable, as it seems a contingent matter whether the relevant particular facts arrange themselves in ways susceptible to general summary and whether our cognitive apparatus can cope with them at all without employing general principles. Although the metaphysical controversy about moral particularism lies largely outside our topic, we will revisit it in section 2.5 , in connection with the weighing of conflicting reasons.

With regard to moral reasoning, while there are some self-styled “anti-theorists” who deny that abstract structures of linked generalities are important to moral reasoning (Clarke, et al. 1989), it is more common to find philosophers who recognize both some role for particular judgment and some role for moral principles. Thus, neo-Aristotelians like Nussbaum who emphasize the importance of “finely tuned and richly aware” particular discernment also regard that discernment as being guided by a set of generally describable virtues whose general descriptions will come into play in at least some kinds of cases (Nussbaum 1990). “Situation ethicists” of an earlier generation (e.g. Fletcher 1997) emphasized the importance of taking into account a wide range of circumstantial differentiae, but against the background of some general principles whose application the differentiae help sort out. Feminist ethicists influenced by Carol Gilligan’s pathbreaking work on moral development have stressed the moral centrality of the kind of care and discernment that are salient and well-developed by people immersed in particular relationships (Held 1995); but this emphasis is consistent with such general principles as “one ought to be sensitive to the wishes of one’s friends” (see the entry on feminist moral psychology). Again, if we distinguish the question of whether principles are useful in responsibly-conducted moral thinking from the question of whether moral reasons ultimately all derive from general principles, and concentrate our attention solely on the former, we will see that some of the opposition to general moral principles melts away.

It should be noted that we have been using a weak notion of generality, here. It is contrasted only with the kind of strict particularity that comes with indexicals and proper names. General statements or claims – ones that contain no such particular references – are not necessarily universal generalizations, making an assertion about all cases of the mentioned type. Thus, “one should normally help those in dire need” is a general principle, in this weak sense. Possibly, such logically loose principles would be obfuscatory in the context of an attempt to reconstruct the ultimate truth-conditions of moral statements. Such logically loose principles would clearly be useless in any attempt to generate a deductively tight “practical syllogism.” In our day-to-day, non-deductive reasoning, however, such logically loose principles appear to be quite useful. (Recall that we are understanding “reasoning” quite broadly, as responsibly conducted thinking: nothing in this understanding of reasoning suggests any uniquely privileged place for deductive inference: cf. Harman 1986. For more on defeasible or “default” principles, see section 2.5 .)

In this terminology, establishing that general principles are essential to moral reasoning leaves open the further question whether logically tight, or exceptionless, principles are also essential to moral reasoning. Certainly, much of our actual moral reasoning seems to be driven by attempts to recast or reinterpret principles so that they can be taken to be exceptionless. Adherents and inheritors of the natural-law tradition in ethics (e.g. Donagan 1977) are particularly supple defenders of exceptionless moral principles, as they are able to avail themselves not only of a refined tradition of casuistry but also of a wide array of subtle – some would say overly subtle – distinctions, such as those mentioned above between doing and allowing and between intending as a means and accepting as a byproduct.

A related role for a strong form of generality in moral reasoning comes from the Kantian thought that one’s moral reasoning must counter one’s tendency to make exceptions for oneself. Accordingly, Kant holds, as we have noted, that we must ask whether the maxims of our actions can serve as universal laws. As most contemporary readers understand this demand, it requires that we engage in a kind of hypothetical generalization across agents, and ask about the implications of everybody acting that way in those circumstances. The grounds for developing Kant’s thought in this direction have been well explored (e.g., Nell 1975, Korsgaard 1996, Engstrom 2009). The importance and the difficulties of such a hypothetical generalization test in ethics were discussed in the influential works Gibbard 1965 and Goldman 1974.

2.3 Sorting Out Which Considerations Are Most Relevant

Whether or not moral considerations need the backing of general principles, we must expect situations of action to present us with multiple moral considerations. In addition, of course, these situations will also present us with a lot of information that is not morally relevant. On any realistic account, a central task of moral reasoning is to sort out relevant considerations from irrelevant ones, as well as to determine which are especially relevant and which only slightly so. That a certain woman is Sartre’s student’s mother seems arguably to be a morally relevant fact; what about the fact (supposing it is one) that she has no other children to take care of her? Addressing the task of sorting what is morally relevant from what is not, some philosophers have offered general accounts of morally relevant features. Others have given accounts of how we sort out which of the relevant features are most relevant, a process of thinking that sometimes goes by the name of “casuistry.”

Before we look at ways of sorting out which features are morally relevant or most morally relevant, it may be useful to note a prior step taken by some casuists, which was to attempt to set out a schema that would capture all of the features of an action or proposed action. The Roman Catholic casuists of the middle ages did so by drawing on Aristotle’s categories. Accordingly, they asked, where, when, why, how, by what means, to whom, or by whom the action in question is to be done or avoided (see Jonsen and Toulmin 1988). The idea was that complete answers to these questions would contain all of the features of the action, of which the morally relevant ones would be a subset. Although metaphysically uninteresting, the idea of attempting to list all of an action’s features in this way represents a distinctive – and extreme – heuristic for moral reasoning.

Turning to the morally relevant features, one of the most developed accounts is Bernard Gert’s. He develops a list of features relevant to whether the violation of a moral rule should be generally allowed. Given the designed function of Gert’s list, it is natural that most of his morally relevant features make reference to the set of moral rules he defended. Accordingly, some of Gert’s distinctions between dimensions of relevant features reflect controversial stances in moral theory. For example, one of the dimensions is whether “the violation [is] done intentionally or only knowingly” (Gert 1998, 234) – a distinction that those who reject the doctrine of double effect would not find relevant.

In deliberating about what we ought, morally, to do, we also often attempt to figure out which considerations are most relevant. To take an issue mentioned above: Are surrogate motherhood contracts more akin to agreements with babysitters (clearly acceptable) or to agreements with prostitutes (not clearly so)? That is, which feature of surrogate motherhood is more relevant: that it involves a contract for child-care services or that it involves payment for the intimate use of the body? Both in such relatively novel cases and in more familiar ones, reasoning by analogy plays a large role in ordinary moral thinking. When this reasoning by analogy starts to become systematic – a social achievement that requires some historical stability and reflectiveness about what are taken to be moral norms – it begins to exploit comparison to cases that are “paradigmatic,” in the sense of being taken as settled. Within such a stable background, a system of casuistry can develop that lends some order to the appeal to analogous cases. To use an analogy: the availability of a widely accepted and systematic set of analogies and the availability of what are taken to be moral norms may stand to one another as chicken does to egg: each may be an indispensable moment in the genesis of the other.

Casuistry, thus understood, is an indispensable aid to moral reasoning. At least, that it is would follow from conjoining two features of the human moral situation mentioned above: the multifariousness of moral considerations that arise in particular cases and the need and possibility for employing moral principles in sound moral reasoning. We require moral judgment, not simply a deductive application of principles or a particularist bottom-line intuition about what we should do. This judgment must be responsible to moral principles yet cannot be straightforwardly derived from them. Accordingly, our moral judgment is greatly aided if it is able to rest on the sort of heuristic support that casuistry offers. Thinking through which of two analogous cases provides a better key to understanding the case at hand is a useful way of organizing our moral reasoning, and one on which we must continue to depend. If we lack the kind of broad consensus on a set of paradigm cases on which the Renaissance Catholic or Talmudic casuists could draw, our casuistic efforts will necessarily be more controversial and tentative than theirs; but we are not wholly without settled cases from which to work. Indeed, as Jonsen and Toulmin suggest at the outset of their thorough explanation and defense of casuistry, the depth of disagreement about moral theories that characterizes a pluralist society may leave us having to rest comparatively more weight on the cases about which we can find agreement than did the classic casuists (Jonsen and Toulmin 1988).

Despite the long history of casuistry, there is little that can usefully be said about how one ought to reason about competing analogies. In the law, where previous cases have precedential importance, more can be said. As Sunstein notes (Sunstein 1996, chap. 3), the law deals with particular cases, which are always “potentially distinguishable” (72); yet the law also imposes “a requirement of practical consistency” (67). This combination of features makes reasoning by analogy particularly influential in the law, for one must decide whether a given case is more like one set of precedents or more like another. Since the law must proceed even within a pluralist society such as ours, Sunstein argues, we see that analogical reasoning can go forward on the basis of “incompletely theorized judgments” or of what Rawls calls an “overlapping consensus” (Rawls 1996). That is, although a robust use of analogous cases depends, as we have noted, on some shared background agreement, this agreement need not extend to all matters or all levels of individuals’ moral thinking. Accordingly, although in a pluralist society we may lack the kind of comprehensive normative agreement that made the high casuistry of Renaissance Christianity possible, the path of the law suggests that normatively forceful, case-based, analogical reasoning can still go on. A modern, competing approach to case-based or precedent-respecting reasoning has been developed by John F. Horty (2016). On Horty’s approach, which builds on the default logic developed in (Horty 2012), the body of precedent systematically shifts the weights of the reasons arising in a new case.

Reasoning by appeal to cases is also a favorite mode of some recent moral philosophers. Since our focus here is not on the methods of moral theory, we do not need to go into any detail in comparing different ways in which philosophers wield cases for and against alternative moral theories. There is, however, an important and broadly applicable point worth making about ordinary reasoning by reference to cases that emerges most clearly from the philosophical use of such reasoning. Philosophers often feel free to imagine cases, often quite unlikely ones, in order to attempt to isolate relevant differences. An infamous example is a pair of cases offered by James Rachels to cast doubt on the moral significance of the distinction between killing and letting die, here slightly redescribed. In both cases, there is at the outset a boy in a bathtub and a greedy older cousin downstairs who will inherit the family manse if and only if the boy predeceases him (Rachels 1975). In Case A, the cousin hears a thump, runs up to find the boy unconscious in the bath, and reaches out to turn on the tap so that the water will rise up to drown the boy. In Case B, the cousin hears a thump, runs up to find the boy unconscious in the bath with the water running, and decides to sit back and do nothing until the boy drowns. Since there is surely no moral difference between these cases, Rachels argued, the general distinction between killing and letting die is undercut. “Not so fast!” is the well-justified reaction (cf. Beauchamp 1979). Just because a factor is morally relevant in a certain way in comparing one pair of cases does not mean that it either is or must be relevant in the same way or to the same degree when comparing other cases. Shelly Kagan has dubbed the failure to take account of this fact of contextual interaction when wielding comparison cases the “additive fallacy” (1988). Kagan concludes from this that the reasoning of moral theorists must depend upon some theory that helps us anticipate and account for ways in which factors will interact in various contexts. A parallel lesson, reinforcing what we have already observed in connection with casuistry proper, would apply for moral reasoning in general: reasoning from cases must at least implicitly rely upon a set of organizing judgments or beliefs, of a kind that would, on some understandings, count as a moral “theory.” If this is correct, it provides another kind of reason to think that moral considerations could be crystallized into principles that make manifest the organizing structure involved.

We are concerned here with moral reasoning as a species of practical reasoning – reasoning directed to deciding what to do and, if successful, issuing in an intention. But how can such practical reasoning succeed? How can moral reasoning hook up with motivationally effective psychological states so as to have this kind of causal effect? “Moral psychology” – the traditional name for the philosophical study of intention and action – has a lot to say to such questions, both in its traditional, a priori form and its newly popular empirical form. In addition, the conclusions of moral psychology can have substantive moral implications, for it may be reasonable to assume that if there are deep reasons that a given type of moral reasoning cannot be practical, then any principles that demand such reasoning are unsound. In this spirit, Samuel Scheffler has explored “the importance for moral philosophy of some tolerably realistic understanding of human motivational psychology” (Scheffler 1992, 8) and Peter Railton has developed the idea that certain moral principles might generate a kind of “alienation” (Railton 1984). In short, we may be interested in what makes practical reasoning of a certain sort psychologically possible both for its own sake and as a way of working out some of the content of moral theory.

The issue of psychological possibility is an important one for all kinds of practical reasoning (cf. Audi 1989). In morality, it is especially pressing, as morality often asks individuals to depart from satisfying their own interests. As a result, it may appear that moral reasoning’s practical effect could not be explained by a simple appeal to the initial motivations that shape or constitute someone’s interests, in combination with a requirement, like that mentioned above, to will the necessary means to one’s ends. Morality, it may seem, instead requires individuals to act on ends that may not be part of their “motivational set,” in the terminology of Williams 1981. How can moral reasoning lead people to do that? The question is a traditional one. Plato’s Republic answered that the appearances are deceiving, and that acting morally is, in fact, in the enlightened self-interest of the agent. Kant, in stark contrast, held that our transcendent capacity to act on our conception of a practical law enables us to set ends and to follow morality even when doing so sharply conflicts with our interests. Many other answers have been given. In recent times, philosophers have defended what has been called “internalism” about morality, which claims that there is a necessary conceptual link between agents’ moral judgment and their motivation. Michael Smith, for instance, puts the claim as follows (Smith 1994, 61):

If an agent judges that it is right for her to Φ in circumstances C, then either she is motivated to Φ in C or she is practically irrational.
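
Rendered schematically (this is only a gloss on Smith's wording, in our own notation):

$$\mathrm{Judge}_a\big(\mathrm{Right}_a(\Phi, C)\big) \;\rightarrow\; \big(\mathrm{Motivated}_a(\Phi, C) \;\vee\; \mathrm{PracticallyIrrational}_a\big)$$

The right-hand disjunct is what makes the claim defeasible: motivation can fail, but only on pain of practical irrationality.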

Even this defeasible version of moral judgment internalism may be too strong; but instead of pursuing this issue further, let us turn to a question more internal to moral reasoning. (For more on the issue of moral judgment internalism, see the entry on moral motivation.)

The traditional question we were just glancing at picks up when moral reasoning is done. Supposing that we have some moral conclusion, it asks how agents can be motivated to go along with it. A different question about the intersection of moral reasoning and moral psychology, one more immanent to the former, concerns how motivational elements shape the reasoning process itself.

A powerful philosophical picture of human psychology, stemming from Hume, insists that beliefs and desires are distinct existences (Hume 2000, Book II, part iii, sect. iii; cf. Smith 1994, 7). This means that there is always a potential problem about how reasoning, which seems to work by concatenating beliefs, links up to the motivations that desire provides. The paradigmatic link is that of instrumental action: the desire to Ψ links with the belief that by Φing in circumstances C one will Ψ. Accordingly, philosophers who have examined moral reasoning within an essentially Humean, belief-desire psychology have sometimes accepted a constrained account of moral reasoning. Hume’s own account exemplifies the sort of constraint that is involved. As Hume has it, the calm passions support the dual correction of perspective constitutive of morality, alluded to above. Since these calm passions are seen as competing with our other passions in essentially the same motivational coinage, as it were, our passions limit the reach of moral reasoning.
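
In schematic terms (a reconstruction in our own notation, not Hume's):

$$\mathrm{Desire}(\Psi) \;+\; \mathrm{Belief}\big(\Phi\text{-ing in } C \text{ will bring about } \Psi\big) \;\Longrightarrow\; \text{motivation to } \Phi \text{ in } C.$$

On this constrained picture, reasoning proper supplies only the belief conjunct; whatever motivational force the conclusion has is inherited from the antecedent desire.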

An important step away from a narrow understanding of Humean moral psychology is taken if one recognizes the existence of what Rawls has called “principle-dependent desires” (Rawls 1996, 82–83; Rawls 2000, 46–47). These are desires whose objects cannot be characterized without reference to some rational or moral principle. An important special case of these is that of “conception-dependent desires,” in which the principle-dependent desire in question is seen by the agent as belonging to a broader conception, and as important on that account (Rawls 1996, 83–84; Rawls 2000, 148–152). For instance, conceiving of oneself as a citizen, one may desire to bear one’s fair share of society’s burdens. Although it may look like any content, including this, may substitute for Ψ in the Humean conception of desire, and although Hume set out to show how moral sentiments such as pride could be explained in terms of simple psychological mechanisms, his influential empiricism actually tends to restrict the possible content of desires. Introducing principle-dependent desires thus seems to mark a departure from a Humean psychology. As Rawls remarks, if “we may find ourselves drawn to the conceptions and ideals that both the right and the good express … , [h]ow is one to fix limits on what people might be moved by in thought and deliberation and hence may act from?” (1996, 85). While Rawls developed this point by contrasting Hume’s moral psychology with Kant’s, the same basic point is also made by neo-Aristotelians (e.g., McDowell 1998).

The introduction of principle-dependent desires bursts any would-be naturalist limit on their content; nonetheless, some philosophers hold that this notion remains too beholden to an essentially Humean picture to be able to capture the idea of a moral commitment. Desires, it may seem, remain motivational items that compete on the basis of strength. Saying that one’s desire to be just may be outweighed by one’s desire for advancement may seem to fail to capture the thought that one has a commitment – even a non-absolute one – to justice. Sartre designed his example of the student torn between staying with his mother and going to fight with the Free French so as to make it seem implausible that he ought to decide simply by determining which he more strongly wanted to do.

One way to get at the idea of commitment is to emphasize our capacity to reflect about what we want. By this route, one might distinguish, in the fashion of Harry Frankfurt, between the strength of our desires and “the importance of what we care about” (Frankfurt 1988). Although this idea is evocative, it provides relatively little insight into how it is that we thus reflect. Another way to model commitment is to take it that our intentions operate at a level distinct from our desires, structuring what we are willing to reconsider at any point in our deliberations (e.g. Bratman 1999). While this two-level approach offers some advantages, it is limited by its concession of a kind of normative primacy to the unreconstructed desires at the unreflective level. A more integrated approach might model the psychology of commitment in a way that reconceives the nature of desire from the ground up. One attractive possibility is to return to the Aristotelian conception of desire as being for the sake of some good or apparent good (cf. Richardson 2004). On this conception, the end for the sake of which an action is done plays an important regulating role, indicating, in part, what one will not do (Richardson 2018, §§8.3–8.4). Reasoning about final ends accordingly has a distinctive character (see Richardson 1994, Schmidtz 1995). Whatever the best philosophical account of the notion of a commitment – for another alternative, see (Tiberius 2000) – much of our moral reasoning does seem to involve expressions of and challenges to our commitments (Anderson and Pildes 2000).

Recent experimental work, employing both survey instruments and brain imaging technologies, has allowed philosophers to approach questions about the psychological basis of moral reasoning from novel angles. The initial brain data seems to show that individuals with damage to the pre-frontal lobes tend to reason in more straightforwardly consequentialist fashion than those without such damage (Koenigs et al. 2007). Some theorists take this finding as tending to confirm that fully competent human moral reasoning goes beyond a simple weighing of pros and cons to include assessment of moral constraints (e.g., Wellman & Miller 2008, Young & Saxe 2008). Others, however, have argued that the emotional responses of the prefrontal lobes interfere with the more sober and sound, consequentialist-style reasoning of the other parts of the brain (e.g. Greene 2014). The survey data reveals or confirms, among other things, interesting, normatively loaded asymmetries in our attribution of such concepts as responsibility and causality (Knobe 2006). It also reveals that many of moral theory’s most subtle distinctions, such as the distinction between an intended means and a foreseen side-effect, are deeply built into our psychologies, being present cross-culturally and in young children, in a way that suggests to some the possibility of an innate “moral grammar” (Mikhail 2011).

A final question about the connection between moral motivation and moral reasoning is whether someone without the right motivational commitments can reason well, morally. On Hume’s official, narrow conception of reasoning, which essentially limits it to tracing empirical and logical connections, the answer would be yes. The vicious person could trace the causal and logical implications of acting in a certain way just as a virtuous person could. The only difference would be practical, not rational: the two would not act in the same way. Note, however, that the Humean’s affirmative answer depends on departing from the working definition of “moral reasoning” used in this article, which casts it as a species of practical reasoning. Interestingly, Kant can answer “yes” while still casting moral reasoning as practical. On his view in the Groundwork and the Critique of Practical Reason , reasoning well, morally, does not depend on any prior motivational commitment, yet remains practical reasoning. That is because he thinks the moral law can itself generate motivation. (Kant’s Metaphysics of Morals and Religion offer a more complex psychology.) For Aristotle, by contrast, an agent whose motivations are not virtuously constituted will systematically misperceive what is good and what is bad, and hence will be unable to reason excellently. The best reasoning that a vicious person is capable of, according to Aristotle, is a defective simulacrum of practical wisdom that he calls “cleverness” ( Nicomachean Ethics 1144a25).

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.

One influential building-block for thinking about moral conflicts is W. D. Ross’s notion of a “ prima facie duty”. Although this term misleadingly suggests mere appearance – the way things seem at first glance – it has stuck. Some moral philosophers prefer the term “ pro tanto duty” (e.g., Hurley 1989). Ross explained that his term provides “a brief way of referring to the characteristic (quite distinct from that of being a duty proper) which an act has, in virtue of being of a certain kind (e.g., the keeping of a promise), of being an act which would be a duty proper if it were not at the same time of another kind which is morally significant.” Illustrating the point, he noted that a prima facie duty to keep a promise can be overridden by a prima facie duty to avert a serious accident, resulting in a proper, or unqualified, duty to do the latter (Ross 1988, 18–19). Ross described each prima facie duty as a “parti-resultant” attribute, grounded or explained by one aspect of an act, whereas “being one’s [actual] duty” is a “toti-resultant” attribute resulting from all such aspects of an act, taken together (28; see Pietroski 1993). This suggests that in each case there is, in principle, some function that generally maps from the partial contributions of each prima facie duty to some actual duty. What might that function be? To Ross’s credit, he writes that “for the estimation of the comparative stringency of these prima facie obligations no general rules can, so far as I can see, be laid down” (41). Accordingly, a second strand in Ross simply emphasizes, following Aristotle, the need for practical judgment by those who have been brought up into virtue (42).

How might considerations of the sort constituted by prima facie duties enter our moral reasoning? They might do so explicitly, or only implicitly. There is also a third, still weaker possibility (Scheffler 1992, 32): it might simply be the case that if the agent had recognized a prima facie duty, he would have acted on it unless he considered it to be overridden. This is a fact about how he would have reasoned.

Despite Ross’s denial that there is any general method for estimating the comparative stringency of prima facie duties, there is a further strand in his exposition that many find irresistible and that tends to undercut this denial. In the very same paragraph in which he states that he sees no general rules for dealing with conflicts, he speaks in terms of “the greatest balance of prima facie rightness.” This language, together with the idea of “comparative stringency,” ineluctably suggests the idea that the mapping function might be the same in each case of conflict and that it might be a quantitative one. On this conception, if there is a conflict between two prima facie duties, the one that is strongest in the circumstances should be taken to win. Duly cautioned about the additive fallacy (see section 2.3 ), we might recognize that the strength of a moral consideration in one set of circumstances cannot be inferred from its strength in other circumstances. Hence, this approach will need still to rely on intuitive judgments in many cases. But this intuitive judgment will be about which prima facie consideration is stronger in the circumstances, not simply about what ought to be done.
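
In schematic form (our notation, not Ross's), the quantitative reading treats each prima facie duty as contributing a context-dependent weight, with the actual duty fixed by comparison:

$$d_{i^*} \text{ is the overriding duty in } s, \quad \text{where } i^* = \arg\max_{i}\, w_i(s)$$

and $w_i(s)$ is the strength of the $i$-th prima facie duty in circumstances $s$. Indexing the weights to the circumstances is what the caution about the additive fallacy requires: $w_i(s)$ cannot simply be carried over from other cases.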

The thought that our moral reasoning either requires or is benefited by a virtual quantitative crutch of this kind has a long pedigree. Can we really reason well morally in a way that boils down to assessing the weights of the competing considerations? Addressing this question will require an excursus on the nature of moral reasons. Philosophical support for this possibility involves an idea of practical commensurability. We need to distinguish, here, two kinds of practical commensurability or incommensurability, one defined in metaphysical terms and one in deliberative terms. Each of these forms might be stated evaluatively or deontically. The first, metaphysical sort of value incommensurability is defined directly in terms of what is the case. Thus, to state an evaluative version: two values are metaphysically incommensurable just in case neither is better than the other nor are they equally good (see Chang 1998). Now, the metaphysical incommensurability of values, or its absence, is only loosely linked to how it would be reasonable to deliberate. If all values or moral considerations are metaphysically (that is, in fact) commensurable, still it might well be the case that our access to the ultimate commensurating function is so limited that we would fare ill by proceeding in our deliberations to try to think about which outcomes are “better” or which considerations are “stronger.” We might have no clue about how to measure the relevant “strength.” Conversely, even if metaphysical value incommensurability is common, we might do well, deliberatively, to proceed as if this were not the case, just as we proceed in thermodynamics as if the gas laws obtained in their idealized form. Hence, in thinking about the deliberative implications of incommensurable values , we would do well to think in terms of a definition tailored to the deliberative context. Start with a local, pairwise form. We may say that two options, A and B, are deliberatively commensurable just in case there is some one dimension of value in terms of which, prior to – or logically independently of – choosing between them, it is possible adequately to represent the force of the considerations bearing on the choice.
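
The evaluative version of the metaphysical notion can be put in symbols (our shorthand): writing $\succ$ for "is better than" and $\sim$ for "is equally good as",

$$v_1 \parallel v_2 \;\;\equiv_{\mathrm{df}}\;\; \neg(v_1 \succ v_2) \;\wedge\; \neg(v_2 \succ v_1) \;\wedge\; \neg(v_1 \sim v_2).$$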

Philosophers as diverse as Immanuel Kant and John Stuart Mill have argued that unless two options are deliberatively commensurable, in this sense, it is impossible to choose rationally between them. Interestingly, Kant limited this claim to the domain of prudential considerations, recognizing moral reasoning as invoking considerations incommensurable with those of prudence. For Mill, this claim formed an important part of his argument that there must be some one, ultimate “umpire” principle – namely, on his view, the principle of utility. Henry Sidgwick elaborated Mill’s argument and helpfully made explicit its crucial assumption, which he called the “principle of superior validity” (Sidgwick 1981; cf. Schneewind 1977). This is the principle that conflict between distinct moral or practical considerations can be rationally resolved only on the basis of some third principle or consideration that is both more general and more firmly warranted than the two initial competitors. From this assumption, one can readily build an argument for the rational necessity not merely of local deliberative commensurability, but of a global deliberative commensurability that, like Mill and Sidgwick, accepts just one ultimate umpire principle (cf. Richardson 1994, chap. 6).

Sidgwick’s explicitness, here, is valuable also in helping one see how to resist the demand for deliberative commensurability. Deliberative commensurability is not necessary for proceeding rationally if conflicting considerations can be rationally dealt with in a holistic way that does not involve the appeal to a principle of “superior validity.” That our moral reasoning can proceed holistically is strongly affirmed by Rawls. Rawls’s characterizations of the influential ideal of reflective equilibrium and his related ideas about the nature of justification imply that we can deal with conflicting considerations in less hierarchical ways than imagined by Mill or Sidgwick. Instead of proceeding up a ladder of appeal to some highest court or supreme umpire, Rawls suggests, when we face conflicting considerations “we work from both ends” (Rawls 1999, 18). Sometimes indeed we revise our more particular judgments in light of some general principle to which we adhere; but we are also free to revise more general principles in light of some relatively concrete considered judgment. On this picture, there is no necessary correlation between degree of generality and strength of authority or warrant. That this holistic way of proceeding (whether in building moral theory or in deliberating: cf. Hurley 1989) can be rational is confirmed by the possibility of a form of justification that is similarly holistic: “justification is a matter of the mutual support of many considerations, of everything fitting together into one coherent view” (Rawls 1999, 19, 507). (Note that this statement, which expresses a necessary aspect of moral or practical justification, should not be taken as a definition or analysis thereof.) So there is an alternative to depending, deliberatively, on finding a dimension in terms of which considerations can be ranked as “stronger” or “better” or “more stringent”: one can instead “prune and adjust” with an eye to building more mutual support among the considerations that one endorses on due reflection. If even the desideratum of practical coherence is subject to such re-specification, then this holistic possibility really does represent an alternative to commensuration, as the deliberator, and not some coherence standard, retains reflective sovereignty (Richardson 1994, sec. 26). The result can be one in which the originally competing considerations are not so much compared as transformed (Richardson 2018, chap. 1)

Suppose that we start with a set of first-order moral considerations that are all commensurable as a matter of ultimate, metaphysical fact, but that our grasp of the actual strength of these considerations is quite poor and subject to systematic distortions. Perhaps some people are much better placed than others to appreciate certain considerations, and perhaps our strategic interactions would cause us to reach suboptimal outcomes if we each pursued our own unfettered judgment of how the overall set of considerations plays out. In such circumstances, there is a strong case for departing from maximizing reasoning without swinging all the way to the holist alternative. This case has been influentially articulated by Joseph Raz, who develops the notion of an “exclusionary reason” to occupy this middle position (Raz 1990).

“An exclusionary reason,” in Raz’s terminology, “is a second order reason to refrain from acting for some reason” (39). A simple example is that of Ann, who is tired after a long and stressful day, and hence has reason not to act on her best assessment of the reasons bearing on a particularly important investment decision that she immediately faces (37). This notion of an exclusionary reason allowed Raz to capture many of the complexities of our moral reasoning, especially as it involves principled commitments, while conceding that, at the first order, all practical reasons might be commensurable. Raz’s early strategy for reconciling commensurability with complexity of structure was to limit the claim that reasons are comparable with regard to strength to reasons of a given order. First-order reasons compete on the basis of strength; but conflicts between first- and second-order reasons “are resolved not by the strength of the competing reasons but by a general principle of practical reasoning which determines that exclusionary reasons always prevail” (40).

If we take for granted this “general principle of practical reasoning,” why should we recognize the existence of any exclusionary reasons, which by definition prevail independently of any contest of strength? Raz’s principal answer to this question shifts from the metaphysical domain of the strengths that various reasons “have” to the epistemically limited viewpoint of the deliberator. As in Ann’s case, we can see in certain contexts that a deliberator is likely to get things wrong if he or she acts on his or her perception of the first-order reasons. Second-order reasons indicate, with respect to a certain range of first-order reasons, that the agent “must not act for those reasons” (185). The broader justification of an exclusionary reason, then, can consistently be put in terms of the commensurable first-order reasons. Such a justification can have the following form: “Given this agent’s deliberative limitations, the balance of first-order reasons will likely be better conformed with if he or she refrains from acting for certain of those reasons.”
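
The structural point can be illustrated with a small sketch in Python. This is a toy model of the division of labor just described, not Raz's own formalism; the reasons, options, and numeric "strengths" are invented for the example.

```python
# A toy model of Raz-style exclusionary reasons -- an illustration of the
# structure described above, not Raz's formalism. The reasons, options,
# and numeric "strengths" are invented for the example.

from dataclasses import dataclass

@dataclass
class Reason:
    description: str
    favors: str       # the option this first-order reason counts in favor of
    strength: float   # comparable only among first-order reasons

@dataclass
class ExclusionaryReason:
    description: str
    excludes: frozenset   # first-order reasons the agent must not act for

def what_to_do(first_order, exclusionary):
    """Weigh only the non-excluded first-order reasons.

    Exclusionary reasons prevail not by outweighing the reasons they exclude
    but by removing them from the balance altogether.
    """
    excluded = set()
    for ex in exclusionary:
        excluded |= ex.excludes
    totals = {}
    for r in first_order:
        if r.description in excluded:
            continue
        totals[r.favors] = totals.get(r.favors, 0.0) + r.strength
    return max(totals, key=totals.get) if totals else None

# Ann's case: tired after a long day, she has a second-order reason not to act
# tonight on her own assessment of the investment-specific reasons.
first_order = [
    Reason("my reading of the market tonight", favors="decide now", strength=5.0),
    Reason("the decision can safely wait until morning", favors="postpone", strength=2.0),
]
exclusionary = [
    ExclusionaryReason("too tired to assess the reasons reliably",
                       excludes=frozenset({"my reading of the market tonight"})),
]

print(what_to_do(first_order, []))            # -> "decide now" (strength alone)
print(what_to_do(first_order, exclusionary))  # -> "postpone"
```

With no exclusionary reason in play, the stronger first-order reason wins; once the exclusion is in force, it prevails regardless of that strength, which is the sense in which such conflicts are resolved "not by the strength of the competing reasons."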

Raz’s account of exclusionary reasons might be used to reconcile ultimate commensurability with the structured complexity of our moral reasoning. Whether such an attempt could succeed would depend, in part, on the extent to which we have an actual grasp of first-order reasons, conflict among which can be settled solely on the basis of their comparative strength. Our consideration, above, of casuistry, the additive fallacy, and deliberative incommensurability may combine to make it seem that only in rare pockets of our practice do we have a good grasp of first-order reasons, if these are defined, à la Raz, as competing only in terms of strength. If that is right, then we will almost always have good exclusionary reasons to reason on some other basis than in terms of the relative strength of first-order reasons. Under those assumptions, the middle way that Raz’s idea of exclusionary reasons seems to open up would more closely approach the holist’s.

The notion of a moral consideration’s “strength,” whether put forward as part of a metaphysical picture of how first-order considerations interact in fact or as a suggestion about how to go about resolving a moral conflict, should not be confused with the bottom-line determination of whether one consideration, and specifically one duty, overrides another. In Ross’s example of conflicting prima facie duties, someone must choose between averting a serious accident and keeping a promise to meet someone. (Ross chose the case to illustrate that an “imperfect” duty, or a duty of commission, can override a strict, prohibitive duty.) Ross’s assumption is that all well brought-up people would agree, in this case, that the duty to avert serious harm to someone overrides the duty to keep such a promise. We may take it, if we like, that this judgment implies that we consider the duty to save a life, here, to be stronger than the duty to keep the promise; but in fact this claim about relative strength adds nothing to our understanding of the situation. For we do not reach our practical conclusion in this case by determining that the duty to save the life is stronger. The statement that this duty is here stronger is simply a way to embellish the conclusion that of the two prima facie duties that here conflict, it is the one that states the all-things-considered duty. To be “overridden” is just to be a prima facie duty that fails to generate an actual duty because another prima facie duty that conflicts with it – or several of them that do – does generate an actual duty. Hence, the judgment that some duties override others can be understood just in terms of their deontic upshots and without reference to considerations of strength. To confirm this, note that we can say, “As a matter of fidelity, we ought to keep the promise; as a matter of beneficence, we ought to save the life; we cannot do both; and both categories considered we ought to save the life.”

Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B . If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
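
The dependence can be made explicit with a standard bit of deontic logic (a reconstruction, not a quotation from Sinnott-Armstrong). Write $O$ for "the agent ought to" and $\Diamond$ for "the agent can":

$$
\begin{array}{lll}
1. & O(A),\; O(B) & \text{the dilemma's first two conditions}\\
2. & \neg\Diamond(A \wedge B) & \text{its third condition}\\
3. & O(A) \wedge O(B) \rightarrow O(A \wedge B) & \text{agglomeration}\\
4. & O(A \wedge B) \rightarrow \Diamond(A \wedge B) & \text{“ought” implies “can”}\\
5. & \Diamond(A \wedge B) & \text{from 1, 3, and 4}
\end{array}
$$

Line 5 contradicts line 2. So if both agglomeration and "ought implies can" hold, no situation can satisfy all of Sinnott-Armstrong's conditions; dilemmas are possible only if at least one of the two principles fails.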

Jonathan Dancy has well highlighted a kind of contextual variability in moral reasons that has come to be known as “reasons holism”: “a feature that is a reason in one case may be no reason at all, or an opposite reason, in another” (Dancy 2004). To adapt one of his examples: while there is often moral reason not to lie, when playing liar’s poker one generally ought to lie; otherwise, one will spoil the game (cf. Dancy 1993, 61). Dancy argues that reasons holism supports moral particularism of the kind discussed in section 2.2 , according to which there are no defensible moral principles. Taking this conclusion seriously would radically affect how we conducted our moral reasoning. The argument’s premise of holism has been challenged (e.g., Audi 2004, McKeever & Ridge 2006). Philosophers have also challenged the inference from reasons holism to particularism in various ways. Mark Lance and Margaret Olivia Little (2007) have done so by exhibiting how defeasible generalizations, in ethics and elsewhere, depend systematically on context. We can work with them, they suggest, by utilizing a skill that is similar to the skill of discerning morally salient considerations, namely the skill of discerning relevant similarities among possible worlds. More generally, John F. Horty has developed a logical and semantic account according to which reasons are defaults and so behave holistically, but there are nonetheless general principles that explain how they behave (Horty 2012). And Mark Schroeder has argued that our holistic views about reasons are actually better explained by supposing that there are general principles (Schroeder 2011).

This excursus on moral reasons suggests that there are a number of good reasons why reasoning about moral matters might not simply reduce to assessing the weights of competing considerations.

If we have any moral knowledge, whether concerning general moral principles or concrete moral conclusions, it is surely very imperfect. What moral knowledge we are capable of will depend, in part, on what sorts of moral reasoning we are capable of. Although some moral learning may result from the theoretical work of moral philosophers and theorists, much of what we learn with regard to morality surely arises in the practical context of deliberation about new and difficult cases. This deliberation might be merely instrumental, concerned only with settling on means to moral ends, or it might be concerned with settling those ends. There is no special problem about learning what conduces to morally obligatory ends: that is an ordinary matter of empirical learning. But by what sorts of process can we learn which ends are morally obligatory, or which norms morally required? And, more specifically, is strictly moral learning possible via moral reasoning?

Much of what was said above with regard to moral uptake applies again in this context, with approximately the same degree of dubiousness or persuasiveness. If there is a role for moral perception or for emotions in agents’ becoming aware of moral considerations, these may function also to guide agents to new conclusions. For instance, it is conceivable that our capacity for outrage is a relatively reliable detector of wrong actions, even novel ones, or that our capacity for pleasure is a reliable detector of actions worth doing, even novel ones. (For a thorough defense of the latter possibility, which intriguingly interprets pleasure as a judgment of value, see Millgram 1997.) Perhaps these capacities for emotional judgment enable strictly moral learning in roughly the same way that chess-players’ trained sensibilities enable them to recognize the threat in a previously unencountered situation on the chessboard (Lance and Tanesini 2004). That is to say, perhaps our moral emotions play a crucial role in the exercise of a skill whereby we come to be able to articulate moral insights that we have never before attained. Perhaps competing moral considerations interact in contextually specific and complex ways much as competing chess considerations do. If so, it would make sense to rely on our emotionally-guided capacities of judgment to cope with complexities that we cannot model explicitly, but also to hope that, once having been so guided, we might in retrospect be able to articulate something about the lesson of a well-navigated situation.

A different model of strictly moral learning puts the emphasis on our after-the-fact reactions rather than on any prior, tacit emotional or judgmental guidance: the model of “experiments in living,” to use John Stuart Mill’s phrase (see Anderson 1991). Here, the basic thought is that we can try something and see if “it works.” For this to be an alternative to empirical learning about what causally conduces to what, it must be the case that we remain open as to what we mean by things “working.” In Mill’s terminology, for instance, we need to remain open as to what are the important “parts” of happiness. If we are, then perhaps we can learn by experience what some of them are – that is, what are some of the constitutive means of happiness. These paired thoughts, that our practical life is experimental and that we have no firmly fixed conception of what it is for something to “work,” come to the fore in Dewey’s pragmatist ethics (see esp. Dewey 1967 [1922]). This experimentalist conception of strictly moral learning is brought to bear on moral reasoning in Dewey’s eloquent characterizations of “practical intelligence” as involving a creative and flexible approach to figuring out “what works” in a way that is thoroughly open to rethinking our ultimate aims.

Once we recognize that moral learning is a possibility for us, we can recognize a broader range of ways of coping with moral conflicts than was canvassed in the last section. There, moral conflicts were described in a way that assumed that the set of moral considerations, among which conflicts were arising, was to be taken as fixed. If we can learn, morally, however, then we probably can and should revise the set of moral considerations that we recognize. Often, we do this by re-interpreting some moral principle that we had started with, whether by making it more specific, making it more abstract, or in some other way (cf. Richardson 2000 and 2018).

So far, we have mainly been discussing moral reasoning as if it were a solitary endeavor. This is, at best, a convenient simplification. At worst, it is, as Jürgen Habermas has long argued, deeply distorting of reasoning’s essentially dialogical or conversational character (e.g., Habermas 1984; cf. Laden 2012). In any case, it is clear that we often do need to reason morally with one another.

Here, we are interested in how people may actually reason with one another – not in how imagined participants in an original position or ideal speech situation may be said to reason with one another, which is a concern for moral theory, proper. There are two salient and distinct ways of thinking about people morally reasoning with one another: as members of an organized or corporate body that is capable of reaching practical decisions of its own; and as autonomous individuals working outside any such structure to figure out with each other what they ought, morally, to do.

The nature and possibility of collective reasoning within an organized collective body has recently been the subject of some discussion. Collectives can reason if they are structured as an agent. This structure might or might not be institutionalized. In line with the gloss of reasoning offered above, which presupposes being guided by an assessment of one’s reasons, it is plausible to hold that a group agent “counts as reasoning, not just rational, only if it is able to form not only beliefs in propositions – that is, object-language beliefs – but also belief about propositions” (List and Pettit 2011, 63). As List and Pettit have shown (2011, 109–113), participants in a collective agent will unavoidably have incentives to misrepresent their own preferences in conditions involving ideologically structured disagreements where the contending parties are oriented to achieving or avoiding certain outcomes – as is sometimes the case where serious moral disagreements arise. In contexts where what ultimately matters is how well the relevant group or collective ends up faring, “team reasoning” that takes advantage of orientation towards the collective flourishing of the group can help it reach a collectively optimal outcome (Sugden 1993, Bacharach 2006; see entry on collective intentionality). Where the group in question is smaller than the set of persons, however, such a collectively prudential focus is distinct from a moral focus and seems at odds with the kind of impartiality typically thought distinctive of the moral point of view. Thinking about what a “team-orientation” to the set of all persons might look like might bring us back to thoughts of Kantian universalizability; but recall that here we are focused on actual reasoning, not hypothetical reasoning. With regard to actual reasoning, even if individuals can take up such an orientation towards the “team” of all persons, there is serious reason, highlighted by another strand of the Kantian tradition, for doubting that any individual can aptly surrender their moral judgment to any group’s verdict (Wolff 1998).

This does not mean that people cannot reason together, morally. It suggests, however, that such joint reasoning is best pursued as a matter of working out together, as independent moral agents, what they ought to do with regard to an issue on which they have some need to cooperate. Even if deferring to another agent’s verdict as to how one morally ought to act is off the cards, it is still possible that one may licitly take account of the moral testimony of others (for differing views, see McGrath 2009, Enoch 2014).

In the case of independent individuals reasoning morally with one another, we may expect that moral disagreement provides the occasion rather than an obstacle. To be sure, if individuals’ moral disagreement is very deep, they may not be able to get this reasoning off the ground; but as Kant’s example of Charles V and Francis I each wanting Milan reminds us, intractable disagreement can arise also from disagreements that, while conceptually shallow, are circumstantially sharp. If it were true that clear-headed justification of one’s moral beliefs required seeing them as being ultimately grounded in a priori principles, as G.A. Cohen argued (Cohen 2008, chap. 6), then room for individuals to work out their moral disagreements by reasoning with one another would seem to be relatively restricted; but whether the nature of (clearheaded) moral grounding is really so restricted is seriously doubtful (Richardson 2018, §9.2). In contrast to what such a picture suggests, individuals’ moral commitments seem sufficiently open to being re-thought that people seem able to engage in principled – that is, not simply loss-minimizing – compromise (Richardson 2018, §8.5).

What about the possibility that the moral community as a whole – roughly, the community of all persons – can reason? This possibility does not raise the kind of threat to impartiality that is raised by the team reasoning of a smaller group of people; but it is hard to see it working in a way that does not run afoul of the concern about whether any person can aptly defer, in a strong sense, to the moral judgments of another agent. Even so, a residual possibility remains, which is that the moral community can reason in just one way, namely by accepting or ratifying a moral conclusion that has already become shared in a sufficiently inclusive and broad way (Richardson 2018, chap. 7).

  • Anderson, E. S., 1991. “John Stuart Mill and experiments in living,” Ethics , 102: 4–26.
  • Anderson, E. S. and Pildes, R. H., 2000. “Expressive theories of law: A general restatement,” University of Pennsylvania Law Review , 148: 1503–1575.
  • Arpaly, N. and Schroeder, T., 2014. In praise of desire , Oxford: Oxford University Press.
  • Audi, R., 1989. Practical reasoning , London: Routledge.
  • –––, 2004. The good in the right: A theory of intuition and intrinsic value , Princeton: Princeton University Press.
  • Bacharach, M., 2006. Beyond individual choice: Teams and frames in game theory , Princeton: Princeton University Press.
  • Beauchamp, T. L., 1979. “A reply to Rachels on active and passive euthanasia,” in Medical responsibility , ed. W. L. Robinson, Clifton, N.J.: Humana Press, 182–95.
  • Brandt, R. B., 1979. A theory of the good and the right , Oxford: Oxford University Press.
  • Bratman, M., 1999. Faces of intention: Selected essays on intention and agency , Cambridge, England: Cambridge University Press.
  • Broome, J., 2009. “The unity of reasoning?” in Spheres of reason , ed. S. Robertson, Oxford: Oxford University Press.
  • –––, 2013. Rationality through Reasoning , Chichester, West Sussex: Wiley Blackwell.
  • Campbell, R. and Kumar, V., 2012. “Moral reasoning on the ground,” Ethics , 122: 273–312.
  • Chang, R. (ed.), 1998. Incommensurability, incomparability, and practical reason , Cambridge, Mass.: Harvard University Press.
  • Clarke, S. G., and E. Simpson, 1989. Anti-theory in ethics and moral conservativism , Albany: SUNY Press.
  • Dancy, J., 1993. Moral reasons , Oxford: Blackwell.
  • –––, 2004. Ethics without principles , Oxford: Oxford University Press.
  • Dewey, J., 1967. The middle works, 1899–1924 , Vol. 14, Human nature and conduct , ed. J. A. Boydston, Carbondale: Southern Illinois University Press.
  • Donagan, A., 1977. The theory of morality , Chicago: University of Chicago Press.
  • Dworkin, R., 1978. Taking rights seriously , Cambridge: Harvard University Press.
  • Engstrom, S., 2009. The form of practical knowledge: A study of the categorical imperative , Cambridge, Mass.: Harvard University Press.
  • Enoch, D., 2014. “In defense of moral deference,” Journal of philosophy , 111: 229–58.
  • Fernandez, P. A., 2016. “Practical reasoning: Where the action is,” Ethics , 126: 869–900.
  • Fletcher, J., 1997. Situation ethics: The new morality , Louisville: Westminster John Knox Press.
  • Frankfurt, H. G., 1988. The importance of what we care about: Philosophical essays , Cambridge: Cambridge University Press.
  • Gert, B., 1998. Morality: Its nature and justification , New York: Oxford University Press.
  • Gibbard, Allan, 1965. “Rule-utilitarianism: Merely an illusory alternative?,” Australasian Journal of Philosophy , 43: 211–220.
  • Goldman, Holly S., 1974. “David Lyons on utilitarian generalization,” Philosophical Studies , 26: 77–95.
  • Greene, J. D., 2014. “Beyond point-and-shoot morality: Why cognitive (neuro)science matters for ethics,” Ethics , 124: 695–726.
  • Habermas, J., 1984. The theory of communicative action: Vol. I, Reason and the rationalization of society , Boston: Beacon Press.
  • Haidt, J., 2001. “The emotional dog and its rational tail: A social intuitionist approach to moral judgment,” Psychological Review , 108: 814–34.
  • Hare, R. M., 1981. Moral thinking: Its levels, method, and point , Oxford: Oxford University Press.
  • Harman, G., 1986. Change in view: Principles of reasoning , Cambridge, Mass.: MIT Press.
  • Held, V., 1995. Justice and care: Essential readings in feminist ethics , Boulder, Colo.: Westview Press.
  • Hieronymi, P., 2013. “The use of reasons in thought (and the use of earmarks in arguments),” Ethics , 124: 124–27.
  • Horty, J. F., 2012. Reasons as defaults , Oxford: Oxford University Press.
  • –––, 2016. “Reasoning with precedents as constrained natural reasoning,” in E. Lord and B. McGuire (eds.), Weighing Reasons , Oxford: Oxford University Press: 193–212.
  • Hume, D., 2000 [1739–40]. A treatise of human nature , ed. D. F. Norton and M. J. Norton, Oxford: Oxford University Press.
  • Hurley, S. L., 1989. Natural reasons: Personality and polity , New York: Oxford University Press.
  • Jonsen, A. R., and S. Toulmin, 1988. The abuse of casuistry: A history of moral reasoning , Berkeley: University of California Press.
  • Kagan, S., 1988. “The additive fallacy,” Ethics , 99: 5–31.
  • Knobe, J., 2006. “The concept of individual action: A case study in the uses of folk psychology,” Philosophical Studies , 130: 203–231.
  • Koenigs, M., et al., 2007. “Damage to the prefrontal cortex increases utilitarian moral judgments,” Nature , 446: 908–911.
  • Kolodny, N., 2005. “Why be rational?” Mind , 114: 509–63.
  • Korsgaard, C. M., 1996. Creating the kingdom of ends , Cambridge: Cambridge University Press.
  • Laden, A. S., 2012. Reasoning: A social picture , Oxford: Oxford University Press.
  • Lance, M. and Little, M., 2007. “Where the Laws Are,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics (Volume 2), Oxford: Oxford University Press.
  • List, C. and Pettit, P., 2011. Group agency: The possibility, design, and status of corporate agents , Oxford: Oxford University Press.
  • McDowell, John, 1998. Mind, value, and reality , Cambridge, Mass.: Harvard University Press.
  • McGrath, S., 2009. “The puzzle of moral deference,” Philosophical Perspectives , 23: 321–44.
  • McKeever, S. and Ridge, M. 2006., Principled Ethics: Generalism as a Regulative Idea , Oxford: Oxford University Press.
  • McNaughton, D., 1988. Moral vision: An introduction to ethics , Oxford: Blackwell.
  • Mill, J. S., 1979 [1861]. Utilitarianism , Indianapolis: Hackett Publishing.
  • Millgram, E., 1997. Practical induction , Cambridge, Mass.: Harvard University Press.
  • Mikhail, J., 2011. Elements of moral cognition: Rawls’s linguistic analogy and the cognitive science of moral and legal judgment , Cambridge: Cambridge University Press.
  • Nell, O., 1975. Acting on principle: An essay on Kantian ethics , New York: Columbia University Press.
  • Nussbaum, M. C., 1990. Love’s knowledge: Essays on philosophy and literature , New York: Oxford University Press.
  • –––, 2001. Upheavals of thought: The intelligence of emotions , Cambridge, England: Cambridge University Press.
  • Pietroski, P. J., 1993. “Prima facie obligations, ceteris paribus laws in moral theory,” Ethics , 103: 489–515.
  • Prinz, J., 2007. The emotional construction of morals , Oxford: Oxford University Press.
  • Rachels, J., 1975. “Active and passive euthanasia,” New England Journal of Medicine , 292: 78–80.
  • Railton, P., 1984. “Alienation, consequentialism, and the demands of morality,” Philosophy and Public Affairs , 13: 134–71.
  • –––, 2014. “The affective dog and its rational tale: Intuition and attunement,” Ethics , 124: 813–59.
  • Rawls, J., 1971. A theory of justice , Cambridge, Mass.: Harvard University Press.
  • –––, 1996. Political liberalism , New York: Columbia University Press.
  • –––, 1999. A theory of justice , revised edition, Cambridge, Mass.: Harvard University Press.
  • –––, 2000. Lectures on the history of moral philosophy , Cambridge, Mass.: Harvard University Press.
  • Raz, J., 1990. Practical reason and norms , Princeton: Princeton University Press.
  • Richardson, H. S., 1994. Practical reasoning about final ends , Cambridge: Cambridge University Press.
  • –––, 2000. “Specifying, balancing, and interpreting bioethical principles,” Journal of Medicine and Philosophy , 25: 285–307.
  • –––, 2002. Democratic autonomy: Public reasoning about the ends of policy , New York: Oxford University Press.
  • –––, 2004. “Thinking about conflicts of desires,” in Practical conflicts: New philosophical essays , eds. P. Baumann and M. Betzler, Cambridge: Cambridge University Press, 96–117.
  • –––, 2018. Articulating the moral community: Toward a constructive ethical pragmatism , New York: Oxford University Press.
  • Ross, W. D., 1988. The right and the good , Indianapolis: Hackett.
  • Sandel, M., 1998. Liberalism and the limits of justice , Cambridge: Cambridge University Press.
  • Sartre, J. P., 1975. “Existentialism is a Humanism,” in Existentialism from Dostoyevsky to Sartre , ed. W. Kaufmann, New York: Meridian-New American, 345–69.
  • Scheffler, Samuel, 1992. Human morality , New York: Oxford University Press.
  • Schmidtz, D., 1995. Rational choice and moral agency , Princeton: Princeton University Press.
  • Schneewind, J.B., 1977. Sidgwick’s ethics and Victorian moral philosophy , Oxford: Oxford University Press.
  • Schroeder, M., 2011. “Holism, weight, and undercutting.” Noûs , 45: 328–44.
  • Schwitzgebel, E. and Cushman, F., 2012. “Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers,” Mind and Language , 27: 135–53.
  • Sidgwick, H., 1981. The methods of ethics , reprinted, 7th edition, Indianapolis: Hackett.
  • Sinnott-Armstrong, W., 1988. Moral dilemmas , Oxford: Basil Blackwell.
  • Smith, M., 1994. The moral problem , Oxford: Blackwell.
  • –––, 2013. “A constitutivist theory of reasons: Its promise and parts,” Law, Ethics and Philosophy , 1: 9–30.
  • Sneddon, A., 2007. “A social model of moral dumbfounding: Implications for studying moral reasoning and moral judgment,” Philosophical Psychology , 20: 731–48.
  • Sugden, R., 1993. “Thinking as a team: Towards an explanation of nonselfish behavior,” Social Philosophy and Policy , 10: 69–89.
  • Sunstein, C. R., 1996. Legal reasoning and political conflict , New York: Oxford University Press.
  • Tiberius, V., 2000. “Humean heroism: Value commitments and the source of normativity,” Pacific Philosophical Quarterly , 81: 426–446.
  • Vogler, C., 1998. “Sex and talk,” Critical Inquiry , 24: 328–65.
  • Wellman, H. and Miller, J., 2008. “Including deontic reasoning as fundamental to theory of mind,” Human Development , 51: 105–35
  • Williams, B., 1981. Moral luck: Philosophical papers 1973–1980 , Cambridge: Cambridge University Press.
  • Wolff, R. P., 1998. In defense of anarchism , Berkeley and Los Angeles: University of California Press.
  • Young, L. and Saxe, R., 2008. “The neural basis of belief encoding and integration in moral judgment,” NeuroImage , 40: 1912–20.

Acknowledgments

The author is grateful for help received from Gopal Sreenivasan and the students in a seminar on moral reasoning taught jointly with him, to the students in a more recent seminar in moral reasoning, and, for criticisms received, to David Brink, Margaret Olivia Little and Mark Murphy. He welcomes further criticisms and suggestions for improvement.

Copyright © 2018 by Henry S. Richardson <richardh@georgetown.edu>


How to Make an Argument That’s Actually Persuasive


In a fascinating experiment, researchers from Stanford and the University of Toronto examined how we try to persuade other people to change their minds. It involved roughly 200 people—half of whom identified themselves as politically liberal and half as conservative. The liberals were asked to write a few sentences to convince conservatives to support same-sex marriage. The conservatives were asked to write a few sentences to convince liberals to support English as the official language of the United States. What happened?

Almost everyone failed.

Among the liberals, the vast majority (74%) made their arguments for same-sex marriage by evoking values often favored by liberals, like fairness and equality. (For example, “They deserve the same equal rights as other Americans.”) Only 9% of liberals appealed to conservatives with values often favored by conservatives, like loyalty and unity. (“Our fellow citizens of the United States of America deserve to stand alongside us.”) Worse, a third of liberals (34%) also used arguments that contradicted values favored by conservatives, like the importance of faith. (“Although you may personally believe your faith should be against such a thing . . . your religion should play no part in the laws of the United States.”) Talk about not reading the room!

The conservatives fared no better. The vast majority (70%) argued for English as the official language of the United States using values often favored by conservatives, like loyalty and unity. (For example, “Making English the official language will help unify the country as we all can communicate with one another and speak the same national language.”) Only 8% of conservatives appealed to liberals with values favored by liberals, like fairness. (“By making English our official language, there will be less racism and discrimination.”) And 14% of conservatives used arguments that contradicted values favored by liberals. (“So those of you preaching diversity and equality, who think everyone should take advantage of us, should think real hard.”) Attacking your audience is . . . not persuasive.

At one time or another, we’ve all done it, whether at Thanksgiving dinner with our families or a community meeting with our neighbors. We often try to convince someone of something by using the arguments and appealing to the values that make sense to us. And other people do it right back to us. We talk past each other. “Most people are not very good at appealing to other people’s values,” said Matthew Feinberg, a coauthor of the study and a professor of organizational behavior at the University of Toronto.

Which may also explain why sometimes the more we talk, the less likely we are to change the minds of people who see the world differently than we do. In another study, researchers asked hundreds of liberals to follow prominent conservatives on what was then Twitter and hundreds of conservatives to follow prominent liberals. Think anyone was persuaded? Of course not. In fact, after about a month, the conservatives were even more conservative in their beliefs and the liberals were even more liberal.

Why are we so bad at persuading people who see the world through different eyes?

Researchers at the University of California Irvine have one possible answer. They argue that many of us have a “moral empathy gap”—we often fail to appreciate that other people have a different moral worldview than our own. “Our inability to feel what others feel,” the researchers explained, “makes it difficult to understand how they think”—which, in turn, makes it difficult to connect, communicate, and persuade.

But there’s hope.

Some social psychologists argue that we humans tend to see the world through several major prisms, or “moral foundations.” These foundations go something like this:

Care/Harm—a focus on caring for and protecting others from harm.

Fairness/Cheating—an emphasis on equal treatment and an opposition to cheating.

Authority/Subversion—a deep respect for hierarchy and authority and a disapproval of subversion. 

Loyalty/Betrayal—devotion to family, community, and country, and a disdain for betrayal.

Sanctity/Degradation—a belief in upholding the sanctity of our bodies, institutions, and lives.

Liberty/Oppression—an emphasis on independence and a rejection of oppression.

Of course, these six moral foundations alone can’t explain everyone’s beliefs about every issue, and few of us identify with just one. Still, as you read through them, you may find yourself drawn more to certain ones. Consider yourself more progressive or liberal? You might gravitate toward care and fairness. More traditional or conservative in your views? You might see a lot to like with authority, loyalty, and sanctity. And whatever our ideological orientation, what’s not to like about liberty?

When I first learned about moral foundations after leaving the White House, where I worked as a speechwriter for President Obama from 2009 to 2017, it felt like an epiphany. I felt like I’d finally found the theory behind what I’d been practicing for decades in my work. Because, to me, “moral foundations” is another way of saying “values,” and values can help you build bridges with any audience, even in these polarized times.

The values we share

One Saturday a few years ago, I spent the day in a church basement in Virginia listening to about a dozen Americans talk about their lives, their beliefs, and their country. Half identified themselves as conservative and half as liberal. As you’d expect, things got heated. Fast. Some people struggled to express themselves without disparaging the other side. A few folks fell back on familiar talking points they’d heard from politicians and TV pundits.

But there were also some surprises, which was the point. The meeting was convened by Braver Angels, a group devoted to helping Americans bridge partisan divides. Over seven hours of intense and emotionally exhausting conversations, some of these conservatives and liberals started to sound like . . . one another. Liberals proudly described their deep religious faith, their service in the military, and how they value family above all else. Conservatives said it was important for communities to welcome immigrants of all backgrounds and that America needs to be a place where people of all races and religions can thrive.

At times, these conservatives and liberals even used the exact same words to describe their beliefs and goals—“the dignity of the individual . . . respect for all people . . . creating opportunity for more Americans to succeed.”

“The other side,” joked one person, “was not as unreasonable as I expected.”

Of course, cherishing the same values can sometimes lead people to hold profoundly different opinions on specific issues. To many conservatives, “freedom” means freedom from excessive government regulation; to many liberals, it means a larger role for government in areas like education and health care to help people live their lives in freedom and security. To many conservatives, “caring for others” and “protecting life” means protecting the unborn from abortion; to many liberals, it means protecting the life and choices of the mother.

Still, many of us fail to realize that, even as we disagree with one another on specific issues (sometimes vehemently), most people share our basic values. In one survey, Republicans and Democrats were asked about their own views and those of people in the other party. Less than a third of Democrats believed that Republicans think it’s “extremely or very important” for Americans to learn from the past so the country can make progress. In fact, 91% of Republicans said they believe that. Likewise, only about a third of Republicans believed that Democrats think it’s “extremely or very important” that government be accountable to the people. In fact, 90% of Democrats said they believe that.

It’s true across other values as well. Roughly 90% of the people in the survey, Republicans and Democrats alike, said that personal responsibility, fair enforcement of the law, compassion, and respect across differences were important to them. What one person in that church basement in Virginia said seems to be true: “We have sincere differences, but I think we’re motivated by deeply shared principles.”

That's the beauty and the power of appealing to values. Values can help us transcend the usual fault lines in our families, companies, and countries.

Which means the next time you get up to speak and try to connect with or persuade your audience—whether at Thanksgiving dinner or your town meeting—don't simply argue the points that make sense to you. Instead, try to speak to the broader, deeper values that matter most to your listeners and that, even in these divided times, can help us find some common ground in our families, our communities, and our country.

Adapted excerpt from Say It Well: Find Your Voice, Speak Your Mind, Inspire Any Audience by Terry Szuplat. Copyright 2024 by Terry Szuplat. To be published by Harper Business, a division of HarperCollins Publishers. Reprinted by permission.

America’s Abortion Quandary (Pew Research Center)

2. Social and moral considerations on abortion

Relatively few Americans view the morality of abortion in stark terms: Overall, just 7% of all U.S. adults say abortion is morally acceptable in all cases, and 13% say it is morally wrong in all cases. A third say that abortion is morally wrong in most cases, while about a quarter (24%) say it is morally acceptable most of the time. An additional one-in-five do not consider abortion a moral issue.

[Chart: Wide religious and partisan differences in views of the morality of abortion]

There are wide differences on this question by political party and religious affiliation. Among Republicans and independents who lean toward the Republican Party, most say that abortion is morally wrong either in most (48%) or all cases (20%). Among Democrats and Democratic leaners, meanwhile, only about three-in-ten (29%) hold a similar view. About four-in-ten Democrats say abortion is morally  acceptable  in most (32%) or all (11%) cases, while an additional 28% say abortion is not a moral issue. 

White evangelical Protestants overwhelmingly say abortion is morally wrong in most (51%) or all cases (30%). A slim majority of Catholics (53%) also view abortion as morally wrong, but many also say it is morally acceptable in most (24%) or all cases (4%), or that it is not a moral issue (17%). And among religiously unaffiliated Americans, about three-quarters see abortion as morally acceptable (45%) or not a moral issue (32%).

There is strong alignment between people’s views of whether abortion is morally wrong and whether it should be illegal. For example, among U.S. adults who take the view that abortion should be illegal in all cases without exception, fully 86% also say abortion is always morally wrong. The prevailing view among adults who say abortion should be legal in all circumstances is that abortion is not a moral issue (44%), though notable shares of this group also say it is morally acceptable in all (27%) or most (22%) cases. 

Most Americans who say abortion should be illegal with some exceptions take the view that abortion is morally wrong in  most  cases (69%). Those who say abortion should be legal with some exceptions are somewhat more conflicted, with 43% deeming abortion morally acceptable in most cases and 26% saying it is morally wrong in most cases; an additional 24% say it is not a moral issue. 

The survey also asked respondents who said abortion is morally wrong in at least some cases whether there are situations where abortion should still be legal  despite  being morally wrong. Roughly half of U.S. adults (48%) say that there are, in fact, situations where abortion is morally wrong but should still be legal, while just 22% say that whenever abortion is morally wrong, it should also be illegal. An additional 28% either said abortion is morally acceptable in all cases or not a moral issue, and thus did not receive the follow-up question.

Across both political parties and all major Christian subgroups – including Republicans and White evangelicals – there are substantially more people who say that there are situations where abortion should still be  legal  despite being morally wrong than there are who say that abortion should always be  illegal  when it is morally wrong.

[Chart: Roughly half of Americans say there are situations where abortion is morally wrong, but should still be legal]

Asked about the impact a number of policy changes would have on the number of abortions in the U.S., nearly two-thirds of Americans (65%) say “more support for women during pregnancy, such as financial assistance or employment protections” would reduce the number of abortions in the U.S. Six-in-ten say the same about expanding sex education, and similar shares say more support for parents (58%), making it easier to place children for adoption in good homes (57%), and passing stricter abortion laws (57%) would have this effect.

While about three-quarters of White evangelical Protestants (74%) say passing stricter abortion laws would reduce the number of abortions in the U.S., about half of religiously unaffiliated Americans (48%) hold this view. Similarly, Republicans are more likely than Democrats to say this (67% vs. 49%, respectively). By contrast, while about seven-in-ten unaffiliated adults (69%) say expanding sex education would reduce the number of abortions in the U.S., only about half of White evangelicals (48%) say this. Democrats also are substantially more likely than Republicans to hold this view (70% vs. 50%). 

Democrats are somewhat more likely than Republicans to say support for parents – such as paid family leave or more child care options – would reduce the number of abortions in the country (64% vs. 53%, respectively), while Republicans are more likely than Democrats to say making adoption into good homes easier would reduce abortions (64% vs. 52%).

Majorities across both parties and other subgroups analyzed in this report say that more support for women during pregnancy would reduce the number of abortions in America.

[Chart: Republicans more likely than Democrats to say passing stricter abortion laws would reduce the number of abortions in the United States]

More than half of U.S. adults (56%) say women should have more say than men when it comes to setting policies around abortion in this country – including 42% who say women should have “a lot” more say. About four-in-ten (39%) say men and women should have equal say in abortion policies, and 3% say men should have more say than women. 

Six-in-ten women and about half of men (51%) say that women should have more say on this policy issue. 

Democrats are much more likely than Republicans to say women should have more say than men in setting abortion policy (70% vs. 41%). Similar shares of Protestants (48%) and Catholics (51%) say women should have more say than men on this issue, while the share of religiously unaffiliated Americans who say this is much higher (70%).

Seeking to gauge Americans’ reactions to several common arguments related to abortion, the survey presented respondents with six statements and asked them to rate how well each statement reflects their views on a five-point scale ranging from “extremely well” to “not at all well.” 

About half of U.S. adults say if legal abortions are too hard to get, women will seek out unsafe ones

The list included three statements sometimes cited by individuals wishing to protect a right to abortion: “The decision about whether to have an abortion should belong solely to the pregnant woman,” “If legal abortions are too hard to get, then women will seek out unsafe abortions from unlicensed providers,” and “If legal abortions are too hard to get, then it will be more difficult for women to get ahead in society.” The first two of these resonate with the greatest number of Americans, with about half (53%) saying each describes their views “extremely” or “very” well. In other words, among the statements presented in the survey, U.S. adults are most likely to say that women alone should decide whether to have an abortion, and that making abortion illegal will lead women into unsafe situations.

The three other statements are similar to arguments sometimes made by those who wish to restrict access to abortions: “Human life begins at conception, so a fetus is a person with rights,” “If legal abortions are too easy to get, then people won’t be as careful with sex and contraception,” and “If legal abortions are too easy to get, then some pregnant women will be pressured into having an abortion even when they don’t want to.” 

Fewer than half of Americans say each of these statements describes their views extremely or very well. Nearly four-in-ten endorse the notion that “human life begins at conception, so a fetus is a person with rights” (26% say this describes their views extremely well, 12% very well), while about a third say that “if legal abortions are too easy to get, then people won’t be as careful with sex and contraception” (20% extremely well, 15% very well).

When it comes to statements cited by proponents of abortion rights, Democrats are much more likely than Republicans to identify with all three of these statements, as are religiously unaffiliated Americans compared with Catholics and Protestants. Women also are more likely than men to express these views – and especially more likely to say that decisions about abortion should fall solely to pregnant women and that restrictions on abortion will put women in unsafe situations. Younger adults under 30 are particularly likely to express the view that if legal abortions are too hard to get, then it will be difficult for women to get ahead in society.

[Chart: Most Democrats say decisions about abortion should fall solely to pregnant women]

In the case of the three statements sometimes cited by opponents of abortion, the patterns generally go in the opposite direction. Republicans are more likely than Democrats to say each statement reflects their views “extremely” or “very” well, as are Protestants (especially White evangelical Protestants) and Catholics compared with the religiously unaffiliated. In addition, older Americans are more likely than young adults to say that human life begins at conception and that easy access to abortion encourages unsafe sex.

Gender differences on these questions, however, are muted. In fact, women are just as likely as men to say that human life begins at conception, so a fetus is a person with rights (39% and 38%, respectively).

[Chart: Nearly three-quarters of White evangelicals say human life begins at conception]

Analyzing certain statements together allows for an examination of the extent to which individuals can simultaneously hold two views that may seem to be in conflict. For instance, overall, one-in-three U.S. adults say that both the statement “the decision about whether to have an abortion should belong solely to the pregnant woman” and the statement “human life begins at conception, so the fetus is a person with rights” reflect their own views at least somewhat well. This includes 12% of adults who say both statements reflect their views “extremely” or “very” well.

Republicans are slightly more likely than Democrats to say both statements reflect their own views at least somewhat well (36% vs. 30%), although Republicans are much more likely to say  only  the statement about the fetus being a person with rights reflects their views at least somewhat well (39% vs. 9%) and Democrats are much more likely to say  only  the statement about the decision to have an abortion belonging solely to the pregnant woman reflects their views at least somewhat well (55% vs. 19%).

Additionally, those who take the stance that abortion should be legal in all cases with no exceptions are overwhelmingly likely (76%) to say only the statement about the decision belonging solely to the pregnant woman reflects their views extremely, very or somewhat well, while a nearly identical share (73%) of those who say abortion should be  illegal  in all cases with no exceptions say only the statement about human life beginning at conception reflects their views at least somewhat well.

[Chart: One-third of U.S. adults say both that the abortion decision belongs solely to the pregnant woman, and that life begins at conception and fetuses have rights]

When asked to describe whether they had any additional views or feelings about abortion, adults shared a range of strong or complex views about the topic. In many cases, Americans reiterated their strong support for – or opposition to – abortion in the U.S. Others reflected on how difficult or nuanced the issue was, offering emotional responses or personal experiences to one of two open-ended questions asked on the survey.

One open-ended question asked respondents if they wanted to share any other views or feelings about abortion overall. The other open-ended question asked respondents about their feelings or views regarding abortion restrictions. The responses to both questions were similar. 

Overall, about three-in-ten adults offered a response to either of the open-ended questions. There was little difference in the likelihood to respond by party, religion or gender, though people who say they have given a “lot” of thought to the issue were more likely to respond than people who have not. 

Of those who did offer additional comments, about a third of respondents said something in support of legal abortion. By far the most common sentiment expressed was that the decision to have an abortion should be solely a personal decision, or a decision made jointly by a woman and her health care provider, with some saying simply that it “should be between a woman and her doctor.” Others made a more general point, such as one woman who said, “A woman’s body and health should not be subject to legislation.”

About one-in-five of the people who responded to the question expressed disapproval of abortion – the most common reason being a belief that a fetus is a person or that abortion is murder. As one woman said, “It is my belief that life begins at conception and as much as is humanly possible, we as a society need to support, protect and defend each one of those little lives.” Others in this group said they felt abortion was too often used as a form of birth control. For example, one man said, “Abortions are too easy to obtain these days. It seems more women are using it as a way of birth control.”

About a quarter of respondents who opted to answer one of the open-ended questions said that their views about abortion were complex; many described having mixed feelings about the issue or otherwise expressed sympathy for both sides of the issue. One woman said, “I am personally opposed to abortion in most cases, but I think it would be detrimental to society to make it illegal. I was alive before the pill and before legal abortions. Many women died.” And one man said, “While I might feel abortion may be wrong in some cases, it is never my place as a man to tell a woman what to do with her body.” 

The remaining responses were either not related to the topic or were difficult to interpret.


