Do Emotions and Morality Mix?

A philosopher explains how feelings influence right and wrong.


Daily life is peppered with moral decisions. Some are so automatic that they fail to register—like holding the door for a mother struggling with a stroller, or resisting a passing urge to elbow the guy who cut you in line at Starbucks. Others chafe a little more, like deciding whether or not to give money to a figure rattling a cup of coins on a darkening evening commute. A desire to help, a fear of danger, and a cost-benefit analysis of the contents of my wallet: these gut reactions and reasoned arguments all swirl beneath conscious awareness.

While society urges people towards morally commendable choices with laws and police, and religious traditions stipulate good and bad through divine commands, scriptures, and sermons, the final say lies within each of our heads. Rational thinking, of course, plays a role in how we make moral decisions. But our moral compasses are also powerfully influenced by the fleeting forces of disgust, fondness, or fear.

Should subjective feelings matter when deciding right and wrong? Philosophers have debated this question for thousands of years. Some say absolutely: Emotions, like our love for our friends and family, are a crucial part of what gives life meaning, and ought to play a guiding role in morality. Some say absolutely not: Cold, impartial, rational thinking is the only proper way to make a decision. Emotion versus reason—it’s one of the oldest and most epic standoffs we know.

Could using modern scientific tools to separate the soup of moral decision-making—peeking into the brain to see how emotion and reason really operate—shed light on these philosophical questions? The field of moral cognition, an interdisciplinary effort between researchers in social and cognitive psychology, behavioral economics, and neuroscience, has tried to do just that. Since the early 2000s, moral psychologists have been using experimental designs to assess people’s behavior and performance on certain tasks, along with fMRI scans to glimpse the brain’s hidden activity, to illuminate the structure of moral thinking.

One pioneer in this field, the philosopher and Harvard University psychology professor Joshua Greene, combined an iconic and thorny ethical thought experiment—the “trolley problem,” in which you must decide whether or not you’d flip a switch, or push a man off a footbridge, to cause one person to die instead of five—with brain imaging back in 2001. Those experiments, and subsequent ones, have helped to demystify the role that intuition plays in how we make ethical tradeoffs—and ultimately showed that moral decisions are subject to the same biases as any other type of decision.

I spoke with Greene about how moral-cognition research illuminates the role of emotion in morality—scientifically, but perhaps also philosophically. Below is a lightly edited and condensed transcript of our conversation.

Lauren Cassani Davis : Your research has revealed that people’s intuitions about right and wrong often influence their decisions in ways that seem irrational. If we know they have the potential to lead us astray, are our moral intuitions still useful?

Joshua Greene : Oh, absolutely. Our emotions, our gut reactions, evolved biologically, culturally, and through our own personal experiences because they have served us well in the past—at least, according to certain criteria, which we may or may not endorse. The idea is not that they’re all bad, but rather that they’re not necessarily up to the task of helping us work through modern moral problems, the kinds of problems that people disagree about arising from cultural differences and new opportunities or problems created by technology, and so on.


Davis : You describe moral decision-making as a process that combines two types of thinking: “manual” thinking that is slow, consciously controlled, and rule-based, and “automatic” mental processes that are fast, emotional, and effortless. How widespread is this “dual-process” theory of the human mind?

Greene : I haven’t taken a poll but it’s certainly—not just for morality but for decision-making in general—very hard to find a paper that doesn’t support, criticize, or otherwise engage with the dual-process perspective. Thanks primarily to Daniel Kahneman [the author of Thinking, Fast and Slow] and Amos Tversky, and everything that follows them, it’s the dominant perspective in judgment and decision making. But it does have its critics. There are some people, coming from neuroscience especially, who think that it’s oversimplified. They are starting with the brain and are very much aware of its complexity, aware that these processes are dynamic and interacting, aware that there aren’t just two circuits there, and as a result they say that the dual-process framework is wrong. But to me, it’s just different levels of description, different levels of specificity. I haven’t encountered any evidence that has caused me to rethink the basic idea that automatic and controlled processing make distinct contributions to judgment and decision making.

Davis : These neural mechanisms you describe are involved in making any kind of decision, right?—the brain weighs an emotional response against a more calculated cost-benefit analysis whether you’re deciding whether to push a guy off a bridge to save people from a runaway train, or trying not to impulse-buy a pair of shoes.

Greene : Right, it’s not specific to morality at all.

Davis : Does this have implications for how much we think about morality as special or unique?

Greene : Oh, absolutely. I think that’s the clearest lesson of the last 10 to 15 years exploring morality from a neuroscientific perspective: There is, as far as we can tell, no distinctive moral faculty. Instead what we see are different parts of the brain doing all the same kinds of things that they do in other contexts. There’s no special moral circuitry, or moral part of the brain, or distinctive type of moral thinking. What makes moral thinking moral thinking is the function it plays in society, not the mechanical processes that are taking place in the brain when people are doing it. I, among others, think that function is cooperation, allowing otherwise selfish individuals to reap the benefits of living and working together.

Davis : The idea that morality has no special place in the brain seems counterintuitive, especially when you think about the sacredness surrounding morality in religious contexts, and its association with the divine. Have you ever had pushback—people saying, this general-purpose mechanical explanation doesn’t feel right?

Greene : Yes, people often assume that morality has to be a special thing in the brain. And early on, there was—and to some extent there still is—a lot of research that compares thinking about a moral thing to thinking about a similar non-moral thing, and the researchers say, aha, here are the neural correlates of morality. But in retrospect it seems clear that when you compare a moral question to a non-moral question, if you see any differences there, it’s not because moral things engage a distinctive kind of cognition; instead, it’s something more basic about the content of what is being considered.

Davis : Professional ethicists often argue about whether we are more morally responsible for the harm caused by something we actively did than for something we passively let happen—like in the medical setting, where doctors are legally allowed to let someone die but not to actively end the life of a terminally ill patient, even if that’s their wish. You’ve argued that this “action-omission distinction” may draw a lot of its force from incidental features of our mental machinery. Have ideas like this trickled into the real world?

Greene : People have been making similar points for some time. Peter Singer, for example, says that we should be focused more on outcomes and less on what he views as incidental features of the action itself. He’s argued for a focus on quality of life over sanctity of life. Implicit in the sanctity-of-life idea is that it’s ok to allow someone to die, but it’s not ok to actively take someone’s life, even if it’s what they want, even if they have no quality of life. So certainly, the idea of being less mystical about these things and thinking more pragmatically about consequences, and letting people choose their own way—that, I think, has had a very big influence on bioethics. And I think I’m lending some additional support to those ideas.

Davis : Philosophers have long prided themselves on using reason—often worshipped as a glorious, infallible thing—not emotion, to solve moral problems. But at one point in your book, Moral Tribes, you effectively debunk the work of one of the most iconic proponents of reason, Immanuel Kant. You say that many of Kant’s arguments are just esoteric rationalizations of the emotions and intuitions he inherited from his culture. You’ve said that his most famous arguments are not fundamentally different from his other lesser-known arguments, whose conclusions we rarely take seriously today—like his argument that masturbation is morally wrong because it involves “using oneself as a means.” How have people reacted to that interpretation?

Greene : As you might guess, there are philosophers who really don’t like it. I like to think that I’ve changed some people's minds. What seems to happen more often is that people who are just starting out and confronting this whole debate and set of ideas for the first time, but who don’t already have a stake in one side or the other and who understand the science, read that and say, oh, right, that makes sense.

Davis : How can we know when we’re engaged in genuine moral reasoning and not mere rationalization of our emotions?

Greene : I think one way to tell is, do you find yourself taking seriously conclusions that on a gut level you don’t like? Are you putting up any kind of fight with your gut reactions? I think that’s the clearest indication that you are actually thinking it through as opposed to just justifying your gut reactions.

Davis : In the context of everything you’ve studied, from philosophy to psychology, what do you think wisdom means?

Greene : I would say that a wise person is someone who can operate his or her own mind in the same way that a skilled photographer can operate a camera. You need to not only be good with the automatic settings, and to be good with the manual mode, but also to have a good sense of when to use one and when to use the other. And which automatic settings to rely on, specifically, in which kinds of circumstances.

Over the course of your life you build up intuitions about how to act, but circumstances may change, and what worked at one point may not work at another. And so you can build up these higher-order intuitions about when to let go and try something new. There really is no perfect algorithm, but I would say that a wise mind is one that has the right levels of rigidity and flexibility at multiple levels of abstraction.

Davis : What do you think about the potential for specific introspective techniques—I’m thinking about meditation or mindfulness techniques from the Buddhist tradition—to act as a means of improving our own moral self-awareness?

Greene : That’s an interesting connection—you’re exploring your own mental machinery in meditation. You’re learning to handle your own mind in the same way that an experienced photographer learns to handle her camera. And so you’re building these higher-order skills, where you’re not only thinking, but you’re thinking about how to think, and monitoring your own lower-level thinking from a higher level—you have this integrated hierarchical thinking.

And from what I hear from the people who study it, certain kinds of meditation really do encourage compassion and willingness to help others. It sounds very plausible to me. Tania Singer, for example, has been doing some work on this recently that has been interesting and very compelling. This isn’t something I can speak on as an expert, but based on what I’ve heard from scientists I respect, it sounds plausible to me that meditation of the right kind can change you in a way that most people would consider a moral improvement.


Analyzing the basics of ethical thinking for leaders and organizations in society

This chapter will introduce the basic constructs of moral thinking. We will begin by defining the terms morality and ethics. After creating a working knowledge of the terminology, we will trace the roots of moral decision-making by examining the factors that contributed to the Western societal framework. We will examine the characteristics, including inherent tensions, that shape individual morality and societal ethics, focusing on how the two interact. The section closes by introducing different conceptions of the deeper components of moral theory and their role in society, along with constructive and practical guidance for approaching ethical outcomes. This includes suggestions for becoming more aware of moral decision-making and for avoiding problems that organizations or leaders might face as they consider questions we must address personally, professionally, and in a societal and/or global sense.

Key Definitions

What is Morality?

The constructs of human conduct and/or values.

What is Ethics?

The study of the constructs that determine what is good and evil in direct connection with moral principles and values.

What is Moral Reasoning?

The factors, arguments, and thinking patterns that determine the constructs of human conduct and/or values.

Let’s begin with basic definitions for the study of moral philosophy and “good” decision-making. Morality is the term used to describe the constructs of human conduct and/or values. At its base, morality is formulated on an understanding of preferred behavior, in both an individual and a societal sense, depending on the context. It is in the interaction of personal and societal factors that thinkers have found the depth and uniqueness of this study. Though theorists differ in their interpretations of how morality is derived personally and collectively, experts generally agree that morality is a combination of reason and “sense” that we use, or fall back on, to determine right from wrong and our expectations of ourselves and others. Drawing on Plato’s accounts of Socrates, the definition focuses on morality as the determination of “how we ought to live.” This understanding of morality coincides with our beliefs about the future: how we conceive that the world, its people, and the factors that determine that reality should come to be, and the result we desire.

Ethics is the formal study of the personal and collective definitions of morality. Ethics focuses on how we, individually or collectively, conceive or determine morality. It represents the constant reevaluation and thinking behind the decisions that have led us to these conclusions.

“Ethics” is derived from the Greek term “ethos,” which was most closely connected to the Greek concept of proper character or manners. Whether treated as a discipline or as a concept, ethics is focused on pursuing objective truth in order to determine better daily outcomes for everyone, regardless of circumstances. Inherent in the study of ethics is a crucial understanding of the concept of objectivity.

Moral reasoning is the series of factors, arguments, and thinking patterns that humans use to determine what the basic values or constructs of proper moral judgment should be. Moral reasoning focuses on why and how we arrive at a proper way of living. Though this is complicated, we all engage in this reasoning daily and throughout our lives, whether we consciously know it or not.

Two questions are at the core of this evaluation:

  • What is the best course of thought and action required to improve our awareness of this reasoning?
  • How do we determine the best outcome personally and as we interact and build community with others?

These terms are crucial to consider as we work towards the conceptual goal of truth, which includes learning to read individuals more carefully and diligently and coming to know ourselves. By paying closer attention to these constructs and studying them in greater depth, a good thinker can understand the factors that lead to better decisions and, of course, avoid decisions that could prove very costly.

Crucial Moral Concepts

What is Virtue?

The concept of moral excellence or proper moral conduct.

What are Values?

Characteristics of human thought and action that are intrinsically preferred or held in high esteem.

Building on these definitions, we turn our attention to two concepts that are crucial to ethical study. Virtue is defined as the concept of moral excellence or proper moral conduct. The term also names a field of ethical study called “virtue ethics.” Virtue ethicists believe there is a core of attributes central to the human condition that we can identify, or “call upon,” as preferred attributes of human behavior. These theories are most widely studied in the framework of the Ancient Greek philosophers, including Socrates, Plato, and, perhaps most famously, Aristotle, whose views we will examine later. Many of these thinkers believe that ethical thinking becomes self-evident as individuals learn more about themselves and their world.

From that development of wisdom, concepts of preferred behavior emerge. A good example is courage. Though people can act courageously or uncourageously, reason suggests that courageous behavior is the more virtuous, an element of proper moral conduct. In our study, we pay particular attention to what makes a better or wiser thinker and what that requires of us. In Damon Horowitz’s 3-minute talk on teaching philosophy in prison, listen to his assessment of the importance and power of philosophical training and thought as it relates to the practical nature of virtue. What essential components of this conversation should we consider as we contemplate what it means to pursue ethical thinking? The attributes of wisdom observed in human experience often coincide with the concepts that thinkers through the ages have identified as central elements of virtue and ethical excellence.

The other term essential to the discussion of morality and decision-making is values. The baseline definition of “values” pertains to characteristics of human thought and action that are intrinsically preferred or held in high esteem. For our purposes, consider “values” as the individual characteristics, like courage, that make up the concept of virtuous or proper moral conduct. These two terms remind us that ideals are present in our daily decision-making. The key is to identify them, ensure they are anchored in objective truth and not just in what we want, and consciously allow them to guide us in all aspects of our lives.

Basic Constructs of Ethical Study

Descriptive Ethics + Analytical Ethics = Normative Ethics

The determination of values and/or virtues can be seen in the struggles of leaders and organizations over time. The Twentieth and Twenty-First Centuries have seen no shortage of unethical decisions and the damaging outcomes that follow from them. In the wake of such outcomes, people are focusing more astutely on ethics and ethical practice, adopting more thoughtful procedures in risk management, organizational function and productivity, market positioning, and civic responsibility. What has emerged with greater clarity is the understanding that profits and ethical decision-making, at all levels, can be integrated partners if a consistent commitment to long-term success is kept at the forefront of individual consciousness.

At the outset of the study of ethics, we need to define a framework for how to study it. In doing this, it becomes clear that ethics is complicated: not merely a formulation of what is “wrong” or “right,” but a concentrated and in-depth study of the various segments of human thought and behavior. I term this complexity the equation of ethical study. There are three components:

  • Descriptive ethics is the branch of ethical study that considers ethical analysis in the context of a neutral representation of the perceptions or facts of any ethical situation.  It involves a lengthy and careful attempt to identify the ethical issues and values inherent in the evaluation process.
  • Analytical ethics centers on the argument and logic in the ethical opinions and assessments used to determine ethical issues, values, or outcomes. This approach builds on descriptive ethics by considering the construct of ethical determination in greater depth, weighing an ethical outcome against other possible decisions, including those disconnected from the immediate situation, and the impact such decisions or outcomes might have.
  • Normative ethics approaches the study of ethics with the belief, according to Kitson and Campbell in Case Studies in Business Ethics (2001), of seeking “to develop and defend judgments of right and wrong, good and bad, and virtue and vice, to arrive at an understanding of truth.” This final stage of the evaluation process focuses on determining the best possible outcome after solid and productive consideration of the descriptive and analytical components. Normative ethics is usually the stage of ethical evaluation that most people are familiar with, as it often leads to a decision or determination of what is “right” or “wrong” for an individual, group, organization, or society.

As May describes in Case Studies in Organizational Communication, these three layers make up the many different conceptualizations inherent in ethical analysis. All are equally important, but we must weigh the descriptive and analytical layers together in order to reach the best possible normative decision.

Prominent Ethical Tensions

Foundational vs. Situational Tensions

Individual vs. Community Tensions

Beyond these layers of ethical study, good critical thinkers must be aware of certain inevitable tensions that are crucial to ethical evaluation. Such tensions exist in our world and are at the root of ethical dilemmas.

The first tension focuses on the interaction between foundational and situational arguments.

  • Foundational ethical arguments are built upon the idea that proper ethical formulation is based upon “universal” constructs of ethical thinking or objective conceptualization. From this standpoint, ethical evaluation is determined by an objective assessment that the individual or organization using this approach deems accurate, regardless of context or situation.
  • Situational ethical arguments are formulated on the belief that ethical thinking is a product of consistent change and subjective conceptualizations based upon unique circumstances or each instance in which an ethical evaluation must occur. This presents tension as each perspective can often be at the root of ethical differences and misunderstandings.

The other tension highlights the moral stances of ideologies linked to individualism and collectivism.

  • Individualistic ideology argues that proper ethical evaluation and determination are inherently grounded in the individual: individual identity and individual responsibility.
  • The collectivist ethical perspective argues the opposite: ethical decision-making is best constructed through group affiliation and agreement about the soundest course of thought and evaluation.

Thinkers must consider the interplay of the rights and responsibilities of individuals with the rights and responsibilities of the communities found in any society or organization (of people). A better understanding of this framework of ethical interaction allows us to contemplate productive outcomes more deeply for some of our most difficult moral problems. Awareness of these tensions is a start toward arriving at more ethical outcomes and defusing possible misunderstandings about the thoughts and behavior of those involved.

Moral reasoning and determination are not only… a matter of opinion or personal taste

An essential question is central to the discussion of moral decision-making: isn’t morality simply a matter of opinion or personal taste? This assumption is common. Many people view morality, ethical thinking, moral reasoning, virtue, and values as simply relativistic or subjective. “Relativistic” refers to the belief that our understanding of truth (or what we believe in) is based on our own evaluation or perspective. The claim that truth comes from a subjective conception carries real merit given what we know about perception, thinking, and individual uniqueness. But moral reasoning must probe more deeply than a belief or opinion we happen to possess. Good thinking requires that we investigate, process, and evaluate as many components of an ethical dilemma as possible, rather than relying only on our background, quick assessments, or emotional reactions to determine better practices or outcomes.

Relativistic statements must go further than a simple assertion about a subject; instead, as the philosopher James Rachels explains in The Elements of Moral Philosophy, we must employ moral reasoning and virtuous decision-making solidly and constructively, building on reasoning supported by soundness of thought and consistency of action. This Starburst candy advertisement demonstrates the difference between a matter of personal taste and a matter of truth, and how we might begin to use reason to move beyond evaluations that are problematic or untrue.

Basic Ethical Constructs of the Western World

Though there are many codes of moral conduct and varying traditions of ethical perspective we could study, I have limited the scope of this course to a series of major contributors to the Western world, to illustrate how ethical theory and conception have come to define our reality. These factors have become prominent in ethical determinations in the Western world and beyond. As we consider a climate of increasingly globalized networks built upon some of these notions, it is ever more essential to constructively understand and evaluate the roots of such basic conceptions of morality.

A long conversational history becomes apparent in tracing the background of morality and ethical conduct. Essential modern conceptions can be linked to the ancient world of the Greeks and Romans. Our presumptions about good business, proper conduct, and even the truth of reality have been shaped by the writings and beliefs of individuals predating the fourth century BC. Central to the Greco-Roman world was the philosophical viewpoint that the meaning of life was somehow connected to the idea of creating a “better life” or moving towards a greater sense of “progress.”

This idea is still present in almost every aspect of our world and can be fundamentally seen in Western culture.  This concept of “ betterment ” or “good” living has impacted our decision-making, creating a society that focuses on growth and the belief that there are better ways to approach various subjects and our lives.

One key component of this Greek belief of “betterment” can be traced to their solid ethical notion of the citizenry and civic responsibility.  Citizens have rights given to them by circumstance or situation, but with rights come responsibilities required of those with privilege.  The Romans took this concept further, believing that the true notion of justice was steeped in ethical importance.  They attempted to set up courts and impartial authority figures connected with the Roman authorities who were tasked with helping those in conflict resolve their issues through productive and just outcomes.  The idea was that society only operates ethically when people are treated fairly and problems are solved to diminish conflict.

The second component is the influence of Christian values and virtues on the development of accepted social norms of thought and behavior in the Western world. Regardless of one’s religious affiliation, the Western world developed around Christian principles passed down since the Middle Ages by the Roman Catholic Church.

During this time, many social norms espoused by the Christian establishment became the backbone of European society, laying the foundation for individual and organizational behavior through law and cultural expectations. Many expectations associated with Hebraic belief, such as the Ten Commandments, were combined with the teachings of Jesus Christ found in the New Testament. These expectations became encapsulated in Christian creeds and lists of behavioral expectations, such as The Seven Deadly Sins, decided by Christian leaders through council decisions, and were often instituted as laws that kingdoms adopted. Many concepts of societal value, such as true justice, and characteristics of personal value were taught, reinforced, and passed down from generation to generation, both societally and individually, in direct conjunction with the Church’s practices.

Beyond these earlier Western influences, economic ideologies have also come to shape moral thinking and evaluation. Milton Friedman, one of the most prominent economists of the twentieth century, argued in a famous 1970 essay (the position is often termed Friedman’s Thesis), as well as in his earlier text Capitalism and Freedom, that “the social responsibility of business is to increase its profits”: the role of a business is to maximize profits, not to concern itself with moral responsibility or participate in determining moral “rights” and “wrongs” within society. According to this evaluation, institutions, especially for-profit organizations, should concern themselves only with economic decisions that increase the profitability of shareholders. In this way, morality and market interaction would together dictate proper moral decision-making.

On this view, organizations’ freedom to pursue their best interest, namely profit, should determine organizational attitude and behavior as long as they obey the law. This belief functioned under the assumption that moral assessments should be reserved for the citizens, who would make those decisions by purchasing the products or services presented, and for the regulations created by the legislators who represented those citizens. This approach profoundly influenced how Western society determined the best moral course of action, arguing that the market would be the best assessor of moral attitude and behavior.

Another layer of this debate centers on the tension between philanthropy and charity. Philanthropy, the offering of financial or resource help to an individual, organization, or society in need with some benefit accruing to the giver, has often been interpreted as a productive, moral way to invest in addressing critical ethical problems, and has been argued to be the best option for addressing moral and social needs. On this view, what benefits those who need help should be linked to the benefit of the participating organization. Charity, in contrast, is centered on the idea that benefits of any kind should be offered without a mutual requirement of exchange. The debate over what counts as proper and productive “help,” and over the morality of how best to offer it when economic results are considered, has been at the crux of moral evaluation in the Western world and is linked to the debate around Friedman’s Thesis. Some of that debate has been influenced by moral presumptions connected with the value of work and individual responsibility, including the assumption that profitability is paramount and should shape how we evaluate the most moral course of action.

The moral complexity of individual and society in the Western world… pluralism, dualism, and monism

As alluded to in the last section, the complexity of the interaction of individual and societal beliefs is critical to understanding the context of Western ethical thinking. Western society has consistently attempted, through such documents as the Magna Carta, the English Bill of Rights of 1688, and the Constitution of the United States, to define more clearly the relationship between what is individually acceptable behavior and what is collectively accepted as permissible.

Rousseau, the famous French-Swiss philosopher of the eighteenth century, referred to this as the concept of a social contract. At the heart of this interaction is a societal moral value called justice. John Rawls, one of the most influential (and controversial) political philosophers of the twentieth century and a prominent professor of philosophy at Harvard, asserts that we must understand the role of “institutions,” or groups of people, in moral decision-making. At the root of the interaction among the personal, the institutional (best defined as any group of individuals), and society at large is the philosophical question of whether Truth, defined objectively, is to be found in a dualistic framework of thinking or a monistic one.

Dualism is the belief that truth is determined by the interplay (or lack thereof) of two concepts, ideals, or factors, while monism refers to the belief that truth reflects one concept, ideal, or factor. For a thinker, it is imperative to determine whether problem-solving calls for a dualistic, or perhaps even pluralistic (multiple-factor), approach or a more monistic framework. As we struggle with proper judgments, one inevitably concludes that proper conduct and decision-making, as well as good critical thinking, must incorporate a solid and reliable set of rules of conduct or expectations, inclusive of as many approaches or perspectives as possible, while still determining ethical goals or ideals to progress towards.

Dilemmas at the Heart of Ethical Thinking…

  • Justice vs. mercy
  • Truth vs. loyalty
  • Individual vs. community
  • Short-term goals vs. long-term goals

To further our understanding of ethical thinking, it is useful to dissect moral problems within the context of “value pairings.” To highlight some of the more critical Western societal values, Rushworth Kidder breaks ethical issues into four major categories that can help us assess moral decision-making. In How Good People Make Tough Choices: Resolving the Dilemmas of Ethical Living (1995), he argues that complex ethical dilemmas have at their core one or more of the following troubling pairings, which make it challenging to determine the best moral outcome.

Justice versus mercy forces us to consider how we should uphold proper expectations for attitudes and behaviors, emphasizing that everyone should receive what they deserve within society, including the belief that taking responsibility for oneself is important, while balancing the recognition that it can be valuable to offer leniency to those who might not deserve it or who have not taken responsibility.

Truth (objective) versus loyalty presents the dilemma of determining when we or society should adhere to the truth regardless of loyalty and when loyalty to ourselves, others, or institutions might be the most moral course of action.

The construct of individual versus community tension compels us to consider the varied interests of the individual versus the needs and/or desires of a greater community.  This moral dilemma can be present in many different facets of society.

Finally, Kidder argues that the last dilemma we should consider is the clash between short-term and long-term goal-setting. There are often compelling cases for choosing short-term over long-term goals or vice versa, but knowing which is the right decision in a given situation is often difficult. The importance of considering these four dilemmas when we work toward better critical thinking and more ethical outcomes cannot be overstated. Listen to Patrick Awuah’s discussion (17 minutes) as he uses his experience to emphasize the importance of being a “thinking, moral” leader and how one should look for opportunities to encourage those traits in others. Making ethical decisions is not easy, but it is necessary.

The Origin of Ethical Determination

Differing perspectives on moral determination have been considered in Western society for centuries.  As a result, many different viewpoints have emerged over time.  It is essential to contemplate the thoughts of some of the greatest thinkers to analyze what is truly at the core of proper moral reasoning and understand what many people today might conclude.

David Hume, a Scottish philosopher of the Eighteenth Century, espoused the viewpoint that people determine what is “right” or “wrong” through experiences filtered by their senses. Hume’s famous statement that humans are nothing more than “a bundle of perceptions” claims that the core of who we are as individuals is directly tied to our perceptions, or how we interpret the world. Though perception as a process may be considered universal, in the sense that all humans engage in it, he is quick to point out that each of us is diverse in those experiences.

Karl Marx, the famous mid-nineteenth-century philosopher, is known primarily for his work The Communist Manifesto. Marx wrote that the root of ethical thinking lies in humans’ economic constraints: the struggle over material goods between those who have and those who have not, and how that relationship plays out in society, outlines and determines ethical thinking or morality.

The last and most controversial perspective is the work of the psychologist and psychoanalyst Sigmund Freud. Freud believed that ethical thinking is directly tied to our subconscious: the real motivations for ethical decision-making lie in the interaction of the id, ego, and superego. Morality, on this view, is based upon our “hidden desires,” or “what we really want,” played out against other influences such as societal expectations and the interests of others; it reflects the central features of who we are as individuals while weighing our interests against the interests of others.

By contemplating the potential motivating factors that determine ethical thinking in people, we become much more careful in thinking through matters pertaining to decision-making.  In Western society, many thinkers have come to radically different perspectives on what determines ethical thought and action.

Questions at the Root of the Ethical Decision-Making Process

At the root of ethical decision-making are four initial questions that must be contemplated to find answers.

  • What does it mean to be good?
  • What makes a life a good life?
  • What characteristics make up a good human?
  • What duties do we have to each other and ourselves?

Critical thinkers may use the following suggestions when confronted with these questions.

First, there are no easy answers; attaining satisfactory answers is an ongoing process. These questions must be revisited over time to gain insight and enhance growth.

Second, strong and solid reasons require significant thought and the ability to continually question notions that may be held dear. Process and result must both be considered. Last, these questions require us to keep ourselves in check by considering the interests of others.

Tough Outcomes May Emerge

Several potential outcomes emerge when important ethical questions are asked. These questions can lead people to conclusions that are unnerving, and the answers people struggle with can produce actions and outcomes that obstruct better ethical thinking and problem-solving.

The first problem is the issue of blame. At the root of blame is the shared realization that change is needed, and change is often scary and threatening. As individuals think about ethical issues, they are often confronted by their conscience or reason, prompting them to feel troubled by their own thoughts or behaviors. When combined with the need to enact some form of change, that discomfort can leave people unmotivated or agitated. It is probably safe to say that most humans do not like change, and this factor alone can produce uncomfortable situations or outcomes; add the pressure of moral assessment, and the stress only grows. This video of the dog “Denver” and his master illustrates, in a humorous way, a microcosm of what people might experience.

The second potential problem centers on the issue of obligation or duty. Ethical issues naturally imply that the change required might carry a strong sense of obligation, one that may keep people from thinking clearly and acting fairly. Think of a person who gains a renewed view of an issue and throws themselves completely into the new approach without realizing that this renewed perspective may not actually solve the ethical issues at hand. The complexities of obligation can create a crisis as people, in their new understanding, find themselves torn between loyalties to multiple viewpoints, causing even more potential dismay.

The third factor to consider is the emotional investment of those involved. Ethical issues often carry strong viewpoints and feelings that can surface and keep individuals from an accurate understanding of the outcomes at stake. This emotional investment may also produce false admiration for those involved in the decision, or for leaders who enact what is perceived to be the better moral decision or process, leading to an inaccurate view of the situation.

Last is the dilemma of not knowing the result that an ethical decision might produce. How does one truly know that one is correct, or that what we have come to think is the proper outcome will indeed yield that result? The prospect can be frightening for many people. The more we know about how people react in circumstances of ethical tension, the better we can identify these tendencies in ourselves and others and work to allay those fears. This is perhaps one of the most important factors to consider, and it is why an ethics-based education is essential.

Awuah, P. (2007, June). How to educate leaders? Liberal arts. Retrieved from https://www.ted.com/talks/patrick_awuah_on_educating_leaders

Denver Official Guilty Dog Video. (2011, March 08). Retrieved from https://www.youtube.com/watch?v=B8ISzf2pryI

Horowitz, D. (2011, March). Philosophy in prison. Retrieved from https://www.ted.com/talks/damon_horowitz_philosophy_in_prison

Kitson, A., & Campbell, R. (2001). Case studies in business ethics. In A. Malachowski (Ed.), Business ethics: Critical perspectives on business and management (Vol. IV, pp. 7–12). London: Routledge.

May, S. (2012). Case studies in organizational communication: Ethical perspectives and practices. Thousand Oaks: SAGE Publications.

Rachels, S., & Rachels, J. (2019). The elements of moral philosophy. New York, NY: McGraw-Hill Education.

Ronda, N. (2011, June 19). Starburst- Commercial [funny]. Retrieved from https://www.youtube.com/watch?v=jodb9lkwnd8

Chapter 2--Morality and Decision Making Copyright © 2018 by Christopher Brooks is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.



Open Access

Peer-reviewed

Research Article

How Large Is the Role of Emotion in Judgments of Moral Dilemmas?


  • Zachary Horne
  • Derek Powell

Affiliations: Department of Psychology, University of Illinois at Urbana-Champaign, Urbana, Illinois, United States of America; Department of Psychology, University of California Los Angeles, Los Angeles, California, United States of America


  • Published: July 6, 2016
  • https://doi.org/10.1371/journal.pone.0154780


Moral dilemmas often pose dramatic and gut-wrenching emotional choices. It is now widely accepted that emotions are not simply experienced alongside people’s judgments about moral dilemmas, but that our affective processes play a central role in determining those judgments. However, much of the evidence purporting to demonstrate the connection between people’s emotional responses and their judgments about moral dilemmas has recently been called into question. In the present studies, we reexamined the role of emotion in people’s judgments about moral dilemmas using a validated self-report measure of emotion. We measured participants’ specific emotional responses to moral dilemmas and, although we found that moral dilemmas evoked strong emotional responses, we found that these responses were only weakly correlated with participants’ moral judgments. We argue that the purportedly strong connection between emotion and judgments of moral dilemmas may have been overestimated.

Citation: Horne Z, Powell D (2016) How Large Is the Role of Emotion in Judgments of Moral Dilemmas? PLoS ONE 11(7): e0154780. https://doi.org/10.1371/journal.pone.0154780

Editor: Elisabeth Hildt, Illinois Institute of Technology, UNITED STATES

Received: July 21, 2015; Accepted: April 19, 2016; Published: July 6, 2016

Copyright: © 2016 Horne, Powell. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available at https://osf.io/dp4we/.

Funding: These authors have no support or funding to report.

Competing interests: The authors have declared that no competing interests exist.

Introduction

How do we decide whether an act is morally right or wrong? Though this question has a long history, the nature of the controversies surrounding moral decision-making has not fundamentally changed. Historically, there has been debate between philosophers who stressed the role of reason and deliberation in moral judgment (e.g., [1]) and those who argued that moral judgments are driven by emotional processes (e.g., [2–3]). These contrasting emphases are also evident in the course of psychological research on moral judgment. Early investigations were chiefly concerned with how morality was shaped through cognitive development (e.g., [4]). More recently however, a great deal of research has focused on the role of emotion in moral decision-making (e.g., [5–12]). These and other recent insights into the psychological processes involved in moral judgment have reinvigorated normative ethical debates about our moral obligations to ourselves and others (e.g., [9, 13–15]).

Theories of moral judgment have tended to emphasize the influence of either reason or emotion on moral judgment. However, it is quite likely that both of these capacities play a role in everyday moral evaluation. The roles of both reason and emotion are integrated in the dual-process theory of moral judgment [7, 16]. According to the dual-process theory, cold reasoning processes are recruited when making utilitarian moral judgments, but these judgments can be preempted by hot affective processes that lead people to make deontological moral judgments. Contemplating the violation of a moral rule elicits a strong negative emotional reaction that tends to elicit disapproval toward the violation. However, when violating the rule would bring about a better moral outcome, this prepotent response can be overridden by deliberative processes, leading to utilitarian approval for the action. The signatures of these two processes are thought to be evident in the so-called personal-impersonal distinction: researchers have found that people are less likely to approve of sacrificing one person to save others if a dilemma requires an “up-close-and-personal” action, such as physically pushing someone to their death, than if a dilemma requires an action that operates at greater distance, such as flipping a switch that leads to someone’s death. The dual-process theory has become very influential within the field of moral psychology over the last decade, and it is now widely accepted that people are less likely to approve of personal violations because these violations evoke strong emotional reactions compared to impersonal actions (e.g., [6, 7, 17–21], but see [22]).

Reexamining the relationship between emotion and judgment in moral dilemmas

The largest and most widely-cited body of evidence for the role of emotion in judgments of moral dilemmas, and for the dual-process theory, has come from research examining people’s judgments about a single battery of moral dilemmas (henceforth, the standard battery; e.g., [6, 7, 17, 19, 20]). Most prominently, several neuroimaging studies have examined people’s judgments about dilemmas taken from the standard battery. For instance, in two studies, Greene et al. [6, 7] found increased activation in brain areas associated with emotion when participants made judgments about personal dilemmas, and increased activation in areas associated with reasoning processes when they considered impersonal dilemmas (but see [23]). In other studies, researchers demonstrated similar effects using psychophysiological measures of affect (e.g., [20]) and when examining clinical populations with ventromedial prefrontal cortex lesions (an area of the brain thought to be critically involved in emotion and emotion regulation; e.g., [17, 19]).

However, there are problems with the standard battery [24–27]. Of particular concern is the fact that personal dilemmas in the standard battery more often involve physically harming a moral patient than do impersonal dilemmas. In fact, it appears that all of the personal dilemmas in the standard battery involve physical harm, whereas only half of the impersonal moral dilemmas do. This is potentially problematic, as the harmfulness of an action ought to be orthogonal to its up-close-and-personal nature. What’s more, it appears that personal dilemmas in the standard battery tend to involve more graphic and grisly descriptions of harm than do impersonal dilemmas, even when focusing only on impersonal dilemmas involving harm. For example, personal dilemmas ask participants to consider cutting off a man’s head, smothering a baby, or subjecting children to painful medical experiments. In contrast, impersonal dilemmas ask participants to consider venting deadly fumes into a room or voting for a new environmental policy that will harm people.

Researchers using the standard battery have often argued that the “closeness” of personal moral actions elicits a strong negative emotional reaction that in turn leads participants to make deontological moral judgments. Yet, one possibility is that researchers have observed stronger emotional reactions to personal dilemmas because the personal dilemmas in the standard battery more often involved grisly and harmful actions than did the impersonal dilemmas, and not because of the closeness of personal actions. If so, then prior studies may have only shown that graphic descriptions of harmful acts are emotionally salient. Thus, many studies taken to provide evidence for the dual-process theory may not provide a strong test of the central claim of the theory. That is, it is unclear whether emotional responses explain the difference in people’s judgments about personal and impersonal dilemmas or whether the observed differences are due to confounds in the stimuli.

Setting aside concerns about the standard battery, there is little work on precisely which emotions are involved in judgments of moral dilemmas. Further, there is no work to our knowledge demonstrating the causal strength of these emotional reactions. There is compelling evidence that disgust and anger are elicited in judgments about norm violations such as committing incest or suicide [5, 8–12, 28], but the neuroimaging studies that provide evidence for the role of emotional processes in moral judgments have not directly measured which emotions are involved in judgments of moral dilemmas.

In addition, very little work has examined the extent to which emotional processes are causally related to the moral judgments of neurotypical individuals (but see [19]). Whereas a number of researchers have argued that incidental emotional states can affect judgments of simple norm violations (e.g., [5, 12], but see [28]), attempts to demonstrate these effects on judgments about moral dilemmas have produced inconsistent results (e.g., [29–32]). Meanwhile, reaction time data (e.g., [6, 7, 33]) and experiments examining speed-pressure [34, 35] and cognitive load manipulations [16, 35] suggest that deliberative reasoning is crucial for utilitarian judgments—utilitarian judgments are sometimes slower, and seem to be impaired by speed-pressure and increased cognitive load—however, these findings cannot determine whether characteristically emotional processes produce deontological judgments. Rather, the current data only suggest that deontological judgments are produced by some sort of automatic or intuitive process. If the automatic processes involved in moral judgment are truly affective processes, then their operation ought to be accompanied by the qualitative experience of emotion, which is most easily measured by asking participants to report their emotional experiences.

Altogether, the current state of the literature suggests that more research is necessary to establish the role of emotion in judgments of moral dilemmas. If confounds in the standard battery affect participants’ emotional reactions, then it needs to be determined whether prior findings—that is, those demonstrating a connection between emotions and judgments of moral dilemmas—were a result of these confounds or of genuine affective differences between personal and impersonal dilemmas. In addition, existing research does not tell us what types of emotional responses are involved in judgments of moral dilemmas, nor does it inform us as to the strength of the relationship between people’s emotional responses and their moral judgments. Thus, we aim to reassess the extent to which emotional responses explain the difference between personal and impersonal judgments when the confounds in the standard battery are eliminated, and importantly, what specific emotions explain this difference.

We should point out that, as Greene et al. [36] have noted, the value of the dual-process theory of moral judgment does not depend on its ability to explain the personal-impersonal distinction. Furthermore, there are a variety of dual-process models that make no particular claims about the role of emotion in deontological moral judgment, either because they do not associate specific processes with specific moral judgments (e.g., as discussed by [35]), or because they dichotomize moral judgments along other lines (e.g., [37–41]). Nonetheless, one highly influential dual-process theory, originally argued for by Greene et al. [7] and still frequently discussed by many other researchers, predicts that people’s emotional reactions are causally related to deontological moral judgments and that the signature of this effect is evident in the personal-impersonal distinction. This is the dual-process theory we aim to test.

The Present Research

Our goal was to examine the role of emotions in judgments of moral dilemmas using a self-report emotion measure. Self-report measures afford two important benefits: they allow us to identify the specific emotions involved in judgments of moral dilemmas and to assess the strength of the connections between these emotions and moral judgments. There are, of course, limitations to using self-report measures as well. Chief among them is the fact that self-report measures are only suitable when the emotions of interest are available to conscious introspection.

To begin to address this concern, in Experiment 1 we validated our self-report emotion measure by examining people’s emotional responses to the standard battery. Based on the findings of prior neuroimaging studies that have examined people’s judgments about the standard battery, we expected to find significant differences between the emotions elicited by personal and impersonal dilemmas in this battery.

In Experiment 2, we conducted a norming study and confirmed that participants rated personal dilemmas from the standard battery as both more harmful and more graphic than impersonal dilemmas. On the basis of these findings, we revised a battery of matched personal and impersonal dilemmas originally created by Moore et al. [ 26 ]. We then experimentally confirmed that these dilemmas were matched for harm and graphicness (though they were more graphic and harmful on average than the standard battery). Finally, in Experiment 3 we reexamined the role of emotion in moral judgments of dilemmas in the revised battery.

General Methods

Participants.

Participants were recruited online from Amazon's Mechanical Turk work distribution website. After recruitment, participants were redirected to a Qualtrics website where the experiment was administered. Before advancing to the experiment, participants indicated consent by clicking a checkbox and could only continue in the experiment if they consented. These experiments were approved by the UCLA Institutional Review Board, IRB 12–000063. Participants were paid $0.60 to participate in Experiments 1 and 3, and $0.75 to participate in Experiment 2. Attention checks were conducted and timings were recorded to ensure participants paid attention while completing the study.

Materials: Standard Battery

The standard battery, first examined by Greene et al. [ 7 ], is composed of 44 moral vignettes describing situations in which a moral decision must be made. Each vignette describes an action which must be taken to avoid an undesirable outcome, but which comes at the cost of another undesirable outcome. These vignettes are divided into two groups: personal moral dilemmas (25 dilemmas) and impersonal moral dilemmas (19 dilemmas). Personal moral dilemmas involve more intimate and direct moral actions than impersonal dilemmas. Within and between these conditions, the vignettes range over a wide variety of situations and actions, from donating to a charitable organization, to pushing a man in front of a train.

Materials: Revised Battery

We compiled a new set of moral dilemmas to address the concerns we raised about the standard battery (see S1 File ). Most of these scenarios were created by modifying the materials originally created by Moore et al. [ 26 ], in order to further improve the match between personal and impersonal versions of a given dilemma. The revised battery included eight different scenarios (one personal and one impersonal vignette for each scenario, for a total of 16 vignettes). In these dilemmas, a moral patient must be harmed in order to maximize utility. For each scenario, the personal and impersonal vignettes were matched as closely as possible, save for the intimacy and directness of the action considered. For instance, in the "Space Station" scenario, a fire threatens to break out on the International Space Station unless a module is vented of oxygen. Unfortunately, an astronaut is trying to exit the module, and his presence in the doorway will prevent the fire safety system from activating. In the personal version of this scenario, the astronaut is stuck in the doorway, and participants must consider whether or not to push him back into the module so that the fire system will activate. This action will kill him but will save the others on the station. In the impersonal version, participants must consider whether to press a switch that will seal the doorway before the astronaut reaches it, with the same consequences as in the personal version [ 25 ].

Emotion Measure: Positive and Negative Affect Schedule—Expanded Form

In Experiments 1 and 3 we measured participants' emotional responses to moral dilemmas using scales from the Positive and Negative Affect Schedule expanded form (PANAS-X), a comprehensive self-report measure of emotional states, traits, and moods [ 42 , 43 ]. In our study, we used the PANAS-X as a state emotion measure. The PANAS-X is among the most commonly used self-report measures of emotion (according to Google Scholar, Watson and Clark's manual [ 43 ] has been cited over 15,000 times at the time of this writing). The PANAS-X asks participants to rate the extent to which they are experiencing a number of different emotions, on a scale from 1 (very slightly or not at all) to 5 (extremely). The measure is composed of subscales, each consisting of several emotion words. We presented participants with the Positive and Negative Affect scales, as well as the Guilt, Hostility, and Joviality scales. Table 1 provides more detail about the emotion words that participants rated.

Table 1. Emotion words rated by participants on each PANAS-X subscale. https://doi.org/10.1371/journal.pone.0154780.t001

The Hostility scale includes emotion words like "anger" and "disgust," which we anticipated would be relevant to moral decisions based on prior research examining norm violations (e.g., [ 5 , 8 – 12 , 28 ]). We also chose to measure emotional responses using the Guilt scale because the experience of guilt may serve adaptive functions in deterring moral violations and in regulating relationships affected by norm violations (e.g., [ 44 – 47 ]). Finally, the Positive Affect and Joviality scales served as general measures of positive affect, and the Negative Affect scale as a general measure of negative affect.
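To make the scoring concrete, here is a minimal Python sketch of PANAS-X-style subscale scoring. The item lists are illustrative subsets only (the full lists appear in the PANAS-X manual [ 43 ]), and the ratings are invented.

```python
# Illustrative PANAS-X-style subscale scoring. The item subsets below are
# examples only; consult the PANAS-X manual [43] for the actual item lists.
import numpy as np

SUBSCALES = {
    "hostility": ["angry", "hostile", "irritable", "disgusted"],  # subset
    "guilt": ["guilty", "ashamed", "blameworthy"],                # subset
    "joviality": ["happy", "joyful", "cheerful", "lively"],       # subset
}

def score_subscales(ratings):
    """ratings maps each emotion word to a 1-5 rating; returns the
    mean rating for each subscale."""
    return {name: float(np.mean([ratings[w] for w in words]))
            for name, words in SUBSCALES.items()}

post_test = {"angry": 3, "hostile": 2, "irritable": 2, "disgusted": 4,
             "guilty": 3, "ashamed": 2, "blameworthy": 2,
             "happy": 1, "joyful": 1, "cheerful": 2, "lively": 2}
print(score_subscales(post_test))  # e.g. {'hostility': 2.75, ...}
```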

Filler Task

We used a filler task to reduce any memory effects caused by repeated administrations of the PANAS-X. Participants were presented with three images taken from the Where’s Waldo book series. Participants were asked to search for Waldo, and to click on him when they found him.

Catch Questions

All experiments included catch questions to identify participants who were not paying attention, or were clicking through the study. These questions were embedded in the response scales, and instructed participants to choose a specific response option. For instance, a catch question embedded in the emotion measure was, “For this item please respond ‘a little’.”

Experiment 1

As discussed, in Experiment 1 we sought to validate our emotion measure by reproducing prior results demonstrating that personal dilemmas from the standard battery elicit stronger emotional reactions than impersonal dilemmas from the standard battery.

We recruited 266 participants for Experiment 1. Of these, 141 were female and 125 were male, with a mean age of 32.8 years ( SD = 11.51).

In Experiment 1, moral dilemmas were assigned between-subjects; each participant read one of the 44 moral dilemma vignettes from the standard battery, reproduced verbatim from Greene et al. [ 7 ]. Miller & Cushman [ 48 ] have argued that action-directed emotions, in particular, are most strongly connected to moral judgments. Consequently, we added an additional sentence to the end of each moral dilemma to direct participants’ attention to the moral action they needed to consider before rating their emotions. For instance, for the “Standard Trolley” dilemma this sentence read, “You are thinking about flipping the switch in order to save the five workmen.”

After collecting demographic information, participants completed an emotion pre-test: they rated how they were feeling at the present moment using the PANAS-X. This established a baseline for each participant's emotional state upon entering the study; however, this procedure is not necessary to obtain the results we report below (see [ 49 ]). A catch item was included within the emotion scale to ensure participants paid attention. After completing the pre-test, participants completed the filler task (described above).

Participants were then randomly assigned to read a personal or impersonal moral dilemma (each participant read only one dilemma). Participants assigned to the personal condition read one of the 25 personal moral dilemmas, and participants in the impersonal condition read one of the 19 impersonal moral dilemmas. Between four and seven participants were assigned to each dilemma (median = 6 per dilemma). After reading a moral dilemma, participants completed the emotion post-test, rating their emotions using the PANAS-X scales a second time. Participants were given specific instructions to rate their emotions as they were currently experiencing them. These instructions read, "Having read the story, how do you feel right now? Please indicate how you actually feel, not how you think you might have felt if you were actually in the situation." Then participants were presented with the PANAS-X scales. Participants generally spent approximately one minute completing the PANAS-X scales each time (pre-test median = 62.5 s; post-test median = 64.5 s). After rating their emotions, participants were asked to make a moral judgment. They responded using a six-point labeled scale that ranged from "completely inappropriate" to "completely appropriate." Finally, participants answered an attention-check question asking whether they took the experiment seriously. The median completion time for the entire experiment was 6 minutes 53 seconds.

Results and Discussion

Five participants were excluded for missing at least one catch question, leaving 261 participants in the final analyses. The exclusion of these participants did not affect the results of the study.

First, we examined participants' moral judgments, and replicated prior moral psychological findings. Participants made more deontological moral judgments for personal dilemmas (mean = 2.82, SD = 1.915) than for impersonal dilemmas (mean = 3.47, SD = 2.00), t(259) = -2.67, p = .009, 95% CI of the difference [.165 to 1.121], d = .33. Next, we examined the effect of reading moral dilemmas on participants' emotional states by computing emotional reaction scores: for each subscale, we subtracted participants' pre-test emotion ratings from their post-test emotion ratings. Mean emotional reaction scores for each condition are shown in Fig 1. Reading both personal and impersonal moral dilemmas led to increased negative emotions (negative affect, guilt, hostility) and decreased positive emotions (positive affect, joviality), when compared to pre-test emotional states.

Fig 1. Mean emotional reaction scores for each condition. Error bars represent ±1 standard error. https://doi.org/10.1371/journal.pone.0154780.g001
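As a worked illustration of the two analyses just reported, the sketch below computes an independent-samples t-test on moral judgments and emotional reaction scores (post-test minus pre-test). All data are simulated; none of the study's actual numbers are reproduced.

```python
# Simulated illustration of the Experiment 1 analyses: a t-test on moral
# judgments and difference (reaction) scores for the emotion subscales.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
judg_personal = rng.integers(1, 7, size=130)    # 6-point judgment scale
judg_impersonal = rng.integers(1, 7, size=131)

t, p = stats.ttest_ind(judg_personal, judg_impersonal)
df = len(judg_personal) + len(judg_impersonal) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")

pre = rng.uniform(1, 5, size=(261, 5))    # pre-test scores, 5 subscales
post = rng.uniform(1, 5, size=(261, 5))   # post-test scores
reaction = post - pre                      # emotional reaction scores
print(reaction.mean(axis=0))               # mean change per subscale
```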

We conducted a series of 2 x 2 ANOVAs, one for each emotion subscale (summarized in Table 2 ). We examined two factors with these ANOVAs: a within-subjects Emotional Reaction factor comparing pre-test scores and post-test scores, and a between-subjects Condition factor comparing personal and impersonal dilemmas. Within each ANOVA, the main effect of the Emotional Reaction factor tested whether reading a moral dilemma affected participants’ emotional state, and the interaction between Emotional Reaction and Condition factors tested whether the change in participants’ emotional state was greater for personal dilemmas than for impersonal dilemmas. We observed a significant Emotional Reaction effect for every subscale, indicating that both personal and impersonal dilemmas elicited emotional reactions. We also observed significant interactions between the Emotional Reaction factor and Condition for the negative affect, guilt, hostility, and joviality subscales.

Table 2. Summary of the 2 x 2 ANOVAs for each emotion subscale. The Emotional Reaction factor is abbreviated as "Reaction." https://doi.org/10.1371/journal.pone.0154780.t002
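For readers who want to run this style of analysis themselves, here is a minimal sketch of one such 2 x 2 mixed ANOVA on simulated data. We use the pingouin package for convenience; this is our choice of tool, not necessarily the authors', and any mixed-ANOVA routine would serve.

```python
# One 2 x 2 mixed ANOVA (within: pre/post; between: condition) on
# simulated data; the paper runs one such ANOVA per emotion subscale.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 100
condition = rng.choice(["personal", "impersonal"], n)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "time": np.tile(["pre", "post"], n),
    "condition": np.repeat(condition, 2),
    "guilt": rng.uniform(1, 5, 2 * n),      # simulated subscale scores
})

aov = pg.mixed_anova(data=df, dv="guilt", within="time",
                     between="condition", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])  # main effects and interaction
```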

These interactions indicate that considering a personal dilemma led to significantly greater emotional changes than considering an impersonal dilemma, reproducing the widely reported personal-impersonal emotion effect using a self-report emotion measure. However, we also observed that reading a moral dilemma simpliciter led to increased negative emotions and decreased positive emotions across both personal and impersonal dilemmas, and this effect accounted for the greatest proportion of variance among the effects we examined.

We also tested for correlations between participants' emotion reaction scores on each emotion scale and their moral judgments. We observed a small but reliable correlation between changes in Hostility and people's moral judgments ( r = -.178, p = .004). Surprisingly, we observed no reliable correlations between moral judgments and Positive Affect ( r = .110, p = .077), Negative Affect ( r = -.065, p = .294), Guilt ( r = -.089, p = .152), or Joviality ( r = .063, p = .307). If decreased approval for actions in personal dilemmas is a result of stronger emotional reactions, we would expect this relationship to be reflected in correlations between moral judgments and emotional reactions. However, caution should be exercised in interpreting these correlations because, as we have noted, there are other systematic differences between personal and impersonal moral dilemmas in the standard battery that may have attenuated this relationship.

Experiment 2

Many psychologists and philosophers have argued that people's stronger emotional responses to personal dilemmas explain the differences in their judgments about personal and impersonal moral dilemmas. However, this interpretation may be unwarranted if the emotional differences between these dilemmas can be explained by uncontrolled factors in the standard battery. The most notable difference we observed is that the standard battery's personal dilemmas involve more physical harm and are described more graphically than its impersonal dilemmas. To determine whether this issue was as problematic as we suspected, we performed a norming study on the standard battery and the revised battery.

It should be noted that the findings of this study do not bear on the veracity of the dual-process theory of moral judgment, nor on the role of emotion in the personal-impersonal distinction, nor on the role of emotion in moral judgment more generally. However, identifying confounds in the standard battery may impugn some of the evidence taken to support the dual-process theory of moral judgment.

We collected responses from 256 participants for this experiment. Of these, 100 were female and 156 were male. Their mean age was 31.90 years ( SD = 10.90).

Materials and Procedure

After responding to demographic questions, participants were randomly assigned to read four personal and four impersonal vignettes from either the standard battery or from our revised battery of moral dilemmas.

Participants were instructed to first read each vignette and then rate their agreement with each of the statements composing the Harm (α = .80) and Graphicness (α = .81) scales (both scales can be found in S1 File ). A catch item was included in both scales to ensure participants were paying attention. After participants made these ratings, the vignette remained on the screen while they made a moral judgment about it. Vignettes were presented in a random order, and the order of the harm and graphicness scales, as well as of the items composing them, was randomized.

For both the harm and graphicness scales, participants were asked to rate their agreement with five statements, two of which were reverse coded. The harm scale included statements such as, “The situation is violent.” The graphicness scale included statements such as, “The language used to describe the situation evokes disturbing images.”

Participants who missed a catch question were excluded from analyses, leaving 215 participants. The decision to exclude these participants did not impact the results of the study. Participants' ratings were averaged to compute harm and graphicness scores for each of the 44 vignettes in the standard battery (mean number of ratings = 29.4) and for the 16 vignettes in the revised battery (mean number of ratings = 26.5).

The results of Experiment 2 are shown in Fig 2. First, we conducted a pair of one-way ANOVAs to determine whether the personal dilemmas in the standard battery were viewed as more harmful and more graphic than the impersonal dilemmas. Confirming our predictions, we found that personal dilemmas were viewed as significantly more harmful than impersonal dilemmas, F(1,43) = 25.80, p < .001, η² = .38. Personal dilemmas were also viewed as more graphic than impersonal dilemmas, F(1,43) = 33.66, p < .001, η² = .45. In contrast, we observed no significant difference between harm ratings for personal and impersonal dilemmas in the revised battery, F(1,15) = .307, p = .588, η² = .021. Personal dilemmas tended to be rated as more graphic than impersonal dilemmas, but this trend was not statistically significant, F(1,15) = 3.98, p = .066, η² = .22.

Fig 2. Harm and graphicness norming ratings for the standard and revised batteries. https://doi.org/10.1371/journal.pone.0154780.g002
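The item-level ANOVAs reported above can be sketched as follows; the group sizes match the 25 personal and 19 impersonal standard-battery items, but the values are simulated, and eta squared is computed by hand as SS_between / SS_total.

```python
# Simulated item-level one-way ANOVA on harm ratings, with eta squared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
harm_personal = rng.normal(4.0, 0.5, size=25)    # mean harm per item
harm_impersonal = rng.normal(3.2, 0.5, size=19)

F, p = stats.f_oneway(harm_personal, harm_impersonal)

ratings = np.concatenate([harm_personal, harm_impersonal])
grand = ratings.mean()
ss_between = (25 * (harm_personal.mean() - grand) ** 2
              + 19 * (harm_impersonal.mean() - grand) ** 2)
ss_total = ((ratings - grand) ** 2).sum()
print(f"F(1, {len(ratings) - 2}) = {F:.2f}, p = {p:.4f}, "
      f"eta^2 = {ss_between / ss_total:.2f}")
```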

We also compared the norming ratings for the revised battery with the norming ratings of items from the standard battery using a 2 x 2 (condition x battery) ANOVA for each scale (see Fig 2). The harm ratings of both the revised personal and impersonal dilemmas were comparable to those of the personal dilemmas in the standard battery. We observed significant differences between personal and impersonal moral dilemmas (F(1,59) = 6.97, p = .011, η² = .111) and between the two batteries (F(1,59) = 8.35, p = .005, η² = .130). Most importantly, we observed a significant interaction (F(1,59) = 10.19, p = .002, η² = .154), indicating that the differences between personal and impersonal dilemmas were stronger among the moral dilemmas from the standard battery than among those in the revised battery.

A comparison of graphicness ratings revealed a significant main effect of battery (F(1,59) = 93.00, p < .001, η² = .624), indicating that our items were more graphic than those in the standard battery. The ANOVA also revealed a significant effect of condition (F(1,59) = 14.38, p < .001, η² = .204) and a significant interaction (F(1,59) = 6.48, p = .014, η² = .10). The significant interactions in the ANOVAs on harm and graphicness ratings demonstrate that the differences between personal and impersonal dilemmas from the standard battery are not driven by inherent differences in how personal and impersonal moral dilemmas are interpreted, but are more likely due to confounds in the standard battery's dilemmas.

Correlational Analyses

As we hypothesized, personal and impersonal dilemmas in the standard battery were not matched on two potentially confounding factors: how harmful the actions were and how graphically they were described. In fact, we suspect that these confounding factors are at least partially responsible for participants' differing emotional reactions to personal and impersonal dilemmas. To test this, we conducted an item-level analysis correlating averaged emotional responses from Experiment 1 with the norming ratings from Experiment 2. Across dilemmas, graphicness scores were significantly correlated with averaged scores for Negative Affect (r(42) = .424, p = .004, 95% CI [.15 to .64]), Hostility (r(42) = .450, p = .002, 95% CI [.18 to .66]), Guilt (r(42) = .356, p = .018, 95% CI [.07 to .59]), and Joviality (r(42) = -.391, p = .009, 95% CI [-.62 to -.11]). Of course, many of the items in the graphicness scale ask about the emotionality of a given moral dilemma, which may partially explain these correlations. Harm scores were more weakly correlated with affective responses, and these correlations were not significant across items: Negative Affect (r(42) = .277, p = .069, 95% CI [-.02 to .53]), Hostility (r(42) = .213, p = .165, 95% CI [-.09 to .48]), Guilt (r(42) = .233, p = .129, 95% CI [-.07 to .50]), and Joviality (r(42) = -.189, p = .218, 95% CI [-.46 to .11]).

To overcome the limited power of the item analysis, we also examined participant-level correlations between each individual participant's affective responses in Experiment 1 and the normed ratings for the item that participant was assigned to read. Across participants, harm ratings were correlated with Negative Affect (r(259) = .155, p = .011, 95% CI [.04 to .27]), Hostility (r(259) = .143, p = .020, 95% CI [.02 to .26]), and Guilt (r(259) = .124, p = .043, 95% CI [.003 to .24]). These correlations were weaker than the item-level correlations, likely because of the additional variability among participants. Finally, we observed significant correlations between an item's graphicness score and participants' emotional difference scores on Negative Affect (r(259) = .219, 95% CI [.10 to .33], p < .001), Hostility (r(259) = .272, 95% CI [.16 to .38], p < .001), Guilt (r(259) = .170, 95% CI [.05 to .29], p = .005), and Joviality (r(259) = -.165, 95% CI [-.28 to -.05], p = .007).
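As an illustration of how such correlations and their confidence intervals can be obtained, here is a small sketch using the Fisher z-transformation, a standard method for correlation CIs (we are assuming this method for illustration, not asserting it is the one the authors used).

```python
# Pearson correlation with a 95% CI via the Fisher z-transformation,
# on simulated participant-level data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 261
harm = rng.uniform(1, 5, n)                    # normed harm of assigned item
neg_affect = 0.2 * harm + rng.normal(0, 1, n)  # simulated reaction scores

r, p = stats.pearsonr(harm, neg_affect)
z = np.arctanh(r)                 # Fisher z-transform of r
se = 1 / np.sqrt(n - 3)           # standard error of z
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r({n - 2}) = {r:.3f}, p = {p:.3f}, 95% CI [{lo:.2f} to {hi:.2f}]")
```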

Altogether, the results of Experiment 2, which demonstrate that stronger emotional reactions to personal compared to impersonal moral dilemmas are partially explained by differences in the harm and graphicness of personal dilemmas, suggest caution in drawing strong conclusions from prior studies that used the standardized battery.

Experiment 3

Experiment 2 confirmed that the personal and impersonal moral dilemmas from the revised battery do not differ in their degree of harm or graphicness. In Experiment 3 we examined participants' reactions to these revised dilemmas, allowing us to test whether personal dilemmas elicit stronger emotional reactions than impersonal dilemmas, as well as the extent to which differences in participants' emotional responses are predictive of their moral judgments. Recently, some researchers have moved away from explaining differences in people's responses to personal and impersonal moral dilemmas (e.g., [ 16 , 50 ]), focusing instead on differences in participants' responses to high-conflict personal moral dilemmas. High-conflict dilemmas are thought to be highly emotionally evocative and to exhibit high levels of disagreement. Given that dilemmas in the revised battery exhibit high levels of disagreement [ 26 ] and are also emotionally evocative, Experiment 3 also allowed us to address the role of specific emotions in high-conflict dilemmas.

In Experiment 3, we conducted a priori power analyses to determine the sample sizes necessary for the ANOVA and correlational analyses, ensuring adequate power to detect effects similar to those observed in Experiment 1. Among the ANOVAs conducted in Experiment 1, the smallest significant effect was observed in participants' emotion ratings on the Guilt subscale. A power analysis conducted using G*Power [ 51 ] indicated that a total of 284 participants would be required to achieve 99% power to detect this effect. The Hostility subscale was the only subscale that correlated significantly with participants' moral judgments in Experiment 1 ( r = -.178). To detect similar correlations with 99% power, we determined that 568 participants were needed.
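For reference, the sketch below performs an a priori power calculation for a correlation using the Fisher z approximation. G*Power [ 51 ] uses a closely related routine, so its answer differs slightly from this approximation.

```python
# Approximate sample size for detecting a correlation r with given
# alpha and power, via the Fisher z approximation.
import numpy as np
from scipy import stats

def n_for_correlation(r, alpha=0.05, power=0.99):
    z_a = stats.norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_b = stats.norm.ppf(power)
    return int(np.ceil(((z_a + z_b) / np.arctanh(abs(r))) ** 2 + 3))

print(n_for_correlation(0.178))  # ~571, close to the 568 reported above
```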

In Experiment 3, we recruited 654 participants, anticipating that we might need to remove some participants from our analyses for missing attention check questions. Of these participants, 359 were female and 295 were male. Their mean age was 36.02 years ( SD = 13.01).

In Experiment 3 we used the revised battery, which includes eight moral scenarios, for a total of 16 dilemmas (personal vs. impersonal x 8 scenarios). These dilemmas were assigned between-subjects (approximately 40 participants per vignette). All other materials and procedures were identical to those used in Experiment 1. Participants spent approximately one minute completing the PANAS-X scales (pre-test median = 67.0 s; post-test median = 71.7 s). After completing these procedures, some participants were also asked to answer an additional set of questions about their awareness of their emotional states using the Trait Meta-Mood Scale [ 52 ]. We had hoped that this measure might identify those participants for whom emotion and moral judgments would be most strongly connected; however, no differences emerged in these analyses, so we omit them here. The median completion time for Experiment 3 was 10 minutes 46 seconds.

Thirty-eight participants were excluded for missing at least one catch question, or for indicating that they had not paid attention when participating, leaving 616 participants in the final analysis. Our results are unaffected by including these participants in our analyses.

As in Experiment 1, we examined participants’ emotional reactions to the revised battery of moral dilemmas. We calculated an emotional reaction score for participants by subtracting their pre-test emotion ratings from their post-test emotion ratings. Mean differences for participants in each condition are shown in Fig 3 . Both personal and impersonal moral dilemmas produced increased negative emotions (Negative Affect, Hostility, and Guilt) and decreased positive emotions (Positive Affect and Joviality).

Fig 3. Mean emotional reaction scores for each condition in the revised battery. https://doi.org/10.1371/journal.pone.0154780.g003

Just as in Experiment 1, we performed a series of 2 x 2 (Emotional Reaction x Condition) ANOVAs, one for each emotion subscale (summarized in Table 3). We observed a significant emotional change from pre-test to post-test for every emotion subscale. Then, we tested whether personal dilemmas elicited stronger negative emotional reactions than impersonal dilemmas after matching the dilemmas on graphicness and harm, by examining the interaction between the Emotional Reaction and Condition (i.e., personal vs. impersonal) factors. We observed significant interactions between the change in participants' emotional state and their assigned condition for all subscales except Positive Affect and Joviality. These results indicate that personal dilemmas do indeed elicit greater emotional reactions than do impersonal dilemmas, even in the revised battery of moral dilemmas.

Table 3. Summary of the 2 x 2 ANOVAs for each emotion subscale in the revised battery. https://doi.org/10.1371/journal.pone.0154780.t003

Although we observed significant differences between the emotions elicited by personal and impersonal dilemmas, these effects were smaller than the effects we observed in Experiment 1. In Experiment 1, the strongest interaction effect we observed was for the Hostility subscale, which accounted for approximately 29% as much variance as the main effect, a main-effect-to-interaction ratio of approximately 3:1. In Experiment 3, the variance accounted for by the Hostility interaction term was less than 3% of that accounted for by the main effect, a ratio of approximately 35:1. We compared participants' Hostility reaction scores across Experiments 1 and 3 using a 2 x 2 ANOVA (Condition x Experiment) and found a significant interaction (F(1, 873) = 10.61, p = .001, η² = .011), indicating that differences in the amount of anger and disgust elicited by personal versus impersonal dilemmas were greater in Experiment 1 than in Experiment 3.

Thus, when examining matched personal and impersonal dilemmas, participants' emotional states depended much more on whether they had read a moral dilemma at all than on whether that dilemma was personal or impersonal in nature. How did this affect participants' moral judgments about these dilemmas? Suggesting a dissociation between participants' emotional reactions and their moral judgments, we found that participants' moral judgments were still significantly less approving for personal dilemmas (mean = 2.98, SD = 1.750) than for impersonal dilemmas (mean = 3.75, SD = 1.863), t(614) = 5.294, p < .001, 95% CI [.485 to 1.057], d = .42. That is, consistent with prior findings, the personal-impersonal distinction continued to have an important influence on participants' moral judgments, even though participants' emotional reactions to these dilemmas were extremely similar. Comparing participants' moral judgments across Experiments 1 and 3 in a 2 x 2 ANOVA (Condition x Experiment) revealed no significant main effect of Experiment (F(1, 873) = 2.467, p = .117, η² = .002) and no interaction (F(1, 873) = .221, p = .639, η² = .0002).

Thus, eliminating the confounds in the standard battery did not significantly affect the pattern of moral judgments we observed among personal and impersonal dilemmas. These results suggest that the confounds in the standard battery amplified the differences in emotions elicited by personal and impersonal dilemmas, but had little effect on participants’ moral judgments of these dilemmas.

We examined the relationship between each participant's emotional states and their moral judgments. First, we correlated participants' difference scores on each emotion scale with their moral judgments (see Table 4). We found significant correlations between participants' moral judgments and the changes in their Positive Affect and Hostility scores. The Hostility scale measures emotions like disgust and anger, the alarm-bell emotions that researchers have hypothesized are important to moral judgments (e.g., [ 36 ]). Thus, our findings lend some support to researchers' claims about the importance of these emotions relative to emotions like sadness or guilt. However, the sizes of these correlations are conventionally small, suggesting that the dual-process theory may have placed too great an emphasis on the role of emotion in judgments of personal moral dilemmas. Clearly, more research is necessary to draw firmer conclusions on this matter.

Table 4. Correlations between moral judgments and the emotion scales. 95% confidence intervals are enclosed in brackets (below the diagonal). Gender is included as a point of reference. https://doi.org/10.1371/journal.pone.0154780.t004

Mediation analysis

We observed significant changes in participants' feelings of Hostility between personal and impersonal conditions, and we found that participants' scores on this subscale were reliably correlated with their moral judgments. Accordingly, we performed a mediation analysis [ 53 ] to test whether differences in moral judgments for personal and impersonal dilemmas can be attributed to increased anger and disgust in response to personal dilemmas. We found a significant indirect effect of condition on moral judgments (10,000 bootstrapped samples; effect = -.0295; 95% CI [-.0846, -.0017]). However, the direct effect of condition remained significant (effect = -.7150; 95% CI [-1.004, -.4258]), indicating that emotional reactions only partially mediated the effect of condition on moral judgments. The size of the mediating effect relative to the direct effect is also revealing: although differences in feelings of anger and disgust did account for some of the difference in participants' moral judgments about personal and impersonal dilemmas (as the correlation suggests), much of this difference is likely still attributable to other factors.
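To make the mediation logic concrete, here is a minimal bootstrap sketch of the indirect (a x b) effect in a simple mediation model, in the spirit of the PROCESS approach [ 53 ]. The data and coefficients are simulated and arbitrary.

```python
# Bootstrap test of the indirect effect in condition -> hostility
# change -> moral judgment, on simulated data (10,000 resamples).
import numpy as np

rng = np.random.default_rng(4)
n = 616
x = rng.integers(0, 2, n).astype(float)       # 0 = impersonal, 1 = personal
m = 0.3 * x + rng.normal(0, 1, n)             # mediator: hostility change
y = -0.7 * x - 0.1 * m + rng.normal(0, 1, n)  # outcome: moral judgment

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a-path: m ~ x
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # b-path: y ~ x + m
    return a * b

boot = np.empty(10_000)
for i in range(10_000):
    idx = rng.integers(0, n, n)                  # resample with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```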

General Discussion

The present experiments constitute a direct investigation of the role of emotion in people's judgments of moral dilemmas. Our findings indicate that, as with simple norm violations, anger and disgust play a role in judgments of moral dilemmas. We found that moral dilemmas elicited strong emotional reactions, and that personal dilemmas elicited significantly stronger emotional responses than did impersonal dilemmas. In addition, participants' experiences of anger and disgust were significantly correlated with their moral judgments. However, our findings also suggest that the relationship between emotional reactions and judgments of moral dilemmas is weaker than initially hypothesized. Although the relationship between anger and moral judgment was statistically significant, the correlations between participants' emotional responses and their moral judgments are conventionally considered small, as was emotion's mediating effect on participants' moral judgments.

Clearly, our findings are not the last word on the role of emotion in people's judgments of moral dilemmas. As discussed, there have been several investigations that seem to demonstrate the role of emotion in moral judgments about simple norm violations (e.g., [ 5 , 8 – 12 ]). In addition, recent investigations claim to show a link between emotion and people's judgments of moral dilemmas using new and more tightly controlled materials that may not suffer from the confounds present in the standard battery (e.g., [ 50 ]). Likewise, there is evidence that impairments in empathy can lead to abnormal moral judgments (e.g., [ 54 – 56 ]), suggesting that proper affective functioning is a necessary, but perhaps not sufficient, condition for making moral judgments. However, our findings still offer important insights on the role of emotion in moral judgment. First, extant research in moral psychology makes it difficult to interpret the strength of the relationship between specific emotional responses and judgments of moral dilemmas. Our findings are novel in this respect, demonstrating a link between emotions like disgust and hostility and moral disapproval in dilemmatic contexts. However, our findings also suggest that this relationship is weaker than anticipated by many researchers. Thus, our findings make room for the possibility that dual-process theories, in particular those that attempt to explain differences in people's judgments of personal and impersonal dilemmas by appealing to the emotions these dilemmas elicit, are incomplete.

Limitations and Future Directions

Self-report emotion measures afford two advantages for assessing the role of emotion in judgments of moral dilemmas. First, these measures allow us to identify the specific emotions experienced during judgments of moral dilemmas. Until now, it was unclear whether anger and disgust, rather than, for example, guilt, support people's judgments of moral dilemmas. Second, self-report emotion measures allow us to assess the strength of the relationship between particular emotional responses and people's moral judgments. These measures also have some clear limitations.

Awareness and sensitivity of self-report measures.

Self-report measures require participants to be consciously aware of their emotional states and able to accurately report them. This raises two potential concerns. First, this limitation prevents our experiments from addressing how unconscious emotional processing may have influenced moral judgments (e.g., [ 57 ]). Further investigation of the link between unconscious emotional responses and judgments of moral dilemmas is clearly warranted, and it is not addressed by the present studies.

Second, even if the relevant emotions are consciously experienced, one might worry that the PANAS-X scales are not sufficiently sensitive to detect the emotional states that drive moral judgments, or that we used the wrong subscales to measure the connection between emotions and moral judgments. Several points speak against this second concern. For one, we not only tested differences between the emotions elicited by personal and impersonal dilemmas, but also tested for differences in participants' emotions before and after reading a moral dilemma, and we observed that reading a moral dilemma had a strong effect on participants' emotional states. These main effects indicate that the emotions we measured are induced during moral decision-making and that the PANAS-X scales are sufficiently sensitive to detect these effects. Moreover, we provided further validation of our measure by replicating the original emotion differences found in prior research using the items from the standard battery.

Disruption of normal operations of emotion and judgment.

Another disadvantage of self-report emotion measures is that they explicitly direct participants' attention toward their emotional responses, whereas in more naturalistic settings participants may experience and be influenced by emotions without explicitly attending to them. This threatens to disrupt the normal connections between emotional responses and moral judgments. If participants are made aware of their emotions, they may work to discount them when making their judgments, potentially weakening this connection. Alternatively, participants might feel demand pressure to make judgments that align with their emotion ratings. Either situation is undesirable from the researcher's perspective. We recognize that ruling out these concerns may require implicit emotion measures that do not direct participants' attention toward their emotional reactions.

Future directions.

A number of future directions are suggested by the limitations of prior research, as well as the limitations of the present studies. First, we identified potentially serious confounds in the standard battery of dilemmas that have been used in moral judgment research. As our norming study demonstrates, the revised battery (originally developed by Moore et al. [ 26 ]) avoids these confounds and so may be better suited for examining the factors that influence people’s judgments of moral dilemmas.

Future studies might also employ methods capable of measuring unconscious emotional experiences, such as facial expression coding [ 58 ] or the measurement of facial muscle activity using electromyography (e.g., [ 59 ]). Like galvanic skin response (GSR) measures, these methods allow researchers to examine unconscious emotional experiences; unlike GSR, they also allow researchers to differentiate between types of emotions. Coupled with carefully controlled and normed materials, these methods might reveal a greater role of unconscious emotions in judgments of moral dilemmas. In addition, they would allow researchers to measure emotional reactions without affecting participants' attention during the decision-making process.

Finally, our findings suggest that occurrent emotions (those experienced during the process of judgment) have only a relatively small role in judgments of moral dilemmas. Still, moral judgments might be more strongly influenced by people’s anticipated emotions, or how people imagine they would feel having taken one or another action (e.g., [ 60 ]). Anticipated emotions play an important role in many judgment and decision contexts (e.g., [ 61 – 65 ]), so we might expect that they also influence moral judgments. However, we also think that this idea departs from the claim that alarm bell emotions lead to deontological judgments in personal moral dilemmas (or high-conflict personal dilemmas). Nevertheless, the influence of anticipated emotions in the context of moral dilemmas warrants further examination.

There has never really been any question as to whether emotions play some role in moral decision-making—even Kant [ 1 , 66 ] recognized that “sympathies” and “sentiments” are integral to proper moral functioning. Rather, the more substantive concerns are the relative contribution of emotion to people’s moral judgments and whether or not emotion plays an important role in people’s judgments of moral dilemmas. Our findings suggest that emotions (especially anger and disgust) are involved in judgments of moral dilemmas, but that their role in producing these judgments may be weaker than we once thought.

Supporting Information

S1 File. Experimental instructions, materials, and scale items.

https://doi.org/10.1371/journal.pone.0154780.s001

Author Contributions

Conceived and designed the experiments: ZH DP. Performed the experiments: ZH DP. Analyzed the data: ZH DP. Contributed reagents/materials/analysis tools: ZH DP. Wrote the paper: ZH DP.

  • 1. Kant I. Groundwork for the metaphysics of morals. Yale University Press; 2002.
  • 2. Hume D. An enquiry concerning human understanding. Broadview Press; 2011.
  • 3. Hume D. A treatise of human nature. Courier Dover Publications; 2012.
  • 9. Prinz J. The emotional construction of morals. Oxford University Press; 2007.
  • 43. Watson D, Clark LA. The PANAS-X: Manual for the Positive and Negative Affect Schedule-Expanded Form. 1994.
  • 49. Horne Z, Powell D. More than a feeling: When emotional reactions don’t predict moral judgments. In: Knauff M, Pauen M, Sebanz N, Wachsmuth I, editors. Proceedings of the 35th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2013.
  • 52. Salovey P, Mayer JD, Goldman SL, Turvey C, Palfai TP. Emotional attention, clarity, and repair: Exploring emotional intelligence using the Trait Meta-Mood Scale. In: Pennebaker JW, editor. Emotion, Disclosure, and Health. Washington, D. C.: American Psychological Association; 1995. pp. 125–154.
  • 53. Hayes AF. PROCESS: A versatile computational tool for observed variable mediation, moderation, and conditional process modeling. 2012.
  • 58. Ekman P, Friesen WV. Facial action coding system: A technique for the measurement of facial movement. Palo Alto, CA: Consulting Psychologists Press; 1976.
  • 66. Kant I. Anthropology from a pragmatic point of view. Cambridge University Press; 2006.

McCombs School of Business



Ethics Defined

Moral Emotions

Moral Emotions are the feelings and intuitions, including shame, disgust, and empathy, that play a major role in most of the ethical judgments and decisions people make.

Emotions, that is to say feelings and intuitions, play a major role in most of the ethical decisions people make. Most people do not realize how much their emotions direct their moral choices. But many experts think it is impossible to make important moral judgments without emotions.

Inner-directed negative emotions like guilt, embarrassment, and shame often motivate people to act ethically.

Outer-directed negative emotions, on the other hand, aim to discipline or punish. For example, people often direct anger, disgust, or contempt at those who have acted unethically. This discourages others from behaving the same way.

Positive emotions like gratitude and admiration, which people may feel when they see another acting with compassion or kindness, can prompt people to help others.

Emotions evoked by suffering, such as sympathy and empathy, often lead people to act ethically toward others. Indeed, empathy is the central moral emotion that most commonly motivates prosocial activity such as altruism, cooperation, and generosity.

So, while we may believe that our moral decisions are influenced most by our philosophy or religious values, in truth our emotions play a significant role in our ethical decision-making.

Related Terms

Altruism

Altruism is behaving selflessly and valuing the welfare of others.

Behavioral Ethics


Behavioral Ethics studies why and how people make the choices that they do.

Moral Reasoning


Moral Reasoning is the branch of philosophy that attempts to answer questions with moral dimensions.

Morals

Morals are society’s accepted principles of right conduct that enable people to live cooperatively.

Values

Values are society’s shared beliefs about what is good or bad and how people should act.



Original Research Article

Valence of Emotions and Moral Decision-Making: Increased Pleasantness to Pleasant Images and Decreased Unpleasantness to Unpleasant Images Are Associated with Utilitarian Choices in Healthy Adults

  • 1 Department of Personality, Assessment and Psychological Treatment, University of Granada, Granada, Spain
  • 2 School of Health Sciences, University of Granada, Granada, Spain
  • 3 Centro de Investigación Mente, Cerebro y Comportamiento, University of Granada, Granada, Spain
  • 4 Centro de Investigación Biomédica en Red de Salud Mental, University of Granada, Granada, Spain
  • 5 Institute of Neuroscience Federico Olóriz, University of Granada, Armilla, Spain
  • 6 Red de Trastornos Adictivos, Instituto Carlos III, University of Granada, Spain
  • 7 School of Psychology and Psychiatry, Monash University, Melbourne, VIC, Australia

Moral decision-making is a key asset for humans’ integration in social contexts, and the way we decide about moral issues seems to be strongly influenced by emotions. For example, individuals with deficits in emotional processing tend to deliver more utilitarian choices (accepting an emotionally aversive action in favor of communitarian well-being). However, little is known about the association between emotional experience and moral-related patterns of choice. We investigated whether subjective reactivity to emotional stimuli, in terms of valence, arousal, and dominance, is associated with moral decision-making in 95 healthy adults. Participants responded to a set of moral and non-moral dilemmas and rated their emotional experience on the valence, arousal, and dominance dimensions in response to neutral, pleasant, unpleasant non-moral, and unpleasant moral pictures. Results showed significant correlations between less unpleasantness to negative stimuli, more pleasantness to positive stimuli, and a higher proportion of utilitarian choices. We also found a positive association between higher arousal ratings to negative moral-laden pictures and more utilitarian choices. Low dominance was associated with greater perceived difficulty over moral judgment. These behavioral results are in keeping with the proposed role of emotional experience in moral choice.

Introduction

Moral decision-making is an essential asset for humans’ integration in social contexts. Emotional processes contribute to moral judgment by assigning affective value to the moral decision-making scenarios, thus guiding the distinction between acceptable and unacceptable behaviors ( Haidt, 2001 ). The presentation of hypothetical scenarios involving moral violations typically generates subjective unpleasantness and increased arousal, which are thought to guide subsequent moral appraisals and decisions ( Moll et al., 2002a ; Ostrosky-Solís et al., 2003 ; Vélez-García et al., 2003 ; Harenski and Hamann, 2006 ). Moreover, the presentation of different types of moral stimuli, including moral-laden pictures ( Moll et al., 2002a ; Harenski and Hamann, 2006 ), moral statements ( Moll et al., 2002b ) or moral dilemmas ( Greene et al., 2001 , 2004 ; Heekeren et al., 2003 , 2005 ; Blair, 2007 ) evokes significant changes in brain networks specialized in emotional processing, such as the ventromedial prefrontal cortex. Conversely, individuals with ventromedial prefrontal dysfunction (by virtue of psychopathology or brain lesions) and emotion processing deficits are typically more prone to endorse utilitarian choices, which maximize the aggregate welfare at the expense of the emotional implications of harming an innocent person ( Greene et al., 2001 ; Koenigs et al., 2007 ; Carmona-Perera et al., 2012 ; Young et al., 2012 ).

According to the dual process theory ( Greene, 2007 ; Greene et al., 2008 ), utilitarian choices are associated with higher-order cognitive control, as illustrated by the impact of cognitive biasing factors on this type of judgment, including reasoning styles ( Amit and Greene, 2012 ), cognitive load ( Greene et al., 2008 ; Moore et al., 2008 ), priming reflection ( Paxton et al., 2011 ), or attentional bias ( Van Dillen et al., 2012 ). By contrast, deontological choices are preferentially supported by aversive emotional processing ( Greene et al., 2001 , 2004 ; Koenigs et al., 2007 ; Moretto et al., 2010 ; Carmona-Perera et al., 2013a , b ). Recent studies have demonstrated that transient manipulation of specific emotions can bias moral decision-making toward utilitarian or deontological choices in response to moral dilemmas. Specifically, several studies have demonstrated that the induction of positively valenced emotions (e.g., happiness, humor) favors the tendency to endorse utilitarian choices, whereas the induction of negatively valenced emotions (e.g., sadness, disgust) favors the tendency to endorse deontological choices ( Wheatley and Haidt, 2005 ; Valdesolo and DeSteno, 2006 ; Schnall et al., 2008 ; Pastötter et al., 2012 ; Van Dillen et al., 2012 ). Complementarily, several studies have shown that the motivational tendency primed by the specific emotion induced is significantly associated with utilitarian vs. deontological choices in moral dilemmas. Specifically, the induction of approach-related emotions (e.g., anger) fosters the tendency to endorse utilitarian choices, whereas the induction of avoidance-related emotions (e.g., disgust) fosters the tendency to endorse deontological choices ( Harlé and Sanfey, 2010 ; Ugazio et al., 2012 ).

Although these studies elegantly show how transient manipulations of particular emotions can bias moral decision-making in different directions, considerably less is known about how more stable individual differences in emotional experience (in response to a range of emotionally competent stimuli) are associated with decision-making patterns in moral vs. non-moral scenarios. The Lang bio-informational model of emotion assumes that individual differences in emotional experience can be reliably and efficiently tracked using subjective responses to emotional stimuli on three relevant dimensions of emotion: valence (pleasantness/unpleasantness of the experience), arousal (activation generated by the experience), and dominance (degree of control that one is able to exert over the emotional experience induced; Greenwald et al., 1989 ; Lang et al., 1993 ). In this dimensional system, categorical emotions are quantitatively represented; for example, anger would be linked to high unpleasantness, high arousal, and high dominance, whereas fear would be linked to high unpleasantness and high arousal but low dominance.

In this study we aimed to investigate whether individual differences in emotional experience, based on the Lang model, are associated with individual differences in moral decision-making patterns, as measured by a battery of moral (and non-moral) dilemmas ( Greene et al., 2001 ). Specifically, we examined whether individual differences in subjective reactivity to affective stimuli are specifically associated with moral (vs. non-moral) decision-making in healthy adults, and whether individual variations in the valence, arousal, and dominance subjective emotional ratings are associated with specific utilitarian vs. deontological choice patterns. Based on the previous literature, we hypothesized that (1) individual differences in emotional experience would be specifically correlated with decision-making in moral but not in non-moral scenarios; and (2) subjective ratings indexing greater unpleasantness, high arousal, and low dominance would be associated with predominantly deontological choice patterns, whereas subjective ratings indexing lower unpleasantness, low arousal, and high dominance would be associated with predominantly utilitarian choice patterns.

Materials and Methods

Participants.

The sample consisted of 95 healthy adults (49 males and 46 females). All participants were of European-Caucasian origin and were recruited from local community and recreational centers during the first semester of 2011 through flyer-based advertisement and word-of-mouth communication. Eligibility criteria were as follows: (i) being literate enough to ensure reading comprehension and to correctly complete the tests; (ii) no lifetime use of illegal drugs on more than five occasions and no current or past diagnoses of substance dependence (with the exception of nicotine); (iii) no history of head injury or neurological disorders; and (iv) no clinically significant psychiatric symptoms. The Interview for Research on Addictive Behaviour (IRAB; Verdejo-García et al., 2005 ) was used to assess compliance with the drug use/dependence criterion, and the Symptom Checklist-90-Revised (SCL-90-R; Derogatis, 1977 ) was used to assess compliance with the psychiatric symptoms criterion. The sample had a mean (standard deviation) age of 49 years (10.67) and 18 years of education (2.38). Socioeconomic status was assessed through occupational prestige and mean family income (via self-report). We classified the participants into three socioeconomic categories: low (17.9%), average (63.2%), and high (17.9%). None of these demographic variables affected moral decision-making (all p > 0.05).

Instruments

Emotional experience task.

We used a set of 40 picture stimuli extracted from the International Affective Picture System (IAPS; Lang et al., 1988 ) and other sources such as the internet. Based on the IAPS norms ( Lang et al., 1988 ), we defined four picture categories or conditions of interest: (i) neutral (10 pictures displaying landscapes or household objects), (ii) pleasant (10 pictures displaying sexual and radical sports scenes), (iii) unpleasant non-moral laden (10 pictures displaying accident-related casualties or mutilations), and (iv) unpleasant moral laden (10 pictures displaying poverty or one-on-one violence scenes). Since moral content is not addressed in the IAPS norms, we conducted a pilot study ( n = 83 undergraduate students) to evaluate "perceived moral content" in an initial pool of 22 images with contents suitable for the unpleasant moral laden category. The 10 images with the highest "perceived moral content" ratings (>7.5 on a 1–10 scale) were included in this category. As a further check, the 40 images included in the emotional experience task were also evaluated for "perceived moral content" by the study sample ( n = 95), and we confirmed that the 10 images in this category significantly differed from those in the other categories on "perceived moral content" [ F (3, 282) = 721.20, p < 0.001]. The main dependent measures for each picture category were the subjective ratings of valence (from 1 = unpleasant to 9 = pleasant), arousal (from 1 = relaxed to 9 = aroused), and dominance (from 1 = dominant to 9 = dominated). Responses were recorded using the Self-Assessment Manikin (SAM; Lang, 1980 ). As dependent variables we used the mean valence, arousal, and dominance scores for each of the four picture categories.

Moral Decision-Making Task

We used a subset of 32 hypothetical dilemmas extracted from the Greene battery ( Greene et al., 2001 ). The original battery was adapted to Spanish through a back-translation process. The ensuing items were evaluated using Rasch analysis to obtain a briefer, construct-valid measure of moral decision-making (Carmona-Perera et al., under review). We used calibration and item-fit tests to remove redundant and low-quality items affected by confounding variables outside moral decision-making (e.g., socio-demographic factors). We also excluded moral dilemmas that fell at the tails (>95%) of the deontological or utilitarian response distributions, since they are less likely to constitute an actual decision dilemma. The final 32-item Spanish version has demonstrated adequate psychometric properties (Cronbach’s alpha = 0.78, Spearman-Brown coefficient = 0.76; Carmona-Perera et al., under review). The task is composed of eight non-moral dilemmas involving a rational decision without moral content (e.g., whether to travel by train or bus given certain time constraints, or whether to buy a new camera or have the old one repaired for the same price), and 24 moral dilemmas concerning the appropriateness of moral violations for a higher benefit (e.g., smothering a baby to save a group of people, or throwing a dying person into the sea to keep a lifeboat of survivors afloat). These moral dilemmas involve different degrees of emotional salience based on the extent of personal involvement and the severity of harm ( Greene et al., 2001 , 2004 ). The task therefore included both personal dilemmas (16 items), which involve higher emotional salience, and impersonal dilemmas (8 items), which involve lower emotional salience. Participants were asked to provide a "choice" (affirmative vs. negative) and a "perceived difficulty" rating (from 1 = low to 10 = high). For moral dilemmas, affirmative answers were considered "utilitarian" (e.g., killing someone to save a group of people) and negative answers "deontological" (e.g., refusing the harmful action regardless of the aggregate well-being). For non-moral dilemmas, affirmative answers were considered "efficient" (e.g., traveling by the fastest transport to arrive on time) and negative answers "non-efficient" (e.g., traveling by the preferred transport despite arriving late). The proportion of affirmative choices and the mean perceived difficulty for moral and non-moral scenarios were computed as the main dependent variables.
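A minimal sketch of the dependent-variable computation just described, using simulated responses from a single participant (the dilemma counts match the task; everything else is invented):

```python
# Proportion of affirmative choices and mean perceived difficulty per
# dilemma type, for one simulated participant.
import numpy as np

rng = np.random.default_rng(5)
dilemma_type = np.array(["personal"] * 16 + ["impersonal"] * 8
                        + ["non-moral"] * 8)
choice = rng.integers(0, 2, 32)        # 1 = affirmative, 0 = negative
difficulty = rng.integers(1, 11, 32)   # perceived difficulty, 1-10

for t in ("personal", "impersonal", "non-moral"):
    mask = dilemma_type == t
    print(f"{t}: prop. affirmative = {choice[mask].mean():.2f}, "
          f"mean difficulty = {difficulty[mask].mean():.2f}")
```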

This study was approved by the Ethics Committee for Human Research of the University of Granada. Before testing, all participants were informed about the study protocol and signed a written informed consent form certifying their voluntary participation. The information sheet included the following text: “We are interested in exploring how you make decisions in relation to a set of moral and non-moral hypothetical scenarios, and how you experience emotions in relation to a set of affective pictures. We will ask you to decide whether you would accept or refuse to take a proposed action concerning moral and non-moral scenarios. In a separate task, we will ask you to report your subjective emotional experience in response to both pleasant and unpleasant stimuli.” To describe each SAM scale we used the standardized guidelines of Lang et al. (2001). Participants were assessed individually in a single session lasting approximately 90 min. The emotional experience and moral decision-making tasks were administered in computerized format in two different orders, such that half of the sample completed the dilemmas first and then the pictures, and the other half completed the tasks in the reverse sequence. In the emotional experience task, picture categories were presented in a counterbalanced order across participants. In all cases, each picture was presented for 6 s, followed by a 2 s black screen with a fixation cross. Participants were instructed to look at the picture and rate their emotional experience using the SAM scales of valence, arousal, and dominance, with no time limit. In the moral dilemmas task, the subsets of dilemmas (moral personal, moral impersonal, non-moral) were also presented in a counterbalanced order. Each dilemma was presented across three successive computer screens: the first described the dilemma (presentation), the second presented the response options and requested the choice (decision-making), and the third presented the difficulty scale and requested the perceived difficulty rating. Each screen remained on display without time limit while participants read and responded to the dilemmas.
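A minimal sketch of how the counterbalancing and timing parameters described above might be encoded (illustrative only; the subject-to-order assignment rule is an assumption, not the authors' procedure):

```python
# Illustrative counterbalancing sketch for task order, category order, and timing
import itertools

SUBJECT_ID = 7  # hypothetical participant index

# Half the sample completes the dilemmas first, half the pictures first
task_order = (["dilemmas", "pictures"] if SUBJECT_ID % 2 == 0
              else ["pictures", "dilemmas"])

# Picture categories rotate through counterbalanced orders across participants
categories = ["neutral", "pleasant", "unpleasant_nonmoral", "unpleasant_moral"]
orders = list(itertools.permutations(categories))
category_order = orders[SUBJECT_ID % len(orders)]

PICTURE_S, FIXATION_S = 6.0, 2.0  # 6 s picture, then 2 s fixation cross
print(task_order, category_order, PICTURE_S, FIXATION_S)
```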

Statistical Analysis

We used repeated-measures ANOVAs to test the main effects of picture category on subjective valence, arousal, and dominance ratings in the emotional experience task, and of dilemma type on affirmative choices and perceived difficulty in the moral decision-making task. Pairwise Bonferroni post-hoc tests were used to examine specific effects driven by the different picture categories and dilemma types. To test our main assumptions, we conducted Pearson product-moment correlation analyses between the valence, arousal, and dominance ratings for the images in the different picture categories and the choices and difficulty ratings for the moral and non-moral dilemmas. The Bonferroni correction was used to adjust the significance levels of the correlation coefficients for multiple comparisons (Curtin and Schulz, 1998). Corrected p values are reported throughout.
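For readers who want to reproduce this style of analysis, the following Python stand-in sketches a repeated-measures ANOVA and a Bonferroni-corrected Pearson correlation. The data are fabricated for illustration, and this is not the authors' analysis script:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical per-subject mean valence ratings, one row per subject x category
df = pd.DataFrame({
    "subject":  [s for s in range(1, 6) for _ in range(4)],
    "category": ["neutral", "pleasant", "unpl_nonmoral", "unpl_moral"] * 5,
    "valence":  [5.1, 7.9, 2.2, 2.4, 5.3, 7.5, 2.0, 2.1,
                 4.9, 7.7, 2.5, 2.6, 5.0, 8.0, 1.9, 2.2,
                 5.2, 7.6, 2.3, 2.5],
})

# Repeated-measures ANOVA: main effect of picture category on valence
print(AnovaRM(data=df, depvar="valence", subject="subject",
              within=["category"]).fit())

# Pearson correlation with Bonferroni adjustment over n_tests comparisons
def bonferroni_pearson(x, y, n_tests):
    r, p = stats.pearsonr(x, y)
    return r, min(p * n_tests, 1.0)  # corrected p value, capped at 1
```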

Subjective Reactivity to Emotional Stimuli

Results showed the expected significant differences between the valence, arousal, and dominance ratings evoked by the images in the different picture categories (see Table 1). Pairwise comparisons showed significant effects in all contrasts, with the exception of the contrast between the unpleasant non-moral-laden and unpleasant moral-laden categories, which had similar valence, arousal, and dominance ratings. Moreover, the unpleasant pictures (non-moral and moral) yielded higher unpleasantness and arousal ratings, and lower dominance ratings, than all other conditions.


TABLE 1. Descriptive scores, ANOVA and post-hoc comparisons for emotional valence, arousal, and dominance.

Decision-Making and Difficulty Ratings to Moral and Non-Moral Dilemmas

ANOVA analyses showed significant differences between moral and non-moral dilemmas in affirmative choices [F(1, 94) = 593.82, p < 0.001] and difficulty [F(1, 94) = 346.34, p < 0.001]. Moral dilemmas yielded fewer affirmative choices (M = 58.72, SD = 13.78) and higher perceived difficulty (M = 4.08, SD = 1.38) than non-moral dilemmas (M = 97.24, SD = 5.82, and M = 1.58, SD = 0.64, respectively). We also found significant differences between personal and impersonal moral dilemmas in utilitarian choices [F(1, 94) = 765.56, p < 0.001] and difficulty ratings [F(1, 94) = 51.81, p < 0.001]. Personal dilemmas yielded fewer utilitarian choices (personal: M = 28.62, SD = 20.01; impersonal: M = 88.82, SD = 14.29) and higher perceived difficulty (personal: M = 4.69, SD = 1.70; impersonal: M = 3.48, SD = 1.50).

Association between Subjective Reactivity to Emotional Stimuli and Utilitarian Choices and Difficulty Ratings to Dilemmas

Results showed that the subjective ratings evoked by the emotional stimuli were specifically associated with decision-making in moral, but not non-moral, scenarios. The proportion of affirmative (utilitarian) choices in moral dilemmas (merging personal and impersonal dilemmas) correlated with both valence and arousal ratings (see Figure 1). In contrast, the correlations between the proportion of affirmative (efficient) choices in non-moral dilemmas and the subjective ratings of valence, arousal, and dominance were non-significant (all p ≥ 0.170). Separate correlations between personal or impersonal moral dilemmas and the emotional experience task failed to reach significance in any picture category for valence (p ≥ 0.181), arousal (p ≥ 0.096), or dominance ratings (p ≥ 0.235).


FIGURE 1. Scatter plots of the correlation between the proportion of utilitarian choices in moral dilemmas and subjective reactivity of valence (A) and arousal (B) to affective stimuli.

Significant correlations between moral (personal and impersonal) decisions and emotional experience indicated that moral choices were associated with valence ratings for both unpleasant (moral: r = -0.29, p = 0.016; non-moral: r = -0.26, p = 0.043) and pleasant images (r = 0.26, p = 0.047): experiencing less unpleasantness in response to unpleasant images (both moral and non-moral), and more pleasantness in response to pleasant images, was associated with more utilitarian choices (see Figure 1A). Moral choices were also associated with arousal ratings for unpleasant moral-laden images (r = 0.34, p = 0.004): higher arousal responses correlated with more utilitarian choices (see Figure 1B). Perceived difficulty ratings for the moral dilemmas were negatively correlated with dominance ratings across both negative picture categories (unpleasant non-moral: r = -0.26, p = 0.043; unpleasant moral: r = -0.29, p = 0.016). By contrast, neither moral choices nor difficulty ratings correlated with the perceived moral content of the images.

The main findings of this study are the following: (1) individual differences in self-reported emotional experience correlate with decision-making in moral, but not non-moral, scenarios; (2) lower experienced unpleasantness for both moral and non-moral unpleasant images, and higher experienced pleasantness for pleasant images, are associated with utilitarian choice patterns; (3) higher experienced arousal (specifically in response to moral-laden images) is associated with more utilitarian choices; and (4) lower dominance over emotions is significantly associated with higher perceived difficulty in making decisions in moral scenarios. In agreement with our initial hypotheses, these findings support a specific association between emotional experience and moral decision-making, and the notion that a diminished experience of unpleasantness favors utilitarian choice patterns. The associations between higher arousal to unpleasant moral-laden pictures and utilitarian choices, and between low dominance and higher moral difficulty, were not originally predicted and warrant further research.

In agreement with previous findings, we showed that decision-making in healthy populations is sensitive to moral vs. non-moral content (Moll et al., 2001; Harenski and Hamann, 2006; Tassy et al., 2013; Van Bavel et al., 2013), and to personal vs. impersonal involvement within moral scenarios (Greene et al., 2001, 2004; Moretto et al., 2010; Koenigs et al., 2007; Carmona-Perera et al., 2013a). Difficulty of judgment may also help capture the emotional weight attached to these choices: for example, pushing a button to kill someone (low emotional salience) is considered easier than pushing a person onto the train tracks (high emotional salience). Participants therefore demonstrated sensitivity both to moral content and to the degree of emotional salience associated with that content.

Correlation analyses showed that moral patterns of choice (including both personal and impersonal dilemmas) correlate with subjective emotional experience, unlike non-moral decisions. Because separate consideration of personal and impersonal dilemmas did not yield significant correlations with emotional experience, our results can only speak to the association between moral decisions in general and emotional experience. In this respect, our findings agree with previous studies demonstrating associations between the processing of moral (vs. non-moral) content and emotional reactivity (Moll et al., 2001; Harenski and Hamann, 2006; Tassy et al., 2013; Van Bavel et al., 2013). The direction of the significant correlations, with higher subjective valence ratings accompanying a higher proportion of utilitarian choices in moral dilemmas, is consistent with a specific role of emotional processes in moral decision-making (Greene et al., 2001; Haidt, 2001). The dual-process model of moral judgment posits that decreased sensitivity to the negative emotional input attached to moral violations may foster utilitarian choice patterns (Greene, 2007; Greene et al., 2008). It is therefore to be expected that individuals with a reduced capacity to experience unpleasantness are more prone to endorse utilitarian choices. The findings can also be accounted for theoretically by the “undoing hypothesis,” which proposes that positive moods can “undo” the cognitive and physiological effects of negative emotions, thus decreasing the experience of unpleasantness and increasing utilitarian biases (Fredrickson et al., 2000; Fredrickson and Branigan, 2005). These findings also agree with abundant previous evidence that the induction of positive emotions reliably biases moral decision-making toward utilitarian patterns (Pastötter et al., 2012; Valdesolo and DeSteno, 2006).

In partial disagreement with our initial hypothesis (lower arousal associated with utilitarian choices), we found a positive correlation between higher arousal ratings for unpleasant moral-laden pictures and a higher proportion of utilitarian choices. These findings can be accounted for by the inverted U-shaped association between arousal and decision-making, whereby moderate levels of arousal are optimal for processing the emotional input relevant to a decision, while too much or too little arousal becomes disruptive (Miu et al., 2008). Specifically, high levels of arousal have been shown to reduce the ability to detect the relevant aspects of emotional input in the context of emotion regulation during dilemma solving (Blair et al., 2012). We therefore tentatively suggest that higher arousal sensitivity may be associated with a greater influence of emotional information that is irrelevant to resolving the moral dilemmas. Alternatively, these findings could be interpreted within attentional control models of emotion, which postulate attentional interference effects at higher arousal levels (Schimmack and Derryberry, 2005; Jefferies et al., 2008). Decreased attentional control has recently been linked to utilitarian choices (Van Dillen et al., 2012). Attentional processes may thus also account for these findings, playing a moderating role between emotional experience and utilitarian choices.

An additional interesting finding was the association between lower dominance over emotions and higher perceived difficulty in deciding about the moral dilemmas. Previous cognitive neuroscience studies have linked individual differences in emotional regulation to moral decision-making, showing that lower emotional control increases decision difficulty by driving the individual into a more exhaustive appraisal process (Harenski et al., 2009; Bartels and Rips, 2010; Koven, 2011). Lower dominance ratings are associated with lower emotional control over the perceived situation (Bradley and Lang, 1994), so our results agree with the notion that lower emotional control is associated with more complex appraisal processes, perceived as more difficult.

In summary, we provide novel evidence of an association between subjective emotional experience and moral decision-making in a community sample. The strengths of the study include the use of a representative community sample drawn from the healthy population, well-validated quantitative measures of emotional experience and moral decision-making, and the potential clinical relevance of our findings. Because we show that variations in emotional experience, but not in subjective perceptions of moral content, are associated with utilitarian biases, we reason that interventions for individuals with moral judgment problems should focus on training and shaping emotional responses rather than on the “rules” characterizing moral violations. Such emotional interventions may be useful for restoring social decision-making in patients with acquired brain injuries (Koenigs et al., 2007; Moretto et al., 2010), psychopathy (Blair, 2007; Young et al., 2012), or drug addictions (Carmona-Perera et al., 2012; Khemiri et al., 2012). Our results should also be interpreted in light of relevant limitations. First, personal and impersonal moral dilemmas (which differ in emotional salience) did not differentially correlate with emotional experience; future studies are therefore warranted to explore whether our findings can be replicated in more heterogeneous samples allowing greater variance within the personal and impersonal categories. Second, since emotional input impacts not only moral choice but also a range of other decision-making processes (Paulus and Yu, 2012), future studies should determine whether the reported associations apply only to utilitarian vs. deontological moral choices or to a wider spectrum of decision-making scenarios. An additional limitation is that emotional measurement was based only on subjective responses, disregarding complementary physiological or behavioral indices (Lang et al., 1993) that should be included in future studies; we also did not measure some potential moderators of the link between emotion and moral decision-making, e.g., cognitive processes (Greene et al., 2008; Moore et al., 2008; Paxton et al., 2011; Amit and Greene, 2012; Van Dillen et al., 2012), personality traits (Bartels and Pizarro, 2011), or susceptibility to social and experimental demands (Lumer, 2010; Hess and Kotter-Grühn, 2011; Caravita et al., 2012).

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This research was supported by the “Red de Trastornos Adictivos,” RETICS Program, Instituto de Salud Carlos III, Spanish Ministry of Health (PI: AVG) and the Junta de Andalucía under the Research Project P07.HUM 03089 (PI: MPG). MCP is funded by an FPU predoctoral research grant (AP 2008-01848) from the Spanish Ministry of Education and Science. We would like to thank all participants involved in the study for their collaboration.

Amit, E., and Greene, J. D. (2012). You see, the ends don’t justify the means: visual imagery and moral judgment. Psychol. Sci. 23, 861–868. doi: 10.1177/0956797611434965


Bartels, D. M., and Pizarro, D. A. (2011). The mismeasure of morals: antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition 121, 154–161. doi: 10.1016/j.cognition.2011.05.010

Bartels, D. M., and Rips, L. J. (2010). Psychological connectedness and intertemporal choice. J. Exp. Psychol. Gen . 139, 49–69. doi: 10.1037/a0018062

Blair, K. S., Geraci, M., Smith, B. W., Hollon, N., DeVido, J., Otero, M., et al. (2012). Reduced dorsal anterior cingulate cortical activity during emotional regulation and top-down attentional control in generalized social phobia, generalized anxiety disorder, and comorbid generalized social phobia/generalized anxiety disorder. Biol. Psychiatry 72, 476–482. doi: 10.1016/j.biopsych.2012.04.013

Blair, R. J. (2007). The amygdala and ventromedial prefrontal cortex in morality and psychopathy. Trends Cogn. Sci. 11, 387–392. doi: 10.1016/j.tics.2007.07.003

Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: the self-assessment Manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59. doi: 10.1016/0005-7916(94)90063-9

Caravita, S. C., Gini, G., and Pozzoli, T. (2012). Main and moderated effects of moral cognition and status on bullying and defending. Aggress. Behav. 38, 456–468. doi: 10.1002/ab.21447

Carmona-Perera, M., Clark, L., Young, L., Pérez-García, M., and Verdejo-García, A. (2013a). Impaired decoding of fear and disgust predicts utilitarian moral judgment in alcohol-dependent individuals. Alcohol. Clin. Exp. Res. (in press).

Carmona-Perera, M., Reyes del Paso, G. A., Pérez-García, M., and Verdejo-García, A. (2013b). Heart rate correlates of utilitarian moral decision-making in alcoholism. Drug Alcohol Depend. doi: 10.1016/j.drugalcdep.2013.06.023 [Epub ahead of print].

Carmona-Perera, M., Verdejo-García, A., Young, L., Molina-Fernández, A., and Pérez-García, M. (2012). Moral decision-making in polysubstance dependent individuals. Drug Alcohol Depend. 126, 389–392. doi: 10.1016/j.drugalcdep.2012.05.038

Curtin, F., and Schulz, P. (1998). Multiple correlations and Bonferroni’s correction. Biol. Psychiatry 44, 775–777. doi: 10.1016/S0006-3223(98)00043-2

Derogatis, L. R. (1977). SCL-90-R: Administration, Scoring and Procedures Manual I for the Revised Version and Other Instruments of the Psychopathology Rating Scale Series. Baltimore: Johns Hopkins University.

Fredrickson, B. L., and Branigan, C. A. (2005). Positive emotions broaden the scope of attention and thought–action repertoires. Cogn. Emot. 19, 313–332. doi: 10.1080/02699930441000238

Fredrickson, B. L., Mancuso, R. A., Branigan, C., and Tugade, M. M. (2000). The undoing effect of positive emotions. Motiv. Emot. 24, 237–258. doi: 10.1023/A:1010796329158

Greene, J. D. (2007). Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. Trends Cogn. Sci. 11, 322–323; author reply 323–324. doi: 10.1016/j.tics.2007.06.004

Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E., and Cohen, J. D. (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107, 1144–1154. doi: 10.1016/j.cognition.2007.11.004

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., and Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron 44, 389–400. doi: 10.1016/j.neuron.2004.09.027

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108. doi: 10.1126/science.1062872

Greenwald, M. K., Cook, E. W., and Lang, P. J. (1989). Affective judgment and psychophysiological response: dimensional covariation in the evaluation of pictorial stimuli. J. Psychophysiol. 3, 51–64.

Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol. Rev. 108, 814–834. doi: 10.1037/0033-295X.108.4.814

Harenski, C. L., and Hamann, S. (2006). Neural correlates of regulating negative emotions related to moral violations. Neuroimage 30, 313–324. doi: 10.1016/j.neuroimage.2005.09.034

Harenski, C. L., Kim, S. H., and Hamann, S. (2009). Neuroticism and psychopathy predict brain activation during moral and non-moral emotion regulation. Cogn. Affect. Behav. Neurosci. 9, 1–15. doi: 10.3758/CABN.9.1.1

Harlé, K. M., and Sanfey, A. G. (2010). Effects of approach and withdrawal motivation on interactive economic decisions. Cogn. Emot. 24, 1456–1465. doi: 10.1080/02699930903510220


Heekeren, H. R., Wartenburger, I., Schmidt, H., Prehn, K., Schwintowski, H. P., and Villringer, A. (2005). Influence of bodily harm on neural correlates of semantic and moral decision-making. Neuroimage 24, 887–897. doi: 10.1016/j.neuroimage.2004.09.026

Heekeren, H. R., Wartenburger, I., Schmidt, H., Schwintowski, H.-P., and Villringer, A. (2003). An fMRI study of simple ethical decision-making. Neuroreport 14, 1215–1219. doi: 10.1097/00001756-200307010-00005

Hess, T. M., and Kotter-Grühn, D. (2011). Social knowledge and goal-based influences on social information processing in adulthood. Psychol. Aging 26, 792–802. doi: 10.1037/a0023775

Jefferies, L. N., Smilek, D., Eich, E., and Enns, J. T. (2008). Emotional valence and arousal interact in attentional control. Psychol. Sci. 19, 290–295. doi: 10.1111/j.1467-9280.2008.02082.x

Khemiri, L., Guterstam, J., Franck, J., and Jayaram-Lindström, N. (2012). Alcohol dependence associated with increased utilitarian moral judgment: a case control study. PLoS ONE 7:e39882. doi: 10.1371/journal.pone.0039882

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgments. Nature 446, 908–911. doi: 10.1038/nature05631

Koven, N. S. (2011). Specificity of meta-emotion effects on moral decision-making. Emotion 11, 1255–1261. doi: 10.1037/a0025616

Lang, P. J. (1980). “Behavioral treatment and bio-behavioral assessment: computer applications,” in Technology in Mental Health Care Delivery, ed. T. A. Williams (Norwood, NJ: Ablex), 119–137.

Lang, P. J., Bradley, M. M., and Cuthbert, B. N. (2001). International Affective Picture System (IAPS): Instruction Manual and Affective Ratings. Technical Report A-5. Gainesville, FL: University of Florida, Center for Research in Psychophysiology.

Lang, P. J., Greenwald, M. K., Bradley, M. M., and Hamm, A. O. (1993). Looking at pictures: affective, facial, visceral, and behavioral reactions. Psychophysiology 30, 261–273. doi: 10.1111/j.1469-8986.1993.tb03352.x

Lang, P. J., Öhman, A., and Vaitl, D. (1988). The International Affective Picture System [Photographic Slides]. Gainesville, FL: University of Florida, Center for Research in Psychophysiology.

Lumer, C. (2010). Moral desirability and rational decision. Ethical Theory Moral Pract. 13, 561–584. doi: 10.1007/s10677-010-9227-x

Miu, A. C., Heilman, R. M., and Houser, D. (2008). Anxiety impairs decision-making: psychophysiological evidence from an Iowa Gambling Task. Biol. Psychol. 77, 353–358. doi: 10.1016/j.biopsycho.2007.11.010

Moll, J., De Oliveira-Souza, R., Bramati, I. E., and Grafman, J. (2002b). Functional networks in emotional moral and non-moral social judgments. Neuroimage 16, 696–703. doi: 10.1006/nimg.2002.1118

Moll, J., De Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., et al. (2002a). The neural correlates of moral sensitivity: a functional magnetic resonance imaging investigation of basic and moral emotions. J. Neurosci. 22, 2730–2736.


Moll, J., Eslinger, P. J., and De Oliveira-Souza, R. (2001). Frontopolar and anterior temporal cortex activation in a moral judgment task: preliminary functional MRI results in normal subjects. Arq. Neuropsiquiatr. 59, 657–664. doi: 10.1590/S0004-282X2001000500001

Moore, A. B., Clark, B. A., and Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychol. Sci. 19, 549–557. doi: 10.1111/j.1467-9280.2008.02122.x

Moretto, G., Làdavas, E., Mattioli, F., and Di Pellegrino, G. (2010). A psychophysiological investigation of moral judgment after ventromedial prefrontal damage. J. Cogn. Neurosci. 22, 1888–1899. doi: 10.1162/jocn.2009.21367

Ostrosky-Solís, F., Chayo-Dichy, R., Castillo-Parra, G., Vélez-García, A. E., and Arias-García, N. (2003). Valencia, activación, dominancia y contenido moral, ante estímulos visuales con contenido emocional y moral: un estudio en población mexicana. Rev. Esp. Neuropsicol. 5, 213–225.

Pastötter, B., Gleixner, S., Neuhauser, T., and Bäuml, K. H. T. (2012). To push or not to push? Affective influences on moral judgment depend on decision frame. Cognition 126, 373–377. doi: 10.1016/j.cognition.2012.11.003

Paulus, M. P., and Yu, A. J. (2012). Emotion and decision-making: affect-driven belief systems in anxiety and depression. Trends Cogn. Sci. 16, 476–483. doi: 10.1016/j.tics.2012.07.009

Paxton, J. M., Ungar, L., and Greene, J. D. (2011). Reflection and reasoning in moral judgment. Cogn. Sci. 36, 163–177. doi: 10.1111/j.1551-6709.2011.01210.x

Schimmack, U., and Derryberry, D. (2005). Attentional interference effects of emotional pictures: threat, negativity, or arousal? Emotion 5, 55–66. doi: 10.1037/1528-3542.5.1.55

Schnall, S., Haidt, J., Clore, G. L., and Jordan, A. H. (2008). Disgust as embodied moral judgment. Pers. Soc. Psychol. Bull. 34, 1096–1109. doi: 10.1177/0146167208317771

Tassy, S., Deruelle, C., Mancini, J., Leistedt, S., and Wicker, B. (2013). High levels of psychopathic traits alters moral choice but not moral judgment. Front. Hum. Neurosci. 7:229. doi: 10.3389/fnhum.2013.00229

Ugazio, G., Lamm, C., and Singer, T. (2012). The role of emotions for moral judgments depends on the type of emotion and moral scenario. Emotion 12, 579–590. doi: 10.1037/a0024611

Valdesolo, P., and DeSteno, D. (2006). Manipulations of emotional context shape moral judgment. Psychol. Sci. 17, 476–477. doi: 10.1111/j.1467-9280.2006.01731.x

Van Bavel, J. J., Packer, D. J., Haas, I. J., and Cunningham, W. A. (2013). The importance of moral construal: moral versus non-moral construal elicits faster, more extreme, universal evaluations of the same actions. PLoS ONE 7:e48693. doi: 10.1371/journal.pone.0048693

Van Dillen, L. F., Van der Wal, R. C., and Van den Bos, K. (2012). On the role of attention and emotion in morality: attentional control modulates unrelated disgust in moral judgments. Pers. Soc. Psychol. Bull. 38, 1222–1231. doi: 10.1177/0146167212448485

Vélez-García, A. E., Chayo-Dichy, R., García, N. A., Castillo-Parra, G., and Ostrosky-Solís, F. (2003). Emociones morales, una batería para su medición. Rev. Neuropsicol. Neuropsiquiatr. Neurosci. 5, 189–199.

Verdejo-García, A., López-Torrecillas, F., Aguilar de Arcos, F., and Pérez-García, M. (2005). Differential effects of MDMA, cocaine, and cannabis use severity on distinctive components of the executive functions in polysubstance users: a multiple regression analysis. Addict. Behav. 30, 89–101. doi: 10.1016/j.addbeh.2004.04.015

Wheatley, T., and Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychol. Sci. 16, 780–784. doi: 10.1111/j.1467-9280.2005.01614.x

Young, L., Koenigs, M., Kruepke, M., and Newman, J. P. (2012). Psychopathy increases perceived moral permissibility of accidents. J. Abnorm. Psychol. 121, 659–667. doi: 10.1037/a0027489

Keywords: moral decision-making, utilitarian choices, moral emotions, valence, arousal

Citation: Carmona-Perera M, Martí-García C, Pérez-García M and Verdejo-García A (2013) Valence of emotions and moral decision-making: increased pleasantness to pleasant images and decreased unpleasantness to unpleasant images are associated with utilitarian choices in healthy adults. Front. Hum. Neurosci. 7:626. doi: 10.3389/fnhum.2013.00626

Received: 30 April 2013; Accepted: 10 September 2013; Published online: 26 September 2013.


Copyright © 2013 Carmona-Perera, Martí-García, Pérez-García and Verdejo-García. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Antonio Verdejo-García, School of Psychology and Psychiatry, Monash University, 3800 Wellington Road, Melbourne, VIC, Australia e-mail: [email protected]

This article is part of the Research Topic “Interactions between emotions and social context: Basic, clinical and non-human evidence.”

Moshe Ratson MBA, MFT


The Power of Emotions in Decision-Making

How to use emotions constructively in decision-making.

Updated August 7, 2023 | Reviewed by Monica Vilhauer

  • Emotions play a significant role in decision-making.
  • Without emotions to motivate and push us, we would be passive and do nothing.
  • Make sure to balance and integrate emotional insights with logical reasoning.
  • Practice emotional intelligence skills to improve your decisions.


Many would consider emotions to be a hindrance to decision-making and, therefore, think that they would be better off without them. They may avoid or suppress them, rather than feel, process and understand their meaning. When it comes to the decision-making process, they would prefer to be rational rather than emotional.

However, emotions have value. Without emotions to motivate and push us, we would be passive and do nothing. Decisions are deeply informed by our emotional state, since informing action is what emotions are designed to do. Emotions quickly condense an experience and evaluate it to inform our decision, so we can respond rapidly to the situation.

While emotions serve to direct us, they are driven by our automatic survival machinery. As such, emotions usually communicate their messages below our level of awareness. Because of their speed and survival purpose, emotions are not particularly accurate: what they lack in specificity and detail they make up for in speed and effectiveness. This is why the emotional system produces many false alarms, which requires us to reevaluate our response and check whether it is appropriate to the particular situation.

Recent research has established that emotion is crucial to rational decision-making. Antonio Damasio and his colleagues concluded that in the absence of emotional markers, decision-making is virtually impossible. Our emotions drive the conclusions we reach, and our well-being may depend on our ability to understand and interpret them while integrating them with the rational mind to make an appropriate decision. While it is important to consider and process emotional signals, we need to evaluate our responses and check whether they suit the situation at hand.

How to use emotions for effective decision-making

Here are some steps to effectively use emotions for successful decision-making:

Welcome your emotions

Don’t repress or ignore your emotions. Start by identifying and understanding them. Take a moment to recognize what you are feeling and why you are feeling it. This mindful process of self-examination is critical to healthy decisions, since emotions can influence our views and judgments.

Remember “emotional bias”

Because of their survival nature, emotions can create biases that affect how we perceive information and interpret situations. Remember that the emotional brain cares more about being safe than about being correct. Listen to its alarm signal, and at the same time question its message.

Regulate your emotions

Emotions, especially at a high intensity, impact our ability to make rational decisions. Strong emotions can impair our judgment and make it challenging to think objectively and critically. This is why it is important to temper our emotions to be balanced and proportional to the situation.

Utilize emotions as a guide

Emotions can act as a compass, pointing you toward what matters most to you and/or what aligns with your values. However, it is essential to avoid letting emotions dictate your decision-making. Make sure to balance emotional insights with logical reasoning.

Enlist your rational mind

It is important to enlist the help of the rational mind. By doing so, you move from a system that operates quickly, intuitively, and unconsciously to a system that is slower and more controlled, rational, and conscious. You move beyond an impulsive, reactive emotional system to one that is contemplative, flexible, and strategic.


Consider the context

Evaluate the situation at hand and consider that emotions may be influenced by the context. Emotions that arise from past experiences or personal biases might cloud your judgment. Separate the present situation from the past and focus on the relevant factors.

Assemble relevant information

Emotions can provide valuable insights, but they should be complemented with factual information. Take your time to gather crucial information before making important decisions. Analyze the pros and cons of your options to make the best possible decisions.

Practice mindfulness

Mindfulness is key to harmonizing the mind. The unregulated mind can become deluded, allowing passions, urges, and wild emotions to take over. Mindfulness allows us to notice our emotions and engage the rational mind to interpret their message. The goal is to treat your emotions as a gateway to a greater level of awareness.

Cultivate compassion

Cultivating compassion in decision making is a powerful way to make more empathetic, ethical, and balanced choices that consider the well-being of all. Compassion helps us soothe the emotional mind and choose actions that will benefit ourselves and others.

Practice emotional intelligence

Emotional intelligence is the ability to recognize and manage your emotions effectively. Key elements of emotional intelligence are self-awareness, self-regulation, motivation, empathy, and social skills. By developing emotional intelligence skills, you can use your emotions to inform your decisions without being controlled by them.

Reframe the situation

Reframing means consciously changing your way of thinking about the meaning of an emotionally charged situation in order to reduce negative feelings. You shift your interpretation of an event by specifically having loving thoughts and extending compassion to yourself and to other people.

Expand your perspective

When you see the big picture and are focused on your highest purpose, you are not distracted by smaller issues and impulses. Figuring out your deepest long-term goals and pursuing them will channel your emotions toward peace and harmony. It will allow you to recognize that if the decision is driven by your values, it’s the best decision regardless of the outcome.

To sum up, emotions play a significant role in decision-making and, when used properly, they can enhance the effectiveness of the decision-making process. Remember, emotions are a natural part of being human, and they can be a valuable asset in decision-making. By combining emotional insights with rational thinking, you can make more effective and well-rounded decisions.

Keltner, D., and Lerner, J. S. (2010). “Emotion,” in The Handbook of Social Psychology, eds D. T. Gilbert, S. T. Fiske, and G. Lindzey (New York, NY: Wiley), 317–352.

Damasio, A. R. (1990). Individuals with sociopathic behavior caused by frontal damage fail to respond autonomically to social stimuli. Behavioural Brain Research, 41, 81–94.


Moshe Ratson, MBA, MFT, is a psychotherapist and executive coach in NYC. He specializes in personal and professional development, anger management, emotional intelligence, infidelity issues, and couples and marriage therapy.


Chapter 5. Feelings about ethical decisions: the emotions of moral residue

Emotions, Ethics and Decision-Making

ISBN : 978-1-84663-940-1 , eISBN : 978-1-84663-941-8

Publication date: 25 July 2008

Scholars in philosophy have proposed that individuals who choose between two equally ethical alternatives will experience regret as a result of the “moral residue” that remains from not having been able to select both. Although often posed and discussed by philosophers, this proposition has not been empirically tested. This chapter proposes a theoretical framework that synthesizes propositions from philosophy with theory and research on emotions in the workplace to address how the characteristics of ethical dilemmas give rise to different emotions, how the characteristics of employees affect the experience of emotions, and what consequences emotions have as a result of ethical decision-making.

Zerbe, W.J. (2008), "Chapter 5 Feelings about ethical decisions: the emotions of moral residue", Zerbe, W.J. , Härtel, C.E.J. and Ashkanasy, N.M. (Ed.) Emotions, Ethics and Decision-Making ( Research on Emotion in Organizations, Vol. 4 ), Emerald Group Publishing Limited, Leeds, pp. 109-129. https://doi.org/10.1016/S1746-9791(08)04005-4


Copyright © 2008, Emerald Group Publishing Limited


  • Open access
  • Published: 14 May 2024

Pulling the lever in a hurry: the influence of impulsivity and sensitivity to reward on moral decision-making under time pressure

  • Fiorella Del Popolo Cristaldi 1 ,
  • Grazia Pia Palmiotti 2 ,
  • Nicola Cellini 1 , 3 &
  • Michela Sarlo 4  

BMC Psychology volume 12, Article number: 270 (2024)


Background

Making timely moral decisions can save a life. However, the literature on how moral decisions are made under time pressure reports conflicting results. Moreover, it is unclear whether and how moral choices under time pressure are influenced by personality traits such as impulsivity and sensitivity to reward and punishment.

Methods

To address these gaps, we employed a moral dilemma task, manipulating decision time between participants: one group (N = 25) was subjected to time pressure (TP), with a maximum response time of 8 s (including reading time), while the other (N = 28) was free to take as much time as needed to respond (noTP). We measured the type of choice (utilitarian vs. non-utilitarian), decision times, self-reported unpleasantness and arousal during decision-making, and participants’ impulsivity and BIS-BAS sensitivity.

Results

We found no group effect on the type of choice, suggesting that time pressure per se did not influence moral decisions. However, impulsivity modulated the impact of time pressure: individuals with higher cognitive instability showed slower response times when under no time constraint. In addition, higher sensitivity to reward predicted a higher proportion of utilitarian choices regardless of the time available for the decision.

Conclusions

Results are discussed within the dual-process theory of moral judgement, revealing that the impact of time pressure on moral decision-making might be more complex and multifaceted than expected, potentially interacting with a specific facet of attentional impulsivity.


Making timely moral decisions is a real challenge, as became evident during the COVID-19 pandemic, when physicians and nurses were forced to decide quickly which patients to treat first under limited healthcare resources.

Sacrificial moral dilemmas are reliable experimental probes for studying the contribution of cognitive and emotional processes to moral decision-making [ 1 ]. In these studies, participants are confronted with life-and-death hypothetical scenarios in which they have to decide whether to endorse or reject the utilitarian choice of killing one person to save more lives. In the classic Trolley dilemma, the utilitarian option requires pulling a lever to redirect a runaway trolley, which would kill five workmen, onto a sidetrack where it will kill only one person; in the Footbridge version, it requires pushing one large man off an overpass onto the tracks to stop the runaway trolley. Research has consistently shown that most people endorse the utilitarian resolution in trolley-like dilemmas and reject it in footbridge-like dilemmas, despite the identical cost-benefit trade-off [ 1 , 2 , 3 ].

According to the dual-process model of moral judgement [ 1 ], responses to moral dilemmas are driven by the outcomes of a competition between cognitive and emotional processes. In the Footbridge case, a strong emotional aversive reaction to causing harm to one person overrides a cognitive-based analysis of saving more lives, driving toward the rejection of the utilitarian resolution because harming someone is perceived as an intended means to an end. Instead, in the Trolley case, a lower emotional engagement allows the deliberate cost-benefit reasoning to prevail and drive toward the utilitarian choice since harming someone is perceived as an unintended side effect. Therefore, dilemma resolutions vary depending on how much each dilemma type elicits aversive emotions, so that the more emotional processes are engaged the higher the likelihood of rejecting utilitarian choices. Unsurprisingly, in scenarios where the decision-maker’s own life is at stake (“personal involvement”), this pattern reverses, so that a strong negative emotional reaction to self-sacrifice pushes towards utilitarian, self-protective behaviour [ 4 ].

Time is a key feature of high-stakes human choices. Time pressure alters decision-making by increasing reliance on emotional states [ 5 ]. Previous research in moral decision-making has demonstrated that time pressure affects the outcomes and the processes involved in moral judgement, as it is assumed to reduce the time for the cost-benefit calculation letting emotional processes prevail. This led to a reduced proportion of utilitarian choices [ 6 , 7 , 8 , 9 , 10 ], and a decreased willingness to self-sacrifice in dilemmas with personal involvement [ 11 ]. However, evidence remains mixed, with some studies suggesting that reduced decision times are associated with a higher proportion of utilitarian choices [ 12 , 13 ], and other studies finding null results [ 14 ]. Moreover, very few studies have investigated if these phenomena are influenced by personality traits known to affect how people make decisions. Among these, impulsivity and motivational drives towards action/inhibition seem particularly relevant.

Impulsivity involves multiple cognitive and behavioural domains (e.g., inability to reflect on choices’ outcomes, to defer rewards, and to inhibit prepotent responses; [ 15 ]) that are strongly involved in decision-making. Beyond research on psychopathy, studies investigating the role of impulsivity in moral dilemmas are surprisingly scarce. Within moral judgments, higher impulsivity should reduce the engagement of deliberative processes, thereby allowing emotional processes to prevail. Nonetheless, previous studies measuring impulsivity in moral judgement tasks have found no effects of impulsivity on the type of resolutions taken [ 16 , 17 , 18 ], and to our knowledge, no study has manipulated decision times.

Motivational drives towards action and inhibition, namely the Behavioural Inhibition and Activation Systems (BIS/BAS), are worthy of investigation too. The BIS is sensitive to signals of punishment, inhibiting behaviours that lead to negative outcomes or potential harm, whereas the BAS is sensitive to reward, driving behaviours that result in positive outcomes [ 19 ]. Within moral dilemmas, “reward” corresponds to maximising the number of lives saved, thereby driving towards utilitarian resolutions. Consistently, previous research [ 20 ] showed that higher-BAS individuals tended to make an overall higher number of utilitarian choices, while higher-BIS participants tended to reject utilitarian resolutions, particularly in footbridge-like dilemmas. Notably, without time constraints, no effects of BIS-BAS on response times emerged.

In summary, there is strong evidence that cognitive-emotional conflict drives moral decisions in sacrificial dilemmas and that reducing decision time can further affect moral choices. However, the direction of this effect is still unclear, as is whether impulsivity and BIS-BAS sensitivity influence these processes. To address these gaps, we used a standardised set of moral scenarios to investigate the effect of impulsivity and BIS-BAS sensitivity on moral decision-making under time pressure. We manipulated decision time between participants, as has been successfully done by the majority of studies manipulating time pressure in moral dilemma tasks (e.g., [ 6 , 7 , 8 , 9 ]). A within-subjects design, conversely, would not have been appropriate because it could have generated a sequential effect in the responses (cf. [ 21 , 22 ]): the speeding effect of the time-pressure condition could have carried over to the condition without time pressure, potentially undermining the effectiveness of the manipulation. Moreover, we measured impulsivity and BIS/BAS sensitivity as well as self-reported valence and arousal experienced during decision-making. Consistently with the dual-process model, we expected the time-pressure group to show faster response times, higher arousal and unpleasantness ratings, and lower proportions of utilitarian choices [ 6 , 7 , 8 , 9 , 10 ]. As for the effects of impulsivity and BIS-BAS sensitivity, the literature is less conclusive in guiding stringent confirmatory hypotheses. Within the dual-process framework, we hypothesised that individuals with higher impulsivity would exhibit a greater tendency towards emotionally driven responses, particularly under time pressure. Time constraints might hinder a careful evaluation of the options and decision outcomes by increasing emotional activation or depleting the cognitive resources available for decision-making. This might lead to a lower endorsement of utilitarian choices and/or an increase in self-protective behaviours in dilemmas involving personal involvement. In line with Moore et al. [ 20 ], individuals with higher BAS sensitivity might show an overall propensity towards utilitarian resolutions, while BIS-reactive individuals might show the opposite, with this trend reversed in dilemmas with personal involvement.

Participants

Sixty healthy university students (37 women) were recruited to participate voluntarily in the study. They had no history of psychiatric or neurological disorders and no prior knowledge of moral dilemmas. The sample size was based on previous studies manipulating time pressure in moral dilemma tasks [ 6 , 8 ] and yielded 96% post-hoc power (α = 0.05, f = 0.50).
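As a rough cross-check of the reported power figure (an approximation treating the design as a two-group one-way ANOVA, which is an assumption, not the authors' computation):

```python
# Approximate post-hoc power for 2 groups, total N = 60, Cohen's f = 0.50
from statsmodels.stats.power import FTestAnovaPower

power = FTestAnovaPower().power(effect_size=0.50, nobs=60,
                                alpha=0.05, k_groups=2)
print(f"approximate power: {power:.2f}")  # close to the reported 96%
```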

Participants were randomly assigned to either the time pressure (TP, N = 30) or no time pressure (noTP, N = 30) group. Data from 6 participants were discarded because of deviations from the instructions during data collection (e.g., reversed response scales, not keeping the fingers on the computer keys during the task). Data from one further participant were discarded according to the a-priori criterion of missing responses in more than 20% of trials. The final sample included 53 participants (TP group = 25, F = 15, age M = 22 years, SD = 1.55 years, range = 20–25; noTP group = 28, F = 17, age M = 21.9 years, SD = 1.77 years, range = 19–25).

All participants gave written consent before participation. The study was approved by the Ethical Committee for Psychological Research of the University of Padua (protocol n. 2105) and conducted in accordance with the Declaration of Helsinki.

Stimulus material

A set of 75 moral dilemmas [ 4 ] was administered to each participant. This consisted of 60 experimental dilemmas and 15 filler dilemmas. Experimental dilemmas included 30 trolley- and 30 footbridge-like dilemmas, of which 15 involved personal involvement and 15 did not. Filler dilemmas were similar to experimental dilemmas but described non-lethal moral issues (e.g., stealing, lying, being dishonest), and were included to avoid automaticity in responding due to habituation to lethal scenarios. This condition was not analysed and is not discussed further here.

Dilemmas were presented randomly within 3 blocks of 25 trials each (10 footbridge-like, 10 trolley-like, and 5 filler dilemmas). Each dilemma was presented as text, in white type against a grey background, through a series of two screens. The first described the scenario, in which some threat is going to cause the death of a group of people; the second described the hypothetical action (the utilitarian option, namely saving more lives), in which the agent kills one individual to save the group. Participants had to choose whether or not to enact this behaviour by pressing the corresponding key on the computer keyboard.

Stimuli were presented on a 19-inch computer screen at a 100 cm viewing distance, using E-Prime software [ 23 ].

Upon arrival, participants were given information about the experiment and signed the informed consent. They were then asked to fill out the State-Trait Anxiety Inventory (STAI Form Y-2) [ 24 ] and the Beck Depression Inventory (BDI-II) [ 25 ]. Since anxiety and depression interact with emotional reactivity and with decision-making under time pressure [ 26 ], we measured and controlled for them in our experiment.

Afterwards, participants sat in a sound-attenuated room where instructions for the task were given. Specifically, they were asked to identify with the main character of the scenarios. Each trial began with the scenario, which participants could read at their own pace. After they pressed the spacebar, the utilitarian option was presented for a maximum of 8 s in the TP group and for an unlimited time in the noTP group. Participants were asked to read the proposed action and decide whether or not to choose it by pressing one of two computer keys marked “YES” or “NO”.

In the TP group, participants had a limited time to respond, indicated by a white bar located above the text on the upper side of the screen, which shrank every second and disappeared when time ran out. Instructions stressed responding within the time indicated by the bar. If participants failed to respond within the allotted time, the next scenario appeared. In the noTP group, participants were instructed to respond whenever they reached a decision, with as much time as they wanted. In both groups, response times were recorded from the onset of the utilitarian option on the screen.

After their response, participants were required to rate how they had felt while deciding, using a computerised version of the Self-Assessment Manikin (SAM) [ 27 ] displaying the 9-point scales of valence (unpleasantness/pleasantness) and arousal (calm/activation), with higher scores indicating higher pleasantness and higher arousal. Then, the next scenario was presented. After each block of trials, participants could take a break to avoid fatigue. Before starting the experimental session, each participant familiarised themselves with the task through two practice trials to check that they understood the instructions properly. After the experimental session, participants were asked to fill out the Barratt Impulsiveness Scale (BIS-11) [ 28 ] and the BIS-BAS Scales [ 29 ].

The BIS-11 is a 30-item self-report questionnaire measuring impulsivity, rated on a 4-point Likert scale from 1 = rarely/never to 4 = almost always/always. Total scores range from 30 to 120, with higher scores reflecting higher levels of impulsivity. The BIS-11 comprises six first-order subscales: attention (e.g., “focusing on the task at hand”), motor impulsiveness (e.g., “acting on the spur of the moment”), self-control (e.g., “planning and thinking carefully”), cognitive complexity (e.g., “enjoy challenging mental tasks”), perseverance (e.g., “a consistent lifestyle”), and cognitive instability (e.g., “thought insertions and racing thoughts”). These fall under three second-order subscales: attentional impulsiveness (attention and cognitive instability), motor impulsiveness (motor impulsiveness and perseverance), and non-planning impulsiveness (self-control and cognitive complexity).

The BIS/BAS scales are a self-report measure of BIS-BAS sensitivity comprising a 7-item BIS subscale and a 13-item BAS factor with three subscales. Items are rated on a 5-point Likert scale from 1 = “does not describe me at all” to 5 = “describes me completely”, with higher scores indicating higher BIS-BAS sensitivity. The BIS subscale includes items regarding reactions to the anticipation of punishment. The BAS factor assesses how people respond to potentially rewarding events and comprises three subscales: Reward Responsiveness (5 items on positive responses to anticipated or actual reward), Drive (4 items on pursuing desired goals), and Fun Seeking (4 items on desiring new rewards and the willingness to approach a potentially rewarding event).
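A minimal scoring sketch for the two questionnaires (the item-to-subscale keys below are hypothetical placeholders; real scoring uses the published keys, including reverse-scored items):

```python
import numpy as np

def score_bis11(items):
    """items: 30 responses, each 1-4. Returns the total score (30-120)."""
    assert len(items) == 30
    return int(np.sum(items))  # higher total = higher impulsivity

def score_bisbas(items):
    """items: dict of 20 responses keyed 1-20, each 1-5.
    Item indices below are hypothetical, not the published scoring key."""
    subscales = {
        "BIS":    [1, 2, 3, 4, 5, 6, 7],    # 7 BIS items
        "Reward": [8, 9, 10, 11, 12],       # 5 Reward Responsiveness items
        "Drive":  [13, 14, 15, 16],         # 4 Drive items
        "Fun":    [17, 18, 19, 20],         # 4 Fun Seeking items
    }
    return {name: sum(items[i] for i in idx) for name, idx in subscales.items()}
```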

Data analysis

The study has a 2 (group, between-subjects: TP vs. noTP) × 2 (dilemma type, within-subjects: trolley-like vs. footbridge-like) × 2 (personal involvement, within-subjects: no involvement vs. involvement) mixed design. As dependent variables (DVs) we measured: type of choice (utilitarian vs. non-utilitarian), choice response times (in msec), and valence and arousal ratings. Although the type of choice may itself be a factor of interest, we chose not to include it as an additional fixed factor, because in our sample the number of trials in which participants endorsed utilitarian resolutions was not comparable to the number in which they rejected them within each dilemma type, making a statistical comparison between the two types of choice unreliable. For the sake of completeness, the Supplementary Material provides descriptive statistics (Table S1) and plots (Figure S1) for choice response times and for valence and arousal ratings as a function of group (TP vs. noTP), dilemma type (trolley- vs. footbridge-like), and type of choice (utilitarian vs. non-utilitarian).

Data were pre-processed according to the following a priori criteria: trials with missing values or with response times ≤ 150 msec were discarded (∼ 23%), response times were log-transformed to account for their skewed distribution [ 30 ], and questionnaire scores were mean-centred.
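In R, these pre-processing steps might look like the sketch below; the data frame `trials` and its column names are our own assumptions for illustration, not the authors' actual variable names.

```r
library(dplyr)

# Assumed long-format data: one row per trial, with columns rt (msec), choice,
# valence, arousal, and participant-level questionnaire scores.
trials <- trials %>%
  filter(!is.na(choice), !is.na(rt), rt > 150) %>%   # drop missing trials and RTs <= 150 msec
  mutate(log_rt = log(rt)) %>%                       # log-transform skewed RTs [30]
  mutate(across(c(bis11_total, bas_reward),          # mean-centre questionnaire covariates
                ~ .x - mean(.x, na.rm = TRUE)))
```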

Analyses were performed using R software. Outliers were detected through median absolute deviation values (MAD > 3) computed on choice, choice response times, valence, and arousal ratings. We identified 6 univariate outliers; however, visual inspection showed that their ratings differed only slightly from those of the other participants. Since none of them substantially impacted the models' estimates (as assessed through Cook's distance, see below), we retained them. Data from 53 participants entered the analysis.
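A minimal sketch of MAD-based outlier screening in R follows, assuming a hypothetical participant-level data frame `pp`. Note that R's mad() applies the consistency constant 1.4826 by default, so the robust score below is comparable to a standard z-score.

```r
# Robust outlier flag: |x - median| / MAD > 3
mad_outlier <- function(x, cutoff = 3) {
  abs(x - median(x, na.rm = TRUE)) / mad(x, na.rm = TRUE) > cutoff
}

# Applied to participant-level summaries of each dependent variable
# (`pp` and its column names are hypothetical).
sapply(pp[c("p_utilitarian", "mean_log_rt", "mean_valence", "mean_arousal")],
       mad_outlier)
```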

For each DV we fitted a (Generalised) Linear Mixed-effects Model ((G)LMM) with a by-participant random intercept, and with group, dilemma type, personal involvement, and their interactions as fixed factors. We used a binomial family for the GLMM on choice and a Gaussian family for the LMMs on the remaining DVs. BIS-11 and BIS-BAS scores were added as covariates in separate models, controlling for STAI and BDI-II scores in each model. When a significant effect of a questionnaire predictor was found, additional models testing the slopes of the questionnaire trends at each level of the fixed factors (group and dilemma type) were fitted.
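Under this specification, the two model families could be fitted with lme4 as in the sketch below. The variable names (`trials`, `participant`, the covariate columns) are assumptions carried over from the earlier sketches; the authors' exact model syntax may differ.

```r
library(lme4)
library(lmerTest)  # adds Satterthwaite-based tests for LMMs

# Binomial GLMM on choice (1 = utilitarian, 0 = non-utilitarian), with a
# by-participant random intercept, a mean-centred questionnaire covariate,
# and STAI/BDI-II scores as controls.
m_choice <- glmer(
  choice ~ group * dilemma_type * involvement + bas_reward + stai + bdi +
    (1 | participant),
  data = trials, family = binomial
)

# Gaussian LMM on log-transformed response times, same structure.
m_rt <- lmer(
  log_rt ~ group * dilemma_type * involvement + bis11_coginstab + stai + bdi +
    (1 | participant),
  data = trials
)
```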

Influential cases were screened via Cook's distance (threshold > 1); none were found (N = 0). GLMM effects were tested through Type II Analysis of Deviance, while LMM effects were tested by means of F-tests with p-values calculated via Satterthwaite's degrees-of-freedom method (α = 0.05). All post-hoc pairwise comparisons were tested through contrasts of estimated marginal means or trends, adjusted for multiple comparisons with the False Discovery Rate (FDR) method. For each model, Supplementary Material Tables S2-S5 report the estimated parameters with 95% CIs, along with the marginal and conditional R².
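This inferential pipeline could be assembled from standard R packages as sketched below; the model objects are those from the previous sketch, and the package choices (car, emmeans, influence.ME) are plausible tools for these tests rather than a record of the authors' exact code.

```r
library(car)          # Type II Analysis of Deviance for the GLMM
library(emmeans)      # estimated marginal means and trends
library(influence.ME) # Cook's distance for mixed models

Anova(m_choice, type = "II")  # chi-squared tests for GLMM effects
anova(m_rt)                   # F-tests with Satterthwaite df (via lmerTest)

# Post-hoc pairwise contrasts, FDR-adjusted for multiple comparisons.
emmeans(m_rt, pairwise ~ dilemma_type | involvement, adjust = "fdr")
# Slopes of a questionnaire covariate at each level of the fixed factors.
emtrends(m_rt, ~ group | dilemma_type, var = "bis11_coginstab")

# Influential participants: Cook's distance > 1 would flag problematic cases.
infl <- influence.ME::influence(m_rt, group = "participant")
which(cooks.distance(infl) > 1)
```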

Descriptive statistics

Descriptive statistics are summarised in Table  1 .

Proportion of utilitarian choices

The model on choices (R² marginal = 0.277; R² conditional = 0.506; Fig. 1A; Supplementary Material Table S2) showed no significant effect of group (χ²(1) = 1.05, p = .777). A main effect of dilemma type (χ²(1) = 647.064, p < .001) was observed, with trolley-like dilemmas eliciting a higher proportion of utilitarian choices than footbridge-like dilemmas (trolley vs. footbridge: 2.68, SE = 0.104, z = 25.63, p < .001). We also found a main effect of involvement (χ²(1) = 9.36, p = .002), qualified by a significant dilemma type × involvement interaction (χ²(1) = 10.13, p = .002; Fig. 1B): dilemmas with personal involvement elicited a higher proportion of utilitarian choices than dilemmas without personal involvement only in footbridge-like dilemmas (trolley no involvement vs. involvement: -0.026, SE = 0.126, z = -0.207, p = .836; footbridge no involvement vs. involvement: -0.610, SE = 0.138, z = -4.424, p < .001). Lastly, we found significant effects of BIS-BAS scores: regardless of group and dilemma type, and controlling for STAI and BDI-II scores, higher Reward Responsiveness subscale scores predicted a higher proportion of utilitarian choices (R² marginal = 0.310; R² conditional = 0.497; χ²(1) = 9.35, p = .002; β = 0.181, SE = 0.06, z = 3.27, p = .001; Fig. 1C).

Figure 1. (A) Utilitarian choices as a function of the Dilemma type. (B) Utilitarian choices as a function of Personal Involvement. (C) Relation between utilitarian choices and BIS-BAS Reward Responsiveness subscale scores in the whole sample. Error bars (and grey area) represent standard errors of the means.

Choice response times

The model on choice response times (R² marginal = 0.250; R² conditional = 0.588; Fig. 2A; Supplementary Material Table S3) showed significant main effects of group (F(1, 51) = 32.53, p < .001) and dilemma type (F(1, 3038) = 270.41, p < .001). Response times were faster in the TP than in the noTP group (noTP vs. TP: 0.463 in log scale, 4266 msec; SE = 0.081, t(51) = 5.7, p < .001), and in footbridge- than in trolley-like dilemmas (trolley vs. footbridge: 0.191 in log scale, 1169 msec; SE = 0.012, t(3038) = 16.44, p < .001). We also found a main effect of involvement (F(1, 3038) = 4.56, p = .033), qualified by a significant dilemma type × involvement interaction (F(1, 3038) = 4.66, p = .031; Fig. 2B): dilemmas with personal involvement elicited faster response times than dilemmas without personal involvement only in footbridge-like dilemmas (trolley no involvement vs. involvement: 0.00 in log scale, 26 msec; SE = 0.016, t(3038) = -0.017, p = .987; footbridge no involvement vs. involvement: 0.05 in log scale, 210 msec; SE = 0.016, t(3038) = 3.039, p = .002).

We also found an effect of the BIS-11 Cognitive Instability score, which remained significant when controlling for STAI and BDI-II scores. From the model (R² marginal = 0.298; R² conditional = 0.605) testing the slopes of the BIS-11 Cognitive Instability trend at each level of the fixed factors, a significant interaction emerged between BIS-11 Cognitive Instability scores, group, and dilemma type (F(1, 3039) = 4.40, p = .036; Fig. 2C). The slope analysis showed that higher BIS-11 Cognitive Instability scores predicted slower response times in the noTP group for both dilemma types (trolley: β = 0.062, SE = 0.030, CI = [0.001, 0.122]; footbridge: β = 0.077, SE = 0.030, CI = [0.016, 0.138]), whereas the slopes in the TP group did not differ significantly from 0 (trolley: β = -0.016, SE = 0.034, CI = [-0.084, 0.051]; footbridge: β = -0.034, SE = 0.034, CI = [-0.102, 0.034]).

Figure 2. (A) Choice response times as a function of the Dilemma type. (B) Choice response times as a function of the Dilemma type and Personal Involvement. (C) Relation between choice response times and BIS-11 Cognitive Instability scores as a function of Dilemma type. Error bars (and shaded areas) represent standard errors of the means. TP: Time Pressure group. noTP: no Time Pressure group.

Valence ratings

The model on valence ratings (R² marginal = 0.006; R² conditional = 0.485; Fig. 3; Supplementary Material Table S4) showed only a significant main effect of dilemma type (F(1, 3038) = 10.402, p = .001), with trolley-like dilemmas eliciting higher unpleasantness than footbridge-like dilemmas (trolley vs. footbridge: -0.148, SE = 0.046, t(3042) = -3.22, p = .001). Neither personal involvement nor the questionnaire scores showed significant effects.

Figure 3. Valence ratings as a function of the Dilemma type. Error bars represent standard errors of the means.

Arousal ratings

The model on arousal ratings (R² marginal = 0.004; R² conditional = 0.540; Fig. 4A; Supplementary Material Table S5) showed a main effect of dilemma type (F(1, 3038) = 4.38, p = .036): trolley-like dilemmas elicited higher arousal ratings than footbridge-like dilemmas (trolley vs. footbridge: 0.111, SE = 0.054, t(3038) = 2.09, p = .036). A main effect of involvement (F(1, 3038) = 20.161, p < .001) also emerged, with dilemmas with personal involvement eliciting higher arousal ratings than dilemmas without personal involvement (no involvement vs. involvement: -0.24, SE = 0.053, t(3038) = -4.49, p < .001). No questionnaire scores significantly modulated arousal ratings.

Figure 4. (A) Arousal ratings as a function of the Dilemma type. (B) Arousal ratings as a function of Personal Involvement. Error bars represent standard errors of the means.

Discussion and conclusions

Making timely moral decisions can be crucial in saving lives (e.g., for physicians and nurses during surgery, or for airline pilots during turbulent flights). However, little is known about the processes underlying moral decision-making under time pressure and about their interaction with individual differences in impulsivity or sensitivity to reward and punishment. With this study, we aimed to fill these gaps by investigating the influence of these trait dimensions on moral decision-making under time pressure.

In line with the dual-process model [ 1 ], we found that trolley-like dilemmas elicited a higher proportion of utilitarian choices and slower response times, suggesting that rational cost-benefit analysis required additional time and cognitive effort. Moreover, contrary to the dual-process framework, but consistent with prior work using the present dilemma set [ 31 ], higher unpleasantness and arousal were reported in trolley-like dilemmas. We can interpret this result as due to the higher proportion of utilitarian choices in trolley-like dilemmas. Indeed, from the qualitative analysis of descriptive statistics about valence and arousal ratings as a function of the type of choice (see Supplementary Material Table S1 and Figure S1 ), it seems that in both groups and dilemma types higher unpleasantness is related to a higher proportion of utilitarian choices. This suggests that sacrificing one person, even when perceived as a side effect of maximising the number of lives saved, still carries an ongoing emotional cost. Given that utilitarian choices are more numerous in trolley-like dilemmas, we can reasonably speculate that the higher unpleasantness and arousal ratings found in trolley-like dilemmas are due to the higher number of choices in which participants faced the emotional cost of utilitarian resolutions. Consistent with the dual-process model, we also found that dilemmas with personal involvement (especially the footbridge-like ones) elicited a higher proportion of utilitarian choices, faster response times, and heightened arousal. In these dilemmas, where the utilitarian option implies saving one’s own life, greater emotional engagement results in clear-cut and prompt utilitarian decisions.

However, in contrast with our hypothesis, time pressure per se did not affect moral decisions or emotional experience, as evidenced by the lack of a group effect on type of choice, valence, and arousal ratings. In particular, time pressure did not induce a heightened state of arousal, as might be expected (e.g., [ 32 ]). We might speculate that emotional arousal was primarily driven by the task of resolving the dilemmas themselves: the strong emotional engagement elicited during dilemma resolution may have limited the impact of time pressure on subjective arousal, such that the additional stress of time constraints did not produce a significant incremental effect. Indeed, arousal ratings were consistently high (i.e., > 6.2) across all types of dilemmas and involvement conditions in both groups.

Overall, we can reasonably rule out that these results are due to a failure of our experimental manipulation to induce time pressure: the TP group did show faster decision times than the noTP group. It is possible that the 8-sec constraint we employed was not stringent enough to affect the type of choice (cf. the 4.4-sec and 1-sec constraints in [ 6 , 10 ]). However, as in [ 8 ], our 8-sec time constraint included the reading time for the utilitarian option (∼ 6.5 s, see [ 3 ]), and the decision time was constant across dilemmas since the number of words and text characters in the utilitarian options was fully balanced throughout (see [ 4 ]). This meant that participants actually had only 1.5 s, on average, to make a decision. Still, it is also conceivable that such a duration was inadequate to induce either a heightened state of arousal (as noted above) or a significant reduction in the cognitive resources available for engaging controlled processes during decision-making. Indeed, previous research on moral dilemmas [ 33 ] has demonstrated that a moderate cognitive load induced by a secondary task (i.e., a concurrent digit-search task) increased response times for utilitarian judgments while not affecting the type of judgement. Interestingly, a higher degree of cognitive load (i.e., performing an extremely difficult dot memory task) was found to be effective in reducing the number of utilitarian responses in high-conflict moral dilemmas [ 34 ]. These findings suggest that stronger experimental manipulations, either a higher cognitive load or stricter temporal constraints, are needed to impact effortful cognitive processing during the resolution of moral dilemmas.

Nonetheless, in our view, what may explain the unexpected null group effect lies mainly in the moral task employed. While previous studies [ 6 , 7 , 8 , 9 , 10 , 12 ] used a moral-acceptability question format, thus measuring moral judgments, we asked participants whether they would actually perform the proposed action. Prior research has highlighted a dissociation between moral judgement and choice of action, with the latter more closely tied to emotional experience and personal responsibility, whereas judgement relies mainly on cognitive perspective-taking [ 35 ]. It is thus plausible that time pressure interferes more with moral judgement, which additionally requires shifting from a first- to a third-person perspective [ 36 ].

However, in our study time pressure did indirectly influence the moral decision process, as demonstrated by the interaction effects on a specific facet of attentional impulsivity. Higher cognitive instability scores, as indexed by the BIS-11 subscale, predicted slower response times in the noTP group. Although such a result may seem counterintuitive and contrasts with our hypothesis, previous research has shown that impulsive individuals tend to be slower in choice reaction-time tasks [ 37 ] and Go/NoGo tasks [ 38 ], especially when information-processing demands and response complexity are increased [ 39 ], while other studies (e.g., [ 40 ]) reported an increase in the time taken to resolve interference. Cognitive instability involves intrusive thoughts and rapid shifts in attention and thinking, which can lead to difficulties in maintaining a consistent approach to complex problems. Therefore, when faced with moral dilemmas, individuals with high cognitive instability might find it challenging to settle on a course of action. This could result in longer response times, especially when there are no time constraints, as they might re-evaluate the available choices and decision outcomes multiple times. Conversely, when the time available for decision-making was constrained (TP group), cognitive instability exerted no influence. Thus, it can be speculated that time pressure might override the (disturbing) influence of cognitive instability by promoting focus on the task at hand, minimising the impact of internal distractions, and ensuring that attention is maintained on relevant information. Interestingly, these effects appear to be specific to this facet of attentional impulsivity, as other dimensions of impulsivity showed no associations with decision times. Furthermore, our analysis accounted for symptoms of depression and anxiety.

With regard to the BIS-BAS, a heightened tendency to anticipate and desire immediate reward (as indexed by the BAS Reward Responsiveness subscale) predicted a higher proportion of utilitarian choices in both groups, consistent with Moore and colleagues (2011) [ 20 ]. Therefore, regardless of the time available for decision-making, individuals who are more sensitive to rewards may be more inclined towards utilitarian responses, as they may prioritise the maximum overall positive outcome (i.e., saving the majority of people). This result might seem at odds with the idea that higher reward responsiveness should be related to a clear-cut prioritisation of self-interest, which, in this context, pertains to personal survival. Indeed, a number of studies on both healthy (e.g., [ 41 ]) and clinical populations (e.g., [ 42 ]) have highlighted that reward sensitivity plays a significant role in increasing the propensity for immoral behaviour (e.g., voluntary deception for one's own benefit). Concurrently, research has demonstrated that individuals high in psychopathic traits, which are associated with alterations of the neural reward system (e.g., [ 43 ]), show an increased willingness to endorse utilitarian choices (e.g., [ 35 , 44 ]). In psychopathy, though, this propensity may be attributed to a weaker sensitivity to consequences and a reduced concern for inflicting harm [ 45 ]. In our view, in the case of sacrificial moral dilemmas, where each choice involves the death of human beings and no choice is truly "right" or definitively moral, individuals high in reward sensitivity might still find it rewarding to help others, thus pursuing social rewards. Our findings contribute to the understanding of the complex interplay between reward sensitivity and moral behaviour, highlighting the significance of a specific contextual reward condition in which the lives of other people are at stake. Interestingly, while impulsivity has also been suggested to involve a tendency to prioritise immediate over delayed rewards (e.g., [ 46 ]), this trait did not influence responses or decision times specifically related to rewards, whether perceived as the saving of a greater number of lives or as personal survival. This suggests that, at least in the context of sacrificial moral dilemmas, reward-reactivity may be related to, yet distinct from, trait impulsivity (see [ 47 ]). Contrary to Moore et al. (2011) [ 20 ], we did not find any significant effect of BIS sensitivity. This discrepancy could once again be attributed to the different processes involved in formulating a moral judgement (as in [ 20 ]) versus deciding to undertake an action, as in our case. The BIS, being more focused on avoiding negative outcomes, might not have been as influential in this context, where either dilemma choice had aversive implications from a first-person perspective.

In summary, our study revealed that the impact of time pressure on moral decision-making may be more complex and multifaceted than expected, potentially interacting with a specific facet of attentional impulsivity. When dilemma resolutions are formulated as actions to be endorsed or rejected from a first-person perspective, decision choices do not appear to be influenced by the time available for deliberation. This indicates marked stability in behavioural responses to footbridge- and trolley-like dilemmas, as well as in the respective underlying processes. However, time pressure seemed to counteract the slowing effects of individual cognitive instability, possibly by maintaining attentional focus and thus reducing interference from cognitive-emotional conflicts. Interestingly, individual sensitivity to reward predicted utilitarian choices overall, indicating that within sacrificial moral dilemmas the number of lives saved can be effectively reframed as a (social) reward to be pursued. As might be expected, this broad effect was not sensitive to time pressure.

To conclude, some limitations of our study are worth mentioning. First, our paradigm did not include a "question screen" (e.g., [ 8 ]) that typically follows the option and to which decision times would be time-locked. This choice was based on the idea that decision-making encompasses dynamic, overlapping processes beginning as early as the reading of the option starts. However, it implied that decision times were strictly dependent on reading times; shorter individual reading times in the TP group might therefore have prevented the allotted decision time from exerting sufficient pressure. Second, although our study focused on impulsivity and BIS-BAS sensitivity, and controlled for levels of anxiety and depression, we acknowledge that other temperamental or personality traits may also affect the relation between time pressure and moral choices. We encourage further studies to better understand this complex, multifaceted phenomenon by overcoming the limitations of the present research.

Data availability

All the data and analyses cited in this manuscript have been made publicly available within the Open Science Framework (OSF) and can be accessed at the following permanent anonymous link: https://osf.io/23mwd/?view_only=ff1dafdf6c8f4e7a89bba29830d77910 .

Abbreviations

BAS: Behavioural Activation System

BDI-II: Beck Depression Inventory-II

BIS-11: Barratt Impulsiveness Scale 11

BIS: Behavioural Inhibition System

DV: Dependent variable

FDR: False discovery rate

GLMM: Generalised Linear Mixed-effects Model

LMM: Linear Mixed-effects Model

MAD: Median absolute deviation

noTP: No time pressure group

RT: Response time

SAM: Self-Assessment Manikin

SD: Standard deviation

SE: Standard error

STAI: State-Trait Anxiety Inventory

TP: Time pressure group

References

1. Greene JD, Sommerville RB, Nystrom LE, Darley JM, Cohen JD. An fMRI investigation of emotional engagement in moral judgment. Science. 2001;293(5537):2105–8.

2. Palmiotti GP, Del Popolo Cristaldi F, Cellini N, Lotto L, Sarlo M. Framing the outcome of moral dilemmas: effects of emotional information. Ethics Behav. 2020;30(3):213–29.

3. Sarlo M, Lotto L, Manfrinati A, Rumiati R, Gallicchio G, Palomba D. Temporal dynamics of cognitive–emotional interplay in moral decision-making. J Cogn Neurosci. 2012;24(4):1018–29.

4. Lotto L, Manfrinati A, Sarlo M. A new set of moral dilemmas: norms for moral acceptability, decision times, and emotional salience. J Behav Decis Mak. 2014;27(1):57–65.

5. Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. J Behav Decis Mak. 2000;13(1):1–17.

6. Cummins D, Cummins R. Emotion and deliberative reasoning in moral judgment. Front Psychol. 2012;3.

7. Kroneisen M, Steghaus S. The influence of decision time on sensitivity for consequences, moral norms, and preferences for inaction: time, moral judgments, and the CNI model. J Behav Decis Mak. 2021;34(1):140–53.

8. Suter RS, Hertwig R. Time and moral judgment. Cognition. 2011;119(3):454–8.

9. Trémolière B, Bonnefon JF. Efficient kill–save ratios ease up the cognitive demands on counterintuitive moral utilitarianism. Pers Soc Psychol Bull. 2014;40(7):923–30.

10. Yahoodik S, Samuel S, Yamani Y. Ethical decision making under time pressure: an online study. Proc Hum Factors Ergon Soc Annu Meet. 2021;65(1):601–5.

11. Swann WB, Gómez Á, Buhrmester MD, López-Rodríguez L, Jiménez J, Vázquez A. Contemplating the ultimate sacrifice: identity fusion channels pro-group affect, cognition, and moral decision making. J Pers Soc Psychol. 2014;106(5):713–27.

12. Rosas A, Aguilar-Pardo D. Extreme time-pressure reveals utilitarian intuitions in sacrificial dilemmas. Think Reason. 2020;26(4):534–51.

13. Tinghög G, Andersson D, Bonn C, Johannesson M, Kirchler M, Koppel L, et al. Intuition and moral decision-making – the effect of time pressure and cognitive load on moral judgment and altruistic behavior. PLoS ONE. 2016;11(10):e0164012.

14. Stenmark CK, Antes AL, Wang X, Caughron JJ, Thiel CE, Mumford MD. Strategies in forecasting outcomes in ethical decision-making: identifying and analyzing the causes of the problem. Ethics Behav. 2010;20(2):110–27.

15. Chamberlain SR, Sahakian BJ. The neuropsychiatry of impulsivity. Curr Opin Psychiatry. 2007;20(3):255.

16. Carmona-Perera M, Clark L, Young L, Pérez-García M, Verdejo-García A. Impaired decoding of fear and disgust predicts utilitarian moral judgment in alcohol-dependent individuals. Alcohol Clin Exp Res. 2014;38(1):179–85.

17. Lainidi O, Karakasidou E, Montgomery A, Triad D. Impulsivity and honesty-humility and intended behavior in a prisoner's dilemma game: a simulation study. In Review; 2021 Aug. https://www.researchsquare.com/article/rs-787616/v1

18. Young S, Gudjonsson GH, Goodwin EJ, Perkins D, Morris R. A validation of a computerised task of risk-taking and moral decision-making and its association with sensation-seeking, impulsivity and sociomoral reasoning. Pers Indiv Differ. 2013;55(8):941–6.

19. Gray JA. Brain systems that mediate both emotion and cognition. Cogn Emot. 1990;4(3):269–88.

20. Moore AB, Stevens J, Conway ARA. Individual differences in sensitivity to reward and punishment predict moral judgment. Pers Indiv Differ. 2011;50(5):621–5.

21. Dror IE, Basola B, Busemeyer JR. Decision making under time pressure: an independent test of sequential sampling models. Mem Cognit. 1999;27(4):713–25.

22. Kerstholt JH. The effect of time pressure on decision-making behaviour in a dynamic task environment. Acta Psychol. 1994;86(1):89–104.

23. Schneider W, Eschman A, Zuccolotto A. E-Prime. Pittsburgh, PA: Psychology Software Tools; 2010.

24. Spielberger C, Gorsuch R, Lushene R, Vagg P, Jacobs G. Manual for the State-Trait Anxiety Inventory (Form Y1–Y2). Vol. IV. Palo Alto, CA: Consulting Psychologists Press; 1983.

25. Beck AT, Steer RA, Brown G. Beck Depression Inventory. 2nd ed. San Antonio, TX: The Psychological Corporation; 1996.

26. Paulus MP, Yu AJ. Emotion and decision-making: affect-driven belief systems in anxiety and depression. Trends Cogn Sci. 2012;16(9):476–83.

27. Lang PJ, Bradley MM, Cuthbert BN. International Affective Picture System (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8. Gainesville, FL: University of Florida; 2008.

28. Patton JH, Stanford MS, Barratt ES. Factor structure of the Barratt Impulsiveness Scale. J Clin Psychol. 1995;51(6):768–74.

29. Carver CS, White TL. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: the BIS/BAS scales. J Pers Soc Psychol. 1994;67(2):319–33.

30. Wilcox R, Peterson TJ, McNitt-Gray JL. Data analyses when sample sizes are small: modern advances for dealing with outliers, skewed distributions, and heteroscedasticity. J Appl Biomech. 2018;34(4):258–61.

31. Cellini N, Mercurio M, Sarlo M. Sleeping over moral dilemmas modulates utilitarian decision-making. Curr Psychol. 2023;42(10):8244–54.

32. Stiensmeier-Pelster J, Schürmann M. Information processing in decision making under time pressure. In: Svenson O, Maule AJ, editors. Time pressure and stress in human judgment and decision making. Boston, MA: Springer US; 1993. p. 241–53. https://doi.org/10.1007/978-1-4757-6846-6_16

33. Greene JD, Morelli SA, Lowenberg K, Nystrom LE, Cohen JD. Cognitive load selectively interferes with utilitarian moral judgment. Cognition. 2008;107(3):1144–54.

34. Trémolière B, De Neys W, Bonnefon JF. Mortality salience and morality: thinking about death makes people less utilitarian. Cognition. 2012;124(3):379–84.

35. Tassy S, Oullier O, Mancini J, Wicker B. Discrepancies between judgment and choice of action in moral dilemmas. Front Psychol. 2013;4. https://doi.org/10.3389/fpsyg.2013.00250

36. Nichols S, Mallon R. Moral dilemmas and moral rules. Cognition. 2006;100(3):530–42.

37. Expósito J, Andrés-Pueyo A. The effects of impulsivity on the perceptual and decision stages in a choice reaction time task. Pers Indiv Differ. 1997;22(5):693–7.

38. Torres A, Catena A, Megías A, Maldonado A, Cándido A, Verdejo-García A, et al. Emotional and non-emotional pathways to impulsive behavior and addiction. Front Hum Neurosci. 2013;7. https://doi.org/10.3389/fnhum.2013.00043

39. Keilp JG, Sackeim HA, Mann JJ. Correlates of trait impulsiveness in performance measures and neuropsychological tests. Psychiatry Res. 2005;135(3):191–201.

40. Enticott PG, Ogloff JRP, Bradshaw JL. Associations between laboratory measures of executive inhibitory control and self-reported impulsivity. Pers Indiv Differ. 2006;41(2):285–94.

41. Hu X, Pornpattananangkul N, Nusslock R. Executive control- and reward-related neural processes associated with the opportunity to engage in voluntary dishonest moral decision making. Cogn Affect Behav Neurosci. 2015;15(2):475–91.

42. Ponsi G, Scattolin M, Villa R, Aglioti SM. Human moral decision-making through the lens of Parkinson's disease. npj Parkinsons Dis. 2021;7(1):1–7.

43. Buckholtz JW, Treadway MT, Cowan RL, Woodward ND, Benning SD, Li R, et al. Mesolimbic dopamine reward system hypersensitivity in individuals with psychopathic traits. Nat Neurosci. 2010;13(4):419–21.

44. Pletti C, Lotto L, Buodo G, Sarlo M. It's immoral, but I'd do it! Psychopathy traits affect decision-making in sacrificial dilemmas and in everyday moral situations. Br J Psychol. 2017;108(2):351–68.

45. Ng NL, Neumann CS, Luke DM, Gawronski B. Associations of aversive ('dark') traits and affiliative ('light') traits with moral-dilemma judgments: a preregistered exploratory analysis using the CNI model. J Res Pers. 2024;109:104450.

46. Schmidt B, Holroyd CB, Debener S, Hewig J. I can't wait! Neural reward signals in impulsive individuals exaggerate the difference between immediate and future rewards. Psychophysiology. 2017;54(3):409–15.

47. Smillie LD, Jackson CJ, Dalgleish LI. Conceptual distinctions among Carver and White's (1994) BAS scales: a reward-reactivity versus trait impulsivity perspective. Pers Indiv Differ. 2006;40(5):1039–50.


Acknowledgements

We would like to thank Prof. Lorella Lotto for her valuable contribution to defining the experimental paradigm and Dr. Carolina Pletti for her assistance in software programming.

The authors have no funding to disclose.

Open access funding provided by Università degli Studi di Padova.

Author information

Authors and Affiliations

Department of General Psychology, University of Padua, Via Venezia 8, Padua, 35131, Italy

Fiorella Del Popolo Cristaldi & Nicola Cellini

WFI - Ingolstadt School of Management, Catholic University of Eichstätt-Ingolstadt, Auf d. Schanz 49, 85049, Ingolstadt, Germany

Grazia Pia Palmiotti

Padua Neuroscience Center (PNC), University of Padua, Via Orus 2/B, Padua, 35129, Italy

Nicola Cellini

Department of Communication Sciences, Humanities and International Studies, University of Urbino Carlo Bo, Via Aurelio Saffi 2, Urbino, 61029, Italy

Michela Sarlo


Contributions

Conception and design: MS; Collection, analysis and interpretation of data: FDPC, GPP, NC, MS; Article drafting and revising: FDPC, GPP, NC, MS. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Fiorella Del Popolo Cristaldi .

Ethics declarations

Ethics approval and consent to participate

All procedures were approved by the Ethics Committee for Psychological Research of the University of Padua (protocol no. 2105) and were conducted in accordance with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. All participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Del Popolo Cristaldi, F., Palmiotti, G.P., Cellini, N. et al. Pulling the lever in a hurry: the influence of impulsivity and sensitivity to reward on moral decision-making under time pressure. BMC Psychol 12 , 270 (2024). https://doi.org/10.1186/s40359-024-01773-y


Received : 21 February 2024

Accepted : 08 May 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s40359-024-01773-y


Keywords

  • Moral dilemmas
  • Time pressure
  • Impulsivity
  • Decision-making


ScienceDaily

How do we make moral decisions?

New study shows how your moral behavior may change depending on the context.

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes?

Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University–Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

To examine moral decision-making within the context of reciprocity, the researchers designed a modified trust game called the Hidden Multiplier Trust Game, which allowed them to classify decisions in reciprocating trust as a function of an individual's moral strategy. With this method, the team could determine which type of moral strategy a study participant was using: inequity aversion (where people reciprocate because they want to seek fairness in outcomes), guilt aversion (where people reciprocate because they want to avoid feeling guilty), greed, or moral opportunism (a new strategy that the team identified, where people switch between inequity aversion and guilt aversion depending on what will serve their interests best). The researchers also developed a computational, moral strategy model that could be used to explain how people behave in the game and examined the brain activity patterns associated with the moral strategies.

The findings reveal for the first time that unique patterns of brain activity underlie the inequity aversion and guilt aversion strategies, even when the strategies yield the same behavior. For the participants that were morally opportunistic, the researchers observed that their brain patterns switched between the two moral strategies across different contexts. "Our results demonstrate that people may use different moral principles to make their decisions, and that some people are much more flexible and will apply different principles depending on the situation," explained Chang. "This may explain why people that we like and respect occasionally do things that we find morally objectionable."


Story Source:

Materials provided by Dartmouth College. Note: Content may be edited for style and length.

Journal Reference:

  • Jeroen M. van Baar, Luke J. Chang, Alan G. Sanfey. The computational and neural substrates of moral strategies in social decision-making. Nature Communications, 2019;10(1). DOI: 10.1038/s41467-019-09161-6
