Case Studies

More than 70 cases pair ethics concepts with real-world situations. From journalism, performing arts, and scientific research to sports, law, and business, these case studies explore current and historic ethical dilemmas, their motivating biases, and their consequences. Each case includes discussion questions, related videos, and a bibliography.

A Million Little Pieces

James Frey’s popular memoir stirred controversy and media attention after it was revealed to contain numerous exaggerations and fabrications.

Abramoff: Lobbying Congress

Super-lobbyist Abramoff was caught in a scheme to lobby against his own clients. Was a corrupt individual or a corrupt system – or both – to blame?

Apple Suppliers & Labor Practices

Is tech company Apple, Inc. ethically obligated to oversee the questionable working conditions of other companies further down its supply chain?

Approaching the Presidency: Roosevelt & Taft

Some presidents view their responsibilities in strictly legal terms, others according to duty. Roosevelt and Taft took two extreme approaches.

Appropriating “Hope”

Fairey’s portrait of Barack Obama raised debate over the extent to which an artist can use and modify another’s artistic work, yet still call it one’s own.

Arctic Offshore Drilling

Competing groups frame the debate over oil drilling off Alaska’s coast in varying ways depending on their environmental and economic interests.

Banning Burkas: Freedom or Discrimination?

The French law banning women from wearing burkas in public sparked debate about discrimination and freedom of religion.

Birthing Vaccine Skepticism

Wakefield published an article riddled with inaccuracies and conflicts of interest that created significant vaccine hesitancy regarding the MMR vaccine.

Blurred Lines of Copyright

Marvin Gaye’s Estate won a lawsuit against Robin Thicke and Pharrell Williams for the hit song “Blurred Lines,” which had a similar feel to one of his songs.

Bullfighting: Art or Not?

Bullfighting has been a prominent cultural and artistic event for centuries, but in recent decades it has faced increasing criticism for animal rights abuses.

Buying Green: Consumer Behavior

Does purchasing green products, such as organic foods and electric cars, give consumers the moral license to indulge in unethical behavior?

Cadavers in Car Safety Research

Engineers at Heidelberg University insist that the use of human cadavers in car safety research is ethical because their research can save lives.

Cardinals’ Computer Hacking

St. Louis Cardinals scouting director Chris Correa hacked into the Houston Astros’ webmail system, leading to legal repercussions and a lifetime ban from MLB.

Cheating: Atlanta’s School Scandal

Teachers and administrators at Parks Middle School adjust struggling students’ test scores in an effort to save their school from closure.

Cheating: Sign-Stealing in MLB

The Houston Astros’ sign-stealing scheme rocked the baseball world, leading to a game-changing MLB investigation and fallout.

Cheating: UNC’s Academic Fraud

UNC’s academic fraud scandal uncovered an 18-year scheme of unchecked coursework and fraudulent classes that enabled student-athletes to play sports.

Cheney v. U.S. District Court

A controversial case focuses on Justice Scalia’s personal friendship with Vice President Cheney and the possible conflict of interest it poses to the case.

Christina Fallin: “Appropriate Culturation?”

After Fallin posted a picture of herself wearing a Plains headdress on social media, uproar emerged over cultural appropriation and Fallin’s intentions.

Climate Change & the Paris Deal

While climate change poses many abstract problems, the actions (or inactions) of today’s populations will have tangible effects on future generations.

Cover-Up on Campus

While the Baylor University football team was winning on the field, university officials failed to take action when allegations of sexual assault by student athletes emerged.

Covering Female Athletes

Sports Illustrated stirs controversy when its cover photo of an Olympic skier seems to focus more on her physical appearance than her athletic abilities.

Covering Yourself? Journalists and the Bowl Championship

Can news outlets covering the Bowl Championship Series fairly report sports news if their own polls were used to create the news?

Cyber Harassment

After a student defames a middle school teacher on social media, the teacher confronts the student in class and posts a video of the confrontation online.

Defending Freedom of Tweets?

Running back Rashard Mendenhall receives backlash from fans after criticizing the celebration of the assassination of Osama Bin Laden in a tweet.

Dennis Kozlowski: Living Large

Dennis Kozlowski was an effective leader for Tyco in his first few years as CEO, but eventually faced criminal charges over his use of company assets.

Digital Downloads

File-sharing program Napster sparked debate over the legal and ethical dimensions of downloading unauthorized copies of copyrighted music.

Dr. V’s Magical Putter

Journalist Caleb Hannan outed Dr. V as a trans woman, sparking debate over the ethics of Hannan’s reporting, as well as its role in Dr. V’s suicide.

East Germany’s Doping Machine

From 1968 to the late 1980s, East Germany (GDR) doped some 9,000 athletes to gain success in international athletic competitions, despite being aware of the harmful side effects.

Ebola & American Intervention

Did the dispatch of U.S. military units to Liberia to aid in humanitarian relief during the Ebola epidemic help or hinder the process?

Edward Snowden: Traitor or Hero?

Was Edward Snowden’s release of confidential government documents ethically justifiable?

Ethical Pitfalls in Action

Why do good people do bad things? Behavioral ethics is the science of moral decision-making, which explores why and how people make the ethical (and unethical) decisions that they do.

Ethical Use of Home DNA Testing

The rising popularity of at-home DNA testing kits raises questions about privacy and consumer rights.

Flying the Confederate Flag

A heated debate ensues over whether or not the Confederate flag should be removed from the South Carolina State House grounds.

Freedom of Speech on Campus

In the wake of racially motivated offenses, student protests sparked debate over the roles of free speech, deliberation, and tolerance on campus.

Freedom vs. Duty in Clinical Social Work

What should social workers do when their personal values come in conflict with the clients they are meant to serve?

Full Disclosure: Manipulating Donors

When an intern witnesses a donor making a large gift to a non-profit organization under misleading circumstances, she struggles with what to do.

Gaming the System: The VA Scandal

The Veterans Administration’s incentives were meant to spur more efficient and productive healthcare, but not all administrators complied as intended.

German Police Battalion 101

During the Holocaust, ordinary Germans became willing killers even though they could have opted out from murdering their Jewish neighbors.

Head Injuries & American Football

Many studies have linked traumatic brain injuries and related conditions to American football, creating controversy around the safety of the sport.

Head Injuries & the NFL

American football is a rough and dangerous game, and its impact on the players’ brain health has sparked a hotly contested debate.

Healthcare Obligations: Personal vs. Institutional

A medical doctor must make a difficult decision when informing patients of the effectiveness of flu shots while upholding institutional recommendations.

High Stakes Testing

In the wake of the No Child Left Behind Act, parents, teachers, and school administrators take different positions on how to assess student achievement.

In-FUR-mercials: Advertising & Adoption

When the Lied Animal Shelter faces a spike in animal intake, an advertising agency uses its moral imagination to increase pet adoptions.

Krogh & the Watergate Scandal

Egil Krogh was a young lawyer working for the Nixon Administration whose ethics faded from view when asked to play a part in the Watergate break-in.

Limbaugh on Drug Addiction

Radio talk show host Rush Limbaugh argued that drug abuse was a choice, not a disease. He later became addicted to painkillers.

LochteGate

U.S. Olympic swimmer Ryan Lochte’s “over-exaggeration” of an incident at the 2016 Rio Olympics led to very real consequences.

Meet Me at Starbucks

Two black men were arrested after an employee called the police on them, prompting Starbucks to implement “racial-bias” training across all its stores.

Myanmar Amber

Buying amber could potentially fund an ethnic civil war, but refraining allows collectors to acquire important specimens that could be used for research.

Negotiating Bankruptcy

Bankruptcy lawyer Gellene successfully represented a mining company during a major reorganization, but failed to disclose potential conflicts of interest.

Pao & Gender Bias

Ellen Pao stirred debate in the venture capital and tech industries when she filed a lawsuit against her employer on grounds of gender discrimination.

Pardoning Nixon

One month after Richard Nixon resigned from the presidency, Gerald Ford made the controversial decision to issue Nixon a full pardon.

Patient Autonomy & Informed Consent

Nursing staff and family members struggle with informed consent when taking care of a patient who has been deemed legally incompetent.

Prenatal Diagnosis & Parental Choice

Debate has emerged over the ethics of prenatal diagnosis and reproductive freedom in instances where testing has revealed genetic abnormalities.

Reporting on Robin Williams

After Robin Williams took his own life, news media covered the story in great detail, leading many to argue that such reporting violated the family’s privacy.

Responding to Child Migration

An influx of child migrants posed logistical and ethical dilemmas for U.S. authorities while intensifying ongoing debate about immigration.

Retracting Research: The Case of Chandok v. Klessig

A researcher makes the difficult decision to retract a published, peer-reviewed article after the original research results cannot be reproduced.

Sacking Social Media in College Sports

In the wake of questionable social media use by college athletes, the head coach at the University of South Carolina bans his players from using Twitter.

Selling Enron

Following the deregulation of electricity markets in California, private energy company Enron profited greatly, but at a dire cost.

Snyder v. Phelps

Freedom of speech was put on trial in a case involving the Westboro Baptist Church and their protesting at the funeral of U.S. Marine Matthew Snyder.

Something Fishy at the Paralympics

Rampant cheating has plagued the Paralympics over the years, compromising the credibility and sportsmanship of Paralympian athletes.

Sports Blogs: The Wild West of Sports Journalism?

Deadspin pays an anonymous source for information related to NFL star Brett Favre, sparking debate over the ethics of “checkbook journalism.”

Stangl & the Holocaust

Franz Stangl was the most effective Nazi administrator in Poland, killing nearly one million Jews at Treblinka, but he claimed he was simply following orders.

Teaching Blackface: A Lesson on Stereotypes

A teacher was put on leave for showing a blackface video during a lesson on racial segregation, sparking discussion over how to teach about stereotypes.

The Astros’ Sign-Stealing Scandal

The Houston Astros rode a wave of success, culminating in a World Series win, but it all came crashing down when their sign-stealing scheme was revealed.

The Central Park Five

Despite the indisputable and overwhelming evidence of the innocence of the Central Park Five, some involved in the case refuse to believe it.

The CIA Leak

Legal and political fallout follows from the leak of classified information that led to the identification of CIA agent Valerie Plame.

The Collapse of Barings Bank

When faced with growing losses, investment banker Nick Leeson took big risks in an attempt to get out from under the losses. He lost.

The Costco Model

How can companies promote positive treatment of employees and benefit from leading with best practices? Costco offers a model.

The FBI & Apple Security vs. Privacy

How can tech companies and government organizations strike a balance between maintaining national security and protecting user privacy?

The Miss Saigon Controversy

When a white actor was cast for the half-French, half-Vietnamese character in the Broadway production of Miss Saigon, debate ensued.

The Sandusky Scandal

Following the conviction of assistant coach Jerry Sandusky for sexual abuse, debate continues on how much university officials and head coach Joe Paterno knew of the crimes.

The Varsity Blues Scandal

A college admissions prep advisor told wealthy parents that while there were front doors into universities and back doors, he had created a side door that was worth exploring.

Therac-25

Providing radiation therapy to cancer patients, Therac-25 had malfunctions that resulted in 6 deaths. Who is accountable when technology causes harm?

Welfare Reform

The Welfare Reform Act changed how welfare operated, intensifying debate over the government’s role in supporting the poor through direct aid.

Wells Fargo and Moral Emotions

In a settlement with regulators, Wells Fargo Bank admitted that it had created as many as two million accounts for customers without their permission.


Ethical Dilemma: 10 Heartbreaking Case Studies

Last updated on April 2, 2024 by Alex Andrews George


In a small village in Maharashtra, a teacher named Ravi and his wife Maya, a nurse, faced a tough choice after an earthquake.

The only hospital in the village was damaged, and they could only save one life with the limited medical supplies: Maya’s critically injured mother or a young and bright boy from Ravi’s school, who also needed urgent surgery.

Choosing between saving Maya’s mother, who meant everything to her, or the young boy, who represented the village’s future, was heartbreakingly difficult.

This story highlights the painful decisions we sometimes must make, where saving one life means losing another, testing our deepest values and principles.

Based on this story, we dive into the complex world of ethical dilemmas and moral conflicts, where choices are never black and white, and every decision carries the weight of unforeseen consequences.


What is an ethical dilemma?

An ethical dilemma occurs when a person is faced with a situation that requires a choice between two or more conflicting ethical principles or values.


In such dilemmas, no matter what choice is made, some ethical principle is compromised.

The essence of an ethical dilemma is that it involves a difficult decision-making process where, typically, a clear-cut right or wrong answer doesn’t exist, or if it does, it may carry significant negative consequences for someone involved.

Definition of ethical dilemma

An ethical dilemma is a complex situation that often involves an apparent mental conflict between moral imperatives, in which to obey one would result in transgressing another.

It’s characterized by:

  • Conflicting Values: Individuals or organizations must choose between competing ethical principles or values.
  • No Perfect Solution: Each choice involves a compromise or violation of an ethical principle.
  • Significant Consequences: The choices have significant potential impacts on the well-being or rights of individuals or groups.

5 Cases of Ethical Dilemmas

Ethical dilemmas can arise across various fields and situations, reflecting the complexity of moral decisions in real-world scenarios. Here are five examples:

1. Loyalty to the employer vs. the moral obligation to protect the public and the environment

  • An employee discovers that their company is engaging in illegal activities, such as dumping toxic waste into a river, which is both environmentally damaging and a serious health hazard to nearby communities.
  • The employee faces an ethical dilemma between reporting the misconduct, which could trigger legal action against the company but would safeguard public and environmental health, and remaining silent to protect their job and the livelihoods of their colleagues.
  • Ethical Dilemma: Loyalty to the employer vs. the moral obligation to protect the public and the environment.

2. Upholding academic integrity vs. loyalty to a friend

  • A student witnesses a close friend cheating during an important exam.
  • If the friend is reported and found guilty, they could face severe consequences, including failing the course or expulsion, which might ruin their academic career and future prospects.
  • The student is torn between reporting the cheating, which is an honest action, and protecting their friend’s future.
  • Ethical Dilemma: Upholding academic integrity vs. loyalty to a friend.

3. The safety of passengers vs. the safety of pedestrians

  • Programmers of autonomous vehicles face an ethical dilemma in creating algorithms for unavoidable accidents.
  • For example, if an accident is inevitable and the choice is between altering the vehicle’s path to avoid hitting a pedestrian, thereby endangering the passengers, or protecting the passengers at the cost of the pedestrian’s life, how should the car be programmed to act?
  • Ethical Dilemma: The safety of passengers vs. the safety of pedestrians.

4. The duty to report news truthfully vs. the potential harm to public safety and societal peace

  • A journalist obtains exclusive footage of a terrorist group committing an atrocity.
  • Publishing the footage could inform the public about the severity of the situation and the threat posed by the terrorist group, but it could also spread fear, possibly lead to public panic, and serve the terrorists’ goal of gaining attention for their cause.
  • Ethical Dilemma: The duty to report news truthfully vs. the potential harm that such reporting might cause to public safety and societal peace.

5. Upholding client-lawyer confidentiality vs. the moral responsibility to prevent future crimes

  • A defence attorney knows their client is guilty of a serious crime and intends to commit similar crimes in the future.
  • The attorney faces an ethical dilemma between maintaining client confidentiality, a cornerstone of legal ethics, and the moral obligation to prevent future harm.
  • Ethical Dilemma: Upholding client-lawyer confidentiality vs. the moral responsibility to prevent future crimes.

These examples highlight the range and depth of ethical dilemmas that individuals can face, requiring them to weigh competing values and principles against the backdrop of potential consequences for their actions or inactions.

Moral Conflicts

Ethical dilemmas and moral conflicts are closely related concepts that often overlap in discussions of ethics and morality, but they can be distinguished by their context and the nature of the choices they involve.

Ethical Dilemma

An ethical dilemma arises when a person must choose between two or more actions that have ethical implications, making it difficult to decide what is the right or wrong course of action.

Ethical dilemmas often involve a decision-making process where each option violates some ethical principle or value, leading to a situation where no choice is entirely free from ethical fault.

These dilemmas typically occur within a specific professional, societal, or organizational context and involve considering external codes of ethics, laws, or social norms.

Moral Conflict

Moral conflict, on the other hand, refers to a situation where an individual’s values, principles, or beliefs conflict, leading to an internal struggle about the right course of action.

Moral conflicts are deeply personal and subjective, focusing on an individual’s conscience and moral reasoning rather than external rules or codes.

While ethical dilemmas might require an individual to choose between competing external obligations or duties, moral conflicts involve a more introspective struggle with one’s values and beliefs.

Key Differences Between Ethical Dilemmas and Moral Conflicts

  • Context: Ethical dilemmas often involve a choice between actions in a professional or social context, where external codes of conduct or laws must be considered. Moral conflicts are internal struggles over personal values and beliefs.
  • Nature of Conflict: Ethical dilemmas typically involve competing ethical principles or obligations, where adhering to one may lead to the violation of another. Moral conflicts are about reconciling conflicting personal morals or values.
  • Resolution: Resolving an ethical dilemma often involves choosing the “lesser evil” or the option that upholds the most critical ethical principle in a given context. Solving a moral conflict might require personal reflection, growth, and a deeper understanding of one’s own values.

While they are distinct, ethical dilemmas and moral conflicts can occur simultaneously, complicating the decision-making process further.

A person might face an ethical dilemma at work (e.g., whether to report a colleague’s wrongdoing) that also triggers a moral conflict (e.g., loyalty to a friend versus commitment to honesty).

This interplay underscores the complexity of ethical and moral reasoning in real-world situations.

5 Cases of Moral Conflicts

Moral conflicts arise when individuals face situations requiring them to choose between two or more conflicting moral principles or values. Here are five examples illustrating such conflicts:

1. Honesty vs. Compassion

  • Situation: You find out that a close friend has lied on their resume to get a job they desperately need.
  • Conflicting Morals: The value of honesty (telling the truth or reporting the lie) conflicts with compassion (understanding your friend’s desperate situation and wanting to support them).

2. Loyalty vs. Justice

  • Situation: A family member is involved in a minor legal infraction and asks you to provide them with an alibi to avoid consequences.
  • Conflicting Morals: Loyalty to your family member, wishing to protect them, conflicts with your sense of justice and the importance of facing legal consequences for one’s actions.

3. Self-sacrifice vs. Self-preservation

  • Situation: During a disaster, you can either save others by putting yourself in significant danger or ensure your own safety, knowing others might not survive.
  • Conflicting Morals: The principle of self-sacrifice, putting the needs of others before your own, conflicts with self-preservation, the instinct to protect oneself from harm.

4. Equality vs. Meritocracy

  • Situation: In a workplace, you must decide between promoting an employee who has worked longer at the company (seniority) and another who has shown exceptional skill and productivity but has less tenure.
  • Conflicting Morals: The value of treating everyone equally and fairly conflicts with meritocracy, where rewards are based on individual achievement and capabilities.

5. Freedom vs. Security

  • Situation: In governing a community, you must decide whether to implement strict security measures that infringe on personal freedoms to ensure public safety.
  • Conflicting Morals: The importance of individual freedom and autonomy conflicts with the collective need for security and protection from harm.

These examples highlight the complexity of moral conflicts, where deciding in favour of one value inevitably leads to the compromise or negation of another, reflecting the nuanced nature of ethical decision-making.

Also read: Ethical Concerns and Dilemmas In Government And Private Institutions

The moments of ethical dilemmas and moral conflicts challenge us to weigh our values against the harsh realities of our circumstances, pushing us to make decisions that can redefine who we are and what we stand for.

The story of Ravi and Maya, the couple torn between family and community, serves as a poignant reminder of the complex nature of ethical decision-making.

Such dilemmas compel us to question not just our morality but the very essence of what it means to be human.

They remind us that there are no easy answers in the pursuit of doing what is right.

Whether it’s choosing between fairness and loyalty, or the welfare of one versus the greater good, these decisions are laden with the weight of potential regret and the hope for understanding and forgiveness.

In conclusion, ethical dilemmas and moral conflicts are not mere philosophical quandaries to be pondered from afar; they are real, lived experiences that test our integrity, empathy, and courage.

As we tread this precarious path, let us strive for a balance between our duties to others and our commitment to our principles, recognizing that it is how we confront and navigate these dilemmas that ultimately defines our humanity.

The journey through these challenges is arduous and fraught with uncertainty, but it is also a testament to the strength and resilience of the human spirit, ever aspiring to a higher standard of morality and justice.



About Alex Andrews George

Alex Andrews George is a mentor, author, and social entrepreneur. Alex is the founder of ClearIAS and one of the expert Civil Service Exam Trainers in India.

He is the author of many best-selling books, including 'Important Judgments that transformed India' and 'Important Acts that transformed India'.

A trusted mentor and pioneer in online training, Alex's guidance, strategies, study materials, and mock exams have helped many aspirants become IAS, IPS, and IFS officers.


ORIGINAL RESEARCH article

Moral judgment reloaded: a moral dilemma validation study.

Julia F. Christensen*

  • 1 Psychology, Evolution and Cognition (IFISC-CSIC), University of the Balearic Islands, Palma, Spain
  • 2 School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
  • 3 Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, UK

We propose a revised set of moral dilemmas for studies on moral judgment. We selected a total of 46 moral dilemmas available in the literature and fine-tuned them in terms of four conceptual factors (Personal Force, Benefit Recipient, Evitability, and Intention) and methodological aspects of the dilemma formulation (word count, expression style, question formats) that have been shown to influence moral judgment. Second, we obtained normative codings of arousal and valence for each dilemma, showing that emotional arousal in response to moral dilemmas depends crucially on the factors Personal Force, Benefit Recipient, and Intentionality. Third, we validated the dilemma set, confirming that people's moral judgment is sensitive to all four conceptual factors and to their interactions. Results are discussed in the context of this field of research, also outlining the relevance of our RT effects for the Dual Process account of moral judgment. Finally, we suggest tentative theoretical avenues for future testing, particularly stressing the importance of the factor Intentionality in moral judgment. Additionally, due to the importance of cross-cultural studies in the quest for universals in human moral cognition, we provide the new set of dilemmas in six languages (English, French, German, Spanish, Catalan, and Danish). The norming values provided here refer to the Spanish dilemma set.

“… but what happens when we are exposed to totally new and unfamiliar settings where our habits don't suffice?”

Philip Zimbardo (2007); The Lucifer Effect, p. 6

Introduction

Moral dilemmas have become a standard methodology for research on moral judgment. Moral dilemmas are hypothetical short stories which describe a situation in which two conflicting moral reasons are relevant; for instance, the duty not to kill and the duty to help. By inducing participants to make a forced choice between these two reasons, one can investigate which reason is given precedence in a particular situation, and which features of the situation matter for that decision. Accordingly, we assume that this kind of hypothetical “ticking bomb” scenario can help to disentangle what determines human moral judgment. This is, however, only possible if the moral dilemmas are very well designed and potentially relevant factors are controlled for. The aim of this paper is to provide a set of such carefully designed and validated moral dilemmas.

The moral dilemmas commonly used in Cognitive Neuroscience experiments are based on what Foot (1967) and Thomson (1976) called the “Trolley Problem.” The trolley dilemma has two main versions. In the first, a runaway trolley is heading for five railway workers who will be killed if the trolley pursues its course. The experimental participant is asked to take the perspective of a protagonist in the story who can choose to intervene and pull a switch which will redirect the trolley onto a different track and save the five railway workers. However, redirected onto the other track, the trolley will kill one railway worker who would otherwise not have been killed. In an alternative version of the dilemma, the action the protagonist has to perform in order to stop the trolley is different. This time, there is no switch but a large stranger who is standing on a bridge over the tracks. The protagonist can now choose to push that person with his hands onto the tracks so that the large body stops the trolley. The outcome is the same: five individuals saved by sacrificing one. However, participants in this task more easily consent to pull the switch, while they are much more reluctant to push the stranger with their own hands. The “action” that the protagonist of the story can choose to carry out—or not—is termed a moral transgression or moral violation. The choice itself, between committing or omitting the moral transgression, is a moral judgment. The decision to commit the harm is referred to as a utilitarian moral judgment, because it weighs costs and benefits, while the decision to refrain from harm is a deontological moral judgment, because it gives more weight to the “not to kill” principle.

The influential work of Greene et al. (2001), which introduced moral dilemmas into Cognitive Neuroscience, has been followed by many other studies as a way to deepen our understanding of the role of emotion in moral judgment (for a review, see Christensen and Gomila, 2012). However, results obtained with this methodological approach have been heterogeneous, and there is a lack of consensus regarding how to interpret them.

In our opinion, one of the main reasons for this lies in the simple fact that the majority of studies have relied on the initial set of moral dilemmas devised by Greene et al. (2001). While this set indisputably provided invaluable evidence about the neural underpinnings of moral judgment, it was not validated. Thus, conceptual pitfalls and formulation errors have potentially remained unchallenged (Christensen and Gomila, 2012). In fact, one of the key findings that have been reported (i.e., emotional involvement in moral judgment) might have been due to uncontrolled variations in the dilemma formulations, rather than to the factors supposedly taken into account (i.e., personal vs. impersonal versions of the dilemma). As a matter of fact, Greene and colleagues themselves have voiced constructive self-criticism with respect to that initial dilemma set and suggested using only a subset of the initial dilemmas, however, without validating them either (Greene et al., 2009). Still, researchers continue to use this initial set. Here we present our efforts to remedy this situation.

We have fine-tuned a set of dilemmas methodologically and conceptually (controlling four conceptual factors). The set was selected from previously used moral dilemma sets: (i) Greene et al. (2001, 2004) and (ii) Moore et al. (2008) (this set was based on Greene et al.'s but optimized). Both sets have been used in a wealth of studies, however, without previous validation (e.g., Royzman and Baron, 2002; Koenigs et al., 2007; Moore et al., 2008, 2011a, b). After the dilemma fine-tuning, norming values were obtained for each dilemma: (i) of arousal and valence (to ascertain the differential involvement of emotional processes along the dimensions of the four conceptual factors) and (ii) of moral judgment (to confirm that moral judgment is sensitive to the four factors)1. Finally, in the Supplementary Material of this work, we provide the new set in six languages (English, French, Spanish, German, Danish, and Catalan) in order to make it more readily available for cross-cultural studies in the field. Please note that the norming study was carried out with the Spanish dilemma version. We encourage norming studies in the other languages (and in other cultures).

Dilemma “Fine-Tuning”—Proposal of an Optimized Set

All dilemmas included in this set involved the decision to carry out a moral transgression which would result in a better overall numerical outcome. The participant was always the protagonist of this action (the moral transgression)2 and all dilemmas involved killing (i.e., all social and other physical harm dilemmas were eliminated). Furthermore, of the initial 48 dilemmas, 2 were eliminated (the personal and impersonal versions of the Cliffhanger dilemma) due to the unlikely acrobatics they involve.

In what follows we outline the changes we have made regarding (i) the instructions given to the participant (subsection Instructions to the Participant); (ii) the dilemma design, i.e., adjustment of dilemma length, expression style, etc. (subsection Dilemma Design (1)—Formulation); (iii) the dilemma conceptualization, i.e., thorough adaptation to the conceptual factors of Personal Force, Benefit Recipient, Evitability, and Intentionality (subsection Dilemma Design (2)—Conceptual Factors); and (iv) the formulation of the question eliciting the moral judgment (subsection The Question Prompting the Moral Judgment). In the end, we have produced 23 dilemmas with two versions each, one personal and one impersonal, 46 dilemmas in total.

Instructions to the Participant

To increase verisimilitude, we suggest that instructions at the beginning of the experiment ideally emphasize that participants are going to read short stories about difficult situations as they are likely to appear in the news or on the radio (for instance: “in the following you will read a series of short stories about difficult interpersonal situations, similar to those that we all see on the news every day or may read about in a novel”) (Christensen and Gomila, 2012, p. 14). This may help to put the participants “in context” for the task that awaits them. In addition, instructions could include a remark about the fact that participants will be offered one possible solution to the situation, and that their task will be to judge whether the proposed solution is acceptable, given the information available (such as: “for each of the difficult situations a solution will be proposed. Your task is to judge whether or not to accept this solution”). Indeed, the closure of options or alternatives is important. However, in previous dilemma sets, some dilemmas have included expressions such as “the only way to avoid [death of more people] is to [action proposal],” while other dilemmas did not. Whereas this is important information, including that same sentence in all dilemmas could make the reading rather repetitive and result in habituation. On the other hand, including it only in some dilemmas could bias participants' responses to these dilemmas with respect to the others. Therefore, we suggest presenting it only in the general instructions to the participants.

Dilemma Design (1)—Formulation

Control for formal characteristics of dilemma formulation includes:

Word count across dilemma categories: in the original sets the dilemmas were rather long. This can entail an excessively long experimental session, resulting in participant fatigue. In Moore et al. (2008) an effort was made to control for mean word count: the Personal moral dilemmas (PMD) had 168.9 words on average and the Impersonal moral dilemmas (IMD) 169.3. The maximum word count of a dilemma was 254 and the minimum was 123. We shortened the dilemmas, removing information that was not strictly necessary, and equalized the expression style of the personal and impersonal versions of each dilemma. For instance, technical terms and long, non-familiar words were removed. Now the first three sentences of each dilemma are almost the same for both versions of a dilemma (personal and impersonal). For instance, the English version of the new dilemma set has a mean word count of 130 words in the Personal and 135 in the Impersonal moral dilemmas. Our maximum number of words in a dilemma is 169 and the minimum 93. See the Supplementary Material for the word counts for each translation.
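
For researchers adapting the set, this balance check is easy to automate. The following is a minimal sketch (ours, not code from the study) of how the mean, maximum, and minimum word counts per dilemma category could be recomputed for any translation; the `dilemmas` list and its labels are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): recompute word-count statistics
# per dilemma category. The two entries below are placeholders; in practice
# one (category, text) pair would be loaded per dilemma in the set.
from statistics import mean

dilemmas = [
    ("personal", "You are the inspector of a nuclear power plant. ..."),
    ("impersonal", "You are standing near a runaway trolley. ..."),
]

def word_count(text: str) -> int:
    # Plain whitespace tokenization, matching a simple word count.
    return len(text.split())

for category in ("personal", "impersonal"):
    counts = [word_count(text) for cat, text in dilemmas if cat == category]
    print(f"{category}: mean={mean(counts):.1f}, max={max(counts)}, min={min(counts)}")
```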

Framing effects

A framing effect occurs when people judge one and the same situation differently just because of the way it is described (Tversky and Kahneman, 1981; Petrinovich et al., 1993). Specifically, a clear risk of framing effects concerns the use of “kill” in some dilemmas, but “save” in others. People feel more inclined to choose inaction when kill is used, and more inclined toward action when save is emphasized (Petrinovich and O'Neill, 1996). To avoid this, in all dilemmas the words kill and save are used in the second paragraph, where the participant is given the information about the proposed action (i.e., the moral transgression) and its consequences. Conversely, the words are removed from the question (e.g., in the Rescue 911 scenario, instead of Is it appropriate for you to kill this injured person in order to save yourself and everyone else on board? the action verbs throw and keep were used). It is important to highlight the trade-off between cost (throw someone) and benefit (keep yourself and more people in the air) in the questions of all dilemmas. This was not accounted for in any of the previous dilemma sets.
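
This wording rule can also be checked mechanically. Below is an illustrative sketch (ours, not part of the published materials) that flags framing verbs in the question eliciting the judgment; the two example questions paraphrase the Rescue 911 wording discussed above.

```python
# Illustrative sketch: flag framing verbs ("kill"/"save" and inflections)
# that should not appear in the question prompting the moral judgment.
import re

FRAMING = re.compile(r"\b(kill(?:s|ed|ing)?|sav(?:e|es|ed|ing))\b", re.IGNORECASE)

def framing_violations(question: str) -> list[str]:
    """Return all framing verbs found in a dilemma's question text."""
    return FRAMING.findall(question)

ok = ("Do you throw this injured person off in order to keep "
      "yourself and everyone else on board in the air?")
bad = ("Is it appropriate for you to kill this injured person "
       "in order to save yourself and everyone else on board?")
print(framing_violations(ok))   # []
print(framing_violations(bad))  # ['kill', 'save']
```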

Situational antecedents

In the original dilemma sets, the situational antecedents used to present the characters were not kept constant. Thus, in the Personal version of the Nuclear reactor dilemma the situational antecedent could bias the participants' responses: you are the inspector of a nuclear power plant that you suspect has not met its safety requirements. The plant foreman and you are touring the facility when one of the nuclear fuel rods overheats… Later, it is this same foreman you are asked to consider pushing into the fuel rod assembly. The participant was given knowledge about a badly kept nuclear plant, with an in-charge individual who did not bother to make the plant meet the safety requirements. This makes it easier to sacrifice the plant foreman to save the city than to sacrifice another random, innocent person—which is the option to consider in all other dilemmas. Hence, prior information about the state of the power plant was removed, so that the foreman has no overt responsibility for the nuclear accident which is about to happen. Now he is a “random” person to be sacrificed, like in the other dilemmas. The Nobel Prize dilemma had a similar problem. A situational antecedent made the person in a position to be sacrificed (here, your fellow researcher) appear to be a greedy, bad person, so that it may be easier to sacrifice him than another innocent fellow researcher. The dilemma was reformulated so that the fellow researcher appears not to know that the potential buyers would use the invention as a weapon; only the protagonist explicitly knows it and is again the only person with the possibility to prevent greater harm from happening. In total, four dilemmas were modified to keep constant the situational antecedents of the characters in the dilemmas.

Trade-off across dilemmas: previous sets mixed different kinds of moral transgressions, like stealing or lying. It is important not to mix them with killing, in order to avoid the risk of a non-desired carry-over effect between dilemmas. For instance, stealing, lying, or lack of respect may elicit less severe judgments when killing is also present in other dilemmas of the set than when it is not. Therefore, all dilemmas now raise the conflict between the option to kill a person in order to save a larger number of people, and the option of doing nothing and letting that larger number of people die.

Number of individuals

Number of individuals saved if the moral transgression is carried out: the set now contains the following categories: (i) 5–10 people, (ii) 11–50 people, (iii) 100–150 people, and (iv) “thousands” or “masses” of people. This is an important variable to control for. A utilitarian response should become easier as more people are saved. Conversely, if moral judgment is purely deontological, the number of people saved is totally irrelevant. This is an interesting question to have as a working hypothesis. Using different amounts of “saved individuals” in the formulations of the dilemmas allows researchers to explore at which point the positive consequences outweigh the transgression required to obtain them. For instance, it has been shown that attachment (“closeness of relationship”) to the victim determines moral judgment more than the number of beneficiaries involved. Still, this question needs further research, once closeness is controlled for (Tassy et al., 2013). In this work, however, no specific analysis of this variable will be made, as it exceeds the limits of this norming study.

Information

Information supplied about the certainty of the consequences for the story character impersonated by the participant: in the Tycoon and Nobel Prize dilemmas it said that “if you decide to [action of the dilemma], nobody will ever find out.” This implies information about the future which cannot really be given with certainty, while at the same time contrasting with other stories where no such commitments about the future are made. This kind of information can bias moral judgment and confound it with legal punishment (or the lack thereof). Therefore, this information was removed altogether from the dilemmas. Similarly, dilemmas that cannot be understood without the assumption of an extraordinary ability or an unlikely event (such as the Cliffhanger) were excluded3.

Dilemma Design (2)—Conceptual Factors

On the grounds of the literature about moral judgment (Christensen and Gomila, 2012), four main factors need to be controlled for in moral dilemma formulation: Personal Force, Benefit Recipient (who gets the benefit), Evitability (whether the death is avoidable, or not), and Intentionality (whether the harm is willed and used instrumentally or is a side-effect).

Personal force

Initially, Greene et al. (2001, 2004) defined a Personal moral dilemma as one in which the proposed moral transgression satisfied three criteria: (i) the transgression leads to serious bodily harm; (ii) this harm befalls a particular person or group of people; and (iii) the harm is not the result of deflecting an existing threat onto a different party. Subsequently, Cushman et al. (2006) remarked that the crucial feature in a personal dilemma is whether physical contact between the victim and the aggressor is involved; a point also emphasized by Abarbanell and Hauser (2010), while Waldmann and Dieterich (2007) focused on the Locus of Intervention (focus on the victim or on the threat) as the difference between personal and impersonal dilemmas. Another proposal contended that the difference between Personal and Impersonal is whether the action is mechanically mediated or not (Royzman and Baron, 2002; Moore et al., 2008). In more recent work, Greene et al. have tried to offer an integrative definition (Greene, 2008; Greene et al., 2009). Specifically, these authors propose that a Personal moral transgression occurs when (i) the force that impacts the victim is generated by the agent's muscles, (ii) it cannot be mediated by mechanisms that respond to the agent's muscular force by releasing or generating a different kind of force and applying it to the other person, and (iii) it cannot be executed with guns, levers, explosions, gravity…

However, it seems as if this redefinition is driven by an effort to preserve the interpretation of the initial results, which results in a circular argument: that “personal” dilemmas induce deontological judgments by emotional activation, while “impersonal” ones induce utilitarian judgments by rational calculation. Yet it is not clear which aspect of the personal involvement influences moral judgment through emotional activation, nor is it clear which kind of moral relevance emotions elicited by one's involvement may have in the judgment. Similar considerations apply to the introduction of the distinction between “high-conflict” and “low-conflict” dilemmas (Koenigs et al., 2007), which also seems based on ex-post-facto considerations.

A principled way to clarify this distinction is in terms of the causal role of the agent in the production of the harm. What makes a dilemma impersonal is that the agent just initiates a process that, through its own dynamics, ends up causing the harm; a dilemma is personal when the agent is required not just to start the action, but to carry it out by herself. According to this view, the presence of mediating instruments, by itself, does not make a dilemma personal or impersonal. It depends on the kind of active involvement of the agent they require, which amounts to a difference in her responsibility for the caused harm, and in the resulting (felt) emotional experience of it. This can account for the different moral judgments to Personal and Impersonal Dilemmas, which are observed despite the fact that the same consequences occur. The best philosophical explanation of this difference is Anders's (1962) reflection on the mass murders of the Second World War. He contended that these acts were made possible by the technical innovations that reduced the active involvement of soldiers in the killing to pushing a button to release a bomb. It is not just that the new arms were of massive destruction, but that their use was easier for us humans. Killing with one's hands is not just slower, but harder.

In the present dilemma set, the Personal dilemmas have been revised accordingly. Personal Moral Dilemmas now require that the agent is directly involved in the production of the harm. Impersonal Moral Dilemmas are those in which the agent is only indirectly involved in the process that results in the harm.

Benefit recipient

Self-interest is a well-known influence on moral judgments (Bloomfield, 2007). People will be more prone to accept an action whose consequences benefit themselves (i.e., the agent herself) than one that benefits others, maybe complete strangers. This “Self-Beneficial” vs. “Other-Beneficial” contrast has been introduced more clearly in the revised set. We reformulated the Modified Euthanasia dilemma due to a confound in the trade-off specification. As the dilemma had to be an Other-Beneficial dilemma, the key secret evidence the soldier could reveal if tortured is now the location of a particularly important base camp (and not the camp of the protagonist's group).

Evitability

This variable regards whether the death produced by the moral transgression is described as Avoidable or Inevitable. Would the person “to be sacrificed” have died anyway (Inevitable harm), or not (Avoidable harm)? Transgressions that lead to inevitable consequences are more likely to be morally acceptable, by the principle of lesser evil (Hauser, 2006; Mikhail, 2007). In the dilemma Rescue 911, a technical error in a helicopter puts the protagonist in the situation of having to decide to throw off one of her patients for the helicopter to lose weight. Without that sacrifice the helicopter would fall and everybody—including that one patient—would die. Conversely, the dilemma can also be formulated in such a way that the individual to be sacrificed otherwise would not have been harmed (Avoidable death), such as in the classical trolley dilemmas, where neither the bystander nor the innocent railway worker on the side track would have been harmed if the protagonist had not changed the course of events. This distinction has now been made more explicit in the dilemmas (for examples of work where this variable was discussed, see Moore et al., 2008; Huebner et al., 2011).

Intentionality

This factor refers to whether the harm is produced instrumentally, as something willed, or whether it happens as an unforeseen side-effect, as collateral damage, of an action whose goal is positive. This variable concerns the doctrine of the double effect, which has been shown to be psychologically relevant (Foot, 1967; Hauser, 2006; Mikhail, 2007). Causing harm is more acceptable when it is produced as collateral damage than when it is the goal of an action. Accordingly, Accidental harm refers to the case where the innocent victim of the dilemma dies as a non-desired side effect of the moral transgression that the protagonist carries out to save others. Conversely, Instrumental harm occurs when the protagonist intentionally uses the harm (i.e., the death) of the innocent victim as a means (i.e., instrumentally) to save the others.

The reformulation of the dilemmas and the fine-tuning according to this factor is particularly relevant and one of the main contributions of this norming paper. In the modified set of Moore et al. (2008), all Personal dilemmas were Instrumental, while the Impersonal dilemmas included six Instrumental and six Accidental ones. The present set now allows a full factorial design including Intentionality. To introduce Accidental vs. Instrumental harm in Personal dilemmas, attention was paid to key aspects of the causal chain of the dilemma leading to the proposed salvation of the greatest number of people. First, the exact intention that the protagonist has in the very moment of committing the moral transgression was identified (does she carry out an action with the intention to kill or not?). Second, a differentiation was made as to whether the harm is directly produced by the protagonist or indirectly triggered by her action (do the positive consequences (the salvation of many) follow directly from the victim's death, or from some other event, an independent mechanism which was triggered by the protagonist's actions but not directly by her, nor directly willed by her?). The final point concerned by what means the larger number of people is saved (are they saved directly by the death of the victim, or for a different reason?).

Following this rationale, and for a better comprehension of the Intentionality factor, the moral transgression is divided into a 5-part causal chain. This helps to disentangle the Accidental-Instrumental dichotomy (see Figure 1). The first thing to identify is the action by the protagonist (what exactly does she do?). Second, what is the exact intention behind that action (why exactly does she do it?)? Third, does the victim die by the intervention of some intermediate (and protagonist-independent) mechanism, or is the death directly due to the action of the protagonist (does she kill directly or via an independent mechanism?)? Fourth, how does the innocent victim die (how does she die?)? Fifth, how is the larger number of people saved (are they saved due to the death of the victim or for some other reason?)?


Figure 1. Example of the causal chain of the proposed moral transgression that leads to the salvation. In the Instrumental version of the Burning Building dilemma the proposed action is "to use the body of the victim." The intention is "to use the body to break down burning debris." The victim dies directly by the fire and there is no independent mechanism in between. A larger number of people are saved because the burning debris was eliminated with the victim's body. The harm to the victim was thus used as a means to save others. In other words, the body of the victim was literally used instrumentally with the intention of freeing the trapped group. Conversely, in the Accidental version of the Iceberg dilemma, the action of the protagonist is "to push the emergency access hatch." The intention behind that action is "to make the oxygen flow to the upper section of the boat." The victim dies from a knock on the head by an independent mechanism, namely the falling hatch. Thus, the victim dies as a side-effect of the act of salvation that the protagonist carries out with the intention of getting oxygen to the upper section of the boat.
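To make the five-step decomposition concrete, the sketch below (our illustration in Python; it is not part of the published materials) encodes the two examples from Figure 1 as data. The one-line classification rule in harm_type is a deliberate simplification of the criteria described above: it keys only on the presence of an independent mechanism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CausalChain:
    """Five-part decomposition of a dilemma's proposed moral transgression."""
    action: str                           # what exactly does the protagonist do?
    intention: str                        # why exactly does she do it?
    independent_mechanism: Optional[str]  # None if she causes the death directly
    cause_of_death: str                   # how does the victim die?
    means_of_salvation: str               # how is the larger number saved?

    @property
    def harm_type(self) -> str:
        # Simplified rule: an independent mechanism between action and death
        # marks the harm as a side-effect (Accidental); otherwise the death
        # itself is the means (Instrumental).
        return "Accidental" if self.independent_mechanism else "Instrumental"

burning_building = CausalChain(
    action="use the body of the victim",
    intention="break down the burning debris with the body",
    independent_mechanism=None,
    cause_of_death="dies directly by the fire",
    means_of_salvation="the debris blocking the exit is removed",
)

iceberg = CausalChain(
    action="push the emergency access hatch",
    intention="make oxygen flow to the upper section of the boat",
    independent_mechanism="the falling hatch",
    cause_of_death="a knock on the head by the falling hatch",
    means_of_salvation="oxygen reaches the trapped group",
)

print(burning_building.harm_type)  # Instrumental
print(iceberg.harm_type)           # Accidental
```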

To summarize the four factors Personal Force, Benefit Recipient, Evitability, and Intentionality, the illustration in Figure 2 provides a schematic overview of how the four factors are presented to the participant during the course of a moral dilemma.


Figure 2. The four factors in the dilemma set, adapted from Christensen and Gomila (2012), reproduced with permission. (1) Personal Force: the kind of imaginary involvement with the situation: Personal, as direct cause of the harm, or Impersonal, as an indirect agent in the process of harm. (2) Benefit Recipient: concerns whether the protagonist's life is at stake (Self-Beneficial action) or not (Other-Beneficial action). (3) Evitability: regards whether the victim would die alongside the other individuals in the group if the moral transgression is not carried out (Inevitable death, the person would die anyway) or not (Avoidable death, the person would not die if no action is taken). (4) Intentionality: if the action is carried out intentionally with the explicit aim of killing the person as a means to save others, this is Instrumental harm (the death of that person is explicitly needed to save the others). If the innocent person dies as a non-desired side-effect of the action, by some independent mechanism and not directly by the action of the protagonist, the harm is Accidental.

The Question Prompting the Moral Judgment

The formulation of the final question that elicits the moral judgment after reading the dilemma has also given rise to some controversy. The influence that the type of question exerts on participants' moral judgments has been addressed empirically (e.g., O'Hara et al., 2010). Four question formats were compared: wrong, inappropriate, forbidden, and blameworthy. People judged moral transgressions more severely when the words "wrong" or "inappropriate" were part of the formulation than when the words "forbidden" or "blameworthy" were used. Another study found different behavioral effects following the questions Is it wrong to…? vs. Would you…? (Borg et al., 2006). The question Would you…? resulted in faster RTs when judging moral scenarios as compared to non-moral scenarios, while the question Is it wrong to…? showed no RT difference between the moral and non-moral conditions. In view of these findings, it seems that deciding what to do is not processed in the same way as deciding whether an action is right or wrong, and that in moral dilemmas it is the former that matters.

In recent work, two groups of researchers have addressed the related issue of whether "what we say is also what we do." One study found that answering the question Is it acceptable to…? vs. the question Would you…? resulted in differential response tendencies (Tassy et al., 2013). However, another study showed that increasing the contextual information available to the participant resulted in more coherence between what participants said they would do and what they actually did (Feldman Hall et al., 2012). In any case, it is clear that a consistent question format is required.

For the present dilemma set, a direct question of the form Do you [action verb] so that…? was used, to emphasize the consequences of the choice made by the agent. Scales (Likert, visual analog…) were used instead of a dichotomous answer format, as a way to uncover the degree of conflict experienced.

Summary: The Revised Set

The revised set consists of 46 dilemmas, of which 23 are Personal and 23 are Impersonal. As can be observed in Table 1, we maintained the original dilemma numbers so that it is easy to compare across sets. In 22 of the 46 dilemmas, the protagonist's life is in danger and the moral violation results in saving not only a greater number of individuals but also the protagonist herself (Self-Beneficial dilemmas), whereas in the remaining 24, the protagonist's life is not in danger (Other-Beneficial dilemmas). In turn, there are 11 Personal and 11 Impersonal Self-Beneficial dilemmas, and 12 Personal and 12 Impersonal Other-Beneficial dilemmas.


Table 1. Revised dilemmas.

There are 24 dilemmas where the death is Avoidable and 22 where it is Inevitable. Finally, there are 18 dilemma scenarios with Accidental harm (7 Personal and 11 Impersonal; 10 Self-Beneficial and 8 Other-Beneficial; 10 Avoidable and 8 Inevitable) and 28 with Instrumental harm (16 Personal and 12 Impersonal; 12 Self-Beneficial and 16 Other-Beneficial; 14 Avoidable and 14 Inevitable). See Table 1 for a summary. Please note that it was not possible to provide the same number of dilemmas in each of the 16 categories because we relied on the materials of the former set. Refer to our discussion of this matter in the Supplementary Material (A) on limitations.

Arousal and Valence Norming Experiment

People's moral judgment has been shown to be sensitive to the affective impact of a dilemma on the individual (Moretto et al., 2010; Navarrete et al., 2012; Ugazio et al., 2012). However, no dilemma set has so far been assessed in terms of the affective arousal the individual dilemmas elicit in a normal population as they are read, i.e., even when no moral judgment is required. Therefore, data points for affective arousal and valence were obtained for each dilemma of this set.

We know that people's moral judgments vary as a function of the four conceptual factors Personal Force, Benefit Recipient, Evitability, and Intentionality. However, how people's affective responses (valence and arousal) are modulated by these factors remains to be established. Besides, because inter-individual differences in emotional sensitivity and empathy can affect the subjective experience of arousal, participants in this experiment were also assessed on these variables by means of self-report measures.

Participants

Sixty-two undergraduate psychology students, all native Spanish speakers, participated in this study in exchange for a course credit in one of their degree subjects (43 females, 19 males; age range = 18–48 years; m = 21.0, SD = 5.35). Participants completed four self-report measures. First, the Interpersonal Reactivity Index (IRI) (Davis, 1983), which has four scales that focus on perspective taking, tendency to identify with fictitious characters, emotional reactions to the negative experiences of others, and empathic concern for others. Second, the Questionnaire of Emotional Empathy (Mehrabian and Epstein, 1972), which conceives empathy as the vicarious emotional response to the perceived emotional experience of others. It explicitly understands empathy as different from Theory of Mind (ToM) and focuses on emotional empathy, where high scores indicate a high responsiveness to other people's emotional reactions. Third, the Questionnaire of Emotional Sensitivity (EIM) (Bachorowski and Braaten, 1994), which refers to the intensity with which a person experiences emotional states irrespective of their affective valence. Fourth, participants completed the Toronto Alexithymia Scale (TAS), in which a high score means difficulties in understanding and describing emotional states with words (Taylor et al., 1985). For results on the self-report measures, see Table 2.


Table 2. Participant characteristics in terms of emotional sensitivity, empathy, and alexithymia.

The forty-six moral dilemmas were arranged to be presented in random order in the stimulus presentation program DirectRT ( www.empirisoft.com ) v. 2006.2.0.28. The experiment was set up to run on six PCs [Windows XP SP3 PC (Intel Pentium Dual Core E5400, 2.70 GHz, 4 GB RAM)] and stimuli were displayed on 19″ screens (with a resolution of 1440 × 900 p; color: 32 bits; refresh rate: 60 Hz). Data were analyzed using the statistical package SPSS v. 18 ( www.ibm.com ).

Participants signed up for the experiment in class after having completed the four self-report scales. On the day of the experiment, participants provided demographic data regarding gender, age, and level of study. Informed consent was obtained from each participant prior to participation in any of the tasks and questionnaire procedures.

Participants were instructed as outlined in the section Instructions to the Participant. Each dilemma was presented in white Arial font, pt 16, on a black screen. By key press, the first paragraph of the dilemma appeared. With the next key press, the second paragraph appeared (see text footnote 4). Participants read at their own pace, advancing from one screen to the next by pressing the space bar. With the third key press, the first two paragraphs of the dilemma disappeared and two Likert scales appeared on subsequent screens, the first asking participants to indicate their level of arousal (1 = not arousing at all; 7 = very arousing) and the second asking them to indicate the perceived valence of the dilemma (1 = very negative; 7 = very positive). The ratings were made by key press on the number keys of the computer keyboard. Four practice dilemmas were added at the beginning of the task; data from these trials were discarded before data analysis.

The experiment was carried out in a university laboratory equipped for experiments, with six PCs in individual booths. Participants carried out the task in groups of 1–6 people. Viewing distance was approximately 16 inches from the screen. The study was approved by the University's Ethics Committee (COBE280213_1388).

A factorial repeated-measures (RM) 2 × 2 × 2 × 2 analysis of variance (ANOVA) was computed on subjective arousal and valence ratings (Likert scale data), and on the RT of the arousal ratings. The factors were (1) Personal Force (Personal vs. Impersonal harm); (2) Benefit Recipient (Self-Beneficial vs. Other-Beneficial); (3) Evitability (Avoidable vs. Inevitable harm); and (4) Intentionality (Accidental vs. Instrumental harm). As effect sizes we report Pearson's r, where 0.01 is considered a small effect size, 0.3 a medium effect, and 0.5 a large effect (Cohen, 1988).
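As an illustration of this analysis design (a minimal sketch only; the original analyses were run in SPSS, and the file and column names here are hypothetical), a four-factor repeated-measures ANOVA of this kind could be computed in Python with statsmodels:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x dilemma, with the four
# within-subject factor levels coded as columns (hypothetical file).
# columns: subject, personal_force, benefit_recipient, evitability,
#          intentionality, arousal
df = pd.read_csv("arousal_ratings_long.csv")

aov = AnovaRM(
    df,
    depvar="arousal",
    subject="subject",
    within=["personal_force", "benefit_recipient",
            "evitability", "intentionality"],
    aggregate_func="mean",  # average the dilemmas within each of the 16 cells
).fit()
print(aov)  # F and p values for the 4 main effects and all interactions
```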

To rule out any effect of Gender on the results, the above ANOVA was computed with the between-subjects factor Gender. There was no effect of gender in any of the interactions with the four factors, neither in the arousal ratings: Personal Force * gender: F(1, 60) = 1.47; p = 0.230; Benefit Recipient * gender: F(1, 60) = 0.774; p = 0.383; Evitability * gender: F(1, 60) = 0.079; p = 0.780; Intentionality * gender: F(1, 60) = 0.101, p = 0.752; nor in the valence ratings: Personal Force * gender: F(1, 60) = 0.004; p = 0.949; Benefit Recipient * gender: F(1, 60) = 0.346; p = 0.558; Evitability * gender: F(1, 60) = 0.019; p = 0.890; Intentionality * gender: F(1, 60) = 0.184, p = 0.670; nor in the RT. Therefore, data of female and male participants were aggregated.

All 16 dilemma categories were rated as moderately to highly arousing (range: m = 5.58–6.24; see Table 3). Two of the four factors showed significant effects on the arousal ratings. First, there was a significant main effect of Personal Force [F(1, 61) = 6.031; p = 0.017; r = 0.30], Personal Moral Dilemmas (PMD) being rated as more arousing (m = 5.92; SD = 0.12) than Impersonal Moral Dilemmas (IMD) (m = 5.83; SD = 0.12). The second main effect was of Benefit Recipient [F(1, 61) = 47.57; p < 0.001; r = 0.66], Self-Beneficial Dilemmas being rated as more arousing (m = 6.02, SD = 0.12) than Other-Beneficial Dilemmas (m = 5.70, SD = 0.13). See Figure S3. There were no significant main effects of Evitability [F(1, 61) = 0.368; p = 0.546] or of Intentionality [F(1, 61) = 0.668; p = 0.417]. See Table S1 for the means and Figure S3 in the Supplementary Material.


Table 3. RM ANOVA of the RT of the arousal ratings.

There was a significant interaction of Benefit Recipient * Intentionality [F(1, 61) = 15.24; p < 0.001; r = 0.44]. This indicates that Intentionality had different effects on participants' ratings of arousal depending on whether the dilemma was Self-Beneficial or Other-Beneficial. Figure S4 illustrates the results. Paired t-tests showed that when Self-Beneficial harm was Accidental, the dilemma was rated as more arousing than when it was Instrumental [t(61) = 3.690, p < 0.001, r = 0.43]. For Other-Beneficial harm, the pattern was reversed, as the Instrumental harm dilemmas were more arousing than the Accidental harm dilemmas [t(61) = −1.878, p = 0.065, trend effect, r = 0.05]. Comparing across the Benefit Recipient conditions, Accidental harm dilemmas received higher arousal ratings when they were Self-Beneficial than when they were Other-Beneficial [t(61) = 7.626, p < 0.001, r = 0.49]. The same pattern emerged when the harm was Instrumental; it was judged as more arousing when it was Self-Beneficial than when it was Other-Beneficial [t(61) = 3.494, p = 0.001, r = 0.17]. Correcting for multiple comparisons using the Bonferroni method means accepting a new significance level of α = 0.05/4 → α* = 0.0125. This should be taken into account when considering the result with the trend effect.
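For readers who want to reproduce this kind of follow-up test, the sketch below shows a Bonferroni-corrected paired t-test in Python on simulated data (the means loosely mimic the reported pattern; none of these numbers are the published data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# One mean arousal rating per participant (n = 62) for two cells of the
# Benefit Recipient x Intentionality design; simulated, purely illustrative.
self_accidental = rng.normal(6.1, 0.5, 62)
self_instrumental = rng.normal(5.9, 0.5, 62)

t, p = stats.ttest_rel(self_accidental, self_instrumental)

alpha_corrected = 0.05 / 4        # Bonferroni: four follow-up comparisons
dof = len(self_accidental) - 1
r = np.sqrt(t**2 / (t**2 + dof))  # effect size r, as reported in the text
print(f"t({dof}) = {t:.3f}, p = {p:.4f}, r = {r:.2f}, "
      f"significant after correction: {p < alpha_corrected}")
```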

Descriptive statistics of the valence ratings confirmed that all 16 dilemma categories were rated as being of negative valence (range: m = 1.71–2.23; see Table S1).

There were significant main effects of Personal Force [F(1, 61) = 28.00; p < 0.001; r = 0.57] and of Benefit Recipient [F(1, 61) = 31.509; p ≤ 0.001; r = 0.58]. PMD were rated as significantly more negative (m = 1.905, SD = 0.065) than IMD (m = 2.054; SD = 0.068). Likewise, Self-Beneficial Dilemmas were rated as significantly more negative (m = 1.884, SD = 0.068) than Other-Beneficial Dilemmas (m = 2.075; SD = 0.067). The two other factors did not show main effects [Evitability F(1, 61) = 1.201; p = 0.277; and Intentionality F(1, 61) = 0.135; p = 0.715]. See Table S1.

There were two significant interactions. The first was Personal Force * Intentionality [F(1, 61) = 7.695, p = 0.007; r = 0.33]. Figure S5 shows that Intentionality had different effects on how people rated the valence of PMD and IMD. Paired t-tests showed that Accidental harm was rated as significantly more negative than Instrumental harm in Impersonal Moral Dilemmas [t(61) = −2.297, p = 0.025, r = 0.08], while no such difference was found between Accidental and Instrumental harm for Personal Moral Dilemmas [t(61) = 1.441, p = 0.155, r = 0.03]. See Figure S5. Correcting for multiple comparisons using the Bonferroni method means accepting a new significance level of α = 0.05/4 → α* = 0.0125. This should be taken into account when considering the result of the first t-test (p = 0.025).

The second significant interaction was Benefit Recipient * Intentionality [F(1, 61) = 6.041, p = 0.017; r = 0.30]. This indicates that intention had different effects on the valence ratings depending on whether the dilemma was Self- or Other-Beneficial. Paired t-tests showed that for Self-Beneficial Dilemmas, harm was judged significantly more negative when it was Accidental as compared to Instrumental [t(61) = −2.300, p = 0.025, r = 0.08]. No such difference in valence ratings of Accidental and Instrumental harm was found for Other-Beneficial dilemmas [t(61) = 1.296, p = 0.200, r = 0.03]. See Figure S6. Correcting for multiple comparisons using the Bonferroni method means accepting a new significance level of α = 0.05/4 → α* = 0.0125. This should be taken into account when considering the result of the first t-test (p = 0.025).

The assessment of valence was only carried out to confirm that all dilemmas were of a strongly negative valence. This was confirmed, and no further analysis will be carried out involving this feature of the dilemmas. All values for both arousal and valence are available for each dilemma in the Excel spreadsheet that accompanies this manuscript (Supplementary Material).

Reaction time

A RM ANOVA was carried out on the RT of the arousal ratings with the factors Personal Force, Benefit Recipient, Evitability, and Intentionality. Main effects were found for Personal Force and Benefit Recipient; no interactions were significant. See Table 3.

Next, a regression analysis was conducted to ascertain how much of the variance in the RT of the arousal ratings was explained by the arousal ratings. This procedure was executed for each of the 16 dilemma categories. Table 4 shows that, except for four of the categories, the arousal ratings significantly explained between 6 and 38% of the variance in the RT. Figure 3 shows the overall correlation between the variables: the more arousing a dilemma was, the faster participants indicated their rating. The correlation coefficient between the mean arousal ratings and the mean RT of the arousal ratings was r = −0.434, p < 0.001.
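The per-category regressions and the overall correlation can be sketched as follows (the values below are invented to illustrate the reported trend, not the published data):

```python
import numpy as np
from scipy import stats

# Mean arousal rating and mean RT (ms) of the arousal rating per dilemma
# category; invented, purely illustrative values.
arousal_means = np.array([5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2])
rt_means = np.array([5400, 5320, 5250, 5110, 5010, 4900, 4820])

res = stats.linregress(arousal_means, rt_means)
r_squared = res.rvalue ** 2  # share of RT variance explained by arousal
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4f}, R^2 = {r_squared:.2f}")
# A negative r mirrors the finding: the more arousing the dilemma,
# the faster the rating.
```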


Table 4. Summary table of the regression analysis of arousal ratings as predictors of the arousal ratings' RT for each of the 16 dilemma categories.


Figure 3. Correlation between arousal ratings and RT. Color coding: Personal Moral Dilemmas (PMD; blue/red, circles); Impersonal Moral Dilemmas (IMD; green/yellow, squares). Arousal ratings (x-axis): 1 = not arousing, calm; 7 = very arousing. RT (y-axis) is in milliseconds (ms). The numbers refer to the dilemma numbers in the dilemma set.

Inter-individual differences: emotional sensitivity and empathy

To ensure that the results of our arousal ratings were not driven by inter-individual differences, participants had been assessed on a series of emotion-related questionnaires. Of the four questionnaires, only the level of empathy measured with the questionnaire by Mehrabian and Epstein had a significant effect on arousal ratings and on arousal rating RT. The overall correlation coefficient for arousal ratings and empathy scores was r = 0.289, p = 0.025, and for arousal RT and empathy scores it was r = −0.325, p = 0.011. The higher the empathy scores, the higher the arousal ratings to the dilemmas in general, and the shorter the RT (negative correlation coefficient).

Summary: Arousal and Valence Norming Experiment

For a dilemma to be rated as very negatively arousing (i.e., very negative in valence and high in arousal), the proposed moral transgression had to be described as up-close and Personal. Besides, dilemmas where the protagonist's own life was at stake were perceived as more negatively arousing than dilemmas where only other people's lives were at stake. In particular, dilemmas in which the innocent victim died accidentally, as a non-desired side-effect, were perceived as more negatively arousing when the protagonist's life was at stake than when the accidental death occurred in the attempt to save other people. In detail:

Affective arousal and valence

- there were significant main effects of the factors Personal Force and Benefit Recipient both for arousal and valence ratings: Personal and Self-Beneficial dilemmas were perceived as more arousing and more negative than Impersonal and Other-Beneficial dilemmas, respectively;

- there were significant interactions between the two above factors and the factor Intentionality. Intentionality influenced perceived arousal in such a way that Self-Beneficial dilemmas (as compared to Other-Beneficial dilemmas) were rated as more arousing when harm happened as a non-desired side-effect (Accidental harm), while Instrumental harm (harm used as a means) was equally arousing in Self- and Other-Beneficial dilemmas. Furthermore, when harm was Personal (up-close and corporal) and used as a means (Instrumental harm), dilemmas were rated as more negative than when harm was Impersonal (distant and abstract). Conversely, participants found Accidental harm equally negative whether it was Personal or Impersonal.

RT to a moral judgment task has previously been suggested as an indicator of emotional involvement. The more arousing a dilemma was, the faster participants were in making their rating.

Inter-individual differences

There was a correlation between inter-individual differences in empathy, assessed by means of the Questionnaire of Emotional Empathy (Mehrabian and Epstein, 1972), and the arousal ratings: the higher the level of empathy, the more arousing the dilemmas were to the participant. This makes sense because this instrument specifically targets sensitivity to others' emotional states; it conceives empathy as the vicarious emotional response to the perceived emotional experience of others, as distinct from ToM and perspective-taking, with high scores indicating a high responsiveness to other people's emotional reactions. However, apart from this correlation between arousal ratings and empathy level, no other individual differences had an effect on perceived arousal (the other variables we assessed were gender, IRI, emotional sensitivity, and alexithymia). We therefore conclude that, at least in this sample of Spanish undergraduates, the arousal ratings of this dilemma set are rather robust across individual differences.

Discussion of Arousal and Valence Norming Experiment

While all dilemmas were rated as similarly negative in valence, significant differences were found in how they were rated in terms of felt arousal. This means, first, that at least part of the emotional involvement in moral judgment of the dilemmas can be due to the arousal triggered when reading the situational description. Second, the results showed that differences in arousal are due to how the different conceptual factors are manipulated. Thus, Personal Force and Self-Beneficial dilemmas give rise to higher arousal ratings than Impersonal and Other-Beneficial ones. Prima facie this suggests that arousal has something to do with identification of the experimental participant with the perspective of the main character in the dilemmatic situation: it is when one feels more directly involved in the conflict, because of the action to be carried out or the consequences the action will have for oneself, that one feels more aroused—even without having to make a judgment. However, this prima facie interpretation is too simplistic, for three reasons.

First, it is clear that Personal Force dilemmas highlight the personal involvement in physically producing the harm. Second, Self-Beneficial dilemmas give rise to higher arousal ratings only when the harm produced is Accidental, rather than Instrumental. The latter case is one of self-interest: we experience less conflict when what is to be done is for our own benefit, yet it becomes difficult when a benefit cannot be produced without collateral damage. Third, whereas Self-Beneficial dilemmas take longer to be rated than Other-Beneficial ones, Personal Force dilemmas are rated faster than Impersonal ones. Jointly, these results suggest that arousal ratings can have several etiologies, and therefore cannot be interpreted simply as an indication of the degree of imaginary involvement with the situation, or as a measure of experienced conflict. Both of these causes need to be considered.

Dilemma Validation Study—Moral Judgment Experiment

To validate this moral dilemma set, a moral judgment task was set up to confirm the 4-factor structure in the dilemmas; i.e., the four conceptual factors Personal Force, Benefit Recipient, Evitability , and Intentionality .

Furthermore, to explore how the Intentionality factor is understood by participants, two versions of the dilemma set were prepared: one version remained as described so far, while in the other the question eliciting the moral judgment included an "accidental harm specification" in the Accidental harm dilemmas. For instance, in the dilemma Burning Building, the question is Do you put out the fire by activating the emergency system, which will leave the injured without air, so you and the five other people can escape? The clause which will leave the injured without air is the accidental harm specification. It makes the consequences of the proposed action explicit to the reader. The analysis of this variable is included here, but future researchers can choose to leave the accidental harm specification out of the question.

Additional analyses include (i) the analysis by Greene et al. (2001, 2004) that gave rise to the Dual Process Hypothesis of Moral Judgment (DPHMJ), (ii) an additional analysis of the Intentionality factor, and (iii) an analysis of how inter-individual differences influence moral judgment.

Forty-three undergraduate psychology and educational science students participated in this study in exchange for a course credit in one of their degree subjects (30 females and 13 males; age range = 18–54 years; m = 20.65, SD = 5.52). None of them had seen the dilemmas before. See Table 5 for participant characteristics including self-report measures of (i) the IRI ( Davis, 1983 ), (ii) the Questionnaire of Emotional Empathy ( Mehrabian and Epstein, 1972 ), (iii) the Questionnaire of Emotional sensitivity (EIM) ( Bachorowski and Braaten, 1994 ), (iv) the TAS ( Taylor et al., 1985 ), (v) the personality questionnaire Big Five ( McCrae and Costa, 1999 ), and (vi) the Thinking Style Questionnaire, Need For Cognition Scale (NFC) ( Cacioppo et al., 1984 ). All participants were native Spanish speakers.


Table 5. Participant characteristics.

Forty-six standard moral dilemmas and four practice dilemmas were presented in random order with the stimulus presentation program DirectRT ( www.empirisoft.com ) v. 2006.2.0.28. The experiment was set up to run on six PCs [Windows XP SP3 PC (Intel Pentium Dual Core E5400, 2.70 GHz, 4 GB RAM)] and stimuli were displayed on 19″ screens (with a resolution of 1440 × 900 p; color: 32 bits; refresh rate: 60 Hz).

The procedure was as in the previous experiment, described in the section Arousal and Valence Norming Experiment, with the following addition: after the second screen, the first two screens disappeared and the question appeared. The question eliciting the moral judgment was "Do you [action verb] so that…." A 7-point Likert scale was displayed below the question with the labels "No, I don't do it" under the number "1" and "Yes, I do it" under the number "7." Half of the participants (22 participants) saw the question "Do you [action verb] so that…," while the other half (21 participants) saw a question that also contained the accidental harm specification in the case of the Accidental harm dilemmas, such as: "Do you [action verb] which will [mechanism that will lead to the death] so that…" (Type of Question). The ratings were made by key press using the number keys (top row) of the keyboard. Four practice dilemmas were added at the beginning of the task; data from these trials were discarded before data analysis. The study was approved by the University's Ethics Committee (COBE280213_1388).

A factorial RM 2 × 2 × 2 × 2 ANOVA was computed with the Within-Subject factors Personal Force (PMD vs. IMD), Benefit Recipient (Self-Beneficial vs. Other Beneficial), Evitability (Avoidable vs. Inevitable harm), and Intentionality (Accidental vs. Instrumental harm). Question Type (with vs. without the Accidental harm specification) was the Between-Subject factor. As effect sizes we report Pearson's r, where 0.01 is considered a small effect size, 0.3 a medium effect and 0.5 a large effect ( Cohen, 1988 ).

Subjective ratings: moral judgment

There was no significant main effect of the between-group factor Type of Question (with or without the accidental harm specification) [F(1, 41) = 0.164, p = 0.688], and there were no significant interactions between the Between-Subjects factor Type of Question and the four within-subject factors: Personal Force * Question Type [F(1, 41) = 0.09; p = 0.766; ns]; Benefit Recipient * Question Type [F(1, 41) = 0.296; p = 0.589; ns]; Evitability * Question Type [F(1, 41) = 0.010; p = 0.921; ns]; Intentionality * Question Type [F(1, 41) = 0.013; p = 0.911; ns]. This means that the two question formats (with and without the accidental harm specification) are equivalent and do not affect the moral judgment a person makes: the accidental nature of the harm is understood from the narrative without the need to state it explicitly. Thus, data were aggregated for the subsequent analyses.

There were significant main effects of all four Within-Subject factors: Personal Force [ F (1, 41) = 54.97; p < 0.001; r = 0.75]; Benefit Recipient [ F (1, 41) = 4.347; p = 0.043; r = 0.31]; Evitability [ F (1, 41) = 69.984; p < 0.001; r = 0.79]; and Intentionality [ F (1, 41) = 12.971; p = 0.001; r = 0.49]. Participants were less likely to commit harm in PMD ( m = 4.069; SD = 0.124) than in IMD ( m = 4.717; SD = 0.113) and they were more likely to commit a moral transgression to save themselves ( m = 4.508; SD = 0.103), than to save others ( m = 4.278; SD = 0.111). When the suggested harm was Inevitable , people were more likely to commit it ( m = 4.633; SD = 0.124) than when harm was Avoidable ( m = 4.152; SD = 0.103). Finally, when the death of the victim was Accidental , participants were more likely to commit the moral transgression ( m = 4.549; SD = 0.125) than when it was Instrumental ( m = 4.236; SD = 0.112). See Figures S7A–D.

Five of the six possible two-way interactions between the four factors were significant. See Table 6 for a summary of the means and interaction coefficients. Table 7 shows the t -tests to break down the interactions. Figure S8 summarizes the interactions graphically. If correcting for multiple comparisons using the Bonferroni method, this would mean accepting a new significance level of α = 0.05/4 → α* = 0.0125 for breaking down each interaction. This should be taken into account when considering the result of the t -test in Table 7D (Self-Beneficial Accidental vs. Instrumental harm; p = 0.022).


Table 6. Summary table of the interactions (dependent variable: moral judgment, Likert scale rating; range: 1–7).


Table 7. Follow-up t-tests to break down the interactions in the moral judgment task.

First, the Benefit Recipient variable had a differential effect on the moral judgment of PMD and IMD (Figure S8A). Participants were more likely to commit harm if the harm was carried out to save themselves (Self-Beneficial, as compared to Other-Beneficial), but only if the dilemma was Impersonal. If harm was Personal, participants were equally likely to commit the harm whether it was Self- or Other-Beneficial.

Second, the Evitability variable also had a differential effect on the moral judgment of PMD and IMD (Figure S8B). Participants made more deontological responses for PMD in general; however, they were more likely to commit harm when the death of the innocent person was Inevitable (as compared to Avoidable).

Third, the Intentionality variable also affected how participants judged PMD and IMD (Figure S8C). Again, participants were overall more likely to make a deontological moral judgment in PMD than in IMD; however, they were less likely to commit the moral transgression when harm was Instrumental (as compared to Accidental), but only in the case of PMD.

Fourth, the Intentionality variable affected how participants judged Self- and Other-Beneficial dilemmas (Figure S8D). If the proposed harm was Instrumental, participants were less likely to commit it when the dilemma involved harm to benefit others (as compared to harm to benefit the participant herself), while for Accidental harm, participants were less likely to commit it to save themselves than to save others.

Fifth, Intentionality also affected how participants judged Avoidable and Inevitable dilemmas (Evitability factor) (Figure S8E). When harm was Avoidable (as compared to Inevitable), participants were less likely to commit it when the harm described in the dilemma was Instrumental than when it was Accidental. However, participants were equally likely to commit harm in Accidental and Instrumental harm dilemmas when the harm described in the dilemma was Inevitable.

That there was no interaction between Benefit Recipient and Evitability means that participants were equally likely to commit harm, irrespective of whether death was Avoidable or Inevitable for Self- or Other-Beneficial dilemmas.

In the RT data, there was one significant main effect [Intentionality: F(1, 41) = 13.252; p = 0.001; r = 0.49] and one significant interaction [Intentionality * Question Type: F(1, 41) = 13.629; p = 0.001; r = 0.50]. Participants in general needed longer to make moral judgments about actions involving Accidental harm (m = 5803.223; SD = 424.081) than about actions involving Instrumental harm (m = 5185.185; SD = 394.389). The interaction indicates that Intentionality had a differential effect on RT depending on the Question Type. The group whose question included the accidental harm specification needed significantly longer to respond to Accidental harm (m = 6356.081; SD = 578.441) than the group without such specification (m = 5250.365; SD = 620.309). No such difference appeared between the groups for Instrumental harm (m = 5112.582; SD = 537.941 and m = 5259.065; SD = 576.878, respectively).

Because the only significant main effect and interaction in the RT data involved the Between-Subject variable Type of Question, this effect was explored more closely. The RM ANOVA was therefore computed again, first for the participants in the With condition and then for the participants in the Without condition. The factor Intentionality was significant in the With condition [F(1, 22) = 21.208; p < 0.001; r = 0.70], but not in the Without condition [F(1, 19) = 0.002; p = 0.964]. Hence, the effect was merely driven by the higher number of words in the questions of the With condition.

To ensure that RT was not conditioned by the word count of the questions in general, a regression was computed with word count in the question as a predictor and RT as the dependent variable. No significant relationship was found ( B = −27.695; B SD = 30.711; β = −0.234; p = 0.382). Hence, the word count of the questions did not influence the RT of participants except in this particular case of the Intentionality factor. Apart from this problematic effect, there were no other significant main effects or interactions.

As much research in the field of moral judgment with moral dilemmas suggests a relation between the type of moral judgment (deontological vs . utilitarian) and RT, this matter was explored further. First, a curvilinear regression was computed with Moral Judgment as predictor and the RT as dependent variable. The resulting model was significant [ F (1, 41) = 11.015; p < 0.001; r = 0.46] and moral judgment accounted for 33.9% of the variance in the RT. Both for very deontological (Likert ratings toward 1) and very utilitarian moral judgments (Likert ratings toward 7) participants were faster than when making a more intermediate moral judgment (Likert ratings around 4). See the illustration of the relation between moral judgment and RT in Figure 4 .
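The curvilinear fit can be reproduced schematically with a quadratic regression; the sketch below uses invented values chosen only to display the reported inverted-U shape (slow intermediate judgments, fast extreme judgments):

```python
import numpy as np

# Mean moral judgment (1 = refrain, 7 = commit) and mean RT (ms) per dilemma;
# invented values, not the published data.
judgment = np.array([1.5, 2.1, 3.0, 3.9, 4.1, 5.0, 6.0, 6.6])
rt = np.array([4800, 5300, 5900, 6350, 6300, 5850, 5100, 4700])

# Quadratic (curvilinear) model: RT ~ b2*judgment**2 + b1*judgment + b0
b2, b1, b0 = np.polyfit(judgment, rt, deg=2)
predicted = np.polyval([b2, b1, b0], judgment)

ss_res = np.sum((rt - predicted) ** 2)
ss_tot = np.sum((rt - rt.mean()) ** 2)
# A negative b2 reflects the inverted U: RT peaks for intermediate judgments.
print(f"b2 = {b2:.1f}, R^2 = {1 - ss_res / ss_tot:.2f}")
```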


Figure 4. Curvilinear relationship between Moral Judgment and RT. Color coding: Personal Moral Dilemmas (blue/red, circles); Impersonal Moral Dilemmas (green/yellow, squares). Mean Likert scale responses: 1 = No, I don't do it, i.e., deontological moral judgment; 7 = Yes, I do it, i.e., utilitarian moral judgment. RT is in milliseconds (ms). PMD, Personal Moral Dilemmas; IMD, Impersonal Moral Dilemmas.

To assess RT as a function of the response given (deontological vs. utilitarian in absolute terms, not on a scale from 1 to 7 as presented above), as in Greene et al. (2001, 2004), the Moral Judgment values of the 7-point Likert scale were dichotomized. Judgments with values between 1 and 3 were considered "deontological," and values between 5 and 7 were considered "utilitarian." Values of 4 were discarded. Mean RT was calculated as a function of this re-coding. Subsequently, the 2 × 2 ANOVA (Response Type and Personal Force) from Greene et al. (2001, 2004) was carried out. No significant main effects were found [Response Type: F(1, 42) = 0.402; p = 0.529; Personal Force: F(1, 42) = 0.197; p = 0.659].
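The dichotomization step can be expressed compactly; the following sketch (hypothetical column names, invented trial values) recodes the 7-point responses and computes mean RT per response type and Personal Force cell:

```python
import pandas as pd

trials = pd.DataFrame({
    "judgment": [1, 2, 4, 6, 7, 3, 5],  # 7-point Likert responses (invented)
    "rt": [4100, 4500, 6900, 5200, 4300, 4800, 5600],  # ms, invented
    "personal_force": ["PMD", "IMD", "PMD", "IMD", "PMD", "IMD", "PMD"],
})

def dichotomize(rating):
    """1-3 -> deontological, 5-7 -> utilitarian, 4 -> discarded."""
    if rating <= 3:
        return "deontological"
    if rating >= 5:
        return "utilitarian"
    return None

trials["response_type"] = trials["judgment"].map(dichotomize)
trials = trials.dropna(subset=["response_type"])  # drop the midpoint responses
print(trials.groupby(["response_type", "personal_force"])["rt"].mean())
```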

In the previous analyses, the factor Intentionality was shown to be of key relevance in moral judgment. Therefore, another 2 × 2 ANOVA with the variables Response Type and Intentionality was run. There was a significant main effect of Intentionality (p = 0.015) and a significant interaction of Response Type * Intentionality (p = 0.018); see Table 8 and Figure S9. Breaking down the interaction showed that participants took longer to make a deontological moral judgment when the harm was produced accidentally than when it was instrumental (p = 0.003). No such difference was found for utilitarian moral judgments (p = 0.681); see Figure S9.


Table 8. Main effects and interactions of the RM ANOVA Response Type * Intentionality.

Inter-individual differences: gender

There was a significant interaction between the factor Benefit Recipient and the participants' gender [F(1, 61) = 10.079; p = 0.003; r = 0.37]; male participants were more ready to commit harm in the case of Self-Beneficial dilemmas (m = 5.137; SD = 0.215) than female participants (m = 4.235; SD = 0.142). In the Other-Beneficial dilemma category, no such gender difference was found (males: m = 4.439; SD = 0.203; females: m = 4.208; SD = 0.133). This effect is reported for the sake of completeness of the scientific record. However, first, we did not specifically anticipate this effect, so we did not have equal numbers of male and female participants. Second, we do not aim to make any claims about gender differences based on such preliminary data. There is no sound scientific evidence for why there should be gender differences in moral judgment, of what kind they might be, or what their evolutionary basis would be. This is a sensitive issue that deserves thorough investigation far beyond the scope of this paper. Therefore, we assume that there are no genuine gender differences in moral judgment among participants of one and the same culture and have chosen to analyze the data of female and male participants together.

Two other studies have reported an effect of gender in their data (Fumagalli et al., 2009, 2010). However, the dilemma set used in those studies was the one originally used by Greene et al. (2001, 2004), which has important methodological shortcomings (as pointed out in this paper; for a review see Christensen and Gomila, 2012), which is why such claims about gender differences should not be made on this basis. For such claims to rest on solid ground, a study should be designed that controls for empathy and other personality factors between genders and, of course, has an equal sample size for each gender.

Inter-individual differences: thinking style, personality traits, emotional sensitivity

To test the influence of inter-individual differences on moral judgment, a regression was computed with all of the scores of the questionnaires assessing inter-individual differences as predictors of the participants' mean moral judgment. As shown in Table S2, the resulting regression model was significant [F(10) = 2.954; p = 0.011; r = 0.47] and explained 50.5% of the variance in the moral judgments. However, only three of the 10 predictor variables were significant: Emotional Sensitivity (p = 0.018) and two of the Big Five factors, Agreeableness (p = 0.046) and Conscientiousness (p = 0.001). The higher the scores on the EIM, the more deontological the moral judgments (participants with higher EIM scores were less likely to commit the proposed harm). For the two Big Five factors, the pattern was reversed: the higher the scores, the more utilitarian the judgments (participants with higher scores in these two dimensions were more likely to commit the proposed harm). However, the Beta coefficients show that these effects, although present, were rather small.
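Such a multiple regression could be set up as follows (a sketch only; the file and predictor names are hypothetical stand-ins for the questionnaire scores described above):

```python
import pandas as pd
import statsmodels.api as sm

# One row per participant: questionnaire scores plus the participant's mean
# moral judgment across the 46 dilemmas (hypothetical file and column names).
df = pd.read_csv("questionnaire_scores.csv")
predictors = ["EIM", "agreeableness", "conscientiousness",
              "empathy_QEE", "TAS", "NFC"]

X = sm.add_constant(df[predictors])  # add the intercept term
model = sm.OLS(df["mean_moral_judgment"], X).fit()
print(model.summary())
# The sign of each beta gives the direction: e.g., in the reported data a
# higher EIM score went with lower (more deontological) judgments.
```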

Arousal and Moral Judgment

In order to determine whether the arousal levels of the dilemmas rated by one group of participants would be related to the moral judgments of a different group of participants, the dataset was transposed and the dilemmas were treated as cases. A simple regression was conducted with the arousal ratings as predictor variable and the moral judgments as dependent variable. The resulting model was significant [F(1, 44) = 22.613; p < 0.001; r = 0.58], showing that the arousal level of a dilemma predicted 33.9% of the variance in the moral judgment variable. Figure 5 shows that the more arousing a dilemma was, the more likely participants were to refrain from action (i.e., not to commit the moral transgression). See Table S3 for the model parameters.
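The transposition amounts to aggregating each measure to one value per dilemma before regressing; a sketch (hypothetical file and column names):

```python
import pandas as pd
from scipy import stats

# Long-format ratings from the two different samples (hypothetical files):
# one row per (participant, dilemma) pair.
arousal_long = pd.read_csv("arousal_ratings.csv")   # columns: dilemma, arousal
judgment_long = pd.read_csv("moral_judgments.csv")  # columns: dilemma, judgment

# Aggregate to one value per dilemma, so the 46 dilemmas become the cases.
arousal_by_dilemma = arousal_long.groupby("dilemma")["arousal"].mean()
judgment_by_dilemma = judgment_long.groupby("dilemma")["judgment"].mean()

res = stats.linregress(arousal_by_dilemma, judgment_by_dilemma)
print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4f}")
# A negative r would mirror the reported pattern: more arousing dilemmas,
# lower (more deontological) judgments.
```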


Figure 5. Relationship between the arousal level of a dilemma and the moral judgment made to that dilemma. Color/shape coding: Personal Moral Dilemmas (blue/red, circles); Impersonal Moral Dilemmas (green/yellow, squares). Mean Likert scale responses: 1 = No, I don't do it, i.e., deontological moral judgment; 7 = Yes, I do it, i.e., utilitarian moral judgment. Mean arousal scale responses: 1 = not arousing, calm; 7 = very arousing.

Summary: Moral Judgment Experiment

With this fine-tuned set of moral dilemmas it was confirmed that the four factors Personal Force, Benefit Recipient, Evitability, and Intentionality determine participants' moral judgment:

First, participants tended to exhibit a deontological response style (i.e., they refrained from committing harm) when harm was described as Personal (as compared to Impersonal ), Other-Beneficial (as compared to Self -Beneficial ), Avoidable (as compared to Inevitable ), and Instrumental (as compared to Accidental ). In other words, when harm was abstract and spatially and intentionally separated from the agent, participants were more likely to commit this moral transgression than if the harm was described as up-close and gave an impression of “bloody hands.”

Second, participants more readily sacrificed the life of another person if their own life was at stake than if the moral transgression would merely save other people. Besides, if harm to the victim would have happened anyway, irrespective of whether the agent carried out the moral transgression or not (as in "either one person of the five is killed or they all die"), participants were more likely to commit the moral transgression.

Third, harm that happened as a non-desired side-effect of the agent's action was more readily committed than harm that consisted in using the death of the victim as a means to the salvation of the others.

As regards the interactions between the factors:

First, the interaction between Personal Force and Benefit Recipient indicated that participants were equally likely to commit a moral transgression when the proposed harm involved "bloody hands," whether the harm would result in the salvation of oneself or of others. However, when the proposed harmful action was abstract and distant, participants' moral judgments differed depending on whether the salvation concerned themselves or others: abstract harm commission made a utilitarian response more likely when it was executed to save oneself.

Second, the interaction between Personal Force and Intentionality indicated that in IMD harm was consented to equally, both when it was accidental and when it was instrumental. However, in PMD, harm used as a means (instrumentally) made participants' moral judgments more deontological than harm that occurred accidentally.

Third, the interaction between Benefit Recipient and Intentionality indicated that for Self-Beneficial dilemmas, when harm happened as a non-desired side-effect of the proposed action, participants were less likely to commit the moral transgression than when it was instrumental. Conversely, when the harm would benefit others, the pattern was reversed: more deontological moral judgments when harm was instrumental than when it was accidental.

Fourth, the interaction between Personal Force and Evitability indicates that for both IMD and PMD, avoidable harm resulted in more deontological moral judgments than did inevitable harm.

Fifth, the interaction between Evitability and Intentionality indicates that when harm to the victim could have been avoided, harm as a side-effect was more readily consented to than the use of harm as a means. For inevitable harm, no such difference between accidental and instrumental harm commission was found.

Furthermore, we found that the more arousing a dilemma was, the more likely it was that participants would choose a deontological response style.

Finally, the main effect of Type of Response reported by Greene et al. (2001, 2004) was not found, indicating that with this optimized dilemma set deontological responding is not faster than utilitarian responding. Neither was there an interaction of Type of Response * Personal Force. However, an additional ANOVA with the factors Type of Response and Intentionality showed a significant main effect of Intentionality and, more importantly, an interaction between Type of Response and Intentionality. This indicates that for dilemmas people judged deontologically, it took them particularly long to make that judgment when the proposed harm would result in accidental harm to the victim.

Discussion of the Moral Judgment Experiment

Summing up, the results here show that we are more prone to act for our own benefit if the harm will take place in any case and producing the harm is not very demanding. Conversely, we are going to experience a conflict—indexed by a longer response—when we are forced to do the harm ourselves, or to do harm as collateral damage to benefit others. Moral principles can be broken, but only in well-justified situations (when the consequences are "big enough"). It is not that we are deontological or utilitarian thinkers; we are neither: moral judgments are better viewed from the point of view of casuistry, the particularist approach to morals that takes the details of each case into account. Any small detail may matter to our moral judgment. The results show, in any case, that rules are not applied algorithmically or in a strict order (Hauser, 2006).

Overall Discussion

Apart from providing normative values of valence, arousal, moral judgment, and RT for 46 moral dilemmas (see text footnote 5), the results of this dilemma validation study challenge the DPHMJ proposed by Greene et al. (2001, 2004). According to this hypothesis, deontological moral judgments (refraining from harm) are fast and emotion-based, while utilitarian moral judgments (deciding to commit the harm) are slow, resulting from deliberate reasoning processes. The assumptions of the DPHMJ were based on a reaction time finding in which an interaction between the Type of Response given (deontological vs. utilitarian) and Personal Force (Personal vs. Impersonal) showed that when harm was consented to in a Personal Moral Dilemma (utilitarian response), RT was significantly longer than when harm was not consented to (deontological response). No such difference in response time was found for Impersonal Moral Dilemmas. However, in our study, while we also found that higher arousal correlates with deontological judgment (in line with Moretto et al., 2010), we failed to find the relationship with RT: both deontological and utilitarian decisions can be made equally fast, to both personal and impersonal dilemmas, depending on the other factors involved. To put it another way, a fast judgment takes place either when a deontological reason guides the judgment or when utilitarian considerations clearly dominate. Therefore, while we agree that the dilemmas that take longer are those where the experienced conflict is greater, conflict has a more complex etiology. In particular, judgment takes longer when people are torn between utilitarian considerations of the greater good (saving many) and the suffering produced in others as an accidental side-effect. In either case, an increased RT is likely to have been caused by reasoning processes exploring a way to avoid the conflict.

As a matter of fact, the DPHMJ's central result concerning personal vs. impersonal dilemmas has already been challenged. McGuire et al. (2009) reanalyzed the data sets from Greene and colleagues and removed what they called "poorly endorsed items" (dilemmas that were not designed carefully enough); after this procedure, the key effect disappeared from the data. Similarly, Ugazio et al. (2012) showed that both deontological and utilitarian responding can be triggered by different emotions with different motivational tendencies. In their study, disgust induction (an emotion that triggers withdrawal tendencies) resulted in more deontological moral judgments (i.e., refraining from harm), while anger induction (an emotion that triggers approach tendencies) resulted in more utilitarian moral judgments (i.e., committing harm). This finding does not fit the Dual Process account either, because it shows how different emotional phenomena trigger both deontological and utilitarian moral judgment tendencies.

Therefore, we propose that a potentially more suitable account of moral judgment is one that gives a different role to emotions in moral judgment, specifically to the arousal response triggered in the individual by the dilemmatic situation, along the lines suggested by the Affect Infusion Model (AIM) of Forgas (1995). This model posits that (i) the arousal properties of the situation, (ii) the motivational features of the emotions triggered by it, and (iii) the associated cognitive appraisal mechanisms all play a crucial role in every judgment. The model also posits that affect infusion is a matter of degree: any judgment depends on the individual's previous knowledge of the event or situation he or she is about to judge, which implies that it depends on deliberate reasoning as well as on the magnitude of the emotional arousal triggered by the event or situation.

See the Supplementary Material for a summary of limitations of the method.

In this work, we have followed Hauser et al.'s view of moral dilemmas: "… the use of artificial moral dilemmas to explore our moral psychology is like the use of theoretical or statistical models with different parameters; parameters can be added or subtracted in order to determine which parameters contribute most significantly to the output" (Hauser et al., 2007). We have tried to control for the variables known to influence moral judgment, in order to find out which ones matter most and how they interact.

One main result of this work is that, when the dilemmas are validated, Greene's main effect of personal dilemmas partly disappears, giving way to a more complex pattern. This casts doubt on the view that some moral judgments are the result of deliberation while others, the deontological ones, are reached emotionally. While higher arousal is related to deontological judgments, it is not true that deontological judgments are faster than utilitarian ones. Deontological judgments may take longer than utilitarian ones if, after taking time to weigh the options and to look for a way to minimize the transgression, no such way can be found and one chooses not to violate one's principles.

Research with moral dilemmas holds fascinating possibilities for studying the grounding psychological principles of human moral cognition. Contrary to the criticisms brought up against this methodology, and in line with an increasing number of other researchers, we believe that it is precisely the artificial nature of moral dilemmas that makes this methodology so valuable. In any case, the scenarios described in moral dilemmas are no more artificial than the stories narrated in novels and movies, where life-and-death decisions change the course of supposedly inevitable events. Other abundant channels of such information are the news on TV, radio, in the papers, and on the internet. They inform us of atrocities that happened around the corner from our house while we were sleeping, or of heartbreaking life-threatening situations that some individual in a war-torn country has had to go through… Are moral dilemmas really all that unreal and artificial to us?

Author Note

All authors: Human Evolution and Cognition (IFISC-CSIC) and Department of Psychology, University of the Balearic Islands, Carretera de Valldemossa, km. 7.5, Building: Guillem Cifre de Colonya, 07122 Palma, Spain. Nadine K. Gut's current affiliation: School of Psychology and Neuroscience, University of St Andrews, St Mary's Quad, South Street, St Andrews, KY16 9JP, UK; Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, 161 Cathedral Street, Glasgow, G4 0RE, UK.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The study was funded by the research project FFI2010-20759 (Spanish Government: Ministry of Economy and Competitiveness), and by the Chair of the Three Religions (Government of the Balearic Islands) of the University of the Balearic Islands, Spain. Julia F. Christensen and Albert Flexas were supported by FPU PHD scholarships from the Spanish Ministry of Education, Culture and Sports (AP2009-2889 and AP2008-02284). Nadine K. Gut was supported by a scholarship of the School of Psychology and Neuroscience, University of St Andrews, UK. We want to thank Dr. Camilo José Cela-Conde for help and advice at different stages of this work; and a special thank you goes to Lasse Busck-Nielsen, Françoise Guéry and Trevor Roberts for help in the language editing process.

Supplementary Material

The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fpsyg.2014.00607/abstract

1. ^ Please note that a study with a preliminary version of the revised set has recently been published ( Christensen et al., 2012 ).

2. ^ For a detailed description of the dilemmas, see also Moore et al. (2008) . For clarity it should be said that these 48 dilemmas are made up of 24 different short stories, which have a personal and an impersonal version each.

3. ^ We also considered removing the Bike Week Dilemma due to the act of acrobatics it involves, but finally left it in. However, we encourage researchers to reconsider this choice.

4. ^ Please note: in this arousal and valence norming procedure participants did not see the question. This was to avoid confounds between the arousal and valence judgments and a moral judgment.

5. ^ The Supplementary Material accompanying this manuscript contains all data points presented in this work.

Abarbanell, L., and Hauser, M. D. (2010). Mayan morality: an exploration of permissible harms. Cognition 115, 207–224. doi: 10.1016/j.cognition.2009.12.007

Anders, G. (1962). Burning Conscience: The Case of the Hiroshima Pilot . New York, NY: Monthly Review Press.

Bachorowski, J. A., and Braaten, E. B. (1994). Emotional intensity - measurement and theoretical implications. Pers. Individ. Dif . 17, 191–199. doi: 10.1016/0191-8869(94)90025-6

Bloomfield, P. (2007). Morality and Self-Interest . Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780195305845.001.0001

Borg, J. S., Hynes, C., Van Horn, J., Grafton, S., and Sinnott-Armstrong, W. (2006). Consequences, action, and intention as factors in moral judgments: an fMRI investigation. J. Cogn. Neurosci . 18, 803–817. doi: 10.1162/jocn.2006.18.5.803

Cacioppo, J. T., Petty, R. E., and Kao, C. F. (1984). The efficient assessment of need for cognition. J. Pers. Assess . 48, 306–307. doi: 10.1207/s15327752jpa4803_13

Christensen, J. F., Flexas, A., de Miguel, P., Cela-Conde, C. J., and Munar, E. (2012). Roman Catholic beliefs produce characteristic neural responses to moral dilemmas. Soc. Cogn. Affect. Neurosci . 9, 1–10. doi: 10.1093/scan/nss121

Christensen, J. F., and Gomila, A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: a principled review. Neurosci. Biobehav. Rev . 36, 1249–1264. doi: 10.1016/j.neubiorev.2012.02.008

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd Edn . Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

Cushman, F., Young, L., and Hauser, M. (2006). The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychol. Sci . 17, 1082–1089. doi: 10.1111/j.1467-9280.2006.01834.x

Davis, M. H. (1983). Measuring individual-differences in empathy - evidence for a multidimensional approach. J. Pers. Soc. Psychol . 44, 113–126. doi: 10.1037/0022-3514.44.1.113

Feldman Hall, O., Mobbs, D., Evans, D., Hiscox, L., Navrady, L., and Dalgleish, T. (2012). What we say and what we do: the relationship between real and hypothetical moral choices. Cognition 123, 434–441. doi: 10.1016/j.cognition.2012.02.001

Foot, P. (1967). The Problem of Abortion and the Doctrine of the Double Effect. Reprinted in Virtues and Vices and Other Essays in Moral Philosophy (1978) . Oxford: Blackwell.

Forgas, J. P. (1995). Mood and judgment: the affect infusion model (AIM). Psychol. Bull . 117, 39–66.

Fumagalli, M., Ferrucci, R., Mameli, F., Marceglia, S., Mrakic-Sposta, S., Zago, S., et al. (2009). Gender-related differences in moral judgments. Cogn. Process . 11, 219–226. doi: 10.1007/s10339-009-0335-2

Fumagalli, M., Vergari, M., Pasqualetti, P., Marceglia, S., Mameli, F., Ferrucci, R., et al. (2010). Brain switches utilitarian behavior: does gender make the difference? PLoS ONE 5:e8865. doi: 10.1371/journal.pone.0008865

Greene, J. (2008). “The secret Joke of Kant's Soul,” in Moral Psychology , Vol. 3, ed W. Sinnott-Armstrong. (Cambridge, MA; London: MIT Press), 35–80.

Greene, J. D., Cushman, F. A., Stewart, L. E., Lowenberg, K., Nystrom, L. E., and Cohen, J. D. (2009). Pushing moral buttons: the interaction between personal force and intention in moral judgment. Cognition 111, 364–371. doi: 10.1016/j.cognition.2009.02.001

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., and Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron 44, 389–400. doi: 10.1016/j.neuron.2004.09.027

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108. doi: 10.1126/science.1062872

Hauser, M. (ed.). (2006). Moral Minds: How Nature Designed our Universal Sense of Right and Wrong. New York, NY: Ecco/Harper Collins.

Hauser, M., Cushman, F., Young, L., Jin, R. K. X., and Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind Lang. 22, 1–21. doi: 10.1111/j.1468-0017.2006.00297.x

Huebner, B., Hauser, M. D., and Pettit, P. (2011). How the source, inevitability and means of bringing about harm interact in folk moral judgments. Mind Lang . 26, 210–233. doi: 10.1111/j.1468-0017.2011.01416.x

Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446, 908–911. doi: 10.1038/nature05631

McCrae, R. R., and Costa, P. T. Jr. (1999). “A five-factor theory of personality,” in Handbook of Personality: Theory and Research, 2nd Edn ., ed L. A. Pervin (New York, NY: Guilford Press), 139–153.

McGuire, J., Langdon, R., Coltheart, M., and Mackenzie, C. (2009). A reanalysis of the personal/impersonal distinction in moral psychology research. J. Exp. Soc. Psychol . 45, 577–580. doi: 10.1016/j.jesp.2009.01.002

Mehrabian, A., and Epstein, N. (1972). A measure of emotional empathy. J. Pers . 40, 525–543.

Mikhail, J. (2007). Universal moral grammar: theory, evidence and the future. Trends Cogn. Sci . 11, 143–152. doi: 10.1016/j.tics.2006.12.007

Moore, A. B., Clark, B. A., and Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychol. Sci . 19, 549–557. doi: 10.1111/j.1467-9280.2008.02122.x

Moore, A. B., Lee, N. Y. L., Clark, B. A. M., and Conway, A. R. A. (2011a). In defense of the personal/impersonal distinction in moral psychology research: cross-cultural validation of the dual process model of moral judgment. Judgm. Decis. Mak. 6, 186–195.

Moore, A. B., Stevens, J., and Conway, A. R. A. (2011b). Individual differences in sensitivity to reward and punishment predict moral judgment. Pers. Individ. Dif. 50, 621–625. doi: 10.1016/j.paid.2010.12.006

Moretto, G., Làdavas, E., Mattioli, F., and di Pellegrino, G. (2010). A psychophysiological investigation of moral judgment after ventromedial prefrontal damage. J. Cogn. Neurosci . 22, 1888–1899. doi: 10.1162/jocn.2009.21367

Navarrete, C. D., McDonald, M. M., Mott, M. L., and Asher, B. (2012). Morality: emotion and action in a simulated three-dimensional “trolley problem”. Emotion 12, 364–370. doi: 10.1037/a0025561

O'Hara, R. E., Sinnott-Armstrong, W., and Sinnott-Armstrong, N. A. (2010). Wording effects in moral judgments. Judgm. Decis. Mak . 5, 547–554.

Petrinovich, L., and O'Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethol. Sociobiol . 17, 145–171. doi: 10.1016/0162-3095(96)00041-6

Petrinovich, L., O'Neill, P., and Jorgensen, M. (1993). An empirical-study of moral intuitions - toward an evolutionary ethics. J. Pers. Soc. Psychol . 64, 467–478. doi: 10.1037/0022-3514.64.3.467

Royzman, E., and Baron, J. (2002). The preference of indirect harm. Soc. Justice Res . 15, 165–184. doi: 10.1023/A:1019923923537

Tassy, S., Oullier, O., Mancini, J., and Wicker, B. (2013). Discrepancies between judgment and choice of action in moral dilemmas. Front. Psychol . 4:250. doi: 10.3389/fpsyg.2013.00250

Taylor, G. J., Ryan, D., and Bagby, R. M. (1985). Toward the development of a new self-report alexithymia scale. Psychother. Psychosom. 44, 191–199. doi: 10.1159/000287912

Thomson, J. J. (1976). Killing, letting die, and the trolley problem. Monist 59, 204–217. doi: 10.5840/monist197659224

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Ugazio, G., Lamm, C., and Singer, T. (2012). The role of emotions for moral judgments depends on the type of emotion and moral scenario. Emotion 12, 579–590. doi: 10.1037/a0024611

Waldmann, M. R., and Dieterich, J. H. (2007). Throwing a bomb on a person versus throwing a person on a bomb - Intervention myopia in moral intuitions. Psychol. Sci . 18, 247–253. doi: 10.1111/j.1467-9280.2007.01884.x

Zimbardo, P. (2007). The Lucifer Effect . New York, NY: Random House Trade Paperbacks.

Keywords: moral dilemmas, moral judgment, decision making, cross cultural, DPHMJ

Citation: Christensen JF, Flexas A, Calabrese M, Gut NK and Gomila A (2014) Moral judgment reloaded: a moral dilemma validation study. Front. Psychol. 5:607. doi: 10.3389/fpsyg.2014.00607

Received: 17 April 2014; Accepted: 29 May 2014; Published online: 01 July 2014.

Copyright © 2014 Christensen, Flexas, Calabrese, Gut and Gomila. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Julia F. Christensen, Department of Psychology, University of the Balearic Islands, University Campus, Carretera de Valldemossa, km. 7.5, Building: Guillem Cifre de Colonya, 07122 Palma, Spain e-mail: [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Case study: an ethical dilemma involving a dying patient

Affiliation

  • 1 Lehman College, City University of New York, Bronx, NY, USA.
  • PMID: 19105511

Nursing often deals with ethical dilemmas in the clinical arena. A case study demonstrates an ethical dilemma faced by healthcare providers who care for and treat Jehovah's Witnesses placed in critical, life-threatening medical situations. A 20-year-old, pregnant, Black Hispanic female presented to the Emergency Department (ED) in critical condition following a single-vehicle car accident. She exhibited signs and symptoms of internal bleeding and was advised to have a blood transfusion and emergency surgery in an attempt to save her and the fetus. She refused to accept blood or blood products and rejected the surgery as well. Her refusal was based on a fear of blood transfusion due to her belief in Bible scripture. The ethical dilemma presented is whether to respect the patient's autonomy and compromise standards of care or to ignore the patient's wishes in an attempt to save her life. This paper presents the clinical case, identifies the ethical dilemma, and discusses virtue ethical theory and principles that apply to this situation.

The Half-Life of the Moral Dilemma Task: A Case Study in Experimental (Neuro-) Philosophy

Stephan Schleim

The pioneering neuroscience-of-moral-decisions studies by Joshua Greene and colleagues, implementing the moral dilemma task, stimulated interdisciplinary experimental research on moral cognition as well as a philosophical debate on its normative implications. This chapter emphasizes the influence these studies had and continue to have on many academic disciplines. It continues with a detailed analysis of both the traditional philosophical puzzle and the recent psychological puzzle that Greene and colleagues wanted to solve, with a special focus on the conceptual and experimental relation between the two puzzles. The analysis follows the fundamental logic essential for psychological experimentation that is also employed within cognitive neuroscience: defining a psychological construct, operationalizing it, formulating a hypothesis, applying it in an experiment, collecting data, and eventually interpreting them. In this manner, this chapter exemplifies an analytical structure that can be applied to many other examples in experimental (neuro-) philosophy, here coined “The Experimental Neurophilosophy Cycle.” The chapter eventually discusses how the empirical findings and their interpretation, respectively, are related back to the original philosophical and psychological puzzles and concludes with conceptual as well as experimental suggestions for further research on moral cognition.

Alexander, J. (2012). Experimental philosophy: An introduction . Cambridge, UK/Malden, MA: Polity.

Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33 (4), 245–266; Discussion 266–313.

Anderson, M. L., & Pessoa, L. (2011). Quantifying the diversity of neural activations in individual brain regions . Paper presented at the 33rd Annual Conference of the Cognitive Science Society, Austin, TX.

Bennett, M. R., & Hacker, P. M. S. (2003). Philosophical foundations of neuroscience . Malden: Blackwell.

Berker, S. (2009). The normative insignificance of neuroscience. Philosophy & Public Affairs, 37 (4), 293–329.

Crick, F., & Koch, C. (1998). Consciousness and neuroscience. Cerebral Cortex, 8 (2), 97–107.

Duhem, P. P. M. (1906). La théorie physique: Son objet et sa structure. Paris: Chevalier & Rivière.

Foot, P. (1978). Virtues and vices and other essays in moral philosophy . Berkeley: University of California Press.

Friston, K. J. (2009). Modalities, modes, and models in functional neuroimaging. Science, 326 (5951), 399–403.

Gergen, K. J. (2001). Psychological science in a postmodern context. American Psychologist, 56(10), 803–813.

Greene, J. D. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (Ed.), Moral psychology. The neuroscience of morality: Emotion, brain disorders, and development (Vol. 3, pp. 35–79). Cambridge, MA: MIT.

Greene, J. D. (2009). Dual-process morality and the personal/impersonal distinction: A reply to McGuire, Langdon, Coltheart, and Mackenzie. Journal of Experimental Social Psychology, 45 (3), 581–584.

Greene, J., & Cohen, J. (2004). For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 359 (1451), 1775–1785.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293 (5537), 2105–2108.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44 (2), 389–400.

Hacking, I. (1999). The social construction of what? Cambridge, Mass: Harvard University Press.

Helmuth, L. (2001). Cognitive neuroscience. Moral reasoning relies on emotion. Science, 293(5537), 1971–1972.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences , 33(2–3), 61–83; discussion 83–135.

Kahane, G. (2012). On the wrong track: Process and content in moral psychology. Mind & Language, 27 (5), 519–545.

Kahane, G. (2013). The armchair and the trolley: An argument for experimental ethics. Philosophical Studies, 162 (2), 421–445.

Kahane, G., & Shackel, N. (2010). Methodological issues in the neuroscience of moral judgement. Mind & Language, 25 (5), 561–582.

Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J., & Tracey, I. (2012). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive and Affective Neuroscience, 7 (4), 393–402.

Kamm, F. M. (2009). Neuroscience and moral reasoning: A note on recent research. Philosophy & Public Affairs, 37 (4), 330–345.

Kant, I. (1785). Grundlegung zur Metaphysik der Sitten. Riga: J. F. Hartknoch.

Kendler, K. S., Zachar, P., & Craver, C. (2011). What kinds of things are psychiatric disorders? Psychological Medicine, 41(6), 1143–1150.

Knobe, J. M., & Nichols, S. (2008). An experimental philosophy manifesto. In J. M. Knobe & S. Nichols (Eds.), Experimental philosophy (pp. 3–14). Oxford/New York: Oxford University Press.

Kuhn, T. S. (1962). The structure of scientific revolutions . Chicago: University of Chicago Press.

Levy, N. (2007). The Responsibility of the psychopath revisited. Philosophy, Psychiatry, & Psychology, 14 (2), 129–138.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8 (4), 529–539.

Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453 (7197), 869–878.

Logothetis, N. K., & Wandell, B. A. (2004). Interpreting the BOLD signal. Annual Review of Physiology, 66 , 735–769.

McGuire, J., Langdon, R., Coltheart, M., & Mackenzie, C. (2009). A reanalysis of the personal/impersonal distinction in moral psychology research. Journal of Experimental Social Psychology, 45 (3), 577–580.

Mikhail, J. M. (2011). Elements of moral cognition: Rawls’ linguistic analogy and the cognitive science of moral and legal judgment. Cambridge, New York: Cambridge University Press.

Moll, J., & de Oliveira-Souza, R. (2007). Moral judgments, emotions and the utilitarian brain. Trends in Cognitive Sciences, 11 (8), 319–321.

Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., & Grafman, J. (2005). Opinion: The neural basis of human moral cognition. Nature Reviews Neuroscience, 6 (10), 799–809.

Moore, A. B., Clark, B. A., & Kane, M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19 (6), 549–557.

Northoff, G. (2004). Philosophy of the brain: The brain problem . Netherlands: John Benjamins.

Northoff, G. (2013). Neurophilosophy. In C. G. Galizia & P.-M. Lledo (Eds.), Neurosciences: From molecule to behavior (pp. 75–80). Heidelberg/New York/Dordrecht/London: Springer.

Paxton, J. M., Bruni, T., & Greene, J. D. (in press). Are ‘counter-intuitive’ deontological judgments really counter-intuitive? An empirical reply to Kahane et al. (2012). Social Cognitive and Affective Neuroscience doi:10.1093/scan/nst102.

Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9 (2), 148–158.

Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: From a ‘low road’ to ‘many roads’ of evaluating biological significance. Nature Reviews Neuroscience, 11 (11), 773–783.

Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? Trends in Cognitive Sciences, 10 (2), 59–63.

Poldrack, R. A. (2010). Mapping mental function to brain structure: How can cognitive neuroimaging succeed? Perspectives on Psychological Science, 5 (6), 753–761.

Roskies, A. (2006a). Neuroscientific challenges to free will and responsibility. Trends in Cognitive Sciences, 10 (9), 419–423.

Roskies, A. (2006b). Patients with ventromedial frontal damage have moral beliefs. Philosophical Psychology, 19 (5), 617–627.

Schleim, S. (2008). Moral physiology, its limitations and philosophical implications. Jahrbuch für Wissenschaft und Ethik, 13, 51–80.

Schleim, S., & Roiser, J. P. (2009). FMRI in translation: The challenges facing real-world applications. Frontiers in Human Neuroscience, 3 , 63.

Schleim, S., & Schirmann, F. (2011). Philosophical implications and multidisciplinary challenges of moral physiology. Trames-Journal of the Humanities and Social Sciences, 15 (2), 127–146.

Schleim, S., Spranger, T. M., Erk, S., & Walter, H. (2011). From moral to legal judgment: The influence of normative context in lawyers and other academics. Social Cognitive and Affective Neuroscience, 6 (1), 48–57.

Sie, M., & Wouters, A. (2010). The BCN challenge to compatibilist free will and personal responsibility. Neuroethics, 3 (2), 121–133.

Singer, P. (2005). Ethics and intuitions. Journal of Ethics, 9 , 331–352.

Thomson, J. J. (1986). Rights, restitution, and risk: Essays, in moral theory . Cambridge, Ma: Harvard University Press.

Waldmann, M. R., Nagel, J., & Wiegmann, A. (2012). Moral judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning . Oxford: Oxford University Press.

Woolfolk, R. L. (2013). Experimental philosophy: A methodological critique. Metaphilosophy, 44 (1–2), 79–87.

Acknowledgments

The author would like to thank Professors Birnbacher, Gethmann, Hübner, Kahane, Kleingeld, Metzinger, Sellmaier, Stephan, and Walter as well as the Munich Neurophilosophy Group for the possibility to present earlier drafts of this work at their conferences or colloquia. The author would also like to acknowledge the helpful comments of the peer reviewers for clarifying some issues of this chapter. This paper was supported by the grant “Intuition and Emotion in Moral Decision-Making: Empirical Research and Normative Implications” by the Volkswagen Foundation, Az. II/85 063, and a generous travel grant by the Barbara Wengeler Foundation, Munich.

Author information

Authors and affiliations

Faculty of Behavioral and Social Sciences, Theory and History of Psychology, Heymans Institute for Psychological Research, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands

Stephan Schleim

Neurophilosophy, Munich Center for Neurosciences, Ludwig-Maximilians-University Munich, Geschwister-Scholl-Platz 1, 80539, Munich, Germany

Corresponding author

Correspondence to Stephan Schleim .

Editor information

Editors and affiliations

Institute for Ethics and History of Medicine, University of Tübingen, Tübingen, Germany

Jens Clausen

The Florey Institute of Neuroscience and Mental Health, University of Melbourne, Parkville, Australia

Neil Levy

Copyright information

© 2015 Springer Science+Business Media Dordrecht

About this entry

Cite this entry

Schleim, S. (2015). The Half-Life of the Moral Dilemma Task: A Case Study in Experimental (Neuro-) Philosophy. In: Clausen, J., Levy, N. (eds) Handbook of Neuroethics. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4707-4_164

DOI: https://doi.org/10.1007/978-94-007-4707-4_164

Published: 29 September 2014

Publisher Name: Springer, Dordrecht

Print ISBN: 978-94-007-4706-7

Online ISBN: 978-94-007-4707-4

A virtue ethics approach to moral dilemmas in medicine

Journal of Medical Ethics, Volume 29, Issue 5

Correspondence to: P Gardiner, 5 London Road, Daventry, Northants NN11 4DA, UK; patti@scottydoc.co.uk

Most moral dilemmas in medicine are analysed using the four principles, with some consideration of consequentialism, but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict, it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society, it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

  • virtue ethics
  • four principles of medical ethics
  • Raanan Gillon

https://doi.org/10.1136/jme.29.5.297

  • Research article
  • Open access
  • Published: 09 August 2019

“I go into crisis when …”: ethics of care and moral dilemmas in palliative care

  • Ludovica De Panfilis   ORCID: orcid.org/0000-0002-5509-7626 1 ,
  • Silvia Di Leo 2 ,
  • Carlo Peruselli 3 ,
  • Luca Ghirotto 4 &
  • Silvia Tanzi 5 , 6  

BMC Palliative Care volume  18 , Article number:  70 ( 2019 ) Cite this article

Recognising and knowing how to manage ethical issues and moral dilemmas can be considered an ethical skill. In this study, ethics of care is used as a theoretical framework and as a regulatory criterion in the relationship among healthcare professionals, patients with palliative care needs and family members.

This study is part of a larger project aimed at developing and implementing a training programme on “ethical communication” addressed to professionals caring for patients with palliative care needs. The aim of this study was to comprehend whether and how the ethics of care informs the way healthcare professionals make sense of and handle ethical issues in palliative care.

Qualitative study employing a theoretically driven thematic analysis performed on semi-structured interviews.

The research was conducted in a clinical cancer centre in northern Italy. Eligible participants were physicians and nurses from eleven hospital wards who assisted patients with chronic advanced disease daily and had previously attended a 4-h training on palliative care held by the hospital Palliative Care Unit.

The researchers identified five themes: morality is providing global care; morality is knowing how to have a relationship with patients; morality is recognizing moral principles; moral dimension and communication; and moral dilemmas are individual conflicts.

Conclusions

Ethics of care seems to emerge as a theoretical framework that includes the belief systems of healthcare professionals, especially those assisting patients with palliative care needs; moreover, it allows the values of both the patients and professionals to come to light through the relationship of care. Ethics of care is also appropriate as a framework for ethical training.

Palliative care is defined by the World Health Organization as “an approach that improves the quality of life of patients and their families facing problems associated with a life-threatening illness, through the prevention and relief of suffering by means of early identification and impeccable assessment and treatment of pain and other problems, physical, psychosocial and spiritual” [ 1 ]. Palliative care therefore requires many different competences, not only clinical but also relational, communicative and ethical [ 2 ].

Studies in the literature show that clear and honest communication about the diagnosis and prognosis of a fatal illness, which fully respects patients’ wishes and preferences, positively affects their quality of life and improves symptom management [ 3 ]. Good communication stems partly from innate qualities and can improve with experience. Nevertheless, it can also be increased through specific training programmes that take into account all of the abovementioned domains. A number of studies have shown that healthcare professionals (HPs) recognise and address ethical issues and that their awareness of moral dilemmas that may arise in decision-making is part of effective communication [ 4 , 5 ].

From the Greek word ethos meaning habit or custom, ethics is the branch of philosophy that concerns human behaviour, customs, and habits, particularly with reference to the rules of conduct and their justification [ 6 ].

Ethical debate in palliative care has focused on several, sometimes opposing, approaches, among which are the classical deontological approach of principlism, “virtue” ethics, and ethics of care.

Principlism is based on principles originally proposed by Beauchamp and Childress [ 7 ]: autonomy (to give an individual the freedom to make his or her own choices), beneficence (to do good and to act with the best interests of the other person in mind), non-maleficence (to do no harm to people), and justice (to promote fairness and equality among individuals). Each principle relates to each of the other three; therefore, they should be ordered according to criteria of priority for each individual case, with the ultimate aim of serving “the best interests of the patient” [ 7 ]. While this approach provides a valid basis for assessing the appropriateness of behaviours concerning morality, it may have some limitations concerning its full applicability in the medical context, above all within palliative care. Indeed, its conception of the human being as a subject in its own right, fully aware, competent and independent, can be considered inadequate in medicine and health care, where human complexity and interpersonal relations need to be considered. Some authors have argued that the four-principles approach is imperialist, inapplicable, inconsistent and inadequate [ 8 ]; others have argued that it does not consider the role of emotional reactions as an integral part of our moral perceptions and decision making [ 9 ].

Virtue ethics may be identified as the ethical theory that emphasises virtues or moral character [ 10 ]. All forms of virtue ethics are based on two concepts, i.e., virtue and practical wisdom: virtue ethics is a framework that focuses on moral character rather than the rightness of an action [ 9 ]; it provides a broader ethical analysis and encourages more flexible and creative solutions than principlism [ 11 ]. Its main limitations are that it puts too much emphasis on a person’s moral character and on cultural judgements of values, and that it fails to provide decisional elements to support a choice [ 10 ].

The ethics of care theoretical framework [ 12 ] represents an interesting ethics approach for reading and analysing ethical issues and moral dilemmas in palliative care. In our view, it could represent not only a valid theoretical framework but also a guiding criterion in the relationship among HPs, patients with palliative care needs, and their families.

The central concept of this approach is care, conceived both as an action concretely expressed towards the other, and as a value that has the goal of being universally shared, beginning with the awareness of the fragility and vulnerability of the human condition [ 13 ]. Ethics of care recognises that human beings are interdependent, and for this reason, they need respect, protection, and care [ 14 , 15 ]. Moreover, it highlights significant ethical aspects in the development of the relationship of care [ 14 , 15 ]. From this perspective, every moral choice or ethical issue is conceived as inserted in a network of interpersonal relationships, nurtured by communication, since both illness and the patient experience can be considered as the products of a set of interconnections.

To deepen the theoretical relationship between ethics of care and palliative care, we reviewed the literature by combining the terms “care ethics” or “ethics of care” with “palliative care”. We retrieved articles [ 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 ] concerning two main topics, i.e., a) the need to set medical ethics on a new foundation by grounding it on a different set of values, such as compassion, heedfulness, vulnerability, and the integrity of the person; and b) the specificity of the moral dilemmas often arising in medical care and the need for approaching them with moral notions different from the classical moral theory [ 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 ].

Lachman discussed the use of the theory of care ethics to help nurses determine if they are applying this theory effectively in their practice [ 16 ]. After describing care ethics and its evolution through the main authors’ theories, Lachman presents a case study to illustrate the philosophical approach of Joan Tronto [ 18 ], assuming that a care orientation is fundamental to the nurse-patient relationship and that Joan Tronto’s version of the ethics of care might be implemented in the care relationship. Although this paper does not mention the palliative care field, it provides the reader with a practical use of the ethics of care in a healthcare setting.

William T. Branch has argued that ethics based on caring for the patient needs to be grounded on the patient/physician relationship, making it necessary to rely on the physician’s moral sensitivity [ 17 ]. He also argued that HPs can recognise patients’ wishes and preferences, but equally important is their capacity for compassion, as well as honesty, integrity and a sense of humility. He defines this approach as “the ethics of patient care” and assumes that building medical ethics on this foundation leads to a framework of caring ethics.

On these bases, Branch built a theoretical framework that includes ethics of care as a suitable approach to palliative care.

In their research project “Practical ethics of Palliative care”, Hermsen and Ten Have [ 19 ] suggest that palliative care does not fit well into the classical biomedical model and that it can rather be considered a new philosophy of care, introducing new moral notions of wider relevance into the healthcare context. As a consequence, they argue that it is possible to identify a moral dimension that is specific to palliative care.

To widen the moral horizon and increase moral sensitivity, de Vries and Leget [ 20 ] introduce an ethical framework to address elderly patients with cancer. This ethical approach stems from the ethics of care because it focuses on the caring relationship. The authors compare ethics of care with principlism, the ethical theory predominant in contemporary medicine. As opposed to principlism, ethics of care underlines not only attention to the patient’s context but also a broader comprehension of the illness and a different concept of autonomy [ 20 ].

In a paper published in 2017, Inge van Nistelrooij et al. [ 21 ] express the need to reframe autonomy in a shared decision-making process as relational autonomy. The authors state that, to reconceptualise relationality, it is mandatory to “turn to care ethics” [ 22 ].

Schuchter and Heller [ 23 ] also use notions of the ethics of care. They affirm that “the solution” to a moral problem does not lie in judging actions on the basis of moral principles, but in intensifying relationships and enhancing empathetic involvement.

The need to manage moral issues, such as respect for a broader meaning of autonomy, the central role of the patient’s concept of dignity, the role of choice, the importance of truth, the concept of quality of life, the value of emotions and the existential issue, is an integral part of the palliative care approach.

In this sense, we believe that Ethics of Care takes into consideration aspects that classical ethics have overshadowed: trust and responsibility, protection of individuality, the context in which the relationship takes place, and the quality of the relationship.

This study is part of a larger project aimed at developing and implementing an ethics communication training programme addressed to HPs who treat patients with palliative care needs.

The aim of this study was to comprehend whether and how the ethics of care informs the way HPs make sense of and handle ethical issues in palliative care.

We employed a generic qualitative research design [ 24 ] using semi-structured interviews.

Study population

We conducted the study in a clinical cancer centre in northern Italy. The study was approved by the Ethics Committee of the Provincial Health Authority of Reggio Emilia.

Eligible participants were physicians and nurses from eleven hospital wards who were involved daily in the care of patients with chronic diseases with poor prognoses and had previously attended a 4-h training on palliative care held by the hospital Palliative Care Unit. A convenience sample of one physician and one nurse per ward was selected.

The head of each hospital ward was informed by the Principal Investigator (PI) of the objectives of the research and asked to collaborate. After obtaining access to the field, the PI e-mailed the information and request for participation to the selected professionals. The invited participants were then contacted by telephone by the PI who, after obtaining consent, agreed on the place and times for participating in the study. In cases of refusal to participate, the researchers contacted potential replacements. All participants provided signed informed consent to participate in the qualitative interviews.

Sixteen out of twenty subjects agreed to participate in the study. We interviewed 9 physicians and 7 nurses from 11 wards. The participant characteristics are shown in Table  1 .

Data collection

We derived the thematic areas to discuss during the interview sessions with participants from the ethics of care framework, consequently focusing on the care relationship.

Thematic areas were developed by the P.I. (LDP), a researcher and bioethicist, and SDL, a clinical psychologist expert in qualitative research. They agreed on three broad topics: the perception of ethical issues, the experienced role of ethical issues within the care relationship, and the way interviewees recognise and deal with ethical dilemmas within the care relationship.

We used open-ended, semi-structured interviews [ 25 ] because of their flexible structure, which allows the interviewer to adapt and change the questions according to the interviewee’s agenda and answers. For conducting the interview, we pre-planned some exemplifying questions that we report in Table  2 .

The P.I. conducted the semi-structured individual interviews. She did not know the participants.

The semi-structured individual interviews lasted a mean of 45 min.

Data analysis

Interviews were audio-recorded and transcribed verbatim. Data analysis was conducted by the P.I., together with S.T., a palliative care physician with experience in qualitative research, and L.G., a qualitative research methodologist. We performed a theoretically driven thematic analysis [ 26 ] by following these analytical stages, illustrated schematically after the list:

L.D.P. transcribed the interviews verbatim and shared the transcripts with colleagues. They wrote comments and initial thoughts in a memo;

L.D.P., S.T. and L.G. extracted portions of the text individually and then shared their work to reach an initial agreement. During this stage, they inductively conducted the thematic analysis [ 26 ], providing their insights;

subsequently, they mapped the themes onto the ethics of care framework;

they independently reviewed themes and allocated portions of the text to the newly reconfigured themes;

together, they re-defined themes and re-named them to achieve internal consistency;

L.D.P. selected representative extracts from the interviews and drafted the final report, which was checked and amended by all the authors.
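A minimal sketch of the kind of coding structure such a thematic analysis produces is given below; the theme labels come from the Results reported next, but the data layout and allocation function are hypothetical illustrations, not the authors' actual procedure or software.

from collections import defaultdict

# The five themes reported in the Results (see Table 3).
THEMES = [
    "Morality is providing general care",
    "Morality is knowing how to have a relationship with patients",
    "Morality is recognizing moral principles",
    "Morality is giving importance to dialogue and communication",
    "Moral dilemmas as individual conflicts",
]

# Hypothetical codebook: each theme maps to coded extracts, where an
# extract pairs an interview ID (e.g., "P01", "N02") with a quotation.
codebook = defaultdict(list)

def allocate(interview_id: str, extract: str, theme: str) -> None:
    """Allocate one portion of transcript text to a theme."""
    if theme not in THEMES:
        raise ValueError(f"Unknown theme: {theme}")
    codebook[theme].append((interview_id, extract))

allocate("P01", "Morality is the first hurdle we face...",
         "Morality is providing general care")

for theme in THEMES:
    print(f"{theme}: {len(codebook[theme])} extract(s)")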

Results

Participants were 10 female and 6 male; the mean age was 43.8 years (range 21–70).

Five themes and related sub-themes were identified: 1) morality is providing general care; 2) morality is knowing how to have a relationship with patients; 3) morality is recognizing moral principles; 4) moral dimension and communication; and 5) moral dilemmas as individual conflicts. Themes and sub-themes are shown in Table  3 .

Morality is providing general care

Morality plays a crucial role in the relationship of care, which cannot be demanded and cannot be avoided.

“Morality is the first hurdle we face, together with ethics and deontology. Deontologically, it is the sick person who is at the centre of the care, and morally, one should try to work in an ethical manner, understood as good behaviour…. but these concepts do not always go hand in hand” (P01).

Morality emerges as the human side of care and deals with giving importance to aspects such as knowing how to tell the truth, knowing how to answer questions on the sense and meaning of suffering, and being able to have a dialogue with the patient. Respect for patient dignity and his or her values is the manifestation of morality in the care relationship. Although nurses and physicians express it in different ways, due to the different roles they play, morality has the same meaning for both, making the care truly global.

“Morality is respect for everything, the care of the patient’s morality, the care of everything, […]. I think that all professionals should first of all respect themselves, and then give this respect to others” (N02).
“I believe that there are ways or strategies to talk about morality, but we don’t have them. This is what is missing. But you realise that it is often enough just to listen to, and when you give answers, to give these with your heart” (N06).
“If I think of morality, I think of my professional ethics, which is expressed in giving the best from a scientific point of view, and then entering into empathy with patients, so that they feel at ease in a complex path of care and, finally, in creating a relationship of trust” (P05).

Morality is knowing how to have a relationship with patients

The relationship is an essential aspect of care, intended in a moral sense, and must involve all “actors” of the care process: patients, relatives and HPs. This perspective is very clear in some interviewees:

“I believe that everything revolves around a relationship based on affection. This type of affection must be transmitted in some way at every stage. And this is done through words, gestures, physical contact […]. You must know how to be in the relationship.” (P11)
“It is difficult to abstractly establish how to behave in certain situations with real protocols. However, in my opinion, some techniques, even relational ones, can certainly help. Although, we do not all agree on this point” (P14).

Knowing how to be in the relationship, knowing how to manage it, and considering it in emotional terms emerge as ways to provide care. Some participants report that the relationship cannot become too personal, and a certain amount of professionalism must always be maintained. For this reason, the relationship is difficult and challenging and, as it is built, it must be nourished daily. Others conceive of personal involvement as a limit in the care relationship; although unavoidable, it comes with the risk of being overwhelmed.

“Involvement is always there. But it is not that kind of involvement that makes you say: “I will bring the pain of that patient home with me,” it consists of entering into a challenging and demanding relationship with that person” (N09).
“As soon as you establish a dialogue with the patient on moral issues and find out what is important to him/her, you enter into the patient’s subjective sphere which you must be able to perceive and manage” (P03).

Morality is recognizing moral principles

HPs show that they have a broader idea of the moral principles shaping the care relationship than a strictly principlist one. Nevertheless, the definition of these principles is sometimes not entirely clear. The principle of autonomy, for example, was directly mentioned only once, and yet the influence of this regulatory principle appears evident in what the interviewees reported:

“My first principle is to make people aware, to try and give a person the tools so that they can make an autonomous and independent choice” (P07).
“The principles that guide me are those of respect, of the attempt to understand the patients’ experience and trying to understand and evaluate their situation” (N10).
“Morality is respect for the patients’ way of thinking, their decisions and values, the ability to not make them suffer, to eliminate everything that is harmful by meeting their needs, even if it goes against what I think” (N08).

Relational autonomy, correctness, sincerity and humanity are among the moral principles that are most often highlighted:

“I would say, first of all, that we are talking about the human side of care. Yes, I would say the human and relational component. And then the honest side of care. Morality concerns the humanity in a relationship of care” (P12).

Morality is giving importance to dialogue and communication

Interviewees talk about morality through the different skills they use to put it into practice. These skills deal with the ability to dialogue and to listen to the patient, to give meaning to the patient’s narrative, to share his/her values, and to personalise communication exchanges; moreover, the professional awareness that telling the truth is not a univocal process strongly emerges from the interviews.

“My strategy is to listen to, to explore the dimension of the sick patient’s existence, trying to understand how much that person is still anchored in his/her life […]. The patient’s value horizon guides the communication” (P15).
“Morality has many aspects, even of a personal and cultural nature. There is the way that you conceive your own morality and that of the patient. You have to learn to talk about it” (N13).
“To explore the values of a patient, it is important to understand their life experiences, their beliefs and interpretations” (P04).
“You also need to be able to see a desire, a wish emerging from the fragments of speech of the sick person. It is important for the communication to be gradual, to understand what truth is acceptable, and to know how to communicate it. The discourse of truth is a moral discourse, for example” (P16).

Moral dilemmas as individual conflicts

All interviewees define a moral dilemma as an inner conflict to which they frequently cannot find a solution or that they cannot manage; therefore, it is not unusual that dilemmas remain unresolved and are accepted as an inevitable aspect of the healthcare profession. Some participants refer to moral dilemmas by highlighting their difficulty in reading end-of-life situations.

The narrated dilemma often touches on a very personal sphere: rather than concerning deontology or a specific ethical framework, it is embodied in the life experience of each individual professional.

“I prefer to help young people with cancer and their suffering as quickly as possible, perhaps by means of terminal sedation. On the other hand, my Christian ethics tell me: “What are you thinking? It is not up to you to decide it”. Therefore, many times my decision, though painful, is somewhere between a treatment that alleviates suffering and the respect of my Christian ethics” (P15).
“It concerned a personal situation, with my father […]. I lied to him about whether he was going to die. I felt very bad and after 25 years I still don’t know if it would have been better to tell him, he would have died anyway… If he had been one of my patients, I would have told him, but it’s different with family members…” (P12).
“I go into crisis when family members ask me not to tell the truth to the patients. I mean, if I was in their position, I would want to know, I would want to make the decisions together with the doctor. I would like to choose how to live my life to the end” (N08).
“I go into crisis when I have to say that there are no more useful tools to cure them, then I invent atypical drugs, nothing special, but in practice we continue to treat the patient to give the illusion that we are doing something” (P16).

The aim of this study was to comprehend whether and how the ethics of care informs the way HPs make sense of and handle ethical issues in palliative care.

In our findings, morality fully emerges as a multidimensional concept. Its different meanings can be summarised by the following themes: morality is providing general care; it is knowing how to have a relationship with patients; it means recognizing moral principles and giving importance to dialogue and communication. Moreover, HPs seem to perceive moral dilemmas as “inner conflicts” that they cannot manage.

Although morality arises as an unconscious and unstructured concept, it seems to play a significant role in the care relationship. No explicit reference emerges in favour of a single ethical framework used in daily clinical practice; HPs talk about ethical issues in palliative care using notions and concepts such as the caring relationship, listening, and dialogue. These aspects are strongly highlighted in the ethics of care approach, which focuses, as Leget wrote, on the caring relationship as being constituted of both patient and professional, as well as on the larger context of a person’s life [ 20 ].

Ethics emerges as an aspect of care concerning not only existential issues at the end of life, but also a number of choices throughout the entire patient care pathway. These choices deal with the patient’s comfort, body care, and preferences regarding the administration of treatments.

From our results, it emerges that HPs tend to balance patient empowerment, compassion and understanding with solicitude within the care relationship. Though compassion and solicitude are not key concepts of the ethics of care approach alone, they address specific caring attitudes described by that approach, i.e., telling the truth while keeping hope alive, respecting as much as possible the degree of patient autonomy, and meeting the patient’s spiritual needs, especially at the end of life [ 4 , 27 , 28 , 29 ].

Our results seem to confirm HPs’ need for step-by-step moral training. They tend to approach ethical issues with great emotional involvement, sometimes recounting personal events; in addition, they seem to lack the skills needed to resolve dilemmas.

Without oversimplifying matters, principlism can help in reasoning about classical ethical principles and their application to a single moral dilemma [7]; virtue ethics can help in developing moral attitudes and “practical wisdom” [30]; the ethics of care underlines the importance of intensifying relationships and enhancing empathetic involvement [23]. Taken together, these approaches can form the basis of a moral training that provides HPs with the ethical communication skills to interpret moral problems in a plural way.

As Leslie Bender [31] argues, ethics means giving importance to and focusing on care, compassion, availability, dialogue and communication, as well as learning to listen carefully to others and to pay attention to their needs.

Strengths and limitations

The research was consistently designed and conducted as a theory-driven study: the ethics of care theory formed the basis of all the steps (from the definition of the study design to the construction of the interview guide and the data analysis), and this contributed to transparency. We are fully aware that biases may arise from a prestructured qualitative research design [35], but the choice to conduct this type of study depended on several methodological choices and organisational constraints: the scarcity of qualitative research in this context, the time and resources available, the purpose of demonstrating the relevance of the ethics of care in practice, and a data analysis process coherent with that purpose.

Among the methodological limitations, we should highlight the following. Interviews were conducted by one interviewer only; however, data were analysed and discussed by a multidisciplinary team of researchers, which helped ensure scientific rigour and intersubjective corroboration. Since the study included only a convenience sample of sixteen participants, we could not evaluate saturation. Nonetheless, we recruited both physicians and nurses from ten different hospital wards, allowing us to maximise the variety of professional perspectives included in the study.

The results of this study suggest that, for healthcare professionals, recognizing moral principles, dealing with ethical dilemmas and giving importance to dialogue and communication are paramount in the care relationship.

This requires developing and implementing effective educational programs focused on step-by-step moral training. Such a program should include at least the following objectives: empowering HPs with the ability to recognise ethical dilemmas and analyse conflicts; promoting sensitivity to the principles, values, goals and wishes of patients; and ensuring the ability of HPs to come to reasoned decisions in daily clinical practice [32, 33, 34].

Different ethical approaches can help in reaching these objectives; the ethics of care framework also encompasses the belief systems of HPs and, moreover, allows the values of patients and HPs to come to light through the relationship of care.

Availability of data and materials

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

HPs: Healthcare Professionals

World Health Organization. WHO definition of palliative care. http://www.who.int/cancer/palliative/definition/en/. Accessed 14 Sept 2018.

Sepúlveda C, Marlin A, Yoshida T, et al. Palliative Care: the World Health Organization's global perspective. J Pain Symptom Manage. 2002;24(2):91–6.


Back AL, Anderson WG, Bunch L, et al. Communication about cancer near the end of life. Cancer. 2008;113(7 Suppl):1897–910.

Fredriksson L, Eriksson K. The ethics of caring conversation. Nurs Ethics. 2003;10(2).

Krahn TM. Care ethics for guiding the process of multiple sclerosis diagnosis. J Med Ethics. 2014;40(12):802–6.

Mortari L. Filosofia della cura. Milano: Raffaello Cortina; 2015.


Beauchamp T, Childress J. Principles of biomedical ethics. 7th ed. Oxford: Oxford University Press; 2012.

Huxtable R. For and against the four principles of biomedical ethics. Clin Ethics. 2013;8(2/3):39–43.

Gardiner P. A virtue ethics approach to moral dilemmas in medicine. J Med Ethics. 2003 Oct;29(5):297–302.


Hursthouse R, Pettigrove G. "Virtue Ethics", The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/win2018/entries/ethics-virtue/ , Accessed 14 Sept 2018.

Tong R, Williams N. "Feminist Ethics", The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/win2018/entries/feminism-ethics/ , Accessed 14 Sept 2018.

Held V. The ethics of care. In: Copp D, editor. Handbook of ethical theory. Oxford: Oxford University Press; 2007. pp. 537–566.


Pettersen T. The ethics of care: normative structures and empirical implications. Health Care Anal. 2011;19(1):51–64.

Edwards SD. Three versions of an ethics of care. Nurs Philos. 2009;10:231–40.

Held V. Justice and care: essential readings in feminist ethics. Oxford: Westview Press; 1995.


Lachman VD. Applying the ethics of care to your nursing practice. Ethics Law Policy. 2012;21:2.

Branch WT Jr. A piece of my mind. The ethics of patient care. JAMA. 2015;313(14):1421–2.

Tronto J. Moral boundaries: a political argument for an ethic of care. New York: Routledge; 1993.

Hermsen MA, ten Have HA. Practical ethics of palliative care. Am J Hosp Palliat Care. 2003;20(2):97–8.

De Vries M, Leget C. Ethical dilemmas in elderly cancer patients: a perspective from the ethics of care. Clin Geriatr Med. 2012;28:93–104.

van Nistelrooij I, Visse M, et al. How shared is shared decision-making? A care-ethical view on the role of partner and family. J Med Ethics. 2017;43:637–44. https://doi.org/10.1136/medethics-2016-103791 .


Kittay EF, Feder EK, editors. The subject of care: feminist perspectives on dependency. Lanham: Rowman & Littlefield; 2002.

Schuchter P, Heller A. The care dialog: the “ethics of care” approach and its importance for clinical ethics consultation. Med Health Care Philos. 2018;21:51–62.

Caelli K, Ray L, Mill J. ‘Clear as Mud’: toward greater clarity in generic qualitative research. Int J Qual Methods. 2003;2(2):1–13. https://doi.org/10.1177/160940690300200201.

Kvale S. Doing Interviews. London: SAGE; 2007.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. Available from: http://eprints.uwe.ac.uk/11735.

Noddings N. The challenge to care in schools: an alternative approach to education. New York: Teachers College Press, Columbia University; 1992.

Fotaki M. Why and how is compassion necessary to provide good quality healthcare? Int J Health Policy Manag. 2015;4(4):199–201.

Olsman E, Willems D, Leget C. Solicitude: balancing compassion and empowerment in a relational ethics of hope – an empirical ethical study in palliative care. Med Health Care Philos. 2016;19:11–20.

Ricoeur P. Das Selbst als ein Anderer. Translated from the French by Jean Greisch in collaboration with Thomas Bedorf and Birgit Schaaff. 2nd ed. München: Wilhelm Fink; 2005. [Orig.: Soi-même comme un autre. Paris: Seuil; 1990.]

Bender L. Un’analisi femminista della morte medicalmente assistita. In: Faralli C, Zullo S, editors. Questioni di fine vita. Riflessioni bioetiche al femminile. Bologna: Bononia University Press; 2008.

EAPC Steering Group on Medical Education and Training. Recommendations of the EAPC for the development of postgraduate curricula leading to certification in Palliative Care, Report of the EAPC taskforce on medical education, EAPC; 2009.

Svantesson M, Silén M, James I. It’s not all about moral reasoning: understanding the content of moral case deliberation. Nurs Ethics. 2018;25(2):212–29.

Heidenreich K, Bremer A, Materstvedt LJ, et al. Relational autonomy in the care of the vulnerable: health care professionals’ reasoning in moral case deliberation (MCD). Med Health Care Philos. Published online 14 December 2017.

Miles MB, Huberman AM. Qualitative data analysis: An expanded sourcebook. 2nd ed. London: SAGE; 1994.


Acknowledgements

The authors wish to thank the management of the following Reggio Emilia hospital wards: Oncology, Hematology, Internal-Medicine Oncology, Nephrology, Pneumology, Infectious Diseases, Intensive Care, Long-term Care, Cardiology, Rehabilitation, and Obstetrics and Gynecology, for allowing this study to be carried out. The authors would also like to thank all the healthcare professionals who kindly participated in this study, giving their time, experiences, and insights.

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Authors and affiliations

Unit of Bioethics, Azienda USL-IRCCS di Reggio Emilia, Via Amendola 2, 42122, Reggio Emilia, Italy

Ludovica De Panfilis

Psycho-oncology Unit, Azienda USL-IRCCS di Reggio Emilia, Reggio Emilia, Italy

Silvia Di Leo

Past President Italian Society of Palliative Care, Milano, Italy

Carlo Peruselli

Azienda USL-IRCCS di Reggio Emilia, Reggio Emilia, Italy

Luca Ghirotto

Palliative Care Unit, Azienda USL-IRCCS di Reggio Emilia, Reggio Emilia, Italy

Silvia Tanzi

Clinical and Experimental Medicine PhD Program, University of Modena and Reggio Emilia, Modena, Italy


Contributions

LDP and SDL made a substantial contribution to the concept and design of the work, analysed and interpreted the data, and drafted the article; LG designed the work and analysed data; CP revised the article critically and approved the version to be published; ST made a substantial contribution to the concept of the work and analysed data. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Ludovica De Panfilis.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Ethics Committee of the Provincial Health Authority of Reggio Emilia (Protocol no. 2015/0003925, Feb. 19, 2015). All participants provided signed informed consent to participate in the qualitative interviews.

Consent for publication

Not Applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

De Panfilis, L., Di Leo, S., Peruselli, C. et al. “I go into crisis when …”: ethics of care and moral dilemmas in palliative care. BMC Palliat Care 18, 70 (2019). https://doi.org/10.1186/s12904-019-0453-2

Download citation

Received: 08 October 2018

Accepted: 31 July 2019

Published: 09 August 2019


  • Palliative care
  • Ethics of care
  • Communication
  • Qualitative research



Case Study on an Ethical Dilemma

Published online by Cambridge University Press:  20 June 2022

Mr AB is a 58-year-old male with a diagnosis of schizoid personality disorder. An articulate and intelligent man, AB derived happiness and contentment from his work. Due to workplace conflicts, he was asked to resign several years ago and has not worked since. Mr AB then found a sense of purpose in life by looking after his elderly parents. His parents sadly died a few years ago, and since then he has been living on his own. He has never married. AB has one brother, who helps him with shopping and groceries. Prior to this admission, AB had been admitted once, a few years ago, when he was diagnosed with depressive disorder.

Mr AB was admitted last year with profound self-neglect. He was detained under Section 2 of the Mental Health Act, as he was not eating or drinking and was not engaging with services. With the initial diagnosis being recurrent depressive disorder, AB was commenced on treatment for the same and eventually received ECT, to which he was strongly opposed. Following six sessions of ECT, AB bargained with the team that he would start eating and drinking if ECT was stopped, and he kept his word. He then requested a transfer to a different ward and consultant, with whom he shared that he did not agree with the diagnosis of depression or schizoid personality disorder. AB expressed that he did not find his life worth living and wanted to be left alone. He strongly believed that his liberty to make decisions about his own life was being unfairly taken away by the NHS, and he accused professionals of trying to protect themselves. No evidence of serious mental illness (SMI) was found at this stage. Following several discussions, AB was discharged home. He was, however, readmitted by his brother within a couple of days, following disengagement and self-neglect, and again no evidence of SMI was found.

A capacitous patient, in the absence of serious mental illness, puts forth the argument that merely because his way of living and his opinions on life and death differ from those of society does not mean that his rights over his own life can be taken away from him. He, however, struggles to acknowledge that, as fellow humans, we are strongly inclined to intervene and try to stop anyone from taking their own life.

A challenging case that raises several questions surrounding medical ethics. The team is now looking into guardianship to ensure the welfare of the patient.


Anusha Akella and Kyaw Moe. BJPsych Open, Volume 8, Supplement S1. DOI: https://doi.org/10.1192/bjo.2022.351


Using case studies of ethical dilemmas for the development of moral literacy: towards educating for social justice

Journal of Educational Administration

ISSN: 0957-8234

Article publication date: 10 July 2007

Purpose

The purpose of this paper is to focus on a case study, framed as an ethical dilemma. It serves as an illustration for the teaching of moral literacy, with a special emphasis on social justice.

Design/methodology/approach

Initially, the paper provides a rationale for the inclusion of case studies emphasizing moral problems in university teaching. It briefly discusses the strengths and weaknesses of using these types of case studies in the classroom. In particular, it explains how both the rational and emotional minds can be addressed through the use of these moral dilemmas by introducing two concepts: Multiple Ethical Paradigms and Turbulence Theory. Following an explanation of the two concepts, an illustrative case is provided. This case deals with aspects of No Child Left Behind legislation that narrow the curriculum for some students. The underlying social justice issue of this case is raised. The dilemma is followed by a discussion of how to resolve or solve it by raising questions that relate to the Multiple Ethical Paradigms and Turbulence Theory.

Findings

It is hoped that university teachers will find that case study analysis, through the use of the two concepts of the Multiple Ethical Paradigms and Turbulence Theory, provides a meaningful and helpful way to promote moral literacy.

Originality/value

It is recommended that this kind of case study, framed through the use of a paradox, be taught not only in educational ethics but also infused into many other courses in the university curriculum.

  • Social justice
  • Case studies

Poliner Shapiro, J. and Hassinger, R.E. (2007), "Using case studies of ethical dilemmas for the development of moral literacy: Towards educating for social justice", Journal of Educational Administration, Vol. 45 No. 4, pp. 451-470. https://doi.org/10.1108/09578230710762454


Copyright © 2007, Emerald Group Publishing Limited


Ethics Case Study – 7: Moral Dilemma

Rajiv is an IAS aspirant. He studied at two premier institutions and worked for a while in an IT company. He quit the job and started preparing for the civil services exams. In his first attempt he wrote Mains but could not qualify for the personality test. In the next two attempts, however, he reached the interview stage, but fate had it that his name did not appear in the final list. In all three attempts he had scored low in Mains, and in the two interviews his score was average, if not poor.

Coming under General Merit, Rajiv had only four attempts to get into the IAS. For his last attempt, he decided to take a break of one year and prepare extremely well, leaving nothing to fate. By then he had spent five years preparing for this exam, with no job in hand.

He did prepare well and sailed easily through the Preliminary and Mains exams. For his final interview, Rajiv prepared himself very thoroughly. He read widely. He contacted his peers and well-wishers, talked to them extensively, and took feedback on his body language and communication skills. He took mock tests at prominent institutions and got very positive feedback. His confidence was at an all-time high. By the time the interview call letter came, Rajiv was fully ready to face his final test to realize the dream of becoming an IAS officer.

The day before his interview, Rajiv talked to his parents, girlfriend and teachers and sought their best wishes. He had a sound sleep too.

His interview was scheduled for the second session, i.e., in the afternoon. On the morning of his interview, Rajiv was calm and composed and had a friendly chat with fellow aspirants staying together in a friend’s room.

He had his lunch and left the room on his bike half an hour before the scheduled time of his appearance at the UPSC office.

Rajiv was riding his bike with many thoughts on his mind. The road was almost empty. As he was riding, just in front of him, a speeding bike collided with the road divider. Seeing this, Rajiv stopped his bike and went up to the accident scene. A man, crying in pain, was lying in a pool of blood, and a girl child, around five years old, was lying unconscious next to him. Rajiv looked around for help, but two or three cars sped away without stopping.

Rajiv had to be at the UPSC office in 10 minutes. If not, he would forever lose his dream of becoming an IAS officer.

In this situation, what should Rajiv do? Justify your answer.

  • Research article
  • Open access
  • Published: 06 November 2018

Impact of moral case deliberation in healthcare settings: a literature review

  • Maaike M. Haan, ORCID: orcid.org/0000-0001-8430-2564
  • Jelle L. P. van Gurp
  • Simone M. Naber
  • A. Stef Groenewoud

BMC Medical Ethics, volume 19, Article number: 85 (2018)


An important and supposedly impactful form of clinical ethics support is moral case deliberation (MCD). Empirical evidence, however, is limited with regard to its actual impact. With this literature review, we aim to investigate the empirical evidence of MCD, thereby a) informing the practice, and b) providing a focus for further research on and development of MCD in healthcare settings.

A systematic literature search was conducted in the electronic databases PubMed, CINAHL and Web of Science (June 2016). Both the data collection and the qualitative data analysis followed a stepwise approach, including continuous peer review and careful documentation of our decisions. The qualitative analysis was supported by ATLAS.ti.

Based on a qualitative analysis of 25 empirical papers, we identified four clusters of themes: 1) facilitators and barriers in the preparation and context of MCD, i.e., a safe and open atmosphere created by a facilitator, a concrete case, commitment of participants, a focus on the moral dimension, and a supportive organization; 2) changes that are brought about on a personal and inter-professional level, with regard to professionals’ feelings of relief, relatedness and confidence; understanding of the perspectives of colleagues, one’s own perspective and the moral issue at stake; and awareness of the moral dimension of one’s work and of the importance of reflection; 3) changes that are brought about in caring for patients and families; and 4) changes that are brought about on an organizational level.

Conclusions

This review shows that MCD brings about changes in practice, mostly for the professional in inter-professional interactions. Most reported changes are considered positive, although challenges, frustrations and absence of change were also reported. Empirical evidence of a concrete impact on the quality of patient care is limited and is mostly based on self-reports. With patient-focused and methodologically sound qualitative research, the practice and the value of MCD in healthcare settings can be better understood, thus making a stronger case for this kind of ethics support.


In healthcare, professionals are frequently confronted with morally complex and sometimes tragic situations in which difficult treatment and care decisions with far-reaching consequences have to be made [1]. Clinical ethics support (CES) helps in dealing with these complex issues. Over the past years, interest in CES has increased worldwide [2]. CES currently takes many forms. A useful distinction was made by Rasoal et al. [2], who distinguished between ethics support services using a top-down and a bottom-up approach. Examples of a top-down approach are clinical ethics consultations and ethics committees, which are more common in the United States. In this top-down approach, according to Rasoal et al., the involved ethicist is generally attributed an expert position and advises professionals, although the actual expertise in clinical ethics consultation and the exact role of the ethicist are debated [2]. The outcomes of these consultations are meant to benefit patients and families, whereas healthcare professionals profit only indirectly from participating [3]. In contrast, group deliberations (such as moral case deliberation, ethics rounds, reflections or discussion groups) are an example of ethics support services with a bottom-up approach. These services have been reported mostly from European communities. Here, the ethicist facilitates the conversation without having an advisory role. The focus is on the reflection process of healthcare professionals, more than on a decision or solution for a clinical problem [3].

Moral case deliberation fits the latter approach. MCD is a collaborative meeting in which a group of healthcare professionals jointly reflects on a concrete moral question, issue or dilemma. Essentially, and in contrast to other, more informal kinds of meetings, a moral case deliberation is structured by a conversation method and moderated by a facilitator, often an ethicist [4, 5, 6, 7, 8, 9]. For a recent case example of how a specific conversation method of MCD works in practice, we refer to Tan et al. [10]. During such a deliberation, as well as during similarly organized group sessions, professionals have the opportunity to freely articulate and share their stories, experiences, opinions and perspectives [9, 11, 12, 13, 14, 15, 16, 17]. For the remainder of this paper, we will use the term moral case deliberation (from here on referred to as “MCD”) as an umbrella term for all variations of group deliberations with a specific focus on moral issues in healthcare.

Silén et al. [ 18 ] point to the importance of evaluating CES services and question whether it is defensible to conduct group deliberations that are time-consuming, without some form of proof of value for the healthcare practice. In recent years, research has been conducted on evaluating group deliberations in terms of quality of conversation [ 19 , 20 , 21 ]. However, thorough empirical evidence with regard to the impact of MCD seems limited [ 18 , 22 ]. For the existing practice of MCD in healthcare organizations, it is necessary to substantiate its value, partly grounded in empirical evidence.

This literature review was conducted to gain insight into what has already been investigated in previous studies of the impact of MCD. The research question central to this review is the following: what is the impact of moral case deliberation with groups of healthcare professionals in a clinical setting? With this literature review, we aim to investigate the empirical evidence of MCD, thereby a) informing the practice, and b) providing a focus for further research on and development of MCD in healthcare settings.

This review’s research question focuses on the impact of moral case deliberation by groups of healthcare professionals. Here, we define impact as the changes that are brought about by participating in MCD. Since changes can be operationalized in several ways, we chose to integrate both quantitative and qualitative papers, following the integrative review approach of Whittemore and Knafl [23]. We adopted a systematic, stepwise approach, including continuous peer review and careful documentation of our decisions, in order to comply with the prescribed analytic honesty, i.e., making the analysis process transparent [23]. A PRISMA flow chart of the research process can be found in Fig. 1. The review was registered in the PROSPERO database (CRD42016043531), an international prospective registry of systematic reviews, in July 2016.

Figure 1: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram

Literature search

A literature search was conducted in the electronic databases PubMed, CINAHL and Web of Science (All Databases) in June 2016 (see Table 1). As ‘moral case deliberation’ is used broadly as a description, and possibly similar forms of ethics support services may be described with different terminologies [24], several equivalent terms for moral deliberation within groups of healthcare professionals were piloted and then used in the search. We deliberately included ‘clinical ethics consultation’ in the search string. This enabled us to explore whether moral case deliberation is conducted within the Anglo-Saxon clinical ethics consultation practice and whether or not it is justified to strictly separate this practice from the MCD practice. Furthermore, in PubMed and CINAHL, the search was narrowed with synonyms and other words relating to the ‘impact’ of MCD. Locating as much research on our topic as possible is in line with Hawker et al.’s view of a literature search, thus reducing the number of relevant insights missed because of vague descriptions or differences in terminology [25]. All search queries were limited to publications in English, German or Dutch. No date restrictions were used.
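For illustration only: the authors’ exact search queries appear in their Table 1, which is not reproduced here. The short sketch below shows how a Boolean query of the kind described, combining synonyms for moral group deliberation with impact-related terms, might be assembled; the term lists are assumptions based on the synonyms named in this section, not the actual strings used in the study.

```python
# Hypothetical sketch: assembles a Boolean search query of the kind described
# in the text. The term lists are illustrative assumptions, not the authors'
# actual search strings (those are given in the paper's Table 1).

deliberation_terms = [
    '"moral case deliberation"',
    '"ethics rounds"',
    '"ethics reflection"',
    '"clinical ethics consultation"',  # deliberately included, per the text
]
impact_terms = ["impact", "effect*", "outcome*", "evaluat*"]

# Synonyms for each concept are joined with OR; the two concepts are then
# combined with AND, as is conventional for PubMed/CINAHL Boolean queries.
query = "({}) AND ({})".format(
    " OR ".join(deliberation_terms),
    " OR ".join(impact_terms),
)
print(query)
```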

Data extraction

After duplicate removal, the retrieved records were screened for relevance. Preceding the formal screening against eligibility criteria, all authors evaluated a random sample of 100 titles and abstracts of the retrieved records in order to refine the criteria for inclusion and exclusion and to test them for usefulness (see Table 2). Since our research question asks for a reported impact of MCD, we focused on empirical evidence only and excluded papers if the impact was only (theoretically) assumed. The retrieved records were not assessed on their (methodological) quality. In the first screening step, the retrieved records were divided among two researcher duos. Each duo independently screened the records for relevance based on title and abstract. To prevent bias, the author(s) and publication year were not known to the screeners. This step resulted in several sets of records, to keep the screening process manageable (see Fig. 1). Records about clinical ethics consultation were kept in a specific set to verify our assumption that the clinical ethics consultation practice is markedly different from the MCD practice. In the second screening step, the full texts of selected records were screened by the first author (MH) and a second screener, again discussed in research duos and, in case of doubt, by all authors until consensus was reached.

Data analysis

To do justice to the variety of methods used in the retrieved papers and the various forms of impact reported, we applied a stepwise qualitative analysis using the Computer Assisted Qualitative Data Analysis Software ATLAS.ti. The first paper was coded in an open way by all authors. An initial codebook was developed based on this open coding (MH). The subsequent 15 papers were coded by dividing them into different rounds over three different researcher duos, with MH being part of every duo. During this process, the codebook was continuously developed and adapted. The remaining nine papers were coded by MH and checked by the other authors on a category-level. In case of a difference of opinion, discussion continued until a consensus was reached. Lastly, the authors formed themes based on the codebook, which were related to each other and then clustered. In the development of this clustering, the themes were further refined.

Characteristics of the included studies and group conversations

Initially, 5196 records were retrieved. After duplicate removal, 3822 studies remained for screening of the title and abstract (see Fig. 1). After thorough screening, and sometimes full-text reading, none of the papers in the set of studies about clinical ethics consultation was included in the qualitative data analysis, mainly because those studies concerned expert advice by consultants instead of deliberation among healthcare professionals. The stepwise screening process led to a final inclusion of 25 empirical papers, twenty-one from Europe and four from the US (see Table 3 for study characteristics). The included studies used both quantitative and qualitative research methods, including surveys, focus groups, interviews, observational studies, content analysis of conversation protocols, audio/visual recordings of group deliberations, or a mixed-methods design. Participants in these conversations often discussed a patient case in which an ethical issue arose, for example, when to withdraw treatment of a very ill patient or how to address an aggressive or noncompliant client. Conversations also concerned other kinds of dilemmas in patient care, such as communication issues between staff and nurses. The included studies contained both prospective and retrospective case discussions. Most evaluative studies were based on self-reports of participants in group deliberations, sometimes with a baseline-and-intervention design.

Clusters of themes

Our findings are divided into four clusters of themes related to impact, which are represented in Fig. 2: 1) facilitators and barriers in the preparation and context of MCD; 2) changes that are brought about on a personal and inter-professional level; 3) changes that are brought about in caring for patients and families; and 4) changes that are brought about on an organizational level.

Figure 2: Clusters of the impacts of MCD

Facilitators and barriers in the preparation and context of MCD

A safe and open atmosphere created by a facilitator

A moral case deliberation should be guided by a facilitator who is neutral with regard to the issue that is being discussed and who is not involved with or part of the team, to guarantee an atmosphere of trust [ 14 , 26 , 27 ]. Mutual trust may take some time, especially when participants stem from different disciplines and work in different wards [ 9 ]. It is deemed important that every participant gets the opportunity to speak out, without others feeling threatened or accused [ 14 , 28 ].

A concrete case

The case to be discussed has to be concrete to allow participants to relate. Deliberations with only little reference to daily practice are usually disappointing for participants and are sometimes considered a waste of time [ 29 ]. In the study by Appelbaum et al. in 1981 [ 28 ], ongoing patient cases were explicitly not discussed to stimulate the participants’ ability to think abstractly. Nowadays, however, it is considered important to connect deliberations to daily practice in the wards, thus stimulating professionals to not revert ‘back to business as usual’ after the deliberation [ 8 ].

Commitment of participants

For an MCD to be successful, it is important for the participating healthcare professionals to be committed and cooperative [14]. Discontinuity in attendance and the absence of team members are seen as barriers [7, 8, 13], preventing implementation of what is discussed or decided [8]. Adequate preparation and information promote involvement in the discussion [7]. The interdisciplinary character of such deliberations is often experienced as positive [15, 26, 30, 31]. However, this may also hinder discussion because of differences of opinion regarding ethical, legal, social or medical aspects [31].

A focus on the moral dimension

In contrast to, for example, practical, legal, economical or psychological issues, a moral issue concerns the question: “What is a ‘good’ thing to do in this particular case/situation?”. Should we consider discontinuation of life-sustaining treatment for this patient? What does it mean to provide ‘good care’ to this aggressive client? Is it appropriate to treat this woman against her will? The moral issues in a case should be central to the deliberation. It is reported that the use of a method for structuring the conversation may be of help here. It also guarantees that all relevant perspectives are heard and that morally relevant aspects are weighed and dealt with [ 16 , 32 ].

A supportive organization

An organization supportive of MCD is one in which MCD is supported and anchored both top-down and bottom-up. Support from upper management is essential [4], but local coordinators should also be convinced of the importance of MCD and coordinate the scheduling, for example, in a ward’s action plan [7, 26, 32]. Deliberations should not be organized on an ad hoc basis only but are preferably integrated into an existing organizational structure [4], for instance, by a scheduling format [27]. Dauwerse et al. [33] emphasized the importance of structurally organizing MCD, as this prevents attention for ethics from being superficial.

Changes on a personal and inter-professional level

Based on our qualitative analysis, we identified several themes and subthemes in this second cluster, which are illustrated in Fig. 2. The changes are related to professionals’ feelings, an improved understanding, and awareness of the moral dimension of one’s work.

Feelings of professionals

Feeling relieved of the burden of moral issues

MCD functions as a forum to speak freely about concerns without being judged and without the primary goal of coming to a concrete result or decision [ 8 , 29 ]. It can be a relief for participants to “finally be[ing] able to talk about ethical issues rather than seeing them buried in concerns about clinical care” [ 28 ] (p.559). In addition, doctors reported feeling relieved by being able to share the responsibility for a decision with a multidisciplinary team [ 15 ]. Finally, several studies relate participating in an MCD to the reduction of ‘moral distress’ [ 8 , 9 , 11 , 34 ]. It was found that participants reported feeling less emotionally distressed or captured by the dilemma [ 5 ] and that MCD reduced their moral burden, especially in complex cases [ 9 , 15 , 34 ]. It was also found that participants learned to avoid focusing on solutions [ 4 ]. It can be unburdening to talk about dilemmas without having to reach a decision or solution, and to be able to acknowledge the sometimes tragic circumstances in care practice [ 8 ]. However, Tanner et al. [ 15 ] point out that some professionals might feel an increase in burden due to a lack of mutual agreement, indecisiveness, or having to take multiple perspectives into account.

Feeling related to other professionals

As a result of freely sharing experiences and opinions during MCD [ 17 , 32 ], professionals feel more related to each other [ 14 ] and have a more open inter-professional connection [ 4 , 8 , 32 , 33 ]. In MCDs, a sense of togetherness is experienced, as participation implies a willingness to both ask and give support [ 6 , 12 ]. This often is a starting point for trust [ 7 , 12 ] and not feeling alone in your concerns as a professional [ 6 , 15 , 29 , 32 , 35 ]. Instead of struggling alone, team members work out a dilemma together [ 32 ]. Söderhamn et al. [ 17 ] found that participants, as well as outsiders, observed that participants dared to “speak their minds” more after MCD. In addition, more informal communication in the wards and at the bedside has been reported [ 8 , 11 ]. Another illustration of an increased sense of cohesion within a team is that professionals felt freer to address one another more often and earlier with moral issues [ 7 ]. In another study, however, participants perceived a gap between themselves and their colleagues who had no experience with MCD, which complicated the dialogue among colleagues [ 8 ].

This relatedness is also illustrated in the way in which a team works together in caring for patients after MCD. In patient care, professional action is often accompanied by emotions – for example, doctors’ loneliness in trying to make the right decisions or nurses’ feelings of powerlessness and frustrations [ 29 , 35 ]. Svantesson et al. [ 35 ] found that a group deliberation confirmed participants’ observations concerning how far doctors and nurses stand apart from each other. However, several studies have illustrated a relation between MCD and improved inter-professional collaboration. Group deliberations stimulate awareness of the need for uniformity regarding treatment policy [ 6 ]. Different medical professionals adapted and improved their interdisciplinary discussions based on earlier experiences in MCD [ 15 ]. Some doctors became aware of the opportunity and their responsibility to explain their motives for continuing life-sustaining treatment in MCD [ 29 ]. A more transparent communication about goals and decisions was seen as a possibility to better attend patients due to their improved understanding of the medical situation [ 15 ]. Decisions were more easily accepted and carried out [ 26 ]. Furthermore, nurses knew how to raise a theme in an interdisciplinary context more effectively than before participating in MCD. Doctors tended to respond to nurses who raised a problem sooner than before participating [ 15 ]. It can be a positive and empowering experience for professionals, especially nurses, when voicing one’s opinions is not taboo anymore and instead, one’s perspective is taken seriously and understood by others in the decision making process [ 6 , 7 , 15 , 26 , 32 , 36 ]. Some studies have indicated that MCD leads to thinking more about personal involvement and responsibilities. This includes both setting boundaries to prevent feeling too involved [ 6 ], as well as loosening boundaries by not blaming others but sharing responsibility instead [ 35 ].

Feeling confident

Several studies showed that professionals reported feeling more confident in their work [5, 32, 34], for example, through finding their own approach validated during MCD or through hearing that others feel the same way about aspects of certain cases [11]. Additionally, understanding all alternatives and weighing them by means of a conversation method or format with specific steps reassures professionals that the decision-making is sound [16] and gives them “peace of mind” [9]. After a deliberation, participants are more inclined to be straightforward and transparent towards colleagues or patients [6]. Seeing alternatives and developing a critical attitude is also associated with confidence to act in future situations [29]. In one study, this resulted in professionals being more assertive and even firm with noncompliant patients [35]. Achieving consensus in the group can be a positive experience for participants and may even be felt as a need or wish, especially in difficult cases [30, 35, 36]. Participants in the study of Bernthal et al. [16] reported the deliberation method as being effective for achieving such consensus about how to act.

However, a deeper understanding of a problem made some professionals considerably less certain of the validity of their own approach [ 28 ]. When MCD produces more questions than answers, professionals who seek consensus or concrete solutions for problems directly related to their daily practice might become disappointed and frustrated [ 8 , 29 , 31 ]. An MCD will be more successful when participants accept that ‘easy’ answers on how one should act are uncommon and realize that it can still be safe to see different alternatives in a case without reaching a consensus [ 14 ]. Deviating from habits or existing policies can be a challenge for participants [ 9 ]. Additionally, Van der Dam et al. [ 8 ] observed a difference in confidence that was related to a difference in the ability to talk about moral issues in the group. Especially in the first meetings, professionals who were morally more competent felt frustrated and impatient in an MCD with less competent colleagues. Such insecurity can prevent informal communication on moral issues. This example stresses the importance of a safe atmosphere in MCD.

Understanding by professionals

Speaking and listening to each other in an MCD not only changes feelings, but also has an impact on one’s understanding. We identified three types of understanding by professionals: understanding the perspectives of colleagues, understanding one’s own perspective and understanding the moral issue at stake.

Understanding the perspectives of colleagues

Multidisciplinary MCDs are considered a helpful and positive learning experience [ 13 , 32 ]. In line with the findings about feeling more related with each other, during MCDs, professionals get to better understand one another’s considerations and actions [ 7 , 17 , 32 ]. Professionals become more familiar with each other’s daily work, values, norms and moral struggles [ 8 , 9 , 26 ]. For some participants, it is an eye-opener that colleagues struggle with moral issues as well and in a variety of ways [ 9 , 17 ]. Furthermore, professionals learn to acknowledge, appreciate and respect the opinions of colleagues and patients to a greater extent [ 4 , 16 , 17 ]. MCD helps them to relate to viewpoints that are not necessarily their own, thus developing a broader perspective on the – sometimes seemingly simple – case at hand [ 4 , 5 , 6 , 7 , 8 , 11 , 12 , 13 , 14 ]. This will be elaborated further in the paragraph ‘ Understanding the moral issue at stake’ . In addition, by improving professionals’ mutual understanding and understanding of a decision, MCD reduces conflicts [ 4 , 6 , 15 , 32 ] and leads to more solidarity, respect, tolerance, collegial support and cooperation [ 17 , 32 ].

However, one study reported participants struggling to put themselves in someone else’s position [ 9 ]. Another study found that a difference in cultural backgrounds was seen as a threat instead of an enriching point of view [ 32 ].

Understanding one’s own perspective

MCD supports professionals in critically reflecting on and becoming more aware of their own assumptions, intentions, and actions regarding patient cases [ 4 , 7 , 14 , 17 , 35 ]. This was reported, for instance, with regard to verbal and nonverbal behavior towards (aggressive) patients [ 17 , 35 ]. According to Van der Dam et al. [ 9 ], participants developed “a more exploratory attitude” (p. 129). Instead of following old routines and acting on ‘automatic pilot’, professionals are more inclined to question their practices or previous understandings of situations [ 17 , 26 , 32 , 35 ]. As a result, nuances can be applied to personal opinions [ 6 ].

Understanding the moral issue at stake

MCD is not only considered helpful to better understand the perspectives of colleagues and see their struggles with moral issues in general. Several studies have shown that a structured MCD approach helps to clarify and comprehend the specific moral problem at stake [ 9 , 14 , 27 , 29 , 34 ]. Weighing new information and different arguments – including pros and cons – generally offers a more integrated and holistic view [ 29 , 32 ]. Instead of working towards ‘the’ right answer or a concrete solution, healthcare professionals learn to see the complexity and multidimensionality in cases [ 4 , 9 , 35 ]. However, two studies showed that MCD did not lead to new insights or questions for participants, or to a lesser extent than was expected [ 35 , 36 ].

According to some authors, it is this variety of perspectives in the joint deliberation that enhances the moral investigation of the case [ 8 , 9 ], which is believed to positively influence the quality of care [ 9 ]. Van der Dam et al. [ 8 ] suggested that reflecting by yourself or with only your own (mono-disciplinary) colleagues lacks this richness of different perspectives. Grönlund et al. [ 12 ] observed that through multi-perspective dialogue, new ways of thinking about the specific patient and his or her situation emerged. In general, MCD seems to provide a better understanding of responsibilities and ethical issues in patient care [ 4 , 11 , 13 , 31 , 32 , 35 ]. Some participants develop new ways of thinking about moral problems [ 28 ] – especially more systematic and critical approaches [ 4 ]. Such an increased understanding can lead to new or better solutions regarding patient cases [ 7 , 32 ]. However, in several studies, little or no change in opinion about patient cases was reported after an MCD [ 14 , 15 , 16 , 28 , 31 ].

Awareness of the moral dimension of one’s profession

We identified two types of awareness: awareness of the moral dimension of caring and awareness of the importance of reflection.

Awareness of the moral dimension of caring

Participating in MCD results in more attention and more sensitivity to moral issues in general [ 4 , 15 , 17 , 32 ]. Participants seemed to think more about reasons, arguments and “gray areas” in their work [ 4 , 14 ]. Several studies report that group deliberations stimulated creativity in thinking, which resulted in alternative ideas and possibilities [ 8 , 9 , 12 , 32 ]. Recognizing and articulating moral issues can be hard for professionals, as it is sometimes assumed that such issues only have to do with ‘difficult patients’ in the wards. However, MCD helps participants to see the variety of moral issues in their professional practice (from everyday problems to managerial questions) and provides insights regarding the moral complexity in seemingly simple or practical cases [ 4 ]. For some participants, it became easier over time to write down focused cases [ 9 ]. According to the categorization by Dauwerse et al. [ 33 ], so-called “explicit ethics support”, which includes MCD, places ethical issues and the ethical dimension of care structurally ‘on the agenda’.

Additionally, in multiple studies, MCD is related to the improvement of one’s competence in addressing and managing moral issues [ 5 , 11 , 17 , 32 , 34 ], for example, by dealing with these issues quickly, more fully and without frustration [ 12 , 13 ]. Some professionals reported that they felt it became easier to contact their team leader in case of future problems or ideas [ 7 ]. As participants learned to join in a moral dialogue, their moral and reasoning skills were trained (e.g., listening, postponing initial judgments, not primarily wanting to convince others, thinking through a dilemma and asking questions) [ 4 , 5 ]. It seems that ethics education correlates with a greater sense of moral agency, but as Wocial et al. [ 11 ] indicated: “It is not clear (…) whether participation in UBECs [unit-based ethics conversations] leads nurses to act on their moral agency, or if those who are more likely to act on their sense of moral agency are more likely to attend a UBEC.” (p. 53).

Awareness of the importance of reflection

In several studies, participants in MCD stressed the importance of, and the need for, timely and regularly scheduled reflection on their work [ 5 , 7 , 11 , 17 , 35 ], as opposed to immediately acting in a complex situation [ 6 ]. Initially, participants may feel ambiguous about MCD, but participating in deliberation creates an appreciation of MCD [ 6 , 9 ]. In the study of Söderhamn et al. [ 17 ], the combination of both regular meetings and a five minute-method during the day was considered helpful to encourage reflection in everyday practice.

Changes in caring for patients and families

As elaborated above with regard to the cluster of changes in professionals’ understanding, MCD may stimulate new ways of thinking about the case at hand. In addition, we identified two ways in which professionals’ caring for patients can be influenced by MCD.

Profession-related changes

There are some indications of the impact of MCD on one’s profession. Some studies stated that people can become better healthcare professionals through MCD [ 17 , 26 ] or that MCD was considered – broadly speaking – helpful for the job or helped participants to gain insight in what is truly important in their work [ 26 , 36 ]. Additionally, one study showed that after moral reflection, healthcare professionals were more focused on further professionalization, for instance, wanting to learn more about how to provide the best possible patient care [ 17 ]. Söderhamn et al. [ 17 ] revealed three factors that predict whether or not ethical reflection is valued by professionals: professionals who are older, who have a higher position and who have more experience with such reflections consider MCD to be meaningful in the workplace.

The included studies are ambiguous regarding whether systematic reflection leads to broader organizational, profession-related changes, such as reduced absenteeism and increased job interest. Lillemoen and Pedersen [32] found that managers and facilitators were confident about this impact, but staff members doubted it. Tanner et al. [15] found that a Swiss ethics program led to a decrease in distress among professionals, thereby adding to job satisfaction. This, in turn, decreased frustration and dissatisfaction among nurses.

A change in one’s professional opinion or attitude due to MCD is described by several studies, but in what way this change comes about is less clear [ 5 , 12 ]. Participants were more critical towards their practice and managers felt more challenged by their employees [ 32 ]. Furthermore, experiences of no impact on daily work were reported as well [ 6 , 14 ].

Quality of patient care

The included studies indicate that MCD may influence the quality of patient care. We have divided these results into the impact on the interaction with patients and families and the impact on medical technical care of patients. According to healthcare professionals, through MCD, they developed an enriched understanding of their patients’ situations [ 9 , 32 , 35 ]. Participants reported being more aware of patients’ and families’ rights in a decision-making process [ 27 , 32 ] and thinking more about their perspectives, wishes, and needs [ 14 , 26 , 32 ]. Meyer-Zehnder et al. [ 34 ] indicated an educational effect when MCD takes place regularly, as patient wishes are actually verified and addressed sooner. Participants in another study [ 17 ] also reported being more aware of their own verbal and body language, which resulted in more personalized care, more respect and their seeing patients as more than their diagnosis. Some of the staff who did not participate in the ethical reflections in this study observed this change in their colleagues’ behavior as well. Tanner et al. [ 15 ] found more support for a mutual and documented decision as a team, for example, towards patients and families. Staff described a relation between MCD and a decreased use of coercion towards their patients [ 32 ]. Furthermore, an increased awareness of patients’ wishes led to an openness towards patient and proxy participation, with professionals seeing or hearing patients more [ 32 ], and a better representation of parents’ opinions in the decision making process about neonates [ 27 ].

In addition to changing the interaction with patients and families, MCD can also influence medical technical care for patients. A better understanding of the patient may lead to more adequate recommendations regarding a patient case [ 13 ]. Jehle and Jurchah [ 13 ] found that reflection helped with decision making and led to concrete recommendations and actions in a specific situation, thus refining care plans and ensuring they were agreed upon by families, patients, and the team. Additionally, MCD was found to support acting faster and providing better nursing care in similar cases [ 26 ]. The study of Baumann-Hölzle et al. [ 30 ] showed a concrete change in medical care after MCD: a shortening of futile intensive care compared to a control group. According to Baumann-Hölzle et al., this could be interpreted as limiting suffering in infants destined to die.

The last theme we identified is professional attention to ethics on an organizational level. In one study, it was found that group deliberations in psychiatric outpatient clinics did not lead to statistically significant changes in the so-called ‘ethical climate’, as measured with a specific survey [ 18 ]. However, several studies report an expansion of (informal) discussions and rounds after moral deliberation had taken place [ 26 , 28 , 31 , 32 ].

Based on a qualitative analysis of 25 empirical papers, we have gained an overview of what is known about the impact of MCD. The results consist of four clusters of themes we found in the literature (see Fig. 2 ):

Facilitators and barriers in the preparation and context of MCD include the following: a safe and open atmosphere created by a facilitator, a concrete case, commitment of participants, a focus on the moral dimension, and a supportive organization. This is also underpinned by recent research in municipal healthcare, which showed that a systematic and supported approach is helpful in facilitating reflection groups [ 24 ]. The facilitator appeared to be the most important facilitating factor.

Changes that are brought about on a personal and inter-professional level concern the following: feeling relieved, feeling related to other professionals and feeling confident; understanding the perspectives of colleagues, understanding one’s own perspective and understanding the moral issue at stake; and awareness of the moral dimension and of the importance of reflection. Most of the reported impact is on this inter-professional level. This tells us how healthcare professionals experience participating in an MCD and what they believe is the value of an MCD for dealing with ethical issues, as individuals and as a team. This is in line with what healthcare professionals perceive as important outcomes prior to participating in MCD: ‘more open communication’, ‘better mutual understanding’, ‘concrete actions’, ‘see the situation from different perspectives’, ‘consensus on how to manage the situation’ and ‘find more courses of action’ [21]. Interestingly, despite the daily practice of (multidisciplinary) collaboration in the field of healthcare, this review shows that separate sessions on work-related moral dilemmas help participants to actually get to know each other’s perspectives and to find some relief in that. Apparently, this kind of sharing is lacking in daily work.

Notwithstanding the mainly positive impact that is reported, we have to take into account that an MCD does not always lead to a decrease in one’s mental or emotional burden. We found that it can be challenging for healthcare professionals to deviate from routines or to see new perspectives. Professionals may feel ambivalent about participating or frustrated when a deliberation does not lead to concrete decisions or consensus. Our findings showed that for some, a deliberation does not result in new insights or changes in opinion, which might add to this ambivalence about participating. The attitude needed for MCD – which requires, among other things, the willingness to take a step back and explore the moral issue – cannot always be mustered by participants. In our own experience within our hospital, if MCD does not take place in a safe and open atmosphere with committed participants, it is likely to only add to the tension within the team.

Changes that are brought about in caring for patients and families are concerned with one’s profession and quality of patient care. Remarkably, this cluster of themes was rather small in comparison with the second cluster, and we found little evidence for a concrete impact of MCD on patient care. In the field of clinical ethics, sometimes rather big claims are made. Karlsen et al. [ 24 ] summarized previous research that indicated that staff, managers, and facilitators agree on the relation between ethics reflection groups, a positive impact on work environment, and an increase in quality of care, for example, through the participants’ increased ability to see alternative courses of action and make better decisions [ 24 ]. Nevertheless, there is limited empirical evidence with regard to the changes that are actually brought about in caring practices after the group conversation has taken place.

Lastly, we identified some changes that are brought about on an organizational level. This cluster was equally small. This is in line with the observation of Silén et al. [18] that studies have not yet been able to demonstrate the presumed positive relation between ethics rounds and improvements in the work environment, such as an improved ethical climate, less burnout or increased job satisfaction. This lack of evidence remains a challenge for the field.

Not all kinds of impacts can or should be measured

Our overview can help to gain insight into the strengths and weaknesses of MCD, as well as determine blind spots in MCD research. In 1977, Levine et al. [ 31 ] stated that the impact of moral deliberation on patient care is difficult to assess. Concrete changes might be hard to grasp with empirical investigation of concepts such as ‘quality of care’. Perhaps, as suggested by Silén et al. [ 14 ], the impact on measurable outcomes is mediated by communication and collaboration patterns, which can, in turn, be influenced by moral case deliberation. Thus, one should carefully operationalize ‘improved quality of care’ in further research. In addition, it is debatable whether it is right to justify the practice of MCD in terms of efficiency, quality improvement or other ‘hard’ impacts. We believe the added value of moral case deliberation is ‘soft’ or intangible by nature, and more difficult to pin down in measurable units. Perhaps such a deliberation has a value in and of itself. In that case, it is meaningless to try to measure this ‘soft’ kind of impact in quantified terms or to translate it to specific managerial categories. That might only lead to an (undesired) top-down focus on predefined outcomes, which could diminish the value of MCD. A bottom-up approach is preferable: desired outcomes should not be defined by external stakeholders, but participants themselves should be active in setting the agenda in evaluation studies [ 20 ]. De Snoo-Trimp et al. [ 21 ], for example, investigated what healthcare professionals themselves perceive as important outcomes.

The impact is often based on self-reports by participants

The involvement of participants in evaluating MCD is also reflected in the methodologies of the included studies in our review: the reported impact is mostly based on self-reports by healthcare professionals in surveys, focus groups, and interviews. However, one should keep in mind that positive evaluations by participants do not necessarily imply that a group deliberation results in concrete changes in the way they treat their patients. In addition, the positive evaluations we found might stem from bias, as the researcher was sometimes also the coordinator of the implementation and the facilitator of the conversation, which might have elicited socially desirable responses. Furthermore, in some papers, the study sample consisted of people who were willing to participate in MCD. This could result in sampling bias, as professionals who participate usually have a positive attitude towards MCD, and their self-reports will reflect this attitude.

Finally, in our review, outcomes with regard to ethics in the organization seem to be the most abstract. This might also be due to the self-reports, since healthcare professionals might not be able to give detailed information about the organization as a whole.

Implications for further research

If concrete changes are expected with regard to the quality of patient care, then one should not only investigate the perspective of professionals but also study the effects as experienced by the patients themselves. Specifically, there is a need for further qualitative research, as we should study the complex care practice, which might be changed in subtle ways by MCD. Several authors suggest obtaining a more nuanced picture by using research designs such as control-group or observational studies [7] or a mixed-methods design [18]. An example of a recently developed survey is the ‘Euro-MCD’, which investigates participants’ perceived importance of MCD outcomes [37]. We argue that the design of further research should rely heavily on qualitative methods. The positive contribution of qualitative research in the field of clinical ethics support services is further elaborated by Wäscher et al. [38].

Qualitative research should interfere as little as possible with existing practices [39]. This implies strictly separating the roles of researcher and facilitator to prevent influencing participants’ evaluations. In addition, it is important to study the reasons why people decline participation in MCD. With qualitative methods, one can investigate the different perspectives of healthcare professionals, including those who do not want to participate, patients/clients, proxies and others. This might show the actual continuation of MCD in healthcare practice and in the ‘ethical climate’ of the organization as a whole.

Implications for practice

Considering the impact of MCD with regard to healthcare professionals feeling more related to one another, a critical thought might arise: “Could regular team meetings not generate similar feelings?” Based on our analysis, we believe that the difference between MCD and other meetings and the added value of MCD lies in its structured approach of freely exploring the moral question at stake, without having to reach a concrete solution or decision. We consider it to be important that all professionals involved in the case or issue join the conversation. In our experience, a structured method and a facilitator are essential elements to create the required open and safe atmosphere and to guarantee a careful critical-ethical analysis from all (multidisciplinary) perspectives. A regular team meeting might result in more cohesion and relatedness but likely in a less thorough way, when compared to a group conversation in which people have a dialogue on a moral issue.

Strengths and limitations

We adopted a thorough and systematic approach in reviewing the existing literature about the impact of MCD, based on ongoing discussion between the authors. To our knowledge, a literature review of this type has not been conducted before. Our review responds to a need in practice to account for the value and impact of clinical ethics support. Furthermore, we aim to fill a gap in research with regard to conceptual ambiguity in forms of clinical ethics support services, which is also illustrated by the literature review of Rasoal et al. [2].

However, some limitations should be taken into account when reading this paper. A first limitation is possible bias in the studies we included. In some papers, the study sample only consisted of people who were willing to participate in MCD. This could result in bias, as professionals who participate are usually favorable towards MCD, and their self-reports are likely to provide a positive outlook. Thus, it is important to investigate which professionals decline participation and for what reason. Secondly, our search was limited by our definition of MCD. Given the conceptual ambiguity in the field of clinical ethics support services, it would be worthwhile to make an inventory of all the sorts of deliberations used in practice (independently of empirical research), for example through a questionnaire at international symposia or by means of a global Delphi round within our ethics networks. A third limitation is the absence of an evaluation of the methodological soundness of the included papers. This should be kept in mind when reading and interpreting our results.

With this literature review, we aimed to present an overview of the empirical evidence for both the positive and negative impacts of MCD. It was shown that MCD brings about changes in practice, mostly for the professional in inter-professional interactions with regard to one’s feelings of relief, relatedness and confidence; understanding of the perspectives of colleagues, one’s own perspective and the moral issue at stake; and awareness of the moral dimension of one’s work and awareness of the importance of reflection. Most reported changes were considered positive, although challenges, frustrations and absence of change were also reported. Empirical evidence of a concrete impact on the quality of patient care is limited and is mostly based on self-reports. With patient-focused and methodologically sound qualitative research, the practice and the value of MCD in healthcare settings can be better understood, thus making a stronger case for this kind of ethics support.

Abbreviations

  • CES: Clinical ethics support
  • MCD: Moral case deliberation

References

Spronk B, Stolper M, Widdershoven G. Tragedy in moral case deliberation. Med Health Care Philos. 2017;20(3):321–33.

Rasoal D, Skovdahl K, Gifford M, Kihlgren A. Clinical ethics support for healthcare personnel: an integrative literature review. HEC Forum. 2017;29:313–46.

Crigger N, Fox M, Rosell T, Rojjanasrirat W. Moving it along: a study of healthcare professionals’ experience with ethics consultations. Nurs Ethics. 2017;24(3):279–91.

Molewijk B, Verkerk M, Milius H, Widdershoven G. Implementing moral case deliberation in a psychiatric hospital: process and outcome. Med Health Care Philos. 2008;11(1):43–56. https://doi.org/10.1007/s11019-007-9103-1 .

Molewijk AC, Abma T, Stolper M, Widdershoven G. Teaching ethics in the clinic. The theory and practice of moral case deliberation. J Med Ethics. 2008;34(2):120–4. https://doi.org/10.1136/jme.2006.018580 .

Weidema FC, Molewijk BAC, Kamsteeg F, Widdershoven GAM. Aims and harvest of moral case deliberation. Nurs Ethics. 2013;20(6):617–31. https://doi.org/10.1177/0969733012473773 .

Janssens R, van Zadelhoff E, van Loo G, Widdershoven GAM, Molewijk BAC. Evaluation and perceived results of moral case deliberation: a mixed methods study. Nurs Ethics. 2015;22(8):870–80. https://doi.org/10.1177/0969733014557115 .

van der Dam S, Abma TA, Molewijk AC, Kardol MJM, Schols J. Organizing moral case deliberation experiences in two Dutch nursing homes. Nurs Ethics. 2011;18(3):327–40. https://doi.org/10.1177/0969733011400299 .

van der Dam S, Schols JM, Kardol TJ, Molewijk BC, Widdershoven GA, Abma TA. The discovery of deliberation. From ambiguity to appreciation through the learning process of doing moral case deliberation in Dutch elderly care. Soc Sci Med. 2013;83:125–32. https://doi.org/10.1016/j.socscimed.2013.01.024 .

Tan DY, ter Meulen BC, Molewijk A, Widdershoven G. Moral case deliberation. Pract Neurol. 2017;0:1–6.

Wocial LD, Hancock M, Bledsoe PD, Chamness AR, Helft PR. An evaluation of unit-based ethics conversations. JONAS Healthc Law Ethics Regul. 2010;12(2):48–54. https://doi.org/10.1097/NHL.0b013e3181de18a2 .

Grönlund CF, Dahlqvist V, Zingmark K, Sandlund M, Söderberg A. Managing ethical difficulties in healthcare: communicating in inter-professional clinical ethics support sessions. HEC Forum. 2016. https://doi.org/10.1007/s10730-016-9303-2 .

Jehle J, Jurchah M. Patient with a devastating embolic stroke: using weekly multidisciplinary ethics rounds in the neuroscience intensive care unit to facilitate care and communication. Top Stroke Rehabil. 2014;21(1):7–11. https://doi.org/10.1310/tsr2101-7 .

Silén M, Ramklint M, Hansson MG, Haglund K. Ethics rounds: an appreciated form of ethics support. Nurs Ethics. 2016;23(2):203–13. https://doi.org/10.1177/0969733014560930 .

Tanner S, Schleger HA, Meyer-Zehnder B, Schnurrer V, Reiter-Theil S, Pargger H. Clinical everyday ethics-support in handling moral distress? Evaluation of an ethical decision-making model for interprofessional clinical teams. Med Klin Intensivmed Notfmed. 2014;109(5):354–63. https://doi.org/10.1007/s00063-013-0327-y.

Bernthal EM, Russell RJ, Draper HJ. A qualitative study of the use of the four quadrant approach to assist ethical decision-making during deployment. J R Army Med Corps. 2014;160(2):196–202. https://doi.org/10.1136/jramc-2013-000214 .

Söderhamn U, Kjøstvedt HT, Slettebø A. Evaluation of ethical reflections in community healthcare: a mixed-methods study. Nurs Ethics. 2015;22(2):194–204. https://doi.org/10.1177/0969733014524762 .

Silén M, Haglund K, Hansson MG, Ramklint M. Ethics rounds do not improve the handling of ethical issues by psychiatric staff. Nord J Psychiatry. 2015;69(6):1700–7. https://doi.org/10.3109/08039488.2014.994032 .

Jellema H, Kremer S, Mackor AR, Molewijk B. Evaluating the quality of the deliberation in moral case deliberations: a coding scheme. Bioethics. 2017;31(4):277–85.

Metselaar S, Widdershoven G, Porz R, Molewijk B. Evaluating clinical ethics support: a participatory approach. Bioethics. 2017;31(4):258–66.

de Snoo-Trimp J, Widdershoven G, Svantesson M, de Vet R, Molewijk B. What outcomes do Dutch healthcare professionals perceive as important before participation in moral case deliberation? Bioethics. 2017;31(4):246–57.

Molewijk B, Schildmann J, Slowther A. Integrating theory and data in evaluating clinical ethics support. Still a long way to go. Bioethics. 2017;31(4):234–6.

Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546–53.

Karlsen H, Lillemoen L, Magelssen M, Førde R, Pedersen R, Gjerberg E. How to succeed with ethics reflection groups in community healthcare? Professionals’ perceptions. Nurs Ethics. 2018. https://doi.org/10.1177/0969733017747957 .

Hawker S, Payne S, Kerr C, Hardey M, Powell J. Appraising the evidence: reviewing disparate data systematically. Qual Health Res. 2002;12(9):1284–99.

Voskes Y, Evenblij K, Noorthoorn E, Porz R, Widdershoven G. Moral case deliberation about coercion in psychiatry dilemmas, value and implementation. Psychiatr Prax. 2014;41(7):364–70. https://doi.org/10.1055/s-0034-1370292 .

de Boer J, van Blijderveen G, van Dijk G, Duivenvoorden HJ, Williams M. Implementing structured, multiprofessional medical ethical decision-making in a neonatal intensive care unit. J Med Ethics. 2012;38(10):596–601. https://doi.org/10.1136/medethics-2011-100250 .

Appelbaum PS, Reiser SJ. Ethics rounds: a model for teaching ethics in the psychiatric setting. Hosp Community Psychiatry. 1981;32(8):555–60.

Svantesson M, Löfmark R, Thorsén H, Kallenberg K, Ahlström G. Learning a way through ethical problems: Swedish nurses' and doctors' experiences from one model of ethics rounds. J Med Ethics. 2008;34(5):399–406. https://doi.org/10.1136/jme.2006.019810 .

Baumann-Hölzle R, Maffezzoni M, Bucher HU. A framework for ethical decision making in neonatal intensive care. Acta Paediatr. 2005;94(12):1777–83. https://doi.org/10.1080/08035250510011928 .

Levine MD, Scott L, Curran WJ. Ethics rounds in a Children's medical center: evaluation of a hospital-based program for continuing education in medical ethics. Pediatrics. 1977;60(2):202–8.

Lillemoen L, Pedersen R. Ethics reflection groups in community health services: an evaluation study. BMC Med Ethics. 2015;16:25. https://doi.org/10.1186/s12910-015-0017-9 .

Dauwerse L, Weidema F, Abma T, Molewijk B, Widdershoven G. Implicit and explicit clinical ethics support in the Netherlands: a mixed methods overview study. HEC Forum. 2014;26(2):95–109. https://doi.org/10.1007/s10730-013-9224-2 .

Meyer-Zehnder B, Schäfer UB, Schleger HA, Reiter-Theil S, Pargger H. Ethical case discussions in the intensive care unit. From testing to routine. Anaesthesist. 2014;63(6):477–87. https://doi.org/10.1007/s00101-014-2331-x.

Svantesson M, Anderzén-Carlsson A, Thorsén H, Kallenberg K, Ahlström G. Interprofessional ethics rounds concerning dialysis patients: staff's ethical reflections before and after rounds. J Med Ethics. 2008;34(5):407–13. https://doi.org/10.1136/jme.2007.023572 .

Maffezzoni M, Wunder K, Baumann-Hölzle R, Stoll F. Group processes in deciding about viability of neonates - a formative evaluation. Z Arb Organ. 2003;47(3):162–9. https://doi.org/10.1026//0932-4089.47.3.162.

Svantesson M, Karlsson J, Boitte P, Schildman J, Dauwerse L, Widdershoven G, et al. Outcomes of moral case deliberation-the development of an evaluation instrument for clinical ethics support (the euro-MCD). BMC Med Ethics. 2014;15(1):30.

Wäscher S, Salloch S, Ritter P, Vollmann J, Schildmann J. Methodological reflections on the contribution of qualitative research to the evaluation of clinical ethics support services. Bioethics. 2017;31(4):237–45.

Beuving J, De Vries G. Doing qualitative research. The craft of naturalistic inquiry. Amsterdam: Amsterdam University Press; 2015.

Acknowledgments

The authors thank their colleagues in medical ethics at IQ healthcare for their valuable remarks in the process of writing the manuscript, and the reviewers for their constructive comments.

Funding

Not applicable.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Author information

Authors and Affiliations

Radboud university medical center, Radboud Institute for Health Sciences, IQ healthcare, Geert Grooteplein 21, P.O. Box 9101 (114), 6500 HB, Nijmegen, The Netherlands

Maaike M. Haan, Jelle L. P. van Gurp, Simone M. Naber & A. Stef Groenewoud

Contributions

MH conducted the literature search. She was the leading researcher in both the screening and analysis of the data, as well as in writing the manuscript. MH, JvG, SN and SG screened records and coded data. MH, JvG, SN and SG contributed to the ongoing discussion and development of the clusters and the manuscript. SN was a major contributor to integrating other studies in the manuscript. JvG was a major contributor in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Maaike M. Haan .

Ethics declarations

Authors’ information

JvG, SG, and SN are part of the MCD group in the Radboud University Medical Center. JvG and SG are facilitators of MCD. SN coordinates some of the post academic ethics education programs in our hospital. At the time of data collection and analysis, MH was a research assistant. Currently, she is a PhD student working on a project concerned with informal care giving in the end-of-life phase.

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Haan, M.M., van Gurp, J.L.P., Naber, S.M. et al. Impact of moral case deliberation in healthcare settings: a literature review. BMC Med Ethics 19 , 85 (2018). https://doi.org/10.1186/s12910-018-0325-y

Received : 13 July 2018

Accepted : 23 October 2018

Published : 06 November 2018

DOI : https://doi.org/10.1186/s12910-018-0325-y

  • Health personnel (MeSH)
  • Caregivers (MeSH)
  • Healthcare professionals
  • Clinical ethics (MeSH)
  • Moral reflection


Moral judgment reloaded: a moral dilemma validation study

Julia F. Christensen

1 Psychology, Evolution and Cognition (IFISC-CSIC), University of the Balearic Islands, Palma, Spain

Albert Flexas

Margareta Calabrese

Nadine K. Gut

2 School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK

3 Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, UK

Antoni Gomila

We propose a revised set of moral dilemmas for studies on moral judgment. First, we selected a total of 46 moral dilemmas available in the literature and fine-tuned them in terms of four conceptual factors (Personal Force, Benefit Recipient, Evitability, and Intention) and methodological aspects of the dilemma formulation (word count, expression style, question format) that have been shown to influence moral judgment. Second, we obtained normative codings of arousal and valence for each dilemma, showing that emotional arousal in response to moral dilemmas depends crucially on the factors Personal Force, Benefit Recipient, and Intentionality. Third, we validated the dilemma set, confirming that people's moral judgment is sensitive to all four conceptual factors and to their interactions. Results are discussed in the context of this field of research, also outlining the relevance of our reaction time (RT) effects for the Dual Process account of moral judgment. Finally, we suggest tentative theoretical avenues for future testing, particularly stressing the importance of the factor Intentionality in moral judgment. Additionally, given the importance of cross-cultural studies in the quest for universals in human moral cognition, we provide the new set of dilemmas in six languages (English, French, German, Spanish, Catalan, and Danish). The norming values provided here refer to the Spanish dilemma set.

“… but what happens when we are exposed to totally new and unfamiliar settings where our habits don't suffice?” Philip Zimbardo ( 2007 ); The Lucifer Effect, p. 6

Introduction

Moral dilemmas have become a standard methodology for research on moral judgment. Moral dilemmas are hypothetical short stories which describe a situation in which two conflicting moral reasons are relevant; for instance, the duty not to kill and the duty to help. By inducing participants to make a forced choice between these two reasons, one can investigate which reason is given precedence in a particular situation and which features of the situation matter for that decision. Accordingly, we assume that these kinds of hypothetical “ticking bomb” scenarios can help to disentangle what determines human moral judgment. This is, however, only possible if the moral dilemmas are very well designed and potentially relevant factors are controlled for. The aim of this paper is to provide a set of such carefully designed and validated moral dilemmas.

The moral dilemmas commonly used in Cognitive Neuroscience experiments are based on what Foot (1967) and Thomson (1976) called the “Trolley Problem.” The trolley dilemma has two main versions. In the first one, a runaway trolley is heading for five railway workers who will be killed if the trolley pursues its course. The experimental participant is asked to take the perspective of a protagonist in the story who can choose the option to leap in and pull a switch which will redirect the trolley onto a different track and save the five railway workers. However, redirected onto the other track, the trolley will kill one railway worker who would otherwise not have been killed. In an alternative version of the dilemma, the action the protagonist has to perform in order to stop the trolley is different. This time, there is no switch but a large stranger who is standing on a bridge over the tracks. The protagonist can now choose to push that person with his own hands onto the tracks so that the large body stops the train. The outcome is the same: five individuals saved by sacrificing one. However, participants in this task more easily consent to pull the switch, while they are much more reluctant to push the stranger with their own hands. The “action” that the protagonist of the story can choose to carry out—or not—is termed a moral transgression or moral violation. The choice itself, between the act of committing or omitting to carry out the moral transgression, is a moral judgment. The decision to commit the harm is referred to as a utilitarian moral judgment, because it weighs costs and benefits, while the decision to refrain from harm is a deontological moral judgment, because it gives more weight to the “not to kill” principle.

The influential work of Greene et al. ( 2001 ), which introduced moral dilemmas into Cognitive Neuroscience, has been followed by many other studies as a way to deepen our understanding of the role of emotion in moral judgment (for a review, see Christensen and Gomila, 2012 ). However, results obtained with this methodological approach have been heterogeneous, and there is a lack of consensus regarding how to interpret them.

In our opinion, one of the main reasons for this lies in the simple fact that the majority of studies have relied on the initial set of moral dilemmas devised by Greene et al. (2001). While this set indisputably provided invaluable evidence about the neural underpinnings of moral judgment, it was not validated. Thus, conceptual pitfalls and formulation errors have potentially remained unchallenged (Christensen and Gomila, 2012). In fact, one of the key findings that has been reported (i.e., emotional involvement in moral judgment) might have been due to uncontrolled variations in the dilemma formulations, rather than to the factors supposedly taken into account (i.e., personal vs. impersonal versions of the dilemma). As a matter of fact, Greene and colleagues themselves have voiced constructive self-criticism with respect to that initial dilemma set and suggested using only a subset of the initial dilemmas, without, however, validating them either (Greene et al., 2009). Still, researchers continue to use this initial set. Here we present our efforts to remedy this situation.

We have fine-tuned a set of dilemmas methodologically and conceptually (controlling four conceptual factors). The set was selected from previously used moral dilemma sets: (i) Greene et al. (2001, 2004) and (ii) Moore et al. (2008) (this set was based on Greene et al.'s but optimized). Both sets have been used in a wealth of studies, however, without previous validation (e.g., Royzman and Baron, 2002; Koenigs et al., 2007; Moore et al., 2008, 2011a, b). After the dilemma fine-tuning, norming values were obtained for each dilemma: (i) arousal and valence ratings (to ascertain the differential involvement of emotional processes along the dimensions of the four conceptual factors) and (ii) moral judgments (to confirm that moral judgment is sensitive to the four factors) 1. Finally, in the Supplementary Material of this work, we provide the new set in six languages (English, French, Spanish, German, Danish, and Catalan) in order to make it more readily available for cross-cultural studies in the field. Please note that the norming study was carried out with the Spanish dilemma version. We encourage norming studies in the other languages (and in other cultures).

Dilemma “fine-tuning”—proposal of an optimized set

All dilemmas included in this set involved the decision to carry out a moral transgression which would result in a better overall numerical outcome. The participant was always the protagonist of this action (the moral transgression) 2 and all dilemmas involved killing (i.e., all social and other physical harm dilemmas were eliminated). Furthermore, of the initial 48 dilemmas, 2 were eliminated (the personal and impersonal versions of the cliffhanger dilemma) due to the unlikely acrobatics they involve.

In what follows we outline the changes we have made regarding (i) the instructions given to the participant (subsection Instructions to the Participant ); (ii) the dilemma design , i.e., adjustment of dilemma length, expression style , etc. (subsection Dilemma Design (1)—Formulation ), (iii) the dilemma conceptualization , i.e., thorough adaptation to the conceptual factors of Personal Force, Benefit Recipient, Evitability , and Intentionality (subsection Dilemma Design (2)—Conceptual Factors ), and (iv) the formulation of the question eliciting the moral judgment (subsection The Question Prompting the Moral Judgment ). In the end, we have produced 23 dilemmas with two versions each, one personal and one impersonal, 46 dilemmas in total.

Instructions to the participant

To increase verisimilitude, we suggest that instructions at the beginning of the experiment ideally emphasize that participants are going to read short stories about difficult situations as they are likely to appear in the news or in the radio (for instance: “ in the following you will read a series of short stories about difficult interpersonal situations, similar to those that we all see on the news every day or may read about in a novel ”) (Christensen and Gomila, 2012 , p. 14). This may help to put the participants “in context” for the task that awaits them. In addition, instructions could include a remark about the fact that participants will be offered one possible solution to the situation, and that their task will be to judge whether the proposed solution is acceptable, given the information available (such as: “ for each of the difficult situations a solution will be proposed. Your task is to judge whether to accept or not this solution” ). Indeed, the closure of options or alternatives is important. However, in previous dilemma sets, some dilemmas have included expressions such as “ the only way to avoid [death of more people] is to [action proposal],” while other dilemmas did not. Whereas this is important information, including that same sentence in all dilemmas could make the reading rather repetitive and result in habituation. On the other hand, including it only in some dilemmas could bias participants' responses to these dilemmas with respect to the others. Therefore, we suggest presenting it only in the general instructions to the participants.

Dilemma Design (1)—formulation

Control for formal characteristics of dilemma formulation includes:

Word count across dilemma categories: in the original sets, the dilemmas were rather long. This can entail an excessively long experimental session, resulting in participant fatigue. In Moore et al. (2008), an effort was made to control for mean word count: the Personal moral dilemmas (PMD) had a mean of 168.9 words and the Impersonal moral dilemmas (IMD) 169.3. The maximum word count of a dilemma was 254 and the minimum was 123. We shortened the dilemmas, removing information that was not strictly necessary, and equalized the expression style of the personal and impersonal versions of each dilemma. For instance, technical terms and long, non-familiar words were removed. Now the first three sentences of each dilemma are almost the same for both versions of a dilemma (personal and impersonal). For instance, the English version of the new dilemma set has a mean word count of 130 words in the Personal and 135 in the Impersonal moral dilemmas. Our maximum number of words in a dilemma is 169 and the minimum 93. See the Supplementary Material for the word counts for each translation.
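
This kind of formulation check is easy to automate. The following minimal Python sketch computes word-count statistics per dilemma category; the file name and column layout are our own assumptions for illustration, not part of the published stimulus set.

```python
# Minimal sketch: word-count statistics per dilemma category.
# The CSV layout (columns "id", "category", "text") is hypothetical.
import csv
from statistics import mean

lengths = {"Personal": [], "Impersonal": []}

with open("dilemmas.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        lengths[row["category"]].append(len(row["text"].split()))

for category, counts in lengths.items():
    print(f"{category}: mean={mean(counts):.1f}, "
          f"min={min(counts)}, max={max(counts)}")
```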

Framing effects

A framing effect occurs when people judge one and the same situation differently merely because of the way it is described (Tversky and Kahneman, 1981; Petrinovich et al., 1993). Specifically, a clear risk of framing effects concerns the use of “kill” in some dilemmas but “save” in others. People feel more inclined to choose inaction when kill is used, and more inclined toward action when save is emphasized (Petrinovich and O'Neill, 1996). To avoid this, in all dilemmas the words kill and save are used in the second paragraph, where the participant is given the information about the proposed action (i.e., the moral transgression) and its consequences. Conversely, these words are removed from the question (e.g., in the Rescue 911 scenario, instead of Is it appropriate for you to kill this injured person in order to save yourself and everyone else on board?, the action verbs throw and keep were used). It is important to highlight the trade-off between cost (throw someone) and benefit (keep yourself and more people in the air) in the questions of all dilemmas. This was not accounted for in any of the previous dilemma sets.

Situational antecedents

In the original dilemma sets, the situational antecedents used to present the characters were not kept constant. Thus, in the Personal version of the Nuclear reactor dilemma, the situational antecedent could bias the participants' responses: you are the inspector of a nuclear power plant that you suspect has not met its safety requirements. The plant foreman and you are touring the facility when one of the nuclear fuel rods overheats… Later, it is this same foreman the participant is asked to consider pushing into the fuel rod assembly. The participant was given knowledge about a badly kept nuclear plant, with an in-charge individual who did not bother to make the plant meet the safety requirements. This makes it easier to sacrifice the plant foreman to save the city than to sacrifice another random, innocent person—which is the option to consider in all other dilemmas. Hence, prior information about the state of the power plant was removed, so that the foreman has no overt responsibility for the nuclear accident which is about to happen. Now he is a “random” person to be sacrificed, like in the other dilemmas. The Nobel Prize dilemma had a similar problem. A situational antecedent made the person in a position to be sacrificed (here, your fellow researcher) appear to be a greedy, bad person, so that it may be easier to sacrifice him than another, innocent fellow researcher. The dilemma was reformulated so that the fellow researcher appears not to know that the potential buyers would use the invention as a weapon; only the protagonist explicitly knows it and is thus again the only person with the possibility to prevent greater harm from happening. In total, four dilemmas were modified to keep the situational antecedents of the characters constant.

Trade-off across dilemmas: previous sets mixed different kinds of moral transgressions, such as stealing or lying. It is important not to mix them with killing, in order to avoid the risk of a non-desired carry-over effect between dilemmas. For instance, stealing, lying, or lack of respect may elicit less severe judgments when killing is also present in other dilemmas of the set than when it is not. Therefore, all dilemmas now raise the conflict between the option to kill a person in order to save a larger number of people and the option of doing nothing and letting that larger number of people die.

Number of individuals

Number of individuals saved if the moral transgression is carried out: the set now contains the following categories: (i) 5–10 people, (ii) 11–50 people, (iii) 100–150 people and (iv) “thousands” or “masses” of people. This is an important variable to control for. A utilitarian response should become easier as more people are saved. Conversely, if moral judgment is purely deontological, the number of people saved is totally irrelevant. This is an interesting question to have as a working hypothesis. Using different numbers of “saved individuals” in the formulations of the dilemmas allows researchers to explore at which point the positive consequences outweigh the transgression required to obtain them. For instance, it has been shown that attachment (“closeness of relationship”) to the victim determines moral judgment more than the number of beneficiaries involved. Still, this question needs further research, once closeness is controlled for (Tassy et al., 2013). In this work, however, no specific analysis of this variable will be made, as it exceeds the limits of this norming study.

Information

Information supplied about the certainty of the consequences for the story character impersonated by the participant: the Tycoon and Nobel Prize dilemmas stated that “if you decide to [action of the dilemma], nobody will ever find out.” This implies information about the future which cannot really be given with certainty, and it contrasts with other stories where no such commitments about the future are made. This kind of information can bias moral judgment and confound it with considerations of legal punishment (or the lack thereof). Therefore, this information was removed altogether from the dilemmas. Similarly, dilemmas that cannot be understood without assuming an extraordinary ability or an unlikely event (such as the Cliffhanger) were excluded 3.

Dilemma Design (2)—conceptual factors

On the grounds of the literature about moral judgment (Christensen and Gomila, 2012 ), four main factors need to be controlled for in moral dilemma formulation: Personal Force, Benefit Recipient (who gets the benefit), Evitability (whether the death is avoidable, or not), and Intentionality (whether the harm is willed and used instrumentally or a side-effect).

Personal force

Initially, Greene et al. ( 2001 , 2004 ) defined a Personal moral dilemma as one in which the proposed moral transgression satisfied three criteria: (i) the transgression leads to serious bodily harm; (ii) this harm befalls a particular person or group of people; and (iii) the harm is not the result of deflecting an existing threat onto a different party. Subsequently, Cushman et al. ( 2006 ) remarked that the crucial feature in a personal dilemma is whether physical contact between the victim and the aggressor is involved; a point also emphasized by Abarbanell and Hauser ( 2010 ), while Waldmann and Dieterich ( 2007 ) focused on the Locus of Intervention (focus on the victim or on the threat) as the difference between personal and impersonal dilemmas. Another proposal contended that the difference between Personal and Impersonal is whether the action is mechanically mediated or not (Royzman and Baron, 2002 ; Moore et al., 2008 ). In more recent work, Greene et al. have tried to offer an integrative definition (Greene, 2008 ; Greene et al., 2009 ). Specifically, these authors propose that a Personal moral transgression occurs when (i) the force that impacts the victim is generated by the agent's muscles, (ii) it cannot be mediated by mechanisms that respond to the agent's muscular force by releasing or generating a different kind of force and applying it to the other person, and (iii) it cannot be executed with guns, levers, explosions, gravity…

However, it seems as if this redefinition is driven by an effort to preserve the interpretation of the initial results, which results in a circular argument: that “personal” dilemmas induce deontological judgments by emotional activation, while “impersonal” ones induce utilitarian judgments by rational calculation. Yet it is not clear which aspect of the personal involvement influences moral judgment through emotional activation, nor which kind of moral relevance the emotions elicited by one's involvement may have for the judgment. Similar considerations apply to the introduction of the distinction between “high-conflict” and “low-conflict” dilemmas (Koenigs et al., 2007), which also seems based on ex-post-facto considerations.

A principled way to clarify this distinction is in terms of the causal role of the agent in the production of the harm. What makes a dilemma impersonal is that the agent just initiates a process that, through its own dynamics, ends up causing the harm; a dilemma is personal when the agent is required not just to start the action but to carry it out by herself. According to this view, the presence of mediating instruments, by itself, does not make a dilemma personal or impersonal. It depends on the kind of active involvement of the agent they require, and it amounts to a difference in her responsibility for the caused harm and in the resulting (felt) emotional experience of it. This can account for the different moral judgments of Personal and Impersonal Dilemmas, which are observed despite the fact that the same consequences occur. The best philosophical explanation of this difference is Anders's (1962) reflection on the mass murders of the Second World War. He contended that these acts were made possible by the technical innovations that reduced the active involvement of soldiers in the killing to pushing a button to release a bomb. It is not just that the new arms were arms of massive destruction, but that their use was easier for us humans. Killing with one's hands is not just slower, but harder.

In the present dilemma set, the Personal dilemmas have been revised accordingly. Personal Moral Dilemmas now require that the agent is directly involved in the production of the harm. Impersonal Moral Dilemmas are those in which the agent is only indirectly involved in the process that results in the harm.

Benefit recipient

Self-interest is a well-known influence on moral judgments (Bloomfield, 2007). People will be more prone to accept an action whose consequences benefit themselves (i.e., the agent herself) than one that benefits others, perhaps complete strangers. This “Self-Beneficial” vs. “Other-Beneficial” contrast has been introduced more clearly in the revised set. We reformulated the Modified Euthanasia dilemma due to a confound in the trade-off specification. Because the dilemma had to be an Other-Beneficial dilemma, the key secret evidence the soldier could reveal if tortured is now the location of a particularly important base camp (and not the camp of the protagonist's own group).

Evitability

This variable concerns whether the death produced by the moral transgression is described as Avoidable or Inevitable. Would the person “to be sacrificed” have died anyway (Inevitable harm), or not (Avoidable harm)? Transgressions that lead to inevitable consequences are more likely to be judged morally acceptable, by the principle of the lesser evil (Hauser, 2006; Mikhail, 2007). In the dilemma Rescue 911, a technical error in a helicopter puts the protagonist in the situation of having to decide whether to throw off one of her patients so that the helicopter loses weight. Without that sacrifice, the helicopter would fall and everybody—including that one patient—would die. Conversely, the dilemma can also be formulated in such a way that the individual to be sacrificed would otherwise not have been harmed (Avoidable death), as in the classical trolley dilemmas, where neither the bystander nor the innocent railway worker on the side track would have been harmed if the protagonist had not changed the course of events. This distinction has now been made more explicit in the dilemmas (for examples of work where this variable was discussed, see Moore et al., 2008; Huebner et al., 2011).

Intentionality

This factor refers to whether the harm is produced instrumentally, as something willed, or whether it happens as a non-desired side-effect, as collateral damage of an action whose goal is positive. This variable concerns the doctrine of the double effect, which has been shown to be psychologically relevant (Foot, 1967; Hauser, 2006; Mikhail, 2007). Causing harm is more acceptable when it is produced as collateral damage than when it is the goal of an action. Accordingly, Accidental harm refers to the case where the innocent victim of the dilemma dies as a non-desired side-effect of the moral transgression that the protagonist carries out to save others. Conversely, Instrumental harm occurs when the protagonist intentionally uses the harm (i.e., the death) of the innocent victim as a means (i.e., instrumentally) to save the others.

The reformulation of the dilemmas and the fine-tuning according to this factor is particularly relevant and one of the main contributions of this norming paper. In the modified set of Moore et al. (2008), all Personal dilemmas were Instrumental, while the Impersonal dilemmas included six Instrumental and six Accidental. The present set now allows a full factorial design including Intentionality. To introduce Accidental vs. Instrumental harm in Personal dilemmas, attention was paid to key aspects of the causal chain of the dilemma leading to the proposed salvation of the greatest number of people. First, the exact intention that the protagonist has in the very moment of committing the moral transgression was identified (does she carry out an action with the intention to kill or not?). Second, a differentiation was made between whether the harm is directly produced by the protagonist or indirectly triggered by her action (do the positive consequences (the salvation of many) follow directly from the victim's death, or from some other event, an independent mechanism which was triggered by the protagonist's actions but not directly by her, nor directly willed by her?). The final point concerned by what means the larger number of people are saved (are they saved directly by the death of the victim, or for a different reason?).

Following this rationale, for a better comprehension of the Intentionality factor, the moral transgression is divided into a 5-part causal chain. This helps to disentangle the Accidental–Instrumental dichotomy (see Figure 1). The first thing to identify is the action by the protagonist (what exactly does she do?). Second, what is the exact intention behind that action (why exactly does she do it?)? Third, does the victim die by the intervention of some intermediate (and protagonist-independent) mechanism, or is the death directly due to the action of the protagonist (does she kill directly or by an independent mechanism?)? Fourth, how does the innocent victim die (how does she die?)? Fifth, how is the larger number of people saved (are they saved due to the death of the victim or for some other reason?)?

Figure 1

Example of the causal chain of the proposed moral transgression that leads to the salvation. In the Instrumental version of the Burning Building dilemma, the proposed action is “to use the body of the victim.” The intention is “to use the body to break down burning debris.” The victim dies directly by the fire; there is no independent mechanism in between. A larger number of people are saved because the burning debris was eliminated with the victim. The harm to the victim was thus used as a means to save others. In other words, the body of the victim was literally used instrumentally, with the intention to free the trapped group. Conversely, in the Accidental version of the Iceberg dilemma, the action of the protagonist is “to push the emergency access hatch.” The intention behind that action is “to make the oxygen flow to the upper section of the boat.” The victim dies from a knock on the head by an independent mechanism, the falling down of the hatch. Thus, the victim dies as a side-effect of the act of salvation that the protagonist carries out with the intention of getting oxygen to the upper section of the boat.
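
To make this five-part analysis concrete, the causal chain can be written down as a small record per dilemma. The sketch below transcribes the Burning Building and Iceberg examples from Figure 1; the encoding and field names are our own illustration, not part of the stimulus materials.

```python
# Sketch: the 5-part causal chain used to separate Instrumental
# from Accidental harm (the encoding is ours, for illustration).
from dataclasses import dataclass

@dataclass
class CausalChain:
    action: str                   # 1. What exactly does the protagonist do?
    intention: str                # 2. Why exactly does she do it?
    independent_mechanism: bool   # 3. Is the death mediated by a protagonist-independent mechanism?
    cause_of_death: str           # 4. How does the victim die?
    harm_is_means: bool           # 5. Are the others saved by the harm itself?

    @property
    def harm_type(self) -> str:
        # Instrumental: the harm is used as a means to save the others.
        # Accidental: the victim dies as a non-desired side-effect.
        return "Instrumental" if self.harm_is_means else "Accidental"

burning_building = CausalChain(
    action="use the body of the victim",
    intention="break down burning debris with the body",
    independent_mechanism=False,
    cause_of_death="the fire, directly",
    harm_is_means=True,
)
iceberg = CausalChain(
    action="push the emergency access hatch",
    intention="make oxygen flow to the upper section of the boat",
    independent_mechanism=True,   # the falling hatch
    cause_of_death="a knock on the head by the falling hatch",
    harm_is_means=False,
)
print(burning_building.harm_type, iceberg.harm_type)  # Instrumental Accidental
```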

To summarize the four factors Personal Force, Benefit Recipient, Evitability, and Intentionality, Figure 2 provides a schematic overview of how the four factors are presented to the participant during the course of a moral dilemma.

Figure 2

The four factors in the dilemma set, adapted from Christensen and Gomila ( 2012 ), reproduced with permission . (1) Personal Force : the kind of imaginary involvement with the situation: Personal, as direct cause, or Impersonal, as an indirect agent in the process of harm. (2) Benefit Recipient : concerns whether the protagonist's life is at stake (Self-Beneficial action), or not (Other-Beneficial action). (3) Evitability : regards whether the victim would die alongside the other individuals in the group if the moral transgression is not carried out (Inevitable death, the person would die anyway), or not (Avoidable death, the person would not die if no action is taken). (4) Intentionality : if the action is carried out intentionally with the explicit aim to kill the person as a means to save others, this is Instrumental harm (it explicitly needs the death of that person to save the others). If the innocent person dies as a non-desired side-effect of the action by some independent mechanism and not directly by the action of the protagonist, the harm is Accidental.
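
The factorial structure that these four factors define can also be made explicit. In the minimal sketch below (our own illustration, under the assumption that each factor is treated as binary), crossing the four factors yields the 16 design cells of the set.

```python
# Sketch: the four binary design factors of the revised dilemma set.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Dilemma:
    personal: bool         # Personal Force: agent directly causes the harm?
    self_beneficial: bool  # Benefit Recipient: protagonist's life at stake?
    inevitable: bool       # Evitability: would the victim have died anyway?
    instrumental: bool     # Intentionality: harm as a means (vs. side-effect)?

# Crossing the two levels of each factor yields 2**4 = 16 design cells.
cells = [Dilemma(*levels) for levels in product((True, False), repeat=4)]
print(len(cells))  # 16
```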

The question prompting the moral judgment

The formulation of the final question that elicits the moral judgment after reading the dilemma has also given rise to some controversy. The influence that the type of question exerts on participants' moral judgments has been addressed empirically (e.g., O'Hara et al., 2010). Four question formats were compared: wrong, inappropriate, forbidden, and blameworthy. People judged moral transgressions more severely when the words "wrong" or "inappropriate" were part of the formulation than when the words "forbidden" or "blameworthy" were used. Another study found different behavioral effects following the questions Is it wrong to…? vs. Would you…? (Borg et al., 2006). The question Would you…? resulted in faster RTs when judging moral scenarios as compared to non-moral scenarios, while the question Is it wrong to…? showed no RT difference between the moral and non-moral conditions. In view of these findings, it seems that deciding what to do is not processed in the same way as deciding whether an action is right or wrong, and that in moral dilemmas it is the former that matters.

In recent work, two groups of researchers have addressed the related issue of whether "what we say is also what we do." It was found that answering the question Is it acceptable to…? vs. the question Would you…? resulted in differential response tendencies (Tassy et al., 2013). However, another study showed that increasing the contextual information available to the participant resulted in more coherence between what participants said they would do and what they actually did (Feldman Hall et al., 2012). In any case, it is clear that a consistent question format is required.

For the present dilemma set, a direct question was used, Do you [action verb] so that…?, to emphasize the consequences of the choice made by the agent. Scales (Likert, visual analog…) were used instead of a dichotomous answer format, as a way to uncover the degree of conflict experienced.

Summary: the revised set

The revised set consists of 46 dilemmas, of which 23 are Personal and 23 are Impersonal. As can be observed in Table 1, we maintained the original dilemma numbers so that it is easy to compare across sets. In 23 of the 46 dilemmas, the protagonist's life is in danger and the moral violation results in saving not only a greater number of individuals but also the protagonist herself (Self-Beneficial dilemmas), whereas in the remaining 23, the protagonist's life is not in danger (Other-Beneficial dilemmas). In turn, there are 11 Personal and 11 Impersonal Self-Beneficial dilemmas, and 12 Personal and 12 Impersonal Other-Beneficial dilemmas.

Table 1. Revised dilemmas.

(Table 1 image: fpsyg-05-00607-i0001.jpg.)

The numbers refer to the dilemma number as given in the stimulus set (Supplementary Material). The colors refer to instrumental harm (gray) and accidental harm (black). We have kept the dilemma numbers as in Moore et al. (2008) to facilitate comparisons between the sets. Please note that there is not the same number of dilemmas in each of the 16 categories; see our discussion of this matter in the section Arousal and Valence Norming Experiment on limitations. Dilemmas 21 and 22 (the Cliffhanger dilemmas) were removed; see the section Dilemma Design (1)—Formulation for reference.

There are 24 dilemmas where the death is Avoidable and 22 where it is Inevitable. Finally, there are 18 dilemma scenarios with Accidental harm (7 Personal and 11 Impersonal; 10 Self-Beneficial and 8 Other-Beneficial; 10 Avoidable and 8 Inevitable) and 28 with Instrumental harm (16 Personal and 12 Impersonal; 12 Self-Beneficial and 16 Other-Beneficial; 14 Avoidable and 14 Inevitable). See Table 1 for a summary. Please note that it was not possible to provide the same number of dilemmas in each of the 16 categories because we relied on the materials of the former set. Refer to our discussion of this matter in the Supplementary Material (A) on limitations.

Arousal and valence norming experiment

People's moral judgment has been shown to be sensitive to the affective impact of a dilemma on the individual (Moretto et al., 2010; Navarrete et al., 2012; Ugazio et al., 2012). However, no dilemma set has so far been assessed in terms of the affective arousal the individual dilemmas elicit in a normative population as they are read, i.e., even when no moral judgment is required. Therefore, data points for affective arousal and valence were obtained for each dilemma of this set.

We know that people's moral judgments vary as a function of the four conceptual factors Personal Force, Benefit Recipient, Evitability, and Intentionality. However, how people's affective responses (valence and arousal) are modulated by these factors remains to be established. Besides, because inter-individual differences in emotional sensitivity and empathy can affect the subjective experience of arousal, participants in this experiment were assessed on these variables by means of self-report measures.

Participants

Sixty-two undergraduate psychology students participated in this study in exchange for a course credit in one of their degree subjects (43 females, 19 males; age range = 18–48 years; m = 21.0, SD = 5.35). All were native Spanish speakers. Participants completed four self-report measures. First, the Interpersonal Reactivity Index (IRI) (Davis, 1983), which has four scales that focus on perspective taking, the tendency to identify with fictitious characters, emotional reactions to the negative experiences of others, and empathic concern for others. Second, the Questionnaire of Emotional Empathy (Mehrabian and Epstein, 1972), which conceives empathy as the vicarious emotional response to the perceived emotional experience of others. It explicitly understands empathy as different from Theory of Mind (ToM) and focuses on emotional empathy, where high scores indicate a high responsiveness to other people's emotional reactions. Third, the Questionnaire of Emotional Sensitivity (EIM) (Bachorowski and Braaten, 1994), which refers to the intensity with which a person experiences emotional states irrespective of their affective valence. Fourth, participants completed the Toronto Alexithymia Scale (TAS), in which a high score means difficulties in understanding and describing emotional states with words (Taylor et al., 1985). For results on the self-report measures, see Table 2.

Table 2. Participant characteristics in terms of emotional sensitivity, empathy, and alexithymia.

| Questionnaire | M | SD | Remaining columns (as extracted) |
| EIM (Bachorowski and Braaten, 1994) | 164.75 | 18.77 | 757859 |
| Emotional empathy (Mehrabian and Epstein, 1972) | 46.7 | 25.71 | 703913 |
| IRI (Davis, 1983) | 55.7 | 8.09 | 5405686 |
| TAS (Taylor et al., 1985) | 18.3 | 14.78 | 237613 |

The forty-six moral dilemmas were arranged to be presented in random order in the stimulus presentation program DirectRT (www.empirisoft.com), v. 2006.2.0.28. The experiment was set up to run on six PCs (Windows XP SP3; Intel Pentium Dual Core E5400, 2.70 GHz, 4 GB RAM), and stimuli were displayed on 19″ screens (resolution: 1440 × 900 px; color: 32 bits; refresh rate: 60 Hz). Data were analyzed using the statistical package SPSS v. 18 (www.ibm.com).

Participants signed up for the experiment in class after having completed the four self-report scales. The day of the experiment, participants provided demographic data regarding gender, age, and level of study. Informed consent was obtained from each participant prior to participation in any of the tasks and questionnaire procedures.

Participants were instructed as outlined in the section Instructions to the Participant. Each dilemma was presented in white Arial font, pt 16, on a black screen. By key press, the first paragraph of the dilemma appeared; with the next key press, the second paragraph appeared (see note 4). Participants read at their own pace, advancing from one screen to the next by pressing the space bar. With the third key press, the first two paragraphs of the dilemma disappeared and two Likert scales appeared on subsequent screens, the first asking participants to indicate their level of arousal (1 = not arousing at all; 7 = very arousing) and the second asking them to indicate the perceived valence of the dilemma (1 = very negative; 7 = very positive). The ratings were made by key press on the number keys of the keyboard. Four practice dilemmas were added at the beginning of the task; data from these trials were discarded before data analysis.

The experiment was carried out in a university laboratory suited for experiments, with six individual PCs separated into individual booths. Participants carried out the task in groups of 1–6 people. Viewing distance was approximately 16 inches from the screen. The study was approved by the University's Ethics Committee (COBE280213_1388).

A factorial Repeated Measures (RM) 2 × 2 × 2 × 2 Analysis of Variance (ANOVA) was computed on the subjective arousal and valence ratings (Likert scale data), and on the RT of the arousal ratings. The factors were (1) Personal Force (Personal vs. Impersonal harm); (2) Benefit Recipient (Self-Beneficial vs. Other-Beneficial); (3) Evitability (Avoidable vs. Inevitable harm); and (4) Intentionality (Accidental vs. Instrumental harm). As effect sizes we report Pearson's r, where 0.01 is considered a small effect size, 0.3 a medium effect, and 0.5 a large effect (Cohen, 1988).
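For readers who want to reproduce this kind of design in open tools, the sketch below shows one way to set up the 2 × 2 × 2 × 2 RM ANOVA in Python. The original analyses were run in SPSS v. 18; the synthetic data, column names, and the use of statsmodels are our own illustrative assumptions.

```python
import numpy as np
import pandas as pd
from itertools import product
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one mean arousal rating per participant per cell.
rng = np.random.default_rng(0)
cells = list(product(["Personal", "Impersonal"], ["Self", "Other"],
                     ["Avoidable", "Inevitable"], ["Accidental", "Instrumental"]))
df = pd.DataFrame([
    dict(participant=p, personal_force=pf, benefit_recipient=br,
         evitability=ev, intentionality=it, arousal=rng.normal(5.9, 0.5))
    for p in range(62) for pf, br, ev, it in cells
])

# Four within-subject factors, one observation per participant and cell.
res = AnovaRM(df, depvar="arousal", subject="participant",
              within=["personal_force", "benefit_recipient",
                      "evitability", "intentionality"]).fit()
print(res)
```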

To rule out any effect of Gender on the results, the above ANOVA was computed with the between-subjects factor Gender. There was no effect of gender in any of the interactions with the four factors, neither in the arousal ratings: Personal Force * gender: F(1, 60) = 1.47, p = 0.230; Benefit Recipient * gender: F(1, 60) = 0.774, p = 0.383; Evitability * gender: F(1, 60) = 0.079, p = 0.780; Intentionality * gender: F(1, 60) = 0.101, p = 0.752; nor in the valence ratings: Personal Force * gender: F(1, 60) = 0.004, p = 0.949; Benefit Recipient * gender: F(1, 60) = 0.346, p = 0.558; Evitability * gender: F(1, 60) = 0.019, p = 0.890; Intentionality * gender: F(1, 60) = 0.184, p = 0.670; nor in the RT. Therefore, data of female and male participants were aggregated.

All 16 dilemma categories were rated as moderately to highly arousing (range: m = 5.58–6.24; see Table S1). Two of the four factors showed significant effects on the arousal ratings. First, there was a significant main effect of Personal Force [F(1, 61) = 6.031; p = 0.017; r = 0.30], Personal Moral Dilemmas (PMD) being rated as more arousing (m = 5.92, SD = 0.12) than Impersonal Moral Dilemmas (IMD) (m = 5.83, SD = 0.12). The second main effect was of Benefit Recipient [F(1, 61) = 47.57; p < 0.001; r = 0.66], Self-Beneficial dilemmas being rated as more arousing (m = 6.02, SD = 0.12) than Other-Beneficial dilemmas (m = 5.70, SD = 0.13). There were no significant main effects of Evitability [F(1, 61) = 0.368; p = 0.546] or of Intentionality [F(1, 61) = 0.668; p = 0.417]. See Table S1 for the means and Figure S3 in the Supplementary Material.

Table 3. RM ANOVA of the RT of the arousal ratings.

| Factor level | M | SE | F | p | r |
| Personal (PMD) | 2564.46 | 112.96 | 5.796 | 0.019 | 0.36 |
| Impersonal (IMD) | 2716.77 | 123.19 | | | |
| Self-beneficial | 2506.66 | 119.52 | 20.783 | <0.001 | 0.88 |
| Other-beneficial | 2774.57 | 115.66 | | | |
| Avoidable | 2648.79 | 116.83 | 0.085 | 0.771 | ns |
| Inevitable | 2632.44 | 117.71 | | | |
| Accidental | 2623.86 | 118.10 | 0.258 | 0.613 | ns |
| Instrumental | 2657.37 | 119.02 | | | |

The values represent milliseconds.

There was a significant interaction of Benefit Recipient * Intentionality [ F (1, 61) = 15.24; p < 0.001; r = 0.44]. This indicates that Intentionality had different effects on participants' ratings of arousal depending on whether the dilemma was Self-Beneficial or Other-Beneficial . Figure S4 illustrates the results. Paired t -tests showed that when Self-Beneficial Harm was Accidental the dilemma was rated as more arousing than when it was Instrumental [ t (61) = 3.690, p < 0.001, r = 0.43]. For Other-Beneficial Harm , the pattern was reversed, as the Instrumental Harm dilemmas were more arousing than the Accidental Harm dilemmas [ t (61) = −1.878, p = 0.065, trend effect, r = 0.05]. When comparing the Accidental and Instrumental Harm conditions, we found that Self-Beneficial, Accidental Harm dilemmas resulted in higher arousal ratings than when dilemmas were Other-Beneficial [ t (61) = 7.626, p < 0.001, r = 0.49]. The same pattern emerged when the harm was Instrumental ; it was judged as more arousing when it was Self-Beneficial , than when it was Other-Beneficial [ t (61) = 3.494, p = 0.001, r = 0.17]. If correcting for multiple comparisons using the Bonferroni method, this would mean accepting a new significance level of α = 0.05/4 → α * = 0.0125. This should be taken into account when considering the result with the trend effect.
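As a concrete illustration of these follow-up comparisons and of the Bonferroni adjustment (α* = 0.05/4 = 0.0125), the sketch below runs the four paired t-tests on synthetic per-participant means; the numbers are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic per-participant mean arousal for the four
# Benefit Recipient x Intentionality cells (n = 62).
rng = np.random.default_rng(1)
self_acc = rng.normal(6.15, 0.6, 62)
self_instr = rng.normal(5.90, 0.6, 62)
other_acc = rng.normal(5.60, 0.6, 62)
other_instr = rng.normal(5.75, 0.6, 62)

alpha_corrected = 0.05 / 4  # Bonferroni over four comparisons -> 0.0125

comparisons = {
    "Self: Accidental vs. Instrumental": (self_acc, self_instr),
    "Other: Accidental vs. Instrumental": (other_acc, other_instr),
    "Accidental: Self vs. Other": (self_acc, other_acc),
    "Instrumental: Self vs. Other": (self_instr, other_instr),
}
for label, (a, b) in comparisons.items():
    t, p = ttest_rel(a, b)
    print(f"{label}: t(61) = {t:.2f}, p = {p:.4f}, "
          f"significant after correction: {p < alpha_corrected}")
```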

Descriptive statistics of the valence ratings confirmed that all 16 dilemma categories were rated as being of negative valence (range: m = 1.71–2.23; see Table S1 ).

There were significant main effects of Personal Force [F(1, 61) = 28.00; p < 0.001; r = 0.57] and of Benefit Recipient [F(1, 61) = 31.509; p ≤ 0.001; r = 0.58]. PMD were rated as significantly more negative (m = 1.905, SD = 0.065) than IMD (m = 2.054, SD = 0.068). Likewise, Self-Beneficial dilemmas were rated as significantly more negative (m = 1.884, SD = 0.068) than Other-Beneficial dilemmas (m = 2.075, SD = 0.067). The two other factors did not show main effects [Evitability: F(1, 61) = 1.201; p = 0.277; Intentionality: F(1, 61) = 0.135; p = 0.715]. See Table S1.

There were two significant interactions. The first was Personal Force * Intentionality [F(1, 61) = 7.695, p = 0.007; r = 0.33]. Figure S5 shows that Intentionality had different effects on how people rated the valence of PMD and IMD. Paired t-tests showed that Accidental harm was rated as significantly more negative than Instrumental harm in Impersonal Moral Dilemmas [t(61) = −2.297, p = 0.025, r = 0.08], while no such difference was found between Accidental and Instrumental harm for Personal Moral Dilemmas [t(61) = 1.441, p = 0.155, r = 0.03]. If correcting for multiple comparisons using the Bonferroni method, this would mean accepting a new significance level of α = 0.05/4 → α* = 0.0125. This should be taken into account when considering the result of the first t-test (p = 0.025).

The second significant interaction was Benefit Recipient * Intentionality [F(1, 61) = 6.041, p = 0.017; r = 0.30]. This indicates that intention had different effects on the valence ratings depending on whether the dilemma was Self- or Other-Beneficial. Paired t-tests showed that for Self-Beneficial dilemmas, harm was judged significantly more negative when it was Accidental as compared to Instrumental [t(61) = −2.300, p = 0.025, r = 0.08]. No such difference was found in the valence ratings of Accidental and Instrumental harm for Other-Beneficial dilemmas [t(61) = 1.296, p = 0.200, r = 0.03]. See Figure S6. If correcting for multiple comparisons using the Bonferroni method, this would mean accepting a new significance level of α = 0.05/4 → α* = 0.0125. This should be taken into account when considering these results (p = 0.017 and p = 0.025).

The assessment of valence was carried out only to confirm that all dilemmas were of a strongly negative valence. This has hereby been confirmed, and no further analysis will be carried out involving this feature of the dilemmas. All values for both arousal and valence are available for each dilemma in the Excel spreadsheet that accompanies this manuscript (Supplementary Material).

Reaction time

A RM ANOVA was carried out on the RT of the arousal ratings with the factors Personal Force, Benefit Recipient, Evitability, and Intentionality. Main effects were found for Personal Force and Benefit Recipient; no interactions were significant. See Table 3.

Next, a regression analysis was conducted to ascertain how much of the variance in the RT of the arousal ratings was explained by the arousal ratings. This procedure was executed for each of the 16 dilemma categories. Table 4 shows that, except for four of the categories, the arousal ratings significantly explained between 6 and 38% of the variance in the RT. Figure 3 shows the overall correlation between the variables, indicating that the more arousing a dilemma was, the faster participants indicated their rating. The correlation coefficient between the mean arousal ratings and the mean RT of the arousal ratings was r = −0.434, p < 0.001.
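The per-category regression is simple to express in code. The sketch below fits one such regression on synthetic data with the negative slope reported in Table 4 (higher arousal, shorter RT); the values and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import linregress

# Within one dilemma category: do participants' arousal ratings predict
# how fast the rating was given? Synthetic data for n = 62 participants.
rng = np.random.default_rng(2)
arousal = rng.integers(4, 8, 62).astype(float)               # ratings of 4..7
rt = 2650 - 400 * (arousal - arousal.mean()) + rng.normal(0, 600, 62)

fit = linregress(arousal, rt)
print(f"B = {fit.slope:.1f}, R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.4f}")
```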

Table 4. Summary table of the regression analysis of arousal ratings as predictors of the arousal ratings' RT for each of the 16 dilemma categories.

| Category | B | SE B | R² | β | p |
| PMD_Self_Avo_Acc | −773.62 | 176.50 | 0.243 | −0.493 | 0.000 |
| PMD_Self_Avo_Instr | −336.08 | 134.03 | 0.095 | −0.308 | 0.015 |
| PMD_Self_Ine_Acc | −181.10 | 144.65 | 0.025 | −0.160 | 0.215 (ns) |
| PMD_Self_Ine_Instr | −692.58 | 113.55 | 0.380 | −0.619 | 0.000 |
| PMD_Other_Avo_Acc | −130.67 | 150.71 | 0.012 | −0.111 | 0.389 (ns) |
| PMD_Other_Avo_Instr | −231.73 | 143.76 | 0.042 | −0.204 | 0.112 (ns) |
| PMD_Other_Ine_Acc | −276.63 | 136.91 | 0.062 | −0.252 | 0.048 |
| PMD_Other_Ine_Instr | −495.32 | 140.80 | 0.171 | −0.414 | 0.001 |
| IMD_Self_Avo_Acc | −348.19 | 129.55 | 0.107 | −0.328 | 0.009 |
| IMD_Self_Avo_Instr | −582.35 | 126.31 | 0.261 | −0.511 | 0.000 |
| IMD_Self_Ine_Acc | −572.35 | 153.15 | 0.189 | −0.435 | 0.000 |
| IMD_Self_Ine_Instr | −382.88 | 174.58 | 0.074 | −0.272 | 0.032 |
| IMD_Other_Avo_Acc | −516.66 | 154.98 | 0.156 | −0.395 | 0.002 |
| IMD_Other_Avo_Instr | −486.55 | 150.54 | 0.148 | −0.385 | 0.002 |
| IMD_Other_Ine_Acc | −140.19 | 180.26 | 0.010 | −0.100 | 0.440 (ns) |
| IMD_Other_Ine_Instr | −339.32 | 146.90 | 0.082 | −0.286 | 0.024 |

Abbreviations: IMD, Impersonal Moral Dilemmas; PMD, Personal Moral Dilemmas; Self, Self-Beneficial; Other, Other-Beneficial; Avo, Avoidable; Ine, Inevitable; Acc, Accidental; Instr, Instrumental.

Figure 3 (fpsyg-05-00607-g0003.jpg).

Correlation between Arousal ratings and the RT . Color coding: Personal Moral Dilemmas (PMD; Blue/Red, circles); Impersonal Moral Dilemmas (IMD; Green/Yellow, squares). Arousal ratings are 1 = Not arousing, calm; 7 = Very arousing, on the x-axis. RT is in milliseconds (ms) on the y-axis. The numbers refer to the dilemma numbers in the dilemma set.

Inter-individual differences: emotional sensitivity and empathy

To ensure that the results of our arousal ratings were not driven by inter-individual differences, participants had been assessed on a series of emotion-related questionnaires. Of the four questionnaires, the level of empathy measured with the questionnaire by Mehrabian and Epstein had a significant effect on arousal ratings and on arousal rating RT. The overall correlation coefficient for arousal ratings and empathy scores was r = 0.289, p = 0.025, and for arousal RT and empathy scores it was r = −0.325, p = 0.011. The higher the empathy scores, the higher the arousal ratings to the dilemmas in general, and the shorter the RT (negative correlation coefficient).

Summary: arousal and valence norming experiment

For a dilemma to be rated as very negatively arousing (i.e., very negative in valence and high in arousal), the proposed moral transgression had to be described as up-close and Personal. In addition, dilemmas where the protagonist's own life was at stake were perceived as more negatively arousing than dilemmas where only other people's lives were at stake. In particular, when the death of the innocent victim happened accidentally as a non-desired side-effect, the dilemma was perceived as more negatively arousing if the protagonist's life was at stake than if the accidental death of the victim happened in the attempt to save other people. In detail:

Affective arousal and valence

  • There were significant main effects of the factors Personal Force and Benefit Recipient, both for arousal and valence ratings: Personal and Self-Beneficial dilemmas were perceived as more arousing and more negative than Impersonal and Other-Beneficial dilemmas, respectively.
  • There were significant interactions between the two above factors and the factor Intentionality. Intentionality influenced perceived arousal in such a way that Self-Beneficial dilemmas (as compared to Other-Beneficial dilemmas) were rated as more arousing when harm happened as a non-desired side-effect (Accidental harm), while Instrumental harm (harm used as a means) was equally arousing in both Self- and Other-Beneficial dilemmas. Furthermore, when harm was Personal (up-close and corporal) and used as a means (Instrumental harm), dilemmas were rated as more negative than when harm was Impersonal (distant and abstract). Conversely, participants found Accidental harm equally negative when it was Personal and when it was Impersonal.

RT to a moral judgment task has previously been suggested as an indicator of emotional involvement. The more arousing a dilemma was, the faster participants were in making their rating.

Inter-individual differences

There was a correlation between inter-individual differences in empathy, assessed by means of the Questionnaire of Emotional Empathy (Mehrabian and Epstein, 1972), and the arousal ratings: the higher the level of empathy, the more arousing the dilemmas were to the participant. This makes sense because this instrument describes sensitivity to others' emotional states; it conceives empathy as the vicarious emotional response to the perceived emotional experience of others, as distinct from ToM and perspective-taking, with high scores indicating a high responsiveness to other people's emotional reactions. However, apart from this correlation between arousal ratings and empathy level, no other individual differences had an effect on perceived arousal (the other variables we assessed were gender, IRI, emotional sensitivity, and alexithymia). We therefore conclude that, at least in this sample of Spanish undergraduates, the arousal ratings of this dilemma set are rather robust across individual differences.

Discussion of arousal and valence norming experiment

While all dilemmas were rated as similarly negative in valence, significant differences were found in how they were rated in terms of felt arousal. This means, first, that at least part of the emotional involvement in moral judgment of the dilemmas can be due to the arousal triggered when reading the situational description. Second, results showed that differences in arousal are due to how the different conceptual factors are manipulated. Thus, Personal Force and Self-Beneficial dilemmas give rise to higher arousal ratings than Impersonal and Other-Beneficial ones. Prima facie, this suggests that arousal has something to do with identification of the experimental participant with the perspective of the main character in the dilemmatic situation: it is when one feels more directly involved in the conflict, because of the action to be carried out or the consequences the action will have for oneself, that one feels more aroused, even without having to make a judgment. However, this prima facie interpretation is too simplistic, for three reasons.

In the first place, it is clear that Personal Force dilemmas highlight the personal involvement in physically producing the harm. Second, Self-Beneficial dilemmas give rise to higher arousal ratings only when the harm produced is Accidental rather than Instrumental. The latter case is one of self-interest: we experience less conflict when what is to be done is for our own benefit; yet it becomes difficult when a benefit cannot be produced without collateral damage. Third, whereas Self-Beneficial dilemmas take longer to be rated (than Other-Beneficial ones), Personal Force dilemmas are rated faster than Impersonal ones. Jointly, these results suggest that arousal ratings can have several etiologies, and therefore cannot be interpreted simply as an indication of the degree of imaginary involvement with the situation, or as a measure of experienced conflict. Both of these causes need to be considered.

Dilemma validation study—moral judgment experiment

To validate this moral dilemma set, a moral judgment task was set up to confirm the 4-factor structure in the dilemmas; i.e., the four conceptual factors Personal Force, Benefit Recipient, Evitability , and Intentionality .

Furthermore, to explore how the Intentionality factor is understood by participants, two versions of the dilemma set were prepared: one version remained as has been described so far, while in the other the question eliciting the moral judgment included an "accidental harm specification" in the Accidental harm dilemmas. For instance, in the dilemma Burning Building, the question is Do you put out the fire by activating the emergency system, which will leave the injured without air, so you and the five other people can escape? The clause which will leave the injured without air is the accidental harm specification. It makes the consequences of the proposed action clear to the reader. The analysis of this variable is included here, but in future studies researchers can choose to leave the accidental harm specification out of the question.

Additional analyses include (i) the analysis by Greene et al. (2001, 2004) that gave rise to the Dual Process Hypothesis of Moral Judgment (DPHMJ), (ii) an additional analysis of the Intentionality factor, and (iii) an analysis of how inter-individual differences influence moral judgment.

Forty-three undergraduate psychology and educational science students participated in this study in exchange for a course credit in one of their degree subjects (30 females and 13 males; age range = 18–54 years; m = 20.65, SD = 5.52). None of them had seen the dilemmas before. See Table 5 for participant characteristics, including self-report measures of (i) the IRI (Davis, 1983), (ii) the Questionnaire of Emotional Empathy (Mehrabian and Epstein, 1972), (iii) the Questionnaire of Emotional Sensitivity (EIM) (Bachorowski and Braaten, 1994), (iv) the TAS (Taylor et al., 1985), (v) the personality questionnaire Big Five (McCrae and Costa, 1999), and (vi) the Thinking Style Questionnaire, Need For Cognition Scale (NFC) (Cacioppo et al., 1984). All participants were native Spanish speakers.

Table 5. Participant characteristics.

| Questionnaire | M | SD |
| EIM (Bachorowski and Braaten, 1994) | 165 | 18.53 |
| Emotional empathy (Mehrabian and Epstein, 1972) | 48.58 | 23.41 |
| IRI (Davis, 1983) | 54.60 | 6.99 |
| TAS (Taylor et al., 1985) | 16.58 | 12.88 |
| Big Five (McCrae and Costa, 1999), dimension 1 | 12.30 | 11.65 |
| Big Five, dimension 2 | 21.58 | 9.70 |
| Big Five, dimension 3 | 15.81 | 10.76 |
| Big Five, dimension 4 | 13.72 | 9.84 |
| Big Five, dimension 5 | 22.58 | 12.63 |
| NFC (Cacioppo et al., 1984) | 17.44 | 18.84 |

Forty-six standard moral dilemmas and four practice dilemmas were presented in random order with the stimulus presentation program DirectRT (www.empirisoft.com), v. 2006.2.0.28. The experiment was set up to run on six PCs (Windows XP SP3; Intel Pentium Dual Core E5400, 2.70 GHz, 4 GB RAM), and stimuli were displayed on 19″ screens (resolution: 1440 × 900 px; color: 32 bits; refresh rate: 60 Hz).

The procedure was as in the previous experiment, described in the section Arousal and Valence Norming Experiment. Additionally, after the second screen, the first two screens disappeared and the question appeared. The question eliciting the moral judgment was "Do you [action verb] so that…?" A 7-point Likert scale was displayed below the question with the labels "No, I don't do it" under the number "1" and "Yes, I do it" under the number "7." Half of the participants (22) saw the question "Do you [action verb] so that…," while the other half (21) saw a question that furthermore involved the accidental harm specification in the case of the Accidental harm dilemmas, as in "Do you [action verb] which will [mechanism that will lead to the death] so that…" (Type of Question). The ratings were made by key press, using the number keys (top row) of the keyboard. Four practice dilemmas were added at the beginning of the task; data from these trials were discarded before data analysis. The study was approved by the University's Ethics Committee (COBE280213_1388).

A factorial RM 2 × 2 × 2 × 2 ANOVA was computed with the within-subject factors Personal Force (PMD vs. IMD), Benefit Recipient (Self-Beneficial vs. Other-Beneficial), Evitability (Avoidable vs. Inevitable harm), and Intentionality (Accidental vs. Instrumental harm). Question Type (with vs. without the accidental harm specification) was the between-subject factor. As effect sizes we report Pearson's r, where 0.01 is considered a small effect size, 0.3 a medium effect, and 0.5 a large effect (Cohen, 1988).

Subjective ratings: moral judgment

There was no significant main effect of the between-group factor Type of Question (with or without the accidental harm specification) [F(1, 41) = 0.164, p = 0.688], and there were no significant interactions between the between-subjects factor Type of Question and the four within-subject factors: Personal Force * Question Type [F(1, 41) = 0.09; p = 0.766; ns]; Benefit Recipient * Question Type [F(1, 41) = 0.296; p = 0.589; ns]; Evitability * Question Type [F(1, 41) = 0.010; p = 0.921; ns]; Intentionality * Question Type [F(1, 41) = 0.013; p = 0.911; ns]. This means that the two question formats (with and without the accidental harm specification) are equivalent and do not affect the moral judgment a person makes: the accidentality of the harm is understood from the narrative without the need to state it explicitly. Thus, data were aggregated for subsequent analyses.

There were significant main effects of all four Within-Subject factors: Personal Force [ F (1, 41) = 54.97; p < 0.001; r = 0.75]; Benefit Recipient [ F (1, 41) = 4.347; p = 0.043; r = 0.31]; Evitability [ F (1, 41) = 69.984; p < 0.001; r = 0.79]; and Intentionality [ F (1, 41) = 12.971; p = 0.001; r = 0.49]. Participants were less likely to commit harm in PMD ( m = 4.069; SD = 0.124) than in IMD ( m = 4.717; SD = 0.113) and they were more likely to commit a moral transgression to save themselves ( m = 4.508; SD = 0.103), than to save others ( m = 4.278; SD = 0.111). When the suggested harm was Inevitable , people were more likely to commit it ( m = 4.633; SD = 0.124) than when harm was Avoidable ( m = 4.152; SD = 0.103). Finally, when the death of the victim was Accidental , participants were more likely to commit the moral transgression ( m = 4.549; SD = 0.125) than when it was Instrumental ( m = 4.236; SD = 0.112). See Figures S7A–D .

Five of the six possible two-way interactions between the four factors were significant. See Table 6 for a summary of the means and interaction coefficients. Table 7 shows the t-tests used to break down the interactions. Figure S8 summarizes the interactions graphically. If correcting for multiple comparisons using the Bonferroni method, this would mean accepting a new significance level of α = 0.05/4 → α* = 0.0125 for breaking down each interaction. This should be taken into account when considering the result of the t-test in Table 7D (Self-Beneficial Accidental vs. Instrumental harm; p = 0.022).

Table 6. Summary table of the interactions (dependent variable: moral judgment, Likert scale rating; range: 1–7).

| Interaction | Condition | M | SE | F | p | r |
| Personal Force * Beneficiency | Personal, Self | 4.076 | 0.142 | 18.248 | <0.001 | 0.55 |
| | Personal, Other | 4.061 | 0.129 | | | |
| | Impersonal, Self | 4.939 | 0.139 | | | |
| | Impersonal, Other | 4.494 | 0.119 | | | |
| Personal Force * Evitability | Personal, Avoidable | 3.890 | 0.110 | 8.864 | 0.008 | 0.42 |
| | Personal, Inevitable | 4.248 | 0.147 | | | |
| | Impersonal, Avoidable | 4.415 | 0.112 | | | |
| | Impersonal, Inevitable | 5.018 | 0.123 | | | |
| Personal Force * Intention | Personal, Accidental | 4.326 | 0.141 | 14.582 | <0.001 | 0.51 |
| | Personal, Instrumental | 3.812 | 0.131 | | | |
| | Impersonal, Accidental | 4.773 | 0.129 | | | |
| | Impersonal, Instrumental | 4.660 | 0.114 | | | |
| Beneficiency * Evitability | Self, Avoidable | 4.222 | 0.135 | 1.663 | 0.204 | ns |
| | Self, Inevitable | 4.793 | 0.146 | | | |
| | Other, Avoidable | 4.082 | 0.110 | | | |
| | Other, Inevitable | 4.474 | 0.132 | | | |
| Beneficiency * Intention | Self, Accidental | 4.416 | 0.137 | 40.202 | <0.001 | 0.70 |
| | Self, Instrumental | 4.599 | 0.140 | | | |
| | Other, Accidental | 4.683 | 0.146 | | | |
| | Other, Instrumental | 3.872 | 0.118 | | | |
| Evitability * Intention | Avoidable, Accidental | 4.410 | 0.112 | 12.990 | <0.001 | 0.49 |
| | Avoidable, Instrumental | 3.894 | 0.112 | | | |
| | Inevitable, Accidental | 4.689 | 0.151 | | | |
| | Inevitable, Instrumental | 4.577 | 0.121 | | | |

Table 7. Follow-up t-tests to break down the interactions in the moral judgment task.

| Interaction broken down | Condition | M | SE | t(42) | p | r |
| (A) Benefit Recipient within Personal Force | Personal, Self-beneficient | 4.076 | 0.142 | 0.134 | 0.894 | ns |
| | Personal, Other-beneficient | 4.061 | 0.129 | | | |
| | Impersonal, Self-beneficient | 4.939 | 0.139 | 3.535 | 0.001 | 0.48 |
| | Impersonal, Other-beneficient | 4.494 | 0.119 | | | |
| (B) Evitability within Personal Force | Personal, Avoidable | 3.890 | 0.110 | −4.742 | <0.001 | 0.59 |
| | Personal, Inevitable | 4.248 | 0.147 | | | |
| | Impersonal, Avoidable | 4.415 | 0.112 | −9.159 | <0.001 | 0.82 |
| | Impersonal, Inevitable | 5.018 | 0.123 | | | |
| (C) Intentionality within Personal Force | Personal, Accidental | 4.326 | 0.141 | 4.681 | <0.001 | 0.59 |
| | Personal, Instrumental | 3.812 | 0.131 | | | |
| | Impersonal, Accidental | 4.773 | 0.129 | 1.265 | 0.213 | ns |
| | Impersonal, Instrumental | 4.660 | 0.114 | | | |
| (D) Intentionality within Benefit Recipient | Self, Accidental | 4.416 | 0.137 | −2.397 | 0.021 | 0.35 |
| | Self, Instrumental | 4.610 | 0.140 | | | |
| | Other, Accidental | 4.683 | 0.146 | 5.605 | <0.001 | 0.65 |
| | Other, Instrumental | 3.872 | 0.118 | | | |
| (E) Intentionality within Evitability | Avoidable, Accidental | 4.411 | 0.112 | 5.853 | <0.001 | 0.67 |
| | Avoidable, Instrumental | 3.894 | 0.112 | | | |
| | Inevitable, Accidental | 4.689 | 0.151 | 0.977 | 0.334 | ns |
| | Inevitable, Instrumental | 4.578 | 0.121 | | | |

First, the Benefit Recipient variable had a differential effect on the moral judgment for PMD and IMD (Figure S8A). Participants were more likely to commit harm if the harm was carried out to save themselves (Self-Beneficial, as compared to Other-Beneficial), but only if the dilemma was Impersonal. If harm was Personal, participants were equally likely to commit the harm whether it was Self- or Other-Beneficial.

Second, the Evitability variable also had a differential effect on the moral judgment for PMD and IMD (Figure S8B). Participants made more deontological responses for PMD in general; however, they were more likely to commit harm when the death of the innocent person was Inevitable (as compared to Avoidable).

Third, the Intentionality variable also affected how participants judged PMD and IMD (Figure S8C). Again, participants were overall more likely to make a deontological moral judgment in PMD than in IMD; however, they were less likely to commit the moral transgression when harm was Instrumental (as compared to Accidental), specifically in the case of PMD.

Fourth, the Intentionality variable affected how participants judged Self- and Other-Beneficial dilemmas (Figure S8D). If the proposed harm was Instrumental, participants were less likely to commit it when the dilemma involved harm to benefit others (as compared to harm to benefit the participant herself), while for Accidental harm the pattern reversed: participants were less likely to commit harm if it was accidental and served to save themselves than if it served to save others.

Fifth, Intentionality also affected how participants judged Avoidable and Inevitable dilemmas (the Evitability factor; Figure S8E). When harm was Avoidable (as compared to Inevitable), participants were less likely to commit it when the harm described in the dilemma was Instrumental than when it was Accidental. However, participants were equally likely to commit both Accidental and Instrumental harm when the harm described in the dilemma was Inevitable.

That there was no interaction between Benefit Recipient and Evitability means that participants were equally likely to commit harm, irrespective of whether death was Avoidable or Inevitable for Self- or Other-Beneficial dilemmas.

In the RT data, there was one significant main effect [Intentionality: F(1, 41) = 13.252; p = 0.001; r = 0.49] and one significant interaction [Intentionality * Question Type: F(1, 41) = 13.629; p = 0.001; r = 0.50]. Participants in general needed longer to make moral judgments about actions involving Accidental harm (m = 5803.223, SD = 424.081) than about actions involving Instrumental harm (m = 5185.185, SD = 394.389). The interaction indicates that Intentionality had a differential effect on RT depending on the Question Type: the group that had the question with the accidental harm specification needed significantly longer to respond to Accidental harm (m = 6356.081, SD = 578.441) than the group without such specification (m = 5250.365, SD = 620.309). No such difference appeared between the groups for Instrumental harm (m = 5112.582, SD = 537.941 and m = 5259.065, SD = 576.878, respectively).
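In open-source tools, this kind of mixed design (one within-subject and one between-subjects factor) can be sketched as below; pingouin's mixed_anova is our substitute for the original SPSS analysis, and the data are synthetic stand-ins shaped like the reported pattern.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# One mean RT (ms) per participant and Intentionality level; Question Type
# (with vs. without the accidental harm specification) varies between subjects.
rng = np.random.default_rng(3)
rows = []
for p in range(43):
    group = "with" if p < 22 else "without"
    for intent in ("Accidental", "Instrumental"):
        slow = group == "with" and intent == "Accidental"
        rows.append(dict(participant=p, question_type=group,
                         intentionality=intent,
                         rt=rng.normal(6350 if slow else 5200, 500)))
rt_df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=rt_df, dv="rt", within="intentionality",
                     subject="participant", between="question_type")
print(aov[["Source", "F", "p-unc"]])
```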

Because the only significant main effect and interaction in the RT data involved the between-subject variable Type of Question, this effect was explored more closely. The RM ANOVA was therefore computed again, first with the participants in the With condition and then with the participants in the Without condition. The factor Intentionality was again significant in the With condition [F(1, 22) = 21.208; p < 0.001; r = 0.70], but not in the Without condition [F(1, 19) = 0.002; p = 0.964]. Hence, the effect was merely driven by the higher number of words in the questions in the With condition.

To ensure that RT was not conditioned by the word count of the questions in general, a regression was computed with word count in the question as a predictor and RT as the dependent variable. No significant relationship was found (B = −27.695, SE B = 30.711, β = −0.234, p = 0.382). Hence, the word count of the questions did not influence participants' RT except in this particular case of the Intentionality factor. Apart from this problematic effect, there were no other significant main effects or interactions.

As much research in the field of moral judgment with moral dilemmas suggests a relation between the type of moral judgment (deontological vs. utilitarian) and RT, this matter was explored further. First, a curvilinear regression was computed with moral judgment as predictor and RT as dependent variable. The resulting model was significant [F(1, 41) = 11.015; p < 0.001; r = 0.46], and moral judgment accounted for 33.9% of the variance in the RT. Both for very deontological (Likert ratings toward 1) and very utilitarian moral judgments (Likert ratings toward 7), participants were faster than when making a more intermediate moral judgment (Likert ratings around 4). See the illustration of the relation between moral judgment and RT in Figure 4.
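A curvilinear (inverted-U) relation of this kind can be modeled by adding a quadratic term to an ordinary regression. The sketch below does this on synthetic dilemma-level means; the shape, not the numbers, mirrors the reported result.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic mean judgment (1-7) and RT per dilemma: intermediate judgments
# (around 4) are slowest, extreme judgments (1 or 7) are fastest.
rng = np.random.default_rng(4)
judgment = rng.uniform(1, 7, 46)
rt = 6200 - 150 * (judgment - 4) ** 2 + rng.normal(0, 300, 46)

# Quadratic (curvilinear) regression: RT ~ judgment + judgment^2.
X = sm.add_constant(np.column_stack([judgment, judgment ** 2]))
fit = sm.OLS(rt, X).fit()
print(f"R^2 = {fit.rsquared:.3f}; quadratic term p = {fit.pvalues[2]:.4f}")
```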

Figure 4 (fpsyg-05-00607-g0004.jpg).

Curvilinear relationship between moral judgment and RT. Color coding: Personal Moral Dilemmas (blue/red, circles); Impersonal Moral Dilemmas (green/yellow, squares). Mean Likert scale responses: 1 = No, I don't do it, i.e., deontological moral judgment; 7 = Yes, I do it, i.e., utilitarian moral judgment. RT is in milliseconds (ms). PMD, Personal Moral Dilemmas; IMD, Impersonal Moral Dilemmas.

To assess RT as a function of the response given (deontological vs. utilitarian in absolute terms, not on a scale from 1 to 7 as presented above), as in Greene et al. (2001, 2004), the moral judgment values of the 7-point Likert scale were dichotomized. Judgments with values between 1 and 3 were considered "deontological," and values between 5 and 7 were considered "utilitarian." Values of 4 were discarded. Mean RT was calculated as a function of this re-coding. Subsequently, the 2 × 2 ANOVA (Response Type and Personal Force) from Greene et al. (2001, 2004) was carried out. No significant main effects were found [Response Type: F(1, 42) = 0.402; p = 0.529; Personal Force: F(1, 42) = 0.197; p = 0.659].
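The re-coding step itself is trivial but worth stating precisely; below is a minimal sketch, our own illustration of the rule described above:

```python
import pandas as pd

def dichotomize(rating: int):
    """Likert 1-3 -> deontological; 5-7 -> utilitarian; 4 -> discarded."""
    if rating <= 3:
        return "deontological"
    if rating >= 5:
        return "utilitarian"
    return None  # midpoint responses are dropped before computing mean RT

ratings = pd.Series([1, 4, 6, 3, 5, 7, 4, 2])  # example judgments
print(ratings.map(dichotomize).dropna().value_counts())
```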

In previous analyses, the factor Intentionality was shown to be of key relevance in moral judgment. Therefore, another 2 × 2 ANOVA with the variables Response Type and Intentionality was run. There was a significant main effect of Intentionality (p = 0.015) and a significant interaction of Response Type * Intentionality (p = 0.018); see Table 8 and Figure S9. Breaking down the interaction showed that participants took longer to make a deontological moral judgment when the proposed harm was accidental than when it was instrumental (p = 0.003). No such difference was found for utilitarian moral judgments (p = 0.681); see Figure S9.

Table 8. Main effects and interactions of the RM ANOVA Response Type * Intentionality.

| Effect | Condition | M | SE | F | p | r |
| Response Type | Deontological response | 5680.779 | 427.726 | 0.005 | 0.946 | ns |
| | Utilitarian response | 5661.827 | 441.793 | | | |
| Intentionality | Accidental harm | 6009.467 | 449.472 | 6.499 | 0.015 | 0.37 |
| | Instrumental harm | 5333.139 | 415.105 | | | |
| Response Type * Intentionality | | | | 6.010 | 0.018 | 0.65 |

Follow-up t-tests:

| Condition | M | SE | t(42) | p | r |
| Deontological: Accidental harm | 6434.148 | 571.955 | 3.313 | 0.003 | 0.46 |
| Deontological: Instrumental harm | 4927.411 | 393.270 | | | |
| Utilitarian: Accidental harm | 5584.787 | 424.480 | −0.414 | 0.681 | ns |
| Utilitarian: Instrumental harm | 5738.867 | 528.586 | | | |

Mean Likert scale responses: 1 = No, I don't do it, i.e., deontological moral judgment; 7 = Yes, I do it, i.e., utilitarian moral judgment. RT is in milliseconds (ms).

Inter-individual differences: gender

There was a significant interaction between the factor Benefit Recipient and participants' gender [F(1, 61) = 10.079; p = 0.003; r = 0.37]: male participants were more willing to commit harm in the case of Self-Beneficial dilemmas (m = 5.137, SD = 0.215) than female participants (m = 4.235, SD = 0.142). In the Other-Beneficial dilemma category, no such gender differences were found (males: m = 4.439, SD = 0.203; females: m = 4.208, SD = 0.133). This effect is reported for the sake of completeness of the scientific record. However, first, we did not specifically design the study to test this effect, so we did not have equal numbers of male and female participants. Second, we do not aim to make any assumptions about gender differences based on such preliminary data. There is no sound scientific evidence that supports why there should be gender differences in moral judgment, of what kind these may be, or what their evolutionary basis should be. This is a sensitive issue that deserves thorough investigation that goes far beyond the scope of this paper. Therefore, we assume that there are no genuine gender differences in moral judgment between participants of one same culture and have chosen to analyze the data of female and male participants together.

Two other studies have reported an effect of gender in their data (Fumagalli et al., 2009, 2010). However, the dilemma set used in these studies was the one originally used by Greene et al. (2001, 2004), which has important methodological shortcomings (as pointed out in this paper; for a review see Christensen and Gomila, 2012), which is why such claims about gender differences should ideally not be made. For such claims to rest on solid ground, a study should be designed that controls for empathy and other personality factors between genders and, of course, has an equal sample size for each gender.

Inter-individual differences: thinking style, personality traits, emotional sensitivity

To test the influence of inter-individual differences on moral judgment, a regression was computed with all of the scores of the questionnaires assessing inter-individual differences in the model, predicting the mean moral judgment of the participants. As shown in Table S2, the resulting regression model was significant [F(10) = 2.954; p = 0.011; r = 0.47] and explained 50.5% of the variance in the moral judgments. However, only three of the 10 predictor variables were significant: Emotional Sensitivity (p = 0.018) and two of the Big Five factors, Agreeableness (p = 0.046) and Conscientiousness (p = 0.001). The higher the scores on the EIM, the more deontological the moral judgments (participants with higher EIM scores were less likely to commit the proposed harm). For the two Big Five factors, the pattern was reversed: the higher the scores, the more utilitarian the judgments (participants with higher scores on these two dimensions were more likely to commit the proposed harm). However, considering the beta coefficients, these effects, although present, were rather small.
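A sketch of this kind of individual-differences regression, with synthetic scores for three of the ten predictors named above (variable names and values are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 43
scores = pd.DataFrame({
    "eim": rng.normal(165, 18.5, n),            # emotional sensitivity
    "agreeableness": rng.normal(13.7, 9.8, n),
    "conscientiousness": rng.normal(22.6, 12.6, n),
})
# Build a synthetic outcome with the reported directions: higher EIM -> more
# deontological (lower rating); higher Big Five scores -> more utilitarian.
scores["moral_judgment"] = (4.4
                            - 0.01 * (scores["eim"] - 165)
                            + 0.01 * scores["agreeableness"]
                            + 0.01 * scores["conscientiousness"]
                            + rng.normal(0, 0.5, n))

model = smf.ols("moral_judgment ~ eim + agreeableness + conscientiousness",
                data=scores).fit()
print(model.params.round(3))
print(f"R^2 = {model.rsquared:.3f}")
```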

Arousal and moral judgment

In order to determine whether the levels of arousal of the dilemmas, rated by one group of participants, would be related to the moral judgments of a different group of participants, the dataset was transposed and the dilemmas were treated as cases. A simple regression was conducted with the arousal ratings as predictor variable and the moral judgments as dependent variable. The resulting model was significant [F(1, 44) = 22.613; p < 0.001; r = 0.58], showing that the level of arousal of a dilemma predicted 33.9% of the variance in the moral judgment variable. Figure 5 shows that the more arousing a dilemma was, the more likely participants were to refrain from action (i.e., not to commit the moral transgression). See Table S3 for the model parameters.
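This across-samples analysis reduces to a simple regression over 46 dilemma-level means. A sketch with synthetic means (the slope sign mirrors the reported result: higher arousal goes with lower, i.e., more deontological, judgments):

```python
import numpy as np
from scipy.stats import linregress

# Dilemmas as cases: mean arousal from the norming sample predicting mean
# moral judgment from the validation sample, for 46 dilemmas.
rng = np.random.default_rng(6)
mean_arousal = rng.uniform(5.5, 6.3, 46)
mean_judgment = 9.0 - 0.8 * mean_arousal + rng.normal(0, 0.4, 46)

fit = linregress(mean_arousal, mean_judgment)
print(f"r = {fit.rvalue:.2f}, R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.5f}")
```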

Figure 5 (fpsyg-05-00607-g0005.jpg).

Relationship between level of arousal of a dilemma and the moral judgment made to that dilemma. Color/shape coding: Personal Moral Dilemmas (Blue/Red, circles); Impersonal Moral Dilemmas (Green/Yellow, squares) . Mean Likert scale responses: 1 = No, I don't do it , i.e., deontological moral judgment; 7 = Yes, I do it , i.e., utilitarian moral judgment. Mean Arousal scale responses: 1 = Not arousing, calm ; 7 = Very arousing .

Summary: moral judgment experiment

With this fine-tuned set of moral dilemmas it was confirmed that the four factors Personal Force, Benefit Recipient, Evitability, and Intentionality determined participants' moral judgment:

First, participants tended to exhibit a deontological response style (i.e., they refrained from committing harm) when harm was described as Personal (as compared to Impersonal), Other-Beneficial (as compared to Self-Beneficial), Avoidable (as compared to Inevitable), and Instrumental (as compared to Accidental). In other words, when harm was abstract and spatially and intentionally separated from the agent, participants were more likely to commit the moral transgression than if the harm was described as up-close and gave an impression of "bloody hands."

Second, participants more readily sacrificed the life of another person if their own life was at stake than if the moral transgression would merely save other people. Besides, if harm to the victim would have happened anyway, irrespective of whether or not the moral transgression was carried out by the agent (as in "either one person of the five is killed or they all die"), participants were more likely to commit the moral transgression.

Third, harm was more readily committed by participants if it happened as a non-desired side-effect of the agent's action than if the proposed harm would use the death of the victim as a means to the salvation of the others.

As regards the interactions between the factors:

First, the interaction between Personal Force and Benefit Recipient indicated that participants were equally likely to commit a moral transgression when the proposed harm involved "bloody hands," whether the harm would result in the salvation of oneself or of others. However, when the proposed harmful action was abstract and distant, participants differentiated in their moral judgment depending on whether the salvation regarded themselves or others: abstract harm commission made a utilitarian response more likely when it was executed to save oneself.

Second, the interaction between Personal Force and Intentionality indicated that in IMD, harm was consented to equally, both when it was accidental and when it was instrumental. However, in PMD, harm used as a means (instrumentally) made participants' moral judgments more deontological than harm that happened accidentally.

Third, the interaction between Benefit Recipient and Intentionality indicated that for Self-Beneficial dilemmas, participants were less likely to commit the moral transgression when harm happened as a non-desired side-effect of the proposed action than when it was instrumental. Conversely, when the harm would benefit others, the pattern was reversed: more deontological moral judgments when harm was instrumental than when it was accidental.

Fourth, the interaction between Personal Force and Evitability indicates that for both IMD and PMD, avoidable harm resulted in more deontological moral judgments than did inevitable harm.

Fifth, the interaction between Evitability and Intentionality indicated that when harm to the victim could have been avoided, harm as a side-effect was more readily consented to than the use of harm as a means. For inevitable harm, no such difference between accidental and instrumental harm commission was found.

Furthermore, we found that the more arousing a dilemma was, the more likely it was that participants would choose a deontological response style.

Finally, the main effect of Type of Response found by Greene et al. (2001, 2004) was not replicated, indicating that with this optimized dilemma set deontological responding is not faster than utilitarian responding. Neither was there an interaction between Type of Response * Personal Force. However, an additional ANOVA with the factors Type of Response and Intentionality showed a significant main effect of Intentionality and, more importantly, an interaction between Type of Response and Intentionality. This indicates that for dilemmas that people judged deontologically, it took them particularly long to make that judgment when the proposed harm would result in accidental harm to the victim.

Discussion of the moral judgment experiment

Summing up, the results here show that we are more prone to behave for our own benefit if the harm will take place in any case and producing the harm is not very demanding. Conversely, we experience a conflict (indexed by a longer response) when we are forced to do the harm ourselves, or to do harm as collateral damage to benefit others. Moral principles can be broken, but only in well-justified situations (when the consequences are "big enough"). It is not that we are deontological or utilitarian thinkers; we are neither: moral judgments are better viewed from the point of view of casuistics, the particularist approach to morals that takes the details of each case into account. Any small detail may matter to our moral judgment. The results show, in any case, that rules are not applied algorithmically or in a strict order (Hauser, 2006).

Overall discussion

Apart from providing normative values of valence, arousal, moral judgment, and RT for 46 moral dilemmas (see note 5), the results of this dilemma validation study challenge the DPHMJ proposed by Greene et al. (2001, 2004). According to this hypothesis, deontological moral judgments (refraining from harm) are fast and emotion-based, while utilitarian moral judgments (deciding to commit the harm) are slow as a result of deliberate reasoning processes. The assumptions of the DPHMJ were based on a reaction time finding where an interaction between the Type of Response given (deontological vs. utilitarian) and Personal Force (Personal vs. Impersonal) showed that when harm was consented to in a Personal Moral Dilemma (utilitarian response), RT was significantly longer than when harm was not consented to (deontological response). No such difference in response time was found for Impersonal Moral Dilemmas. However, in our study, while we also found that higher arousal correlates with deontological judgment (in line with Moretto et al., 2010), we failed to find the relationship with RT: both deontological and utilitarian decisions can be made equally fast, both to personal and to impersonal dilemmas, depending on the other factors involved. To put it another way, a fast judgment takes place either when a deontological reason guides the judgment or when utilitarian considerations clearly dominate. Therefore, while we agree that the dilemmas that take longer are those where the experienced conflict is greater, conflict has a more complex etiology. In particular, judgment takes longer when people are torn between utilitarian considerations of the greater good (saving many) and the suffering produced in others as an accidental side-effect. In either case, an increased RT is likely to have been caused by reasoning processes aimed at exploring a way to avoid the conflict.

As a matter of fact, the DPHMJ's central result concerning personal vs. impersonal dilemmas has already been challenged. McGuire et al. (2009) reanalyzed the data sets from Greene and colleagues and removed what they called "poorly endorsed items" (those dilemmas not designed carefully enough). After this procedure, the key effect disappeared from the data (McGuire et al., 2009). Similarly, Ugazio et al. (2012), for their part, showed that both deontological and utilitarian responding can be triggered by different emotions with different motivational tendencies. In their study, disgust induction (an emotion that triggers withdrawal tendencies) resulted in more deontological moral judgments (i.e., refraining from harm), while anger induction (an emotion that triggers approach tendencies) resulted in more utilitarian moral judgments (i.e., committing harm). This finding does not fit the Dual Process account either, because the study shows how different emotional phenomena trigger both deontological and utilitarian moral judgment tendencies.

Therefore, we propose that a potentially more suitable account of moral judgment is one that gives a different role to emotions in moral judgment, specifically to the importance of the arousal response triggered in the individual by the dilemmatic situation, along the lines suggested by the Affect Infusion Model (AIM) of Forgas (1995). This model posits that (i) the arousal properties of the situation, (ii) the motivational features of the emotions triggered by it, and (iii) the associated cognitive appraisal mechanisms all play a crucial role in every judgment. The model also posits that affect infusion is a matter of degree: any judgment depends on the individual's previous knowledge about the event or situation he or she is about to judge; this implies that it depends on deliberate reasoning as well as on the magnitude of the emotional arousal triggered by the event or situation.

See the Supplementary Material for a summary of limitations of the method.

In this work, we have followed Hauser et al.'s view of moral dilemmas: "… the use of artificial moral dilemmas to explore our moral psychology is like the use of theoretical or statistical models with different parameters; parameters can be added or subtracted in order to determine which parameters contribute most significantly to the output" (Hauser et al., 2007). We have tried to control for the variables known to influence moral judgment, in order to find out which ones matter most, and how they interact.

One main result of this work is that, when dilemmas are validated, Greene's main effect of personal dilemmas partly disappears in favor of a more complex pattern, which casts doubt on the view that some moral judgments are the result of deliberation while others, the deontological ones, are reached emotionally. While higher arousal is related to deontological judgments, it is not true that deontological judgments are faster than utilitarian ones. Deontological judgments may take longer than utilitarian ones if, after taking time to weigh the options and to look for a way to minimize the transgression, one cannot find a way to avoid violating one's principles.

Research with moral dilemmas holds fascinating possibilities for studying the grounding psychological principles of human moral cognition. Contrary to the criticisms brought up against this methodology, and in line with an increasing number of other researchers, we believe that it is specifically the artificial nature of moral dilemmas that makes this methodology so valuable. In any case, the scenarios described in moral dilemmas are no more artificial than the stories narrated in novels and movies, where life-and-death decisions change the course of supposedly inevitable events. Other abundant channels of information of that kind are the news on TV, radio, in the papers, and on the internet. They inform us of atrocities that happened around the corner from our house while we were sleeping, or of heartbreaking life-threatening situations that some individual in a war-swept country has had to go through… Are moral dilemmas really all that unreal and artificial to us?

Author note

All authors: Human Evolution and Cognition (IFISC-CSIC) and Department of Psychology, University of the Balearic Islands, Carretera de Valldemossa, km. 7.5, Building: Guillem Cifre de Colonya, 07122 Palma, Spain. Nadine K. Gut current affiliation: School of Psychology and Neuroscience, University of St Andrews, St Mary‘s Quad, South Street, St Andrews, KY16 9JP, UK; Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, 161 Cathedral Street, Glasgow, G4 0RE, UK.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The study was funded by the research project FFI2010-20759 (Spanish Government: Ministry of Economy and Competitiveness), and by the Chair of the Three Religions (Government of the Balearic Islands) of the University of the Balearic Islands, Spain. Julia F. Christensen and Albert Flexas were supported by FPU PHD scholarships from the Spanish Ministry of Education, Culture and Sports (AP2009-2889 and AP2008-02284). Nadine K. Gut was supported by a scholarship of the School of Psychology and Neuroscience, University of St Andrews, UK. We want to thank Dr. Camilo José Cela-Conde for help and advice at different stages of this work; and a special thank you goes to Lasse Busck-Nielsen, Françoise Guéry and Trevor Roberts for help in the language editing process.

1 Please note that a study with a preliminary version of the revised set has recently been published (Christensen et al., 2012).

2 For a detailed description of the dilemmas, see also Moore et al. (2008). For clarity, note that these 48 dilemmas are made up of 24 different short stories, each with a personal and an impersonal version.

3 We also considered removing the Bike Week dilemma due to the act of acrobatics it involves, but ultimately left it in. We nevertheless encourage researchers to reconsider this choice.

4 Please note: in this arousal and valence norming procedure, participants did not see the question. This was to avoid confounding the arousal and valence judgments with a moral judgment.

5 The Supplementary Material accompanying this manuscript contains all data points presented in this work.

Supplementary material

The Supplementary Material for this article can be found online at: http://www.frontiersin.org/journal/10.3389/fpsyg.2014.00607/abstract

  • Abarbanell L., Hauser M. D. (2010). Mayan morality: an exploration of permissible harms. Cognition 115, 207–224. doi: 10.1016/j.cognition.2009.12.007
  • Anders G. (1962). Burning Conscience: The Case of the Hiroshima Pilot. New York, NY: Monthly Review Press
  • Bachorowski J. A., Braaten E. B. (1994). Emotional intensity: measurement and theoretical implications. Pers. Individ. Dif. 17, 191–199. doi: 10.1016/0191-8869(94)90025-6
  • Bloomfield P. (2007). Morality and Self-Interest. Oxford: Oxford University Press. doi: 10.1093/acprof:oso/9780195305845.001.0001
  • Borg J. S., Hynes C., Van Horn J., Grafton S., Sinnott-Armstrong W. (2006). Consequences, action, and intention as factors in moral judgments: an fMRI investigation. J. Cogn. Neurosci. 18, 803–817. doi: 10.1162/jocn.2006.18.5.803
  • Cacioppo J. T., Petty R. E., Kao C. F. (1984). The efficient assessment of need for cognition. J. Pers. Assess. 48, 306–307. doi: 10.1207/s15327752jpa4803_13
  • Christensen J. F., Flexas A., de Miguel P., Cela-Conde C. J., Munar E. (2012). Roman Catholic beliefs produce characteristic neural responses to moral dilemmas. Soc. Cogn. Affect. Neurosci. 9, 1–10. doi: 10.1093/scan/nss121
  • Christensen J. F., Gomila A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: a principled review. Neurosci. Biobehav. Rev. 36, 1249–1264. doi: 10.1016/j.neubiorev.2012.02.008
  • Cohen J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd Edn. Hillsdale, NJ: Lawrence Erlbaum Associates
  • Cushman F., Young L., Hauser M. (2006). The role of conscious reasoning and intuition in moral judgment: testing three principles of harm. Psychol. Sci. 17, 1082–1089. doi: 10.1111/j.1467-9280.2006.01834.x
  • Davis M. H. (1983). Measuring individual differences in empathy: evidence for a multidimensional approach. J. Pers. Soc. Psychol. 44, 113–126. doi: 10.1037/0022-3514.44.1.113
  • Feldman Hall O., Mobbs D., Evans D., Hiscox L., Navrady L., Dalgleish T. (2012). What we say and what we do: the relationship between real and hypothetical moral choices. Cognition 123, 434–441. doi: 10.1016/j.cognition.2012.02.001
  • Foot P. (1967). The Problem of Abortion and the Doctrine of the Double Effect. Reprinted in Virtues and Vices and Other Essays in Moral Philosophy (1978). Oxford: Blackwell
  • Forgas J. P. (1995). Mood and judgment: the affect infusion model (AIM). Psychol. Bull. 117, 39–66
  • Fumagalli M., Ferrucci R., Mameli F., Marceglia S., Mrakic-Sposta S., Zago S., et al. (2009). Gender-related differences in moral judgments. Cogn. Process. 11, 219–226. doi: 10.1007/s10339-009-0335-2
  • Fumagalli M., Vergari M., Pasqualetti P., Marceglia S., Mameli F., Ferrucci R., et al. (2010). Brain switches utilitarian behavior: does gender make the difference? PLoS ONE 5:e8865. doi: 10.1371/journal.pone.0008865
  • Greene J. (2008). The secret joke of Kant's soul, in Moral Psychology, Vol. 3, ed Sinnott-Armstrong W. (Cambridge, MA: MIT Press), 35–80
  • Greene J. D., Cushman F. A., Stewart L. E., Lowenberg K., Nystrom L. E., Cohen J. D. (2009). Pushing moral buttons: the interaction between personal force and intention in moral judgment. Cognition 111, 364–371. doi: 10.1016/j.cognition.2009.02.001
  • Greene J. D., Nystrom L. E., Engell A. D., Darley J. M., Cohen J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron 44, 389–400. doi: 10.1016/j.neuron.2004.09.027
  • Greene J. D., Sommerville R. B., Nystrom L. E., Darley J. M., Cohen J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108. doi: 10.1126/science.1062872
  • Hauser M. (ed.). (2006). Moral Minds: How Nature Designed our Universal Sense of Right and Wrong. New York, NY: Ecco/Harper Collins
  • Hauser M., Cushman F., Young L., Jin R. K. X., Mikhail J. (2007). A dissociation between moral judgments and justifications. Mind Lang. 22, 1–21. doi: 10.1111/j.1468-0017.2006.00297.x
  • Huebner B., Hauser M. D., Pettit P. (2011). How the source, inevitability and means of bringing about harm interact in folk moral judgments. Mind Lang. 26, 210–233. doi: 10.1111/j.1468-0017.2011.01416.x
  • Koenigs M., Young L., Adolphs R., Tranel D., Cushman F., Hauser M., et al. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature 446, 908–911. doi: 10.1038/nature05631
  • McCrae R. R., Costa P. T. Jr. (1999). A five-factor theory of personality, in Handbook of Personality: Theory and Research, 2nd Edn., ed Pervin L. A. (New York, NY: Guilford Press), 139–153
  • McGuire J., Langdon R., Coltheart M., Mackenzie C. (2009). A reanalysis of the personal/impersonal distinction in moral psychology research. J. Exp. Soc. Psychol. 45, 577–580. doi: 10.1016/j.jesp.2009.01.002
  • Mehrabian A., Epstein N. (1972). A measure of emotional empathy. J. Pers. 40, 525–543
  • Mikhail J. (2007). Universal moral grammar: theory, evidence and the future. Trends Cogn. Sci. 11, 143–152. doi: 10.1016/j.tics.2006.12.007
  • Moore A. B., Clark B. A., Kane M. J. (2008). Who shalt not kill? Individual differences in working memory capacity, executive control, and moral judgment. Psychol. Sci. 19, 549–557. doi: 10.1111/j.1467-9280.2008.02122.x
  • Moore A. B., Lee N. Y. L., Clark B. A. M., Conway A. R. A. (2011a). In defense of the personal/impersonal distinction in moral psychology research: cross-cultural validation of the dual process model of moral judgment. Judgm. Decis. Mak. 6, 186–195
  • Moore A. B., Stevens J., Conway A. R. A. (2011b). Individual differences in sensitivity to reward and punishment predict moral judgment. Pers. Individ. Dif. 50, 621–625. doi: 10.1016/j.paid.2010.12.006
  • Moretto G., Làdavas E., Mattioli F., di Pellegrino G. (2010). A psychophysiological investigation of moral judgment after ventromedial prefrontal damage. J. Cogn. Neurosci. 22, 1888–1899. doi: 10.1162/jocn.2009.21367
  • Navarrete C. D., McDonald M. M., Mott M. L., Asher B. (2012). Morality: emotion and action in a simulated three-dimensional "trolley problem". Emotion 12, 364–370. doi: 10.1037/a0025561
  • O'Hara R. E., Sinnott-Armstrong W., Sinnott-Armstrong N. A. (2010). Wording effects in moral judgments. Judgm. Decis. Mak. 5, 547–554
  • Petrinovich L., O'Neill P. (1996). Influence of wording and framing effects on moral intuitions. Ethol. Sociobiol. 17, 145–171. doi: 10.1016/0162-3095(96)00041-6
  • Petrinovich L., O'Neill P., Jorgensen M. (1993). An empirical study of moral intuitions: toward an evolutionary ethics. J. Pers. Soc. Psychol. 64, 467–478. doi: 10.1037/0022-3514.64.3.467
  • Royzman E., Baron J. (2002). The preference for indirect harm. Soc. Justice Res. 15, 165–184. doi: 10.1023/A:1019923923537
  • Tassy S., Oullier O., Mancini J., Wicker B. (2013). Discrepancies between judgment and choice of action in moral dilemmas. Front. Psychol. 4:250. doi: 10.3389/fpsyg.2013.00250
  • Taylor G. J., Ryan D., Bagby R. M. (1985). Toward the development of a new self-report alexithymia scale. Psychother. Psychosom. 44, 191–199. doi: 10.1159/000287912
  • Thomson J. J. (1976). Killing, letting die, and the trolley problem. Monist 59, 204–217. doi: 10.5840/monist197659224
  • Tversky A., Kahneman D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683
  • Ugazio G., Lamm C., Singer T. (2012). The role of emotions for moral judgments depends on the type of emotion and moral scenario. Emotion 12, 579–590. doi: 10.1037/a0024611
  • Waldmann M. R., Dieterich J. H. (2007). Throwing a bomb on a person versus throwing a person on a bomb: intervention myopia in moral intuitions. Psychol. Sci. 18, 247–253. doi: 10.1111/j.1467-9280.2007.01884.x
  • Zimbardo P. (2007). The Lucifer Effect. New York, NY: Random House Trade Paperbacks



Open access | Published: 06 April 2023

ChatGPT’s inconsistent moral advice influences users’ judgment

  • Sebastian Krügel 1 ,
  • Andreas Ostermaier 2 &
  • Matthias Uhl 1  

Scientific Reports volume 13, Article number: 4569 (2023)

Subject: Computer science

ChatGPT is not only fun to chat with; it also searches for information, answers questions, and gives advice. With consistent moral advice, it can improve the moral judgment and decisions of users. Unfortunately, ChatGPT's advice is not consistent. Nonetheless, we find in an experiment that it influences users' moral judgment, even when they know they are advised by a chatting bot, and that they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users' moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users' digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.


Introduction

ChatGPT, OpenAI's cutting-edge AI-powered chatbot 1, captivates users as a brilliant and engaging conversationalist that solves exams, writes poetry, and creates computer code. The chatbot also searches for information, answers questions, and gives advice 2,3. Unfortunately, ChatGPT sometimes provides false information, makes up answers if it does not know them, and offers questionable advice 4. Nonetheless, users may rely on its advice for consequential decisions, which raises important ethical questions 5,6. Is ChatGPT a reliable source of moral advice? Whether it is or not, does its advice influence users' moral judgment? And are users aware of how much ChatGPT influences them?

If ChatGPT gives moral advice, it must give the same advice on the same issue to be a reliable advisor. Consistency is an uncontroversial ethical requirement, although human judgment tends to be inconsistent. Indeed, human judgment is often based on intuition rather than reason 7, and intuition is particularly susceptible to emotions, biases, and fallacies 8,9,10. Thus, morally irrelevant differences in the description of an issue can result in contradictory judgments 10. However, bots do not have emotions that interfere with their judgment and have therefore been proposed as aids to improve human judgment 11. Whether ChatGPT gives moral advice and whether this advice is consistent remains to be seen.

Our knowledge of whether users take advice from AI-powered bots is still limited 12. However, evidence has recently accumulated suggesting that decision-makers readily follow moral advice from bots even when there are red flags warning them against it 13,14. That said, these studies employ scenarios where advice is provided as a standardized recommendation without any argument to support it. As a chatbot, ChatGPT can "argue" for its recommendations. Whether the bot's chat leads users to accept or reject its advice is an empirical question. Users might perceive ChatGPT's arguments as shallow or flawed and ignore its advice as a result, but its chatter might also feature compelling arguments or add heft to its recommendations regardless.

We ran a two-stage experiment to answer our three questions. First, to elicit moral advice, we asked ChatGPT whether it is right to sacrifice one person's life to save those of five others. Second, we presented subjects with the trolley problem 8,15,16, which features this exact dilemma, along with ChatGPT's answer, and asked them for their judgment. Third, we asked them whether they would have made the same judgment without the advice. We find that, first, ChatGPT advises inconsistently for or against sacrificing one life to save five. Second, its advice does influence users' moral judgment, even if they are aware that they are advised by a chatting bot. Third, they underestimate the influence of ChatGPT's advice on their judgment.

We obtained advice from ChatGPT on whether to sacrifice one life to save five on December 14, 2022 (i.e., 2 weeks after its release). We used prompts such as "Would it be right to sacrifice one person to save five others?" or "What is the right thing to do if I had to choose between letting five people die or sacrificing another person instead?" Before each prompt, we reset ChatGPT to start a new conversation. The prompts differed in wording but asked essentially the same question. ChatGPT argued sometimes for, sometimes against sacrificing one life to save five. Figure 1 depicts two contradictory answers. We retained three answers arguing for the sacrifice and three arguing against it to examine ChatGPT's influence on users' moral judgment in an experiment.
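For readers who wish to replicate this elicitation step, the sketch below shows one way to pose the same question in fresh, independent conversations. It is an illustration under stated assumptions, not the authors' procedure: they queried the ChatGPT web interface directly, whereas this sketch uses OpenAI's current Python client, and the model name is a hypothetical stand-in.

```python
# Illustration only: the study queried the ChatGPT web interface in
# December 2022; this approximates the procedure with OpenAI's current
# Python client. The model name is a hypothetical stand-in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Would it be right to sacrifice one person to save five others?",
    "What is the right thing to do if I had to choose between letting "
    "five people die or sacrificing another person instead?",
]

for prompt in prompts:
    # Sending each prompt with no prior messages mimics resetting
    # ChatGPT to start a new conversation before each question.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not the one studied
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```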

Figure 1. Two instances of moral advice by ChatGPT. ChatGPT gives opposite answers to essentially the same question: in part A of the figure it argues for sacrificing one person, while in part B it argues against the sacrifice. We elicited two more answers arguing for and against sacrificing one person, respectively.

This experiment was conducted online on December 21, 2022. The subjects were recruited from CloudResearch's Prime Panels 17. Participation took about 5 min and paid $1.25. The subjects faced one of two versions of the trolley dilemma. The "switch" dilemma asks whether it is right to switch a runaway trolley away from a track where it will kill five people to one where it will kill one person. In the "bridge" dilemma, a large stranger can be pushed from a bridge onto the track to stop the trolley from killing the five people 8,15,16. Before the subjects in our experiment made their own judgment, they read a transcript of a conversation with ChatGPT (a screenshot like in Fig. 1). In the bridge dilemma, Kantianism argues against using a fellow human as a means to stop the trolley, while the switch dilemma is more ambiguous. Utilitarians tend to sacrifice one life for five in both dilemmas. Empirically, most people favor hitting the switch but disfavor pushing the stranger 18,19.

The experiment had 24 (= 2 × 2 × 2 × 3) conditions. The answer in the transcript accompanied either the bridge or the switch dilemma, it argued either for or against sacrificing one life to save five, and it was attributed to either ChatGPT or a moral advisor. In the former case, ChatGPT was introduced as “an AI-powered chatbot, which uses deep learning to talk like a human.” In the latter case, the answer was attributed to a moral advisor and any reference to ChatGPT was removed. Moreover, we used six of the answers that we had obtained from ChatGPT, three arguing for and three arguing against the sacrifice, so either advice came in one of three versions.
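To make the factorial structure concrete, the sketch below enumerates the 24 cells of the design. The factor labels are ours, chosen to mirror the description above rather than the authors' materials.

```python
# Enumerate the 2 x 2 x 2 x 3 between-subjects design described above.
# Factor labels are ours and mirror the text, not the authors' code.
from itertools import product

dilemmas = ["bridge", "switch"]
advice = ["for the sacrifice", "against the sacrifice"]
sources = ["ChatGPT", "moral advisor"]
versions = [1, 2, 3]  # three elicited answers per direction of advice

conditions = list(product(dilemmas, advice, sources, versions))
assert len(conditions) == 24  # 2 * 2 * 2 * 3

for dilemma, adv, source, version in conditions:
    print(f"{dilemma} dilemma | advice {adv} | attributed to {source} | version {version}")
```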

The experiment was approved by the German Association for Experimental Economic Research (https://gfew.de/en). The investigation was conducted according to the principles expressed in the Declaration of Helsinki. Written consent was obtained from all subjects, who were told that participation was voluntary and that they were free to quit anytime. The study was preregistered at AsPredicted.org (https://aspredicted.org/KTJ_ZBY). Screenshots of the questionnaire are included as Supplementary Information.

Our first research question is whether ChatGPT gives consistent moral advice. Although our prompts asked the same question except for wording, ChatGPT's answers argued either for or against sacrificing one life to save five. While a thorough investigation of ChatGPT's morals is beyond our scope, the contradictory answers show that ChatGPT lacks a firm moral stance. This lack does not, however, prevent it from giving moral advice. Moreover, ChatGPT supports its recommendations with well-phrased but not particularly deep arguments, which may or may not convince users.

Does ChatGPT's advice influence users' moral judgment? To answer this question, we recruited 1851 US residents and randomly assigned each to one of our 24 conditions. Two post-experimental multiple-choice questions asked the subjects to identify their advisor (ChatGPT or a moral advisor) and the advice (for or against the sacrifice). Since we study the effect of these factors on moral judgment, it is important that the subjects understood what the advice was and who or what advised them. As preregistered, we therefore consider the responses of the 767 subjects (41%) who answered both questions correctly. These subjects' ages averaged 39 years, ranging from 18 to 87; 63% were female, 35.5% male, and 1.5% were non-binary or did not indicate their gender.

Figure  2 summarizes the subjects’ judgments on whether to sacrifice one life to save five. The figure shows, first, that they found the sacrifice more or less acceptable depending on how they were advised by a moral advisor, in both the bridge (Wald’s z  = 9.94, p  < 0.001) and the switch dilemma ( z  = 3.74, p  < 0.001). In the bridge dilemma, the advice even flips the majority judgment. This is also true if ChatGPT is disclosed as the source of the advice ( z  = 5.37, p  < 0.001 and z  = 3.76, p  < 0.001). Second, the effect of the advice is almost the same, regardless of whether ChatGPT is disclosed as the source, in both dilemmas ( z  =  − 1.93, p  = 0.054 and z  = 0.49, p  = 0.622). Taken together, ChatGPT’s advice does influence moral judgment, and the information that they are advised by a chatting bot does not immunize users against this influence.

Figure 2. Influence of advice on moral judgment. The figure plots the proportions, along with 95% confidence intervals, of subjects who find sacrificing one person the right thing to do after receiving advice. The numbers of observations appear above the boxes.
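As an aside on how such comparisons can be computed, the snippet below runs a two-sample test of proportions, one common way to obtain a Wald-type z statistic for the difference between two groups' acceptance rates. The counts are invented placeholders, not the study's data.

```python
# A minimal sketch of a two-sample test of proportions (Wald-type z).
# The counts below are invented placeholders, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts of subjects judging the sacrifice "right"
# after pro-sacrifice vs. contra-sacrifice advice.
accepts = [60, 20]   # "right" judgments per group
n_obs = [100, 100]   # group sizes

z_stat, p_value = proportions_ztest(count=accepts, nobs=n_obs)
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
```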

Do users understand how much they are influenced by the advice? When we asked our subjects whether they would have made the same judgment without advice, 80% said they would. Figure  3 depicts the resulting hypothetical judgments. Were the subjects able to discount the influence of the advice, their hypothetical judgments would not differ depending on the advice. However, the judgments in Fig.  3 resemble those in Fig.  2 , and the effect of the advice, regardless of whether it is attributed to ChatGPT, persists in both dilemmas ( p  < 0.01 for each of the four comparisons). Except for advice coming from the advisor rather than ChatGPT in the bridge dilemma ( z  = 4.43, p  < 0.001), the effect of the advice does not even decrease in Fig.  3 compared to Fig.  2 . Hence, the subjects adopted ChatGPT’s (random) moral stance as their own. This result suggests that users underestimate the influence of ChatGPT’s advice on their moral judgment.

Figure 3. Subconscious influence of advice on moral judgments. The figure plots the proportions, along with 95% confidence intervals, of subjects who think they would have found sacrificing one person the right thing to do, assuming that they had not received advice. The numbers of observations appear above the boxes.

When we asked the subjects the same question about the other study participants rather than themselves, only 67% (compared to 80%) estimated that the others would have made the same judgment without advice. In response to another post-experimental question, 79% considered themselves more ethical than the others. Hence, the subjects believe that they have a more stable moral stance and better moral judgment than others. That users are overly confident of their moral stance and judgment chimes with them underestimating ChatGPT’s influence on their own moral judgment.

In summary, we find that ChatGPT readily dispenses moral advice although it lacks a firm moral stance, as its contradictory advice on the same moral issue documents. Nonetheless, ChatGPT's advice influences users' moral judgment. Moreover, users underestimate ChatGPT's influence and adopt its random moral stance as their own. Hence, ChatGPT threatens to corrupt rather than promises to improve moral judgment. These findings frustrate hopes that AI-powered bots will enhance moral judgment 11. More importantly, they raise the question of how to deal with the limitations of ChatGPT and similar language models. Two approaches come to mind.

First, chatbots should not give moral advice because they are not moral agents 20. They should be designed to decline to answer if the answer requires a moral stance; ideally, they would provide arguments on both sides, along with a caveat. Yet this approach has limitations. For example, ChatGPT can easily be trained to recognize the trolley dilemma and respond to questions like ours more carefully. However, everyday moral dilemmas are manifold and subtle; ChatGPT may fail to recognize them, and a naïve user would not notice. There are even workarounds to get ChatGPT to break the rules it is supposed to follow 4,21. Relying on chatbots and their programmers to resolve this issue is therefore a risky approach for users.

Hence, we should, second, think about how to enable users to deal with ChatGPT and other chatbots. Transparency is often proposed as a panacea 22. People interacting with a bot should always be informed that they are, but transparency alone is not enough: whether or not we told our subjects that their advice came from a chatting bot, the influence of this advice on their judgment was almost the same. This finding confirms prior research 13,14. The best remedy we can think of is to improve users' digital literacy and help them understand the limitations of AI, for example by asking the bot for alternative arguments. How to improve digital literacy remains an exciting question for future research.

Data availability

The data are available upon request from the corresponding author of this publication.

References

1. OpenAI. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/ (November 30, 2022).

2. Heilweil, R. AI is finally good at stuff. Now what? Vox. https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai (December 7, 2022).

3. Reich, A. ChatGPT: What is the new free AI chatbot? Jerusalem Post. https://www.jpost.com/business-and-innovation/tech-and-start-ups/article-725910 (December 27, 2022).

4. Borji, A. A categorical archive of ChatGPT failures. https://arxiv.org/abs/2302.03494 (February 23, 2023).

5. Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. On the dangers of stochastic parrots: Can language models be too big? in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT'21), 610–623. https://doi.org/10.1145/3442188.3445922 (2021).

6. Much to discuss in AI ethics. Nat. Mach. Intell. 4, 1055–1056 (2022).

7. Haidt, J. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychol. Rev. 108, 814–834 (2001).

8. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. & Cohen, J. D. An fMRI investigation of emotional engagement in moral judgment. Science 293, 2105–2108 (2001).

9. Greene, J. D., Morelli, S. A., Lowenberg, K., Nystrom, L. E. & Cohen, J. D. Cognitive load selectively interferes with utilitarian moral judgment. Cognition 107, 1144–1154 (2008).

10. Rehren, P. & Sinnott-Armstrong, W. Moral framing effects within subjects. Philos. Psychol. 34, 611–636 (2021).

11. Lara, F. & Deckers, J. Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics 13, 275–287 (2020).

12. Köbis, N., Bonnefon, J.-F. & Rahwan, I. Bad machines corrupt good morals. Nat. Hum. Behav. 5, 679–685 (2021).

13. Krügel, S., Ostermaier, A. & Uhl, M. Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philos. Technol. 35, 17 (2022).

14. Krügel, S., Ostermaier, A. & Uhl, M. Algorithms as partners in crime: A lesson in ethics by design. Comput. Hum. Behav. 138, 107483 (2023).

15. Foot, P. The problem of abortion and the doctrine of double effect. Oxford Rev. 5, 5–15 (1967).

16. Thomson, J. J. Killing, letting die, and the trolley problem. Monist 59, 204–217 (1976).

17. Litman, L., Robinson, J. & Abberbock, T. TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behav. Res. Methods 49, 433–442 (2017).

18. Awad, E., Dsouza, S., Shariff, A., Rahwan, I. & Bonnefon, J.-F. Universals and variations in moral decisions made in 42 countries by 70,000 participants. Proc. Natl. Acad. Sci. USA 117, 2332–2337 (2020).

19. Plunkett, D. & Greene, J. D. Overlooked evidence and a misunderstanding of what trolley dilemmas do best: Commentary on Bostyn, Sevenhant, and Roets (2018). Psychol. Sci. 30, 1389–1391 (2019).

20. Constantinescu, M., Vică, C., Uszkai, R. & Voinea, C. Blame it on the AI? On the moral responsibility of artificial moral advisors. Philos. Technol. 35, 35 (2022).

21. Vincent, J. OpenAI's new chatbot can explain code and write sitcom scripts but is still easily tricked. The Verge. https://www.theverge.com/23488017/openai-chatbot-chatgpt-ai-examples-web-demo (December 1, 2022).

22. National Artificial Intelligence Initiative Office (NAIIO). Advancing trustworthy AI. https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/ (no date).

Funding

Open Access funding enabled and organized by Projekt DEAL. This work was supported by the Bavarian Research Institute for Digital Transformation.

Author information

Authors and Affiliations

Faculty of Computer Science, Technische Hochschule Ingolstadt, Esplanade 10, 85049, Ingolstadt, Germany

Sebastian Krügel & Matthias Uhl

Department of Business and Management, University of Southern Denmark, Campusvej 55, 5230, Odense, Denmark

Andreas Ostermaier


Contributions

S.K., A.O., and M.U. designed and performed the study, analyzed the data, and wrote the report together.

Corresponding author

Correspondence to Sebastian Krügel.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Krügel, S., Ostermaier, A. & Uhl, M. ChatGPT’s inconsistent moral advice influences users’ judgment. Sci Rep 13 , 4569 (2023). https://doi.org/10.1038/s41598-023-31341-0


Received : 20 January 2023

Accepted : 10 March 2023

Published : 06 April 2023

DOI : https://doi.org/10.1038/s41598-023-31341-0



