Ethical Business Practices: Case Studies and Lessons Learned

Introduction

Ethical business practices are a cornerstone of any successful company, influencing not only the public perception of a brand but also its long-term profitability. However, understanding what constitutes ethical behavior and how to implement it can be a complex process. This article explores some case studies that shine a light on ethical business practices, offering valuable lessons for businesses in any industry.

Case Study 1: Patagonia’s Commitment to Environmental Ethics

Patagonia, the outdoor clothing and gear company, has long set a standard for environmental responsibility. The company uses eco-friendly materials, promotes recycling of its products, and actively engages in various environmental causes.

Lessons Learned

  • Transparency: Patagonia is vocal about its ethical practices and even provides information on the environmental impact of individual products.
  • Consistency: Ethics are not an “add-on” for Patagonia; they are integrated into the very fabric of the company’s operations, from sourcing to production to marketing.
  • Engagement: The company doesn’t just focus on its practices; it encourages consumers to get involved in the causes it supports.

Case Study 2: Salesforce and Equal Pay

Salesforce, the cloud-based software company, took a stand on the gender pay gap. It conducted an internal audit and found a significant wage disparity between male and female employees in similar roles. To address this, Salesforce spent over $6 million to balance the scales.

Lessons Learned
  • Self-Audit: It’s crucial for companies to actively review their practices. What you don’t know can indeed hurt you, and ignorance is not an excuse.
  • Taking Responsibility: Rather than sweeping the issue under the rug, Salesforce openly acknowledged the problem and took immediate corrective action.
  • Long-Term Benefits: Fair treatment boosts employee morale and productivity, leading to long-term profitability.

Case Study 3: Starbucks and Racial Sensitivity Training

In 2018, Starbucks faced a public relations crisis when two Black men were wrongfully arrested at one of its Philadelphia stores. Rather than issuing only a public apology, Starbucks closed 8,000 of its stores for an afternoon to conduct racial sensitivity training.

Lessons Learned

  • Immediate Action: Swift and meaningful action is critical in showing commitment to ethical behavior.
  • Education: Sometimes, the problem is a lack of awareness. Investing in employee education can help prevent repeated instances of unethical behavior.
  • Public Accountability: Starbucks made their training materials available to the public, showing a level of transparency and accountability that helped regain public trust.

Why Ethics Matter

Ethical business practices are not just morally correct; they have a direct impact on a company’s bottom line. Customers today are more informed and more sensitive to ethical considerations. They often make purchasing decisions based on a company’s ethical standing, and word-of-mouth (or the digital equivalent) travels fast.

The case studies above show that ethical business practices should be a top priority for companies of all sizes and industries. These are not isolated examples but are representative of a broader trend in consumer expectations and regulatory frameworks. The lessons gleaned from these cases—transparency, consistency, engagement, self-audit, taking responsibility, and education—are universally applicable and offer a robust roadmap for any business seeking to bolster its ethical standing.

By implementing ethical business practices sincerely and not as a marketing gimmick, companies not only stand to improve their public image but also set themselves up for long-term success, characterized by a loyal customer base and a motivated, satisfied workforce.

  • 09 Sep 2024

McDonald’s and the Post #MeToo Rules of Sex in the Workplace

As #MeToo cast a spotlight on harassment in the workplace, former McDonald's CEO Stephen Easterbrook went from savior to pariah. Drawing from a series of case studies, Lynn Paine outlines seven lessons all corporate boards can take away from the scandal to improve culture and prevent abuse of power.

  • 18 Jun 2024
  • Cold Call Podcast

How Natural Winemaker Frank Cornelissen Innovated While Staying True to His Brand

In 2018, the artisanal Italian vineyard Frank Cornelissen was one of the world’s leading producers of natural wine. But when weather-related conditions damaged that year’s grapes, founder Frank Cornelissen had to decide between staying true to the tenets of natural winemaking or breaking with his public beliefs to save that year’s grapes by adding sulfites. Harvard Business School assistant professor Tiona Zuzul discusses the importance of staying true to your company’s principles while remaining flexible enough to welcome progress in the case, Frank Cornelissen: The Great Sulfite Debate.

  • 30 Apr 2024

When Managers Set Unrealistic Expectations, Employees Cut Ethical Corners

Corporate misconduct has grown in the past 30 years, with losses often totaling billions of dollars. What businesses may not realize is that misconduct often results from managers who set unrealistic expectations, leading decent people to take unethical shortcuts, says Lynn S. Paine.

  • 23 Apr 2024

Amazon in Seattle: The Role of Business in Causing and Solving a Housing Crisis

In 2020, Amazon partnered with a nonprofit called Mary’s Place and used some of its own resources to build a shelter for women and families experiencing homelessness on its campus in Seattle. Yet critics argued that Amazon’s apparent charity was misplaced and that the company was actually making the problem worse. Paul Healy and Debora Spar explore the role business plays in addressing homelessness in the case “Hitting Home: Amazon and Mary’s Place.”

  • 15 Apr 2024

Struggling With a Big Management Decision? Start by Asking What Really Matters

Leaders must face hard choices, from cutting a budget to adopting a strategy to grow. To make the right call, they should start by following their own “true moral compass,” says Joseph Badaracco.

  • 26 Mar 2024

How Do Great Leaders Overcome Adversity?

In the spring of 2021, Raymond Jefferson (MBA 2000) applied for a job in President Joseph Biden’s administration. Ten years earlier, false allegations were used to force him to resign from his prior US government position as assistant secretary of labor for veterans’ employment and training in the Department of Labor. Two employees had accused him of ethical violations in hiring and procurement decisions, including pressuring subordinates into extending contracts to his alleged personal associates. The Deputy Secretary of Labor gave Jefferson four hours to resign or be terminated. Jefferson filed a federal lawsuit against the US government to clear his name, which he pursued for eight years at the expense of his entire life savings. Why, after such a traumatic and debilitating experience, would Jefferson want to pursue a career in government again? Harvard Business School Senior Lecturer Anthony Mayo explores Jefferson’s personal and professional journey from upstate New York to West Point to the Obama administration, how he faced adversity at several junctures in his life, and how resilience and vulnerability shaped his leadership style in the case, "Raymond Jefferson: Trial by Fire."

  • 02 Jan 2024

Should Businesses Take a Stand on Societal Issues?

Should businesses take a stand for or against particular societal issues? And how should leaders determine when and how to engage on these sensitive matters? Harvard Business School Senior Lecturer Hubert Joly, who led the electronics retailer Best Buy for almost a decade, discusses examples of corporate leaders who had to determine whether and how to engage with humanitarian crises, geopolitical conflict, racial justice, climate change, and more in the case, “Deciding When to Engage on Societal Issues.”

  • 12 Dec 2023

Can Sustainability Drive Innovation at Ferrari?

When Ferrari, the Italian luxury sports car manufacturer, committed to achieving carbon neutrality and to electrifying a large part of its car fleet, investors and employees applauded the new strategy. But among the company’s suppliers, the reaction was mixed. Many were nervous about how this shift would affect their bottom lines. Professor Raffaella Sadun and Ferrari CEO Benedetto Vigna discuss how Ferrari collaborated with suppliers to work toward achieving the company’s goal. They also explore how sustainability can be a catalyst for innovation in the case, “Ferrari: Shifting to Carbon Neutrality.” This episode was recorded live December 4, 2023 in front of a remote studio audience in the Live Online Classroom at Harvard Business School.

  • 11 Dec 2023
  • Research & Ideas

Doing Well by Doing Good? One Industry’s Struggle to Balance Values and Profits

Few companies wrestle with their moral mission and financial goals like those in journalism. Research by Lakshmi Ramarajan explores how a disrupted industry upholds its values even as the bottom line is at stake.

  • 27 Nov 2023

Voting Democrat or Republican? The Critical Childhood Influence That's Tough to Shake

Candidates might fixate on red, blue, or swing states, but the neighborhoods where voters spend their teen years play a key role in shaping their political outlook, says research by Vincent Pons. What do the findings mean for the upcoming US elections?

  • 21 Nov 2023

The Beauty Industry: Products for a Healthy Glow or a Compact for Harm?

Many cosmetics and skincare companies present an image of social consciousness and transformative potential, while profiting from insecurity and excluding broad swaths of people. Geoffrey Jones examines the unsightly reality of the beauty industry.

  • 09 Nov 2023

What Will It Take to Confront the Invisible Mental Health Crisis in Business?

The pressure to do more, to be more, is fueling its own silent epidemic. Lauren Cohen discusses the common misperceptions that get in the way of supporting employees' well-being, drawing on case studies about people who have been deeply affected by mental illness.

  • 07 Nov 2023

How Should Meta Be Governed for the Good of Society?

Julie Owono is executive director of Internet Sans Frontières and a member of the Oversight Board, an outside entity with the authority to make binding decisions on tricky moderation questions for Meta’s companies, including Facebook and Instagram. Harvard Business School visiting professor Jesse Shapiro and Owono break down how the Board governs Meta’s social and political power to ensure that it’s used responsibly, and discuss the Board’s impact, as an alternative to government regulation, in the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.”

  • 24 Oct 2023

From P.T. Barnum to Mary Kay: Lessons From 5 Leaders Who Changed the World

What do Steve Jobs and Sarah Breedlove have in common? Through a series of case studies, Robert Simons explores the unique qualities of visionary leaders and what today's managers can learn from their journeys.

  • 03 Oct 2023
  • Research Event

Build the Life You Want: Arthur Brooks and Oprah Winfrey Share Happiness Tips

"Happiness is not a destination. It's a direction." In this video, Arthur C. Brooks and Oprah Winfrey reflect on mistakes, emotions, and contentment, sharing lessons from their new book.

  • 12 Sep 2023

Successful, But Still Feel Empty? A Happiness Scholar and Oprah Have Advice for You

So many executives spend decades reaching the pinnacles of their careers only to find themselves unfulfilled at the top. In the book Build the Life You Want, Arthur Brooks and Oprah Winfrey offer high achievers a guide to becoming better leaders—of their lives.

  • 10 Jul 2023
  • In Practice

The Harvard Business School Faculty Summer Reader 2023

Need a book recommendation for your summer vacation? HBS faculty members share their reading lists, which include titles that explore spirituality, design, suspense, and more.

  • 01 Jun 2023

A Nike Executive Hid His Criminal Past to Turn His Life Around. What If He Didn't Have To?

Larry Miller committed murder as a teenager, but earned a college degree while serving time and set out to start a new life. Still, he had to conceal his record to get a job that would ultimately take him to the heights of sports marketing. A case study by Francesca Gino, Hise Gibson, and Frances Frei shows the barriers that formerly incarcerated Black men are up against and the potential talent they could bring to business.

  • 04 Apr 2023

Two Centuries of Business Leaders Who Took a Stand on Social Issues

Executives going back to George Cadbury and J. N. Tata have been trying to improve life for their workers and communities, according to the book Deeply Responsible Business: A Global History of Values-Driven Leadership by Geoffrey Jones. He highlights three practices that deeply responsible companies share.

  • 14 Mar 2023

Can AI and Machine Learning Help Park Rangers Prevent Poaching?

Globally, there are too few park rangers to prevent poaching and the illegal trade of wildlife across borders. In response, the Spatial Monitoring and Reporting Tool (SMART) was created by a coalition of conservation organizations to take historical data and create geospatial mapping tools that enable more efficient deployment of rangers. SMART had demonstrated significant improvements in patrol coverage, with some observed reductions in poaching. Then a new predictive analytic tool, the Protection Assistant for Wildlife Security (PAWS), was created to use artificial intelligence (AI) and machine learning (ML) to try to predict where poachers would be likely to strike. Jonathan Palmer, Executive Director of Conservation Technology for the Wildlife Conservation Society, already had a good data analytics tool to help park rangers manage their patrols. Would adding an AI- and ML-based tool improve outcomes or introduce new problems? Harvard Business School senior lecturer Brian Trelstad discusses the importance of focusing on the use case when determining the value of adding a complex technology solution in his case, “SMART: AI and Machine Learning for Wildlife Conservation.”

Building an Ethical Company

Create an organization that helps employees behave more honorably. By Isaac H. Smith and Maryam Kouchaki

Summary

Just as people can develop skills and abilities over time, they can learn to be more or less ethical. Yet many organizations limit ethics training to the onboarding process. If they do address it thereafter, it may be only by establishing codes of conduct or whistleblower hotlines. Such steps may curb specific infractions, but they don’t necessarily help employees develop as ethical people.

Drawing on evidence from hundreds of research studies, the authors offer a framework for helping workers build moral character. Managers can provide experiential training in ethical dilemmas. They can foster psychological safety when minor lapses occur, conduct pre- and postmortems for initiatives with ethical components, and create a culture of service by encouraging volunteer work and mentoring in ethics.

People don’t enter the workforce with a fixed moral character. Just as employees can nurture (or neglect) their skills and abilities over time, they can learn to be more or less ethical. Yet rather than take a long-term view of employees’ moral development, many organizations treat ethics training as a onetime event, often limiting it to the onboarding process. If they do address ethics thereafter, it may be only by espousing codes of conduct or establishing whistleblower hotlines. Such steps may curb specific unethical actions, but they don’t necessarily help employees develop as moral people.

4 Examples of Ethical Leadership in Business

  • 14 Sep 2023

Have you ever faced an ethical dilemma? Maybe you found someone’s wallet on the ground or witnessed someone cheating during a test or competition. In these scenarios, the right answer isn’t always clear.

In business, you’re bound to encounter ethical dilemmas, especially as a leader. Behaving unethically can be illegal—for instance, stealing money or harming employees. In these situations, making the right choice is clearer. Sometimes, it’s not a question of legality but of weighing potential outcomes.

“Many of the decisions you face will not have a single right answer,” says Harvard Business School Professor Nien-hê Hsieh in the online course Leadership, Ethics, and Corporate Accountability. “Sometimes, the most viable answer may come with negative effects. In such cases, the decision is not black and white. As a result, many call them ‘gray-area decisions.’”

When facing ambiguity, how do you make the most ethical decision? Here’s a primer on ethical leadership and four examples of leaders who faced the same question.

What Is Ethical Leadership?

Ethical leadership is the practice of making decisions that balance stakeholders’ best interests with your company’s financial health, and empowering others to do the same.

As a leader, you have ethical responsibilities to four stakeholder groups—customers, employees, investors, and society—which Leadership, Ethics, and Corporate Accountability breaks down.

Responsibilities to Customers and Employees

  • Well-being: What’s ultimately good for the person
  • Rights: Entitlement to receive certain treatment
  • Duties: A moral obligation to behave in a specific way
  • Best practices: Aspirational standards not required by law or cultural norms

Employees have a fifth category—fairness—which comprises three types to consider:

  • Legitimate expectations: Employees reasonably expect certain practices or behaviors to continue based on experiences with the organization and explicit promises.
  • Procedural fairness: Managers must resolve issues impartially and consistently.
  • Distributive fairness: Your company equitably allocates opportunities, benefits, and burdens.

Responsibilities to Investors

Your responsibilities to investors are known as fiduciary duties. The four types are:

  • Duty of obedience: Adhere to corporate bylaws, superiors’ instructions, and the law.
  • Duty of information: Disclose necessary information and remain truthful about performance and operations. Refuse to divulge certain information to nonessential parties.
  • Duty of loyalty: Act in the most favorable way for shareholders and avoid conflicts of interest.
  • Duty of care: Evaluate decisions’ potential outcomes before acting.

Responsibilities to Society

In addition to creating value for your business, you’re responsible for making a positive, or at least neutral, impact on society and the environment.

One framework to conceptualize this is the triple bottom line, also called the “three P’s”:

  • Profit: Your business’s responsibility to make a profit.
  • People: Your business’s responsibility to positively impact society by creating jobs, supporting charities, or promoting well-being initiatives.
  • The planet: Your business’s responsibility to positively impact the natural environment, or at least not damage it.

The 3 P's of the Triple Bottom Line: Profit, People, and the Planet

Even business leaders with the best intentions can make unethical decisions. In a Harvard Business Review article, HBS Professor Max Bazerman describes the concept of motivated blindness, in which you become unaware of unethical decisions when they benefit you or your company.

Hsieh echoes this sentiment in Leadership, Ethics, and Corporate Accountability.

“Even when the right thing to do seems clear from an outsider’s perspective, factors like time, social pressures, and the need for self-preservation can complicate things,” Hsieh says in the course.

Learning about ethical leadership can enable you to be aware of unintended negligence and make more conscious, ethical decisions.

Here are four examples of business leaders who faced ethical dilemmas, how they handled them, and what you can learn from their experiences.

1. Johnson & Johnson’s Tylenol Poisonings

A classic case of ethical leadership in business is “the Chicago Tylenol poisonings.” On September 29, 1982, a 12-year-old girl in the Chicago area woke up with a cold. Her parents gave her a tablet of extra-strength Tylenol to ease her symptoms and, within hours, she died.

Six more deaths followed—the connecting factor between them was having taken extra-strength Tylenol shortly before passing away. It was later discovered that the tablets were laced with cyanide, a chemical that interferes with the body’s ability to use oxygen.

Johnson & Johnson, Tylenol’s parent company, had an ethical dilemma and a public relations disaster to contend with.

Baffled as to how the cyanide got in the tablets, Johnson & Johnson’s leaders acted quickly and pulled all Tylenol products off the shelves—31 million bottles worth over $100 million—and stopped all production and advertising.

The swiftness of their decision, although incredibly costly, put customers’ well-being at the forefront and saved lives.

Johnson & Johnson partnered with the Chicago Police, the Federal Bureau of Investigation (FBI), and the Food and Drug Administration (FDA) to track down the perpetrator who added cyanide to the medication. The company offered a $100,000 reward and provided detailed updates on its investigation and product developments following the crisis.

When it became clear that the killer had bought the product, laced it with cyanide, and returned it to store shelves undetected, Johnson & Johnson developed the first-ever tamper-resistant packaging. The “safety seal” that now covers the opening of most food and drug products was born.

“Our highest responsibility has always been the health and safety of our consumers,” a Johnson & Johnson representative wrote in a statement to the Chicago Tribune. “While this tragic incident remains unsolved, this event resulted in important industry improvements to patient safety measures, including the creation of tamper-resistant packaging.”

The Tylenol brand recovered from the incident, largely because of Johnson & Johnson’s leadership team’s swift action and transparent care for customers.

2. JetBlue’s Shutdown

On Valentine’s Day 2007, at John F. Kennedy International Airport, JetBlue Airways sent nine planes from the gate to the runway during a snowstorm, hoping conditions would rapidly improve—but it had no such luck.

The misstep caused the planes to sit on the tarmac for more than five hours with disgruntled passengers inside. The issue snowballed from there.

Since JetBlue employees had to work overtime to deal with the delays, few had enough allowable flight time to handle upcoming departures. JetBlue was left with no choice but to cancel 1,096 flights over the following five days.

CEO David Neeleman responded by writing an apology letter to customers and crafting a “customer bill of rights” that the airline still abides by. The document outlined customers’ rights to information about flights, as well as how they’d be compensated in the event of delays or cancellations.

Neeleman also went on a public apology tour, taking full responsibility for the incident rather than blaming it on the weather.

This response stands in contrast to the 2022 Southwest Airlines incident that played out similarly but with less accountability from leaders. Initially caused by bad weather and then exacerbated by Southwest’s outdated booking systems, the 16,700 canceled flights left thousands stranded between December 21 and 31.

In contrast to Neeleman’s apologies and emphasis on customer rights, Southwest CEO Bob Jordan took a defensive stance, explaining in a video the impact that “record bitter cold” had on all airlines and that Southwest was doing everything it could to remedy the issue. While those points may have been true, the response didn’t go over well with customers who wanted to feel respected and understood.

Each leader's choices highlight the importance of being transparent and championing customer rights when facing similar issues.

3. Starbucks’s Racial Bias Incident

If one of your employees made a critical decision based on racial bias, how would you respond? That was the question Kevin Johnson, then-CEO of coffee shop chain Starbucks, had to answer in April 2018.

One day, two Black men entered a Starbucks in Philadelphia and asked to use the bathroom. The manager on duty told them the restroom was for paying customers only, so they sat down to wait for their friend to arrive before ordering.

The manager called the police, who arrested the men for trespassing. Although no charges were filed, the arrest went viral and sparked protests throughout the United States.

Starbucks, which prides itself on being an ethical brand, has one of the most diverse leadership groups in corporate America—five of the board’s 14 members are women, and five are from racial minority groups. This racially motivated incident clashed with its values.

Johnson fired the manager who called for the arrest, apologized to the two men, and announced racial bias training for all Starbucks employees.

To emphasize the training’s importance, Johnson closed 8,000 locations on May 29, 2018, to educate 175,000 employees. This cost Starbucks an estimated $12 million in lost profit but spread the message that it cares about its customers, employees, and society.

4. The Muse Sticking Up for Employees

Ethical dilemmas often aren’t public scandals—even quiet, internal decisions can have enormous impacts. Kathryn Minshew, CEO and co-founder of The Muse, faced one such scenario in the early days of growing the online career platform.

She’d just signed a company to use The Muse’s recruiting platform. It was a major deal, and the young startup desperately needed the revenue. But during onboarding, Minshew noticed the client’s representatives were talking down to her junior staff members. While they respected her, the way they treated her team didn’t sit well with her.

She spoke with the client about it, effectively providing a warning and a chance to start the relationship on a better note. Still, the poor treatment of her team continued.

Minshew had a decision to make: Take the revenue despite the mistreatment or part ways with the client to support her team. She went with the latter.

“I told them nicely that it didn’t make sense to work together anymore and refunded the unused balance of their money,” Minshew says in an interview with Fast Company. “They tried to argue, but at that point, my mind was made up. I didn’t realize how relieved my team was—and how much they appreciated it—until after it was all done.”

By cutting ties with the client, Minshew fulfilled her ethical responsibility to create an environment that supported her employees’ well-being and right to be treated respectfully. In doing so, she built a strong foundation of trust and demonstrated that she’d have their best interest in mind—even at the business’s expense.

“I think backing your team in situations like that is really important,” Minshew says in the same interview, “but it’s not always easy, especially when you’re early-stage.”


How to Develop Ethical Leadership Skills

While these scenarios likely differ from those you face at your organization, ethical leadership’s guiding principles ring true.

To build your ethical leadership skills, consider taking an online business ethics course. In Leadership, Ethics, and Corporate Accountability, Hsieh presents several real-world examples of ethical dilemmas, prompts you to consider how you’d respond to them, and then lets business leaders share how they handled each.

In the course, you also learn how to use frameworks and tools to conceptualize your responsibilities to stakeholders, make judgment calls in gray-area situations, and act decisively to reach optimal outcomes.

By learning from the challenges and triumphs of those who came before you, you can equip yourself to handle any ethical dilemmas that come your way.

Are you interested in learning how to navigate difficult decisions as a leader? Explore Leadership, Ethics, and Corporate Accountability, one of our online leadership and management courses, and download our free guide to becoming a more effective leader.


  • Markkula Center for Applied Ethics: Leadership Ethics Cases

Find ethical case studies on leadership ethics, including scenarios for top management on issues such as downsizing and management responsibilities. (For permission to reprint articles, submit requests to [email protected].)

The importance of academic institutions in shaping the societal narrative is increasingly showcased by constant media exposure and continuous requests for social commentary. This case study outlines effective methodologies of leadership, ethics, and change management within an organization, for the purpose of motivating and engaging stakeholders to empathize with and carry out a shared directive.

Extensive teaching note based on interviews with Theranos whistleblower Tyler Shultz. The teaching note can be used to explore issues around whistleblowing, leadership, the blocks to ethical behavior inside organizations, and board governance.

Case study on the history making GameStop short and stock price surge that occurred during January 2021.

What did Urban Meyer know and when did he know it?

Case study explores Kevin Johnson's response to an incident where two African Americans were asked to leave a Philadelphia Starbucks.

Three examples of CEOs whose leadership of their firm has been called into question over matters of their personal integrity and behavior.

What should business leaders take away from the disaster?

Business & government struggle over encryption’s proper place.

In many ways, WorldCom is just another case of failed corporate governance, accounting abuses, and outright greed.


  • SOCIETY OF PROFESSIONAL JOURNALISTS


Ethics Case Studies

The SPJ Code of Ethics is voluntarily embraced by thousands of journalists, regardless of place or platform, and is widely used in newsrooms and classrooms as a guide for ethical behavior. The code is intended not as a set of "rules" but as a resource for ethical decision-making. It is not — nor can it be under the First Amendment — legally enforceable. For an expanded explanation, please follow this link.


For journalism instructors and others interested in presenting ethical dilemmas for debate and discussion, SPJ has a useful resource. We've been collecting a number of case studies for use in workshops. The Ethics AdviceLine operated by the Chicago Headline Club and Loyola University also has provided a number of examples. There seems to be no shortage of ethical issues in journalism these days. Please feel free to use these examples in your classes, speeches, columns, workshops or other modes of communication.

Kobe Bryant’s Past: A Tweet Too Soon? On January 26, 2020, Kobe Bryant died at the age of 41 in a helicopter crash in the Los Angeles area. While the majority of social media praised Bryant after his death, within a few hours after the story broke, Felicia Sonmez, a reporter for The Washington Post, tweeted a link to an article from 2003 about the allegations of sexual assault against Bryant. The question: Is there a limit to truth-telling? How long (if at all) should a journalist wait after a person’s death before resurfacing sensitive information about their past?

A controversial apology After photographs of a speech and protests at Northwestern University appeared on the university newspaper's website, some of the participants contacted the newspaper to complain. It became a “firestorm”: first from students who felt victimized, and then, after the newspaper apologized, from journalists and others who accused the newspaper of apologizing for simply doing its job. The question: Is an apology the appropriate response? Is there something else the student journalists should have done?

Using the ‘Holocaust’ Metaphor People for the Ethical Treatment of Animals, or PETA, is a nonprofit animal rights organization known for its controversial approach to communications and public relations. In 2003, PETA launched a new campaign, named “Holocaust on Your Plate,” that compares the slaughter of animals for human use to the murder of 6 million Jews in WWII. The question: Is “Holocaust on Your Plate” ethically wrong or a truthful comparison?

Aaargh! Pirates! (and the Press) When collections of songs, whether studio recordings from an upcoming album or merely unreleased demos, are leaked online, outlets such as Rolling Stone and Billboard cover the leak with a breaking story or a blog post. But they don’t stop there: they often also include a link within the story to listen to the songs that were leaked. The question: If Billboard and Rolling Stone are essentially pointing readers to the leaked music, are they not helping the Internet community find the material and consume it?

Reigning on the Parade Frank Whelan, a features writer who also wrote a history column for the Allentown, Pennsylvania, Morning Call, took part in a gay rights parade in June 2006 and stirred up a classic ethical dilemma. The situation raises any number of questions about what is and isn’t a conflict of interest. The question: What should the “consequences” be for Frank Whelan?

Controversy over a Concert Three former members of the Eagles rock band came to Denver during the 2004 election campaign to raise money for a U.S. Senate candidate, Democrat Ken Salazar. John Temple, editor and publisher of the Rocky Mountain News, advised his reporters not to go to the fundraising concerts. The question: Is it fair to ask newspaper staffers — or employees at other news media, for that matter — not to attend events that may have a political purpose? Are the rules different for different jobs at the news outlet?

Deep Throat, and His Motive The Watergate story is considered perhaps American journalism’s defining accomplishment. Two intrepid young reporters for The Washington Post, carefully verifying and expanding upon information given to them by sources they went to great lengths to protect, revealed brutally damaging information about one of the most powerful figures on Earth, the American president. The question: Is protecting a source more important than revealing all the relevant information about a news story?

When Sources Won’t Talk The SPJ Code of Ethics offers guidance on at least three aspects of this dilemma. “Test the accuracy of information from all sources and exercise care to avoid inadvertent error.” One source alone was not sufficient to report this information. The question: How could the editors maintain credibility and remain fair to both sides yet find solid sources for a news tip with inflammatory allegations?

A Suspect “Confession” John Mark Karr, 41, was arrested in mid-August in Bangkok, Thailand, at the request of Colorado and U.S. officials. During questioning, he confessed to the murder of JonBenet Ramsey. Karr was arrested after Michael Tracey, a journalism professor at the University of Colorado, alerted authorities to information he had drawn from e-mails Karr had sent him over the past four years. The question: Do you break a confidence with your source if you think it can solve a murder — or protect children half a world away?

Who’s the “Predator”? “To Catch a Predator,” the ratings-grabbing series on NBC’s Dateline, appeared to catch on with the public. But it also raised serious ethical questions for journalists. The question: If your newspaper or television station were approached by Perverted Justice to participate in a “sting” designed to identify real and potential perverts, should you go along, or say, “No thanks”? Was NBC reporting the news or creating it?

The Media’s Foul Ball The Chicago Cubs in 2003 were five outs from advancing to the World Series for the first time since 1945 when a 26-year-old fan tried to grab a foul ball, preventing outfielder Moises Alou from catching it. The hapless fan's identity was unknown. But he became recognizable through televised replays as the young baby-faced man in glasses, a Cubs baseball cap and earphones who bobbled the ball and was blamed for costing the Cubs a trip to the World Series. The question: Given the potential danger to the man, should he be identified by the media?

Publishing Drunk Drivers’ Photos When readers of The Anderson News picked up the Dec. 31, 1997, issue of the newspaper, stripped across the top of the front page was a New Year’s greeting and a warning. “HAVE A HAPPY NEW YEAR,” the banner read. “But please don’t drink and drive and risk having your picture published.” Readers were referred to the editorial page, where the editor, White, explained that starting in January 1998 the newspaper would publish photographs of all persons convicted of drunken driving in Anderson County. The question: Is this an appropriate policy for a newspaper?

Naming Victims of Sex Crimes On January 8, 2007, 13-year-old Ben Ownby disappeared while walking home from school in Beaufort, Missouri. A tip from a school friend led police on a frantic four-day search that ended unusually happily: the police discovered not only Ben, but another boy as well—15-year-old Shawn Hornbeck, who, four years earlier, had disappeared while riding his bike at the age of 11. Media scrutiny on Shawn’s years of captivity became intense. The question: Should children who are thought to be the victims of sexual abuse ever be named in the media? What should be done about the continued use of names of kidnap victims who are later found to be sexual assault victims? Should use of their names be discontinued at that point?

A Self-Serving Leak San Francisco Chronicle reporters Mark Fainaru-Wada and Lance Williams were widely praised for their stories about sports figures involved with steroids. They turned their investigation into a very successful book, Game of Shadows. And they won the admiration of fellow journalists because they were willing to go to prison to protect the source who had leaked testimony to them from the grand jury investigating the BALCO sports-and-steroids case. Their source, however, was not quite so noble. The question: Should the two reporters have continued to protect this key source even after he admitted to lying? Should they have promised confidentiality in the first place?

The Times and Jayson Blair Jayson Blair advanced quickly during his tenure at The New York Times, where he was hired as a full-time staff writer after his internship there and others at The Boston Globe and The Washington Post. Even accusations of inaccuracy and a series of corrections to his reports on Washington, D.C.-area sniper attacks did not stop Blair from moving on to national coverage of the war in Iraq. But when suspicions arose over his reports on military families, an internal review found that he was fabricating material and communicating with editors from his Brooklyn apartment — or within the Times building — rather than from outside New York. The question: How does the Times investigate problems and correct policies that allowed the Blair scandal to happen?

Cooperating with the Government It began on Jan. 18, 2005, and ended two weeks later after the longest prison standoff in recent U.S. history. The question: Should your media outlet go along with the state’s request not to release the information?

Offensive Images Caricatures of the Prophet Muhammad didn’t cause much of a stir when they were first published in September 2005. But when they were republished in early 2006, after Muslim leaders called attention to the 12 images, it set off rioting throughout the Islamic world. Embassies were burned; people were killed. After the rioting and killing started, it was difficult to ignore the cartoons. Question: Do we publish the cartoons or not?

The Sting Perverted-Justice.com is a Web site that can be very convenient for a reporter looking for a good story. But the tactic raises some ethical questions. The Web site scans Internet chat rooms looking for men who can be lured into sexually explicit conversations with invented underage correspondents. Perverted-Justice posts the men’s pictures on its Web site. Is it ethically defensible to employ such a sting tactic? Should you buy into the agenda of an advocacy group — even if it’s an agenda as worthy as this one?

A Media-Savvy Killer Since his first murder in 1974, the “BTK” killer — his own acronym, for “bind, torture, kill” — has sent the Wichita Eagle four letters and one poem. How should a newspaper, or other media outlet, handle communications from someone who says he’s guilty of multiple sensational crimes? And how much should it cooperate with law enforcement authorities?

A Congressman’s Past The (Portland) Oregonian learned that a Democratic member of the U.S. Congress, up for re-election to his fourth term, had been accused by an ex-girlfriend of a sexual assault some 28 years previously. But criminal charges never were filed, and neither the congressman, David Wu, nor his accuser wanted to discuss the case now, only weeks before the 2004 election. Question: Should The Oregonian publish this story?

Using this Process to Craft a Policy It used to be that a reporter would absolutely NEVER let a source check out a story before it appeared. But there has been growing acceptance of the idea that it’s more important to be accurate than to be independent. Do we let sources see what we’re planning to write? And if we do, when?


McCombs School of Business



Case Study

Wells Fargo and Moral Emotions


On September 8, 2016, Wells Fargo, one of the nation’s oldest and largest banks, admitted in a settlement with regulators that it had created as many as two million accounts for customers without their permission. This was fraud, pure and simple. It seems to have been caused by a culture in the bank that made unreasonable demands upon employees. Wells Fargo agreed to pay $185 million in fines and penalties.

Employees had been urged to “cross-sell.” If a customer had one type of account with Wells Fargo, top brass reasoned, then they should have several. Employees were strongly incentivized, through both positive and negative means, to sell as many different types of accounts to customers as possible. “Eight is great” was a motto. But does the average person need eight financial products from a single bank? As things developed, when employees were unable to make such sales, they just made the accounts up and charged customers whether they had approved the accounts or not. Employees used customers’ personal identification numbers, without their knowledge, to enroll them in various products. Victims were frequently elderly or Spanish speakers.

Matthew Castro, whose father was born in Colombia, felt so bad about pushing sham accounts onto Latino customers that he tried to lessen his guilt by doing volunteer work. Other employees were quoted as saying “it’s beyond embarrassing to admit I am a current employee these days.”

Still other employees were moved to call company hotlines or otherwise blow the whistle, but they were simply ignored or oftentimes punished, frequently by being fired. One employee who sued to challenge retaliation against him was “uncomfortable” and “unsettled” by the practices he saw around him, which prompted him to speak out. “This is a fraud, I cannot be a part of that,” the whistleblower said.

Early prognostications were that CEO John Stumpf would not lose his job over the fiasco. However, as time went on and investigations continued, the forms and amount of wrongdoing seemed to grow and grow. Evidence surfaced that the bank improperly changed the terms of mortgage loans, signed customers up for unauthorized life insurance policies, overcharged small businesses for credit-card processing, and on and on.

In September of 2016, CEO Stumpf appeared before Congress and was savaged by Senators and Representatives of both parties, notwithstanding his agreement to forfeit $41 million in pay. The members of Congress denounced Wells Fargo’s actions as “theft,” “a criminal enterprise,” and an “outrage.” Stumpf simultaneously took “full responsibility,” yet blamed the fraud on ethical lapses of low-level bankers and tellers. He had, he said, led the company with courage. Nonetheless, by October of 2016 Stumpf had been forced into retirement and replaced by Tim Sloan.

Over the next several months, more and more allegations of wrongdoing arose. The bank had illegally repossessed cars from military veterans. It had modified mortgages without customer authorization. It had charged 570,000 customers for auto insurance they did not need. It had ripped off small businesses by charging excessive credit card fees. The total number of fake accounts rose from two million to 3.5 million. The bank also wrongly fined 110,000 mortgage clients for missing a deadline even though the party at fault for the delay was Wells Fargo itself.

At its April 2017 annual shareholders meeting, the firm faced levels of dissent that a Georgetown business school professor, Sandeep Dahiya, called “highly unusual.”

By September 2017, Wells Fargo had paid $414 million in refunds and settlements and incurred hundreds of millions more in attorneys’ and other fees. This included $108 million paid to the Department of Veterans Affairs for having overcharged military veterans on mortgage refinancing.

In October 2017, new Wells Fargo CEO Tim Sloan was told by Massachusetts Senator Elizabeth Warren, a Democrat, that he should be fired: “You enabled this fake-account scandal. You got rich off it, and then you tried to cover it up.” Republicans were equally harsh. Senator John Kennedy of Louisiana said: “I’m not against big. With all due respect, I’m against dumb.”

Sloan was still CEO when the company had its annual shareholders meeting in April 2018. Shareholders and protestors alike were extremely angry with Wells Fargo. By then, the bank had paid an additional $1 billion fine for abuses in mortgage and auto lending. And, in an unprecedented move, the Federal Reserve Board had ordered the bank to cap its asset growth. Disgust with Wells Fargo’s practices caused the American Federation of Teachers to cut ties with the bank. Some whistleblowers resisted early attempts at quiet settlements with the bank, holding out for a public admission of wrongdoing.

In May 2018, yet another shoe dropped. Wells Fargo’s share price dropped on news that the bank’s employees improperly altered documents of its corporate customers in an attempt to comply with regulatory directions related to money laundering rules.

Ultimately, Wells Fargo removed its cross-selling sales incentives. CEO Sloan, having been informed that lower-level employees were suffering stress, panic attacks, and other symptoms, apologized for the fact that management initially blamed them for the results of the toxic corporate culture, admitting that cultural weaknesses had caused a major morale problem.

Discussion Questions

1. What moral emotions seem to have been at play in this case? On the part of the bank’s employees? The bank’s victims? The bank’s regulators? The bank’s shareholders?

2. What factors contributed particularly to the outrage and anger that legislators, regulators, customers, and shareholders felt?

3. Clearly inner-directed emotions such as guilt and embarrassment affected the actions of Wells Fargo employees. Were they always sufficient to overcome the employees’ utilitarian calculation: “I need this job”?

4. Did moral emotions motivate some of the whistleblowers? How?

5. In the wake of everything described in the case study, Wells Fargo has fired many employees, clawed back bonuses from executives, replaced many of its directors, dismantled its sales incentive system and made other changes.

  • Do you think these changes were made out of a utilitarian calculation designed to avoid further monetary penalties, or a desire to avoid the shame and embarrassment the bank’s managers and employees were feeling?
  • Or was it a combination of both of these things?
  • If a combination, which do you think played a bigger role? Why?

Related Videos

Moral Emotions


Moral emotions are the feelings and intuitions that play a major role in most of our ethical decision making and actions.

Bibliography

“Elizabeth Warren to Wells Fargo CEO: “You Should Be Fired,” http://money.cnn.com/2017/10/03/investing/wells-fargo-hearing-ceo/index.html

“It’s Been a Year Since the Wells Fargo Scandal Broke—and New Problems Are Still Surfacing,” http://www.latimes.com/business/la-fi-wells-fargo-one-year-20170908-story.html

“Wells Fargo’s Reaction to Scandal Fails to Satisfy Angry Lawmakers,” https://www.nytimes.com/2016/09/30/business/dealbook/wells-fargo-ceo-john-stumpf-house-hearing.html

“’Wells Fargo, You’re the Worst’: Scenes from Testy Annual Meeting,” https://www.americanbanker.com/news/wells-fargo-youre-the-worst-scenes-from-testy-annual-meeting

“How Wells Fargo’s Cutthroat Corporate Culture Allegedly Drove Bankers to Fraud,” https://www.vanityfair.com/news/2017/05/wells-fargo-corporate-culture-fraud

“Outburst by Angry Wells Fargo Shareholder Halts Annual Meeting,” http://money.cnn.com/2017/04/25/investing/wells-fargo-shareholder-meeting/index.html

“Wells Fargo Shares Slip on Report that Employees Altered Customer Documents in Its Business-Banking Unit,” https://www.cnbc.com/2018/05/17/wells-fargo-shares-sink-on-report-that-employees-altered-customer-documents-in-its-business-banking-unit.html

“Wells Fargo to Pay $108 Million for Allegedly Overcharging Veterans on Refis,” https://www.housingwire.com/articles/40925-wells-fargo-to-pay-108-million-for-allegedly-overcharging-veterans-on-refis

“For Wells Fargo, Angry Questions About Profiling Latinos,” http://www.chicagotribune.com/business/ct-wells-fargo-fake-accounts-latinos-20161019-story.html

“More Former Wells Fargo Employees Allege They Were Fired After They Tried to Blow the Whistle on Shady Activity at the Bank,” http://money.cnn.com/2017/11/06/investing/wells-fargo-retaliation-whistleblower/index.html

“Inside Wells Fargo, Workers Say the Mood is Grim,” http://money.cnn.com/2016/11/03/investing/wells-fargo-morale-problem/index.html

“Disgust With Wells Fargo You Can Take to the Bank,” https://goodmenproject.com/business-ethics-2/disgust-with-wells-fargo-you-can-take-to-the-bank-wcz/

“The Former Khmer Rouge Slave Who Blew the Whistle on Wells Fargo,” https://www.nytimes.com/2018/03/24/business/wells-fargo-whistleblower-duke-tran.html


ACM Code of Ethics and Professional Conduct

Using the Code

Case Studies

The ACM Code of Ethics and Professional Conduct (“the Code”) is meant to inform practice and education. It is useful as the conscience of the profession, but also for individual decision-making.

As prescribed by the Preamble of the Code, computing professionals should approach the dilemma with a holistic reading of the principles and evaluate the situation with thoughtful consideration to the circumstances. In all cases, the computing professional should defer to the public good as the paramount consideration. The analyses in the following cases highlight the intended interpretations of members of the 2018 Code task force, and should help guide computing professionals in how to apply the Code to various situations.

Case Study: Malware

Rogue Services touts its web hosting as “cheap, guaranteed uptime, no matter what.” While some of Rogue’s clients are independent web-based retailers, most are focused on malware and spam, which leverage Rogue for continuous delivery. Corrupted advertisements often link to code hosted on Rogue to exploit browser vulnerabilities to infect machines with ransomware. Rogue refuses to intervene with these services despite repeated requests.


Case Study: Medical Implants

Corazón is a medical technology startup that builds implantable heart health monitoring devices. After being approved by multiple countries’ medical device regulation agencies, Corazón quickly gained market share based on the ease of use of the app and the company’s vocal commitment to securing patients’ information. Corazón also worked with several charities to provide free or reduced access to patients living below the poverty line.


Case Study: Abusive Workplace Behavior

A new hire with the interactive technologies team, Diane became the target of team leader Max’s tirades when she committed a code update that introduced a timing glitch in a prototype shortly before a live demo. Diane approached the team’s manager, Jean, about Max’s abusive behavior. Jean agreed that the experience was unpleasant but said that was the price to pay for working on an intense, industry-leading team.


Case Study: Automated Active Response Weaponry

Q Industries is an international defense contractor specializing in autonomous vehicles. As an early pioneer in passive systems, such as bomb-defusing robots and crowd-monitoring drones, Q established itself as a vendor of choice for military and law enforcement applications. Q’s products have been deployed in a variety of settings, including conflict zones and nonviolent protests. Recently, however, Q has begun to experiment with automated active responses.


Case Study: Dark UX Patterns

The change request Stewart received was simple: replace the website’s rounded rectangle buttons with arrows, and adjust the color palette to one that mixes red and green text. But he found the prototype confusing. He suggested to his manager that this design would probably trick users into more expensive options they didn’t want. The response was that these were the changes requested by the client.


Case Study: Malicious Inputs to Content Filters

The U.S. Children’s Internet Protection Act (CIPA) mandates that public schools and libraries employ mechanisms to block inappropriate matter on the grounds that it is deemed harmful to minors. Blocker Plus is an automated Internet content filter designed to help these institutions comply with CIPA’s requirements. During a review session, the development team reviewed a number of complaints about content being blocked inappropriately.


Guiding Members with a Framework of Ethical Conduct

Learn more about ACM’s commitment to ethical standards: the ACM Code of Ethics, Software Engineering Code of Ethics and Professional Practice, and Committee on Professional Ethics (COPE), which is guiding these and other initiatives.


Ask an Ethicist

Ask an Ethicist invites ethics questions related to computing or technology. Have an interesting question, puzzle or conundrum? Submit yours via a form, and the ACM Committee on Professional Ethics (COPE) will answer a selection of them on the site.


Guidance in Addressing Real-World Ethical Challenges

The Integrity Project, created by ACM's Committee on Professional Ethics, is a series of resources designed to aid ethical decision making. It includes case studies demonstrating how the principles can be applied to specific ethical challenges, and an Ask an Ethicist advice column to help computing professionals navigate the sometimes challenging choices that can arise in the course of their work.


Supporting the Professionalism of ACM Members

The ACM Committee on Professional Ethics (COPE) is responsible for promoting ethical conduct among computing professionals by publicizing the Code of Ethics and by offering interpretations of the Code; planning and reviewing activities to educate membership in ethical decision making on issues of professional conduct; and reviewing and recommending updates to the Code of Ethics and its guidelines.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

O'Mathúna D, Iphofen R, editors. Ethics, Integrity and Policymaking: The Value of the Case Study [Internet]. Cham (CH): Springer; 2022. doi: 10.1007/978-3-031-15746-2_1


Chapter 1. Making a Case for the Case: An Introduction

Dónal O’Mathúna and Ron Iphofen.


Published online: November 3, 2022.

This chapter argues for the importance of case studies in generating evidence to guide and/or support policymaking across a variety of fields. Case studies can offer the kind of depth and detail vital to the nuances of context, which may be important in securing effective policies that take account of influences not easily identified in more generalised studies. Case studies can be written in a variety of ways, which are overviewed in this chapter, and can also be written with different purposes in mind. At the same time, case studies have limitations, particularly when evidence of causation is sought. Understanding these can help to ensure that case studies are appropriately used to assist in policymaking. This chapter also provides an overview of the types of case studies found in the rest of this volume, and briefly summarises the themes and topics addressed in each of the other chapters.

1.1. Judging the Ethics of Research

When asked to judge the ethical issues involved in research or any evidence-gathering activity, any research ethicist worth their salt will (or should) reply, at least initially: ‘It depends’. This is neither sophistry nor evasive legalism. Instead, it is a specific form of casuistry used in ethics in which general ethical principles are applied to the specifics of actual cases and inferences made through analogy. It is valued as a structured yet flexible approach to real-world ethical challenges. Case study methods recognise the complexities of depth and detail involved in assessing research activities. Another way of putting this is to say: ‘Don’t ask me to make a judgement about a piece of research until I have the details of the project and the context in which it will or did take place.’ Understanding and fully explicating a context is vital as far as ethical research (and evidence-gathering) is concerned, along with taking account of the complex interrelationship between context and method (Miller and Dingwall 1997 ).

This rationale lies behind this collection of case studies, which is one outcome of the EU-funded PRO-RES Project. One aim of this project was to establish the virtues, values, principles and standards most commonly held as supportive of ethical practice by researchers, scientists and other evidence-generators and users. The project team conducted desk research and workshops, and consulted throughout the project with a wide range of stakeholders (PRO-RES 2021a). The resulting Scientific, Trustworthy, and Ethical evidence for Policy (STEP) ACCORD was devised, which all stakeholders could sign up to and endorse in the interests of ensuring that any policies arising from research findings are based upon ethical evidence (PRO-RES 2021b).

By ‘ethical evidence’ we mean results and findings that have been generated by research and other activities during which the standards of research ethics and integrity have been upheld (Iphofen and O’Mathúna 2022 ). The first statement of the STEP ACCORD is that policy should be evidence-based, meaning that it is underpinned by high-quality research, analysis and evidence (PRO-RES 2021b ). While our topic could be said to be research ethics, we have chosen to refer more broadly to evidence-generating activities. Much debate has occurred over the precise definition of research under the apparent assumption that ‘non-research projects’ fall outside the purview of requirements to obtain ethics approval from an ethics review body. This debate is more about the regulation of research than the ethics of research and has contributed to an unbalanced approach to the ethics of research (O’Mathúna 2018 ). Research and evidence-generating activities raise many ethical concerns, some similar and some distinct. When the focus is primarily on which projects need to obtain what sort of ethics approval from which type of committee, the ethical issues raised by those activities themselves can receive insufficient attention. This can leave everyone involved with these activities either struggling to figure out how to manage complex and challenging ethical dilemmas or pushing ahead with those activities confident that their approval letter means they have fulfilled all their ethical responsibilities. Unfortunately, this can lead to a view that research ethics is an impediment and burden that must be overcome so that the important work in the research itself can get going.

The alternative perspective advocated by PRO-RES, and the authors of the chapters in this volume, is that ethics underpins all phases of research, from when the idea for a project is conceived, all the way through its design and implementation, and on to how its findings are disseminated and put into practice in individual decisions or in policy. Given the range of activities involved in all these phases, multiple types of ethical issues can arise. Each occurs in its own context of time and place, and this must be taken into account. While ethical principles and theories have important contributions to make at each of these points, case studies are also very important. These allow for the normative effects of various assumptions and declarations to be judged in context. We therefore asked the authors of this volume’s chapters to identify various case studies which would demonstrate the ethical challenges entailed in various types of research and evidence-generating activities. These illustrative case studies explore various innovative topics and fields that raise challenges requiring ethical reflection and careful policymaking responses. The cases highlight diverse ethical issues and provide lessons for the various options available for policymaking (see Sect.  1.6 . below). Cases are drawn from many fields, including artificial intelligence, space science, energy, data protection, professional research practice and pandemic planning. The issues are examined in different locations, including Europe, India, Africa and in global contexts. Each case is examined in detail and also helps to anticipate lessons that could be learned and applied in other situations where ethical evidence is needed to inform evidence-based policymaking.

1.2. The Case for Cases

Case studies have increasingly been used, particularly in social science (Exworthy and Powell 2012 ). Many reasons underlie this trend, one being the movement towards evidence-based practice. Case studies provide a methodology by which a detailed study can be conducted of a social unit, whether that unit is a person, an organization, a policy or a larger group or system (Exworthy and Powell 2012 ). The case study is amenable to various methodologies, mostly qualitative, which allow investigations via documentary analyses, interviews, focus groups, observations, and more.

At the same time, consensus is lacking over the precise nature of a case study. Various definitions have been offered, but Yin ( 2017 ) provides a widely cited definition with two parts. One is that a case study is an in-depth inquiry into a real-life phenomenon where the context is highly pertinent. The second part of Yin’s definition addresses the many variables involved in the case, the multiple sources of evidence explored, and the inclusion of theoretical propositions to guide the analysis. While Yin’s emphasis is on the case study as a research method, he identifies important elements of broader relevance that point to the particular value of the case study for examining ethical issues.

Other definitions of case studies emphasize their story or narrative aspects (Gwee 2018 ). These stories frequently highlight a dilemma in contextually rich ways, with an emphasis on how decisions can be or need to be made. Case studies are particularly helpful with ethical issues to provide crucial context and explore (and evaluate) how ethical decisions have been made or need to be made. Classic cases include the Tuskegee public health syphilis study, the Henrietta Lacks human cell line case, the Milgram and Zimbardo psychology cases, the Tea Room Trade case, and the Belfast Project in oral history research (examined here in Chap. 10 ). Cases exemplify core ethical principles, and how they were applied or misapplied; in addition, they examine how policies have worked well or not (Chaps. 2 , 3 and 5 ). Cases can examine ethics in long-standing issues (like research misconduct (Chap. 7 ), energy production (Chap. 8 ), or Chap. 11 ’s consideration of researchers breaking the law), or with innovations in need of further ethical reflection because of their novelty (like extended space flight (Chap. 9 ) and AI (Chaps. 13 and 14 ), with the latter looking at automation in legal systems). These case studies help to situate the innovations within the context of widely regarded ethical principles and theories, and allow comparisons to be made with other technologies or practices where ethical positions have been developed. In doing so, these case studies offer pointers and suggestions for policymakers given that they are the ones who will develop applicable policies.

1.3. Research Design and Causal Inference

Not everyone is convinced of the value of the case study. It must be admitted that case studies have limitations, on which we will reflect shortly. Yet we believe that some critics go too far, revealing prejudices against the value of the case (Yin 2017). In what has become a classic text for research design, Campbell and Stanley (1963) have few good words for what they call the ‘One Shot Case Study.’ They rank it below two other ‘pre-experimental’ designs—the One-Group Pretest–Posttest and the Static-Group Comparison—and conclude that case studies “have such a total absence of control as to be of almost no scientific value” (Campbell and Stanley 1963, 6). The other designs have, in turn, a baseline and outcome measure and some degree of comparative analysis, which lends them some validity. Such a criticism is legitimate only if one holds the experimental method to be superior for generating evidence of effectiveness and, as for Campbell and Stanley, one is striving to assess the effectiveness of educational interventions.

What is missing from that assessment is that different methodologies are more appropriate for different kinds of questions. Questions of causation and whether a particular treatment, policy or educational strategy is more effective than another are best answered by experimental methods. While experimental designs are better suited to explore causal relationships, case studies are more suited to explore “how” and “why” questions (Yin 2017 ). It can be more productive to view different methodologies as complementing one another, rather than examining them in hierarchical terms.

The case study approach draws on a long tradition in ethnography and anthropology: “It stresses the importance of holistic perspectives and so has more of a ‘humanistic’ emphasis. It recognises that there are multiple influences on any single individual or group and that most other methods neglect the thorough understanding of this range of influences. They usually focus on a chosen variable or variables which are tested in terms of their influence. A case study tends to make no initial assumptions about which are the key variables—preferring to allow the case to ‘speak for itself’” (Iphofen et al. 2009 , 275). This tradition has sometimes discouraged people from conducting or using case studies on the assumption that they take massive amounts of time and lead to huge reports. This is the case with ethnography, but the case study method can be applied in more limited settings and can lead to high-quality, concise reports.

Another criticism of case studies is that they cannot be used to make generalizations. Certainly, there are limits to their generalisability, but the same is true of experimental studies. One randomized controlled trial cannot be generalised to the whole population without ensuring that its details are evaluated in the context of how it was conducted.

Similarly, it should not be assumed that generalisability can adequately guide practice or policy when it comes to the specifics of an individual case. A case study should not be used to support statistical generalisation (the claim that the same proportion found in the case will be found in the general population). But a case study can be used to expand and generalise theories, and is therefore of considerable use. It affords a method of examining the specific (complex) interactions occurring in a case which can only be known from the details. Such an analysis can be carried out for individuals, policies or interventions.

The current COVID-19 pandemic demonstrates the dangers of generalising in the wrong context. Some people have very mild cases of COVID-19 or are asymptomatic. Others get seriously ill and even die. Sometimes people generalise from cases they know and assume they will have mild symptoms. Then they refuse to take the COVID-19 vaccine, basically generalising from similar cases. Mass vaccination is recommended for the sake of the health of the public (generalised health) and to limit the spread of a deadly virus. Cases are reported of people having adverse reactions to COVID-19 vaccines, and some people generalise from these that they will not take whatever risks might be involved in receiving the vaccine themselves. It might be theoretically possible to discover which individuals WILL react adversely to immunisation on a population level. But it is highly complex and expensive to do so, and takes an extensive period of time. Given the urgency of benefitting the health of ‘the public’, policymakers have decided that the risks to a sub-group are warranted. Only after the emergence of epidemiological data disclosing negative effects of some vaccines on some individuals will it become more clear which characteristics typify those cases which are likely to experience the adverse effects, and more accurately quantify the risks of experiencing those effects.

Much literature now points to the advantages and disadvantages of case studies (Gomm et al. 2000 ), and how to use them and conduct them with adequate rigour to ensure the validity of the evidence generated (Schell 1992 ; Yin 2011 , 2017 ). At the same time, legitimate critiques have been made of some case studies because they have been conducted without adequate rigor, in unsystematic ways, or in ways that allowed bias to have more influence than evidence (Hammersley 2001 ). Part of the problem here is similar to interviewing, where some will assume that since interviews are a form of conversation, anyone can do it. Case studies have some similarities to stories, but that doesn’t mean they are quick and easy ways to report on events. That view can lead to the situation where “most people feel that they can prepare a case study, and nearly all of us believe we can understand one. Since neither view is well founded, the case study receives a lot of approbation it does not deserve” (Hoaglin et al., cited in Yin 2017 , 16).

Case studies can be conducted and used in a wide range of ways (Gwee 2018 ). Case studies can be used as a research method, as a teaching tool, as a way of recording events so that learning can be applied to practice, and to facilitate practical problem-solving skills (Luck et al. 2006 ). Significant differences exist between a case study that was developed and used in research compared to one used for teaching (Yin 2017 ). A valid rationale for studying a ‘case’ should be provided so that it is clear that the proposed method is suitable to the topic and subject being studied. The unit of study for a case could be an individual person, social group, community, or society. Sometimes that specific case alone will constitute the actual research project. Thus, the study could be of one individual’s experience, with insights and understanding gained of the individual’s situation which could be of use to understand others’ experiences. Often there will be attempts made at a comparison between cases—one organisation being compared to another, with both being studied in some detail, and in terms of the same or similar criteria. Given this variety, it is important to use cases in ways appropriate to how they were generated.

The case study continues to be an important piece of evidence in clinical decision-making in medicine and healthcare. Here, case studies do not demonstrate causation or effectiveness, but are used as an important step in understanding the experiences of patients, particularly with a new or confusing set of symptoms. This was clearly seen as clinicians published case studies describing a new respiratory infection which the world now knows to be COVID-19. Only as case studies were generated, and the patterns brought together in larger collections of cases, did the characteristics of the illness come to inform those seeking to diagnose at the bedside (Borges do Nascimento et al. 2020 ). Indeed case studies are frequently favoured in nursing, healthcare and social work research where professional missions require a focus on the care of the individual and where cases facilitate making use of the range of research paradigms (Galatzer-Levy et al. 2000 ; Mattaini 1996 ; Gray 1998 ; Luck et al. 2006 ).

1.4. Devil’s in the Detail

Our main concern in this collection is not with case study aetiology but rather to draw on the advantages of the method to highlight key ethical issues related to the use of evidence in influencing policy. Thus, we make no claim to causal ‘generalisation’ on the basis of these reports; instead, we seek to elucidate ethical issues, even if theoretical, and to anticipate responses and obstacles in similar situations and contexts that might help decision-making in novel circumstances. A key strength of case studies is their capacity to connect abstract theoretical concepts to the complex realities of practice and the real world (Luck et al. 2006). Ethics cases clearly fit this description and allow the contextual details of issues and dilemmas to be included in discussions of how ethical principles apply as policy is being developed.

Since cases are highly focussed on the specifics of the situation, more time can be given over to data gathering which may be of both qualitative and quantitative natures. Given the many variables involved in the ‘real life’ setting, increased methodological flexibility is required (Yin 2017 ). This means seeking to maximise the data sources—such as archives (personal and public), records (such as personal diaries), observations (participant and covert) and interviews (face-to-face and online)—and revisiting all sources when necessary and as case participants and time allows.

1.5. Cases and Policymaking

Case studies allow researchers and practitioners to learn from the specifics of a situation and apply that learning in similar situations. Ethics case studies allow such reflection to facilitate the development of ethical decision-making skills. This volume has major interests in ethics and evidence-generation (research), but also in a third area: policymaking. Cases can influence policymaking: a single case can receive widespread attention and become the impetus for policy that aims to prevent similar cases. For example, the US federal Brady Law was enacted in 1993 to require background checks on people before they purchase a gun (ATF 2021). The law was named for White House Press Secretary James Brady, whose case became widely known in the US after he was shot and paralyzed during John Hinckley, Jr.’s 1981 assassination attempt on President Ronald Reagan. Another example, this time in a research context, was how the Tuskegee Syphilis Study led, after its public exposure in 1972, to the US Department of Health, Education and Welfare appointing an expert panel to examine the ethics of that case. This resulted in federal policymakers enacting the National Research Act in 1974, which included setting up a national commission that published the Belmont Report in 1978. This report continues to strongly influence research ethics practice around the world. These examples highlight the power of a case study to influence policymaking.

One of the challenges for policymakers, though, is that compelling cases can often be provided for opposite sides of an issue. Also, while the Belmont Report has been praised for articulating a small number of key ethical principles, how those principles should be applied in specific instances of research remains an ongoing challenge and a point of much discussion. This is particularly relevant for innovative techniques and technologies. Hence the importance of cases interacting with general principles and leading to ongoing reflection and debate over the applicable cases. At the same time, new areas of research and evidence generation activities will lead to questions about how existing ethical principles and values apply. New case studies can help to facilitate that reflection, which can then allow policymakers to consider whether existing policy should be adapted or whether whole new areas of policy are needed.

Case studies can also play an important role in learning from and evaluating policy. Policymakers tend to focus on practical, day-to-day concerns and on the introduction of new programmes (Exworthy and Peckham 2012). Time and resources may be scant when it comes to evaluating how well existing policies are performing or reflecting on how policies can be adapted to overcome shortcomings (Hunter 2003). Effective policies may exist elsewhere (historically or geographically) and be more easily adapted to a new context than starting policymaking from scratch. Case studies can permit learning from past policies (or situations where policies did not exist), and they can illuminate various factors that should be explored in more detail in the context of the current issue or situation. Chapters 2, 3 and 5 in this volume are examples of this type of case study.

1.6. The Moral Gain

This volume reflects the ambiguity of ethical dilemmas in contemporary policymaking. Analyses will reflect current debates where consensus has not been achieved yet. These cases illustrate key points made throughout the PRO-RES project: that ethical decision-making is a fluid enterprise, where values, principles and standards must constantly be applied to new situations, new events and new research developments. The cases illustrate how no ‘one point’ exists in the research process where judgements about ethics can be regarded as ‘final.’ Case studies provide excellent ways for readers to develop important decision-making skills.

Research produces novel products and processes which can have broad implications for society, the environment and relationships. Research methods themselves are modified or applied in new ways and places, requiring further ethical reflection. New topics and whole fields of research develop and require careful evaluation and thoughtful responses. New case studies are needed because research constantly generates new issues and new ethics questions for policymaking.

The cases found in this volume address a wide range of topics and involve several disciplines. The cases were selected according to the parameters of the PRO-RES project and the Horizon 2020 funding call to which it responded. First, the call was concerned with both research ethics and scientific integrity, and each of the cases addresses one or both of these areas. The call sought projects that addressed non-medical research, and the cases here address disciplines such as social sciences, engineering, artificial intelligence and One Health. The call also sought particular attention be given to (a) covert research, (b) working in dangerous areas/conflict zones and (c) behavioural research collecting data from social media/internet sources. Hence, we included cases that addressed each of these areas. Finally, while an EU-funded project can be expected to have a European focus, the issues addressed have global implications. Therefore, we wanted to include case studies from outside Europe and did so by involving authors from India and Africa to reflect on the volume’s areas of interest.

The first case study offered in this volume (Chap. 2 ) examines a significant policy approach taken by the European Union to address ethics and integrity in research and innovation: Responsible Research and Innovation (RRI). This chapter examines the lessons that can be learned from RRI in a European context. Chapter 3 elaborates on this topic with another policy learning case study, but this time examining RRI in India. One of the critiques made of RRI is that it can be Euro-centric. This case study examines this claim, and also describes how a distinctively Indian concept, Scientific Temper, can add to and contextualise RRI. Chapter 4 takes a different approach in being a case study of the development of research ethics guidance in the United Kingdom (UK). It explores the history underlying the research ethics framework commissioned by the UK Research Integrity Office (UKRIO) and the Association of Research Managers and Administrators (ARMA), and points to lessons that can be learned about the policy-development process itself.

While staying focused on policy related to research ethics, the chapters that follow include case studies that address more targeted concerns. Chapter 5 examines the impact of the European Union’s (EU) General Data Protection Regulation (GDPR) in the Republic of Croatia. Research data collected in Croatia is used to explore the handling of personal data before and after the introduction of GDPR. This case study aims to provide lessons learned that could contribute to research ethics policies and procedures in other European Member States.

Chapter 6 moves from policy itself to the role of policy advisors in policymaking. This case study explores the distinct responsibilities of those elevated to the role of “policy advisor,” especially given the current lack of policy to regulate this field or how its advice is used by policymakers. Next, Chap. 7 straddles the previous chapters’ focus on policy and its evaluation while introducing the focus of the next section on historical case studies. This chapter uses the so-called “race for the superconductor” as a case study by which the PRO-RES ethics framework is used to explore specific ethical dilemmas (PRO-RES 2021b ). This case study is especially useful for policymakers because of how it reveals the multiple difficulties in balancing economic, political, institutional and professional requirements and values.

The next case study continues the use of historical cases, but here to explore the challenges facing innovative research into unorthodox energy technology that has the potential to displace traditional energy suppliers. The wave power case in Chap. 8 highlights how conducting research with integrity can have serious consequences and come with considerable cost. The case also points to the importance of transparency in how evidence is used in policymaking so that trust in science and scientists is promoted at the same time as science is used in the public interest. Another area of cutting-edge scientific innovation is explored in Chap. 9 , but this time looking to the future. This case study examines space exploration, and specifically the ethical issues around establishing safe exposure standards for astronauts embarking on extended duration spaceflights. This case highlights the ethical challenges in policymaking focused on an elite group of people (astronauts) who embark on extremely risky activities in the name of science and humanity.

Chapter 10 moves from the physical sciences to the social sciences. The Belfast Project provides a case study to explore the ethical challenges of conducting research after violent conflict. In this case, researchers promised anonymity and confidentiality to research participants, yet that was overturned through legal proceedings which highlighted the limits of confidentiality in research. This case points to the difficulty of balancing the value of research archives in understanding conflict against the value of providing juridical evidence to promote justice. Another social science case is examined in Chap. 11 , this time in ethnography. This so-called ‘urban explorer’ case study explores the justifications that might exist for undertaking covert research where researchers break the law (in this case by trespassing) in order to investigate a topic that would remain otherwise poorly understood. This case raises a number of important questions for policymakers around: the freedoms that researchers should be given to act in the public interest; when researchers are justified in breaking the law; and what responsibilities and consequences researchers should accept if they believe they are justified in doing so.

Further complexity in research and evidence generation is introduced in Chap. 12 . A case study in One Health is used to explore ethical issues at the intersection of animal, human and environmental ethics. The pertinence of such studies has been highlighted by COVID-19, yet policies lag behind in recognising the urgency and complexity of initiating investigations into novel outbreaks, such as the one discussed here that occurred among animals in Ethiopia. Chapter 13 retains the COVID-19 setting, but returns the attention to technological innovation. Artificial intelligence (AI) is the focus of these two chapters in the volume, here examining the ethical challenges arising from the emergency authorisation of using AI to respond to the public health needs created by the COVID-19 pandemic. Chapter 14 addresses a longer term use of AI in addressing problems and challenges in the legal system. Using the so-called Robodebt case, the chapter explores the reasons why legal systems are turning to AI and other automated procedures. The Robodebt case highlights problems when AI algorithms are built on inaccurate assumptions and implemented with little human oversight. This case shows the massive problems for hundreds of thousands of Australians who became victims of poorly conceived AI and makes recommendations to assist policymakers to avoid similar debacles. The last chapter (Chap. 15 ) draws some general conclusions from all the cases that are relevant when using case studies.

1.7. Into the Future

This volume focuses on ethics in research and professional integrity and how we can be clear about the lessons that can be drawn to assist policymakers. The cases provided cover a wide range of situations, settings, and disciplines. They cover international, national, organisational, group and individual levels of concern. Each case raises distinct issues, yet also points to some general features of research, evidence-generation, ethics and policymaking. All the studies illustrate the difficulties of drawing clear ‘boundaries’ between the research and the context. All these case studies show how in real situations dynamic judgements have to be made about many different issues. Guidelines and policies do help and are needed. But at the same time, researchers, policymakers and everyone else involved in evidence generation and evidence implementation need to embody the virtues that are central to good research. Judgments will need to be made in many areas, for example, about how much transparency can be allowed, or is ethically justified; how much risk can be taken, both with participants’ safety and also with the researchers’ safety; how much information can be disclosed to or withheld from participants in their own interests and for the benefit of the ‘science’; and many others. All of these point to just how difficult it can be to apply common standards across disciplines, professions, cultures and countries. That difficulty must be acknowledged and lead to open discussions with the aim of improving practice. The cases presented here point to efforts that have been made towards this. None of them is perfect. Lessons must be learned from all of them, towards which Chap. 15 aims to be a starting point. Only by openly discussing and reflecting on past practice can lessons be learned that can inform policymaking that aims to improve future practice. In this way, ethical progress can become an essential aspect of innovation in research and evidence-generation.

PRO-RES is a European Commission-funded project aiming to PROmote ethics and integrity in non-medical RESearch by building a supported guidance framework for all non-medical sciences and humanities disciplines adopting social science methodologies. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 788352. Open access fees for this volume were paid for through the PRO-RES funding.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

  • Cite this Page O’Mathúna D, Iphofen R. Making a Case for the Case: An Introduction. 2022 Nov 3. In: O'Mathúna D, Iphofen R, editors. Ethics, Integrity and Policymaking: The Value of the Case Study [Internet]. Cham (CH): Springer; 2022. Chapter 1. doi: 10.1007/978-3-031-15746-2_1

Ethical aspects of ChatGPT: An approach to discuss and evaluate key requirements from different ethical perspectives

  • Original Research
  • Open access
  • Published: 10 September 2024

  • Marc Steen, ORCID: orcid.org/0000-0001-7915-5459
  • Joachim de Greeff, ORCID: orcid.org/0000-0002-6503-2658
  • Maaike de Boer, ORCID: orcid.org/0000-0002-2775-8351
  • Cor Veenman, ORCID: orcid.org/0000-0002-2645-1198

There has been growing attention for Large Language Models and conversational agents, and their capabilities and benefits. In addition, there is a need to look at the various costs, harms, and risks involved in their development and deployment. In order to contribute to the development and deployment of ‘trustworthy AI’, we propose to organize ethical reflection and deliberation, following the seven key requirements of the European Commission’s High-Level Expert Group on AI (2019). We propose to look at these requirements through four different ethical perspectives (consequentialism, duty ethics, relational ethics, and virtue ethics) and to look at different levels of the sociotechnical system (individual, organization, and society). We present a case study of ChatGPT to illustrate how this approach works in practice, and close with a discussion of this approach.

1 Introduction

The development of Large Language Models (LLMs) has been an incremental process, but the public release of ChatGPT, an LLM-based conversational agent, in November 2022 sparked a worldwide hype and even speculation about impending Artificial General Intelligence (AGI). Articles in both popular and academic publications have discussed diverse opportunities, challenges, and implications of conversational agents (e.g., Dwivedi et al. 2023 ). The field is developing so fast that there is hardly time to properly assess what is going on. For many organizations, governments, companies, and citizens, key questions are: What can it do exactly? Is it hype or real? What are the various ethical issues? It is this last question that we aim to (partially) address in this paper. Below, we discuss several ethical aspects of one LLM-based conversational agent: ChatGPT.

The authors have worked in multiple applied research and innovation projects, with numerous clients and partners, on the development and evaluation of AI systems, aiming to integrate ethical concerns into these projects. It is from this vantage point that we are interested in the ethical aspects of conversational agents. We have observed that ethical concerns often remain implicit; the people involved rarely discuss ethical perspectives and aspects explicitly. We propose that making such perspectives and aspects more explicit, and organizing reflection and deliberation around them, is necessary if we want to move ‘from principles to practices’ (Morley et al. 2020 ). Such ethical reflection and deliberation are urgent when AI systems are deployed in practice, especially if people’s safety and fundamental rights are at stake. In this article we discuss an approach to organize ethical reflection and deliberation around the seven key requirements of the European Commission’s High-Level Expert Group on AI (HLEG) (2019).

There are diverse approaches to integrating ethical aspects into the development and deployment of technologies; methods can be used at the start of development, during development, or after development (Reijers et al. 2018 ). We propose that integrating ethical aspects during development and deployment is most useful, especially when this is part of an iterative development process, like CRISP-DM (Martínez-Plumed et al. 2021 ; Shearer 2000 ). Furthermore, we propose to use different ethical perspectives more explicitly. Notably, we propose to use consequentialism, duty ethics, relational ethics, and virtue ethics (Van de Poel and Royakkers 2011 ), and to use them in parallel, as complementary perspectives. Moreover, we understand ethics as an iterative and participatory process of ethical reflection, inquiry, and deliberation (REF removed for review). The task for the people involved is then to make room for such a process and to facilitate relevant people to participate. Such a process can have three (iterative) steps:

Identify issues that are (potentially) at play in the project and reflect on these. A handful of issues works best (if there are more, one can cluster; if there are fewer, one can explore further).

Organize dialogues with relevant people, both inside and outside the organization, for example, stakeholders, to inquire into these issues from diverse perspectives and to hear diverse voices.

Make decisions , for example, between different design options and test these in experiments; this promotes transparency and accountability. The key is to steer the project more consciously, explicitly, and carefully.
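The three-step process above can be sketched as a minimal data structure plus a triage heuristic. This is purely illustrative and not from the article: the class and function names, and the threshold of five for "a handful" of issues, are our own assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EthicalIssue:
    """One issue identified in step 1; notes accumulate in steps 2 and 3."""
    name: str
    reflections: list[str] = field(default_factory=list)    # step 1: identify and reflect
    dialogue_notes: list[str] = field(default_factory=list)  # step 2: organize dialogues
    decision: Optional[str] = None                           # step 3: make decisions

def triage(issues: list[EthicalIssue]) -> str:
    """Step 1 heuristic from the text: a handful of issues works best."""
    HANDFUL = 5  # assumed size of 'a handful'
    if len(issues) > HANDFUL:
        return "cluster related issues"
    if len(issues) < HANDFUL:
        return "explore to surface more issues"
    return "proceed to dialogues"
```

In a project, one would iterate: triage the issue list, hold dialogues, record decisions, and then revisit the issues as the design evolves.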

Our focus is on the first step (identify issues); below, we identify and discuss a range of ethical aspects of one specific LLM-based conversational agent: ChatGPT. The second step (organize dialogues) and the third step (make decisions) are outside the current article’s scope. Below, we will introduce the ingredients of our approach: a modest form of systems thinking; four complementary ethical perspectives; and the HLEG’s seven key requirements. Then we illustrate our approach with a case study of ChatGPT. This case study is also meant to explore how different ethical perspectives are relevant to different key requirements. We close the paper with a discussion of our approach.

2 Systems thinking

In our approach, we follow a modest form of systems thinking (Meadows 2008 ); we understand an AI system as part of a larger sociotechnical system and look at three levels of analysis:

Individual ; how people can interact with a conversational agent and, for example, can benefit or suffer from that;

Organization ; how an organization deploys a conversational agent, for example, in a service they provide;

Society ; how, for example, the deployment of a conversational agent leads to benefits and costs for different groups or for the environment, in terms of the use of material and energy.

Our approach, understanding an AI system as part of a sociotechnical system and looking at different aggregation levels, is similar to the approach of Weidinger et al. ( 2023 ). Footnote 1 It is also slightly different. Weidinger et al. discuss three layers: Capability , the ‘AI systems and their technical components’ (typically ‘evaluated in isolation’); Human interaction , ‘the experience of people interacting with a given AI system’; and Systemic impact , ‘the impact of an AI system on the broader systems in which it is embedded, such as society, the economy, and the natural environment’. Our approach differs in that we propose not to study the AI system in isolation; we always look at an AI system in its context (on the level of the individual, organization, and society). There are topics, however, where it makes sense to turn to ‘technical methods’ (rather than ‘non-technical methods’) (High-Level Expert Group on Artificial Intelligence 2019 , p. 8); for example, in order to evaluate a technical requirement like robustness (see below).

Furthermore, we propose to discuss the organizational level—a level of analysis that Weidinger et al. do not distinguish. We believe this level of analysis is valuable because it can enable organizations to reflect on how they practically deploy and use LLMs and conversational agents. Critically, this level is where they have agency. Moreover, we propose to look at the interactions between technology and society in terms of reciprocity, rather than in terms of ‘impact’, which incorrectly suggests a one-directional causal relationship. Building on insights from Science and Technology Studies (Oudshoorn and Pinch 2003 ), we acknowledge that a reciprocal relationship exists between technology and society: society affects the ways in which technologies are used, and the usage of technologies affects processes in society.

In various projects, we have found this systems thinking approach worthwhile: moving back and forth between these aggregation levels, zooming out and zooming in. When people discuss some user interface detail, one can invite them to zoom out and ask questions about the underlying business model and issues like fairness or inclusion. Conversely, if they discuss a concept like fairness in rather abstract terms, one can invite them to zoom in and discuss how a specific user interface element can promote, or corrode, fairness in terms of accessibility or usability.

3 Ethical perspectives

In ethics of technology, it is common to use different ethical perspectives, notably: consequentialism, duty ethics (deontology), relational ethics, and virtue ethics (Van de Poel and Royakkers 2011 , pp. 77–78). Moreover, in the tradition of applied ethics (Van de Poel and Royakkers 2011 , pp. 105–106), we propose to combine these perspectives. This concurs with what people do in innovation projects; different people can (implicitly!) use different ethical perspectives at different moments (REF removed for review). They can discuss positive and negative impacts of their project’s outcomes (consequentialism), or talk about various obligations and regulations, for example regarding privacy (duty ethics). And sometimes (but less often, in our observation of projects) they talk about the impact of technology on interactions between people, for example in customer care (relational ethics), or they reflect on how an application can contribute to people’s abilities to live well together (virtue ethics). Mostly, however, they do this implicitly.

Our contribution is that we make these ethical perspectives more explicit. Critically, these perspectives have different assumptions and logics. One may therefore argue that, in theory, they are incompatible. In practice, however, they can very well be combined (Alfano 2016 , pp. 14–18) (REF removed for review); each perspective can draw attention to a different aspect of the project at hand. A key advantage of this side-by-side approach is that it enables people to discuss more diverse aspects than with only one perspective. It is similar to walking around an object in order to look at it from different angles: you can see and discuss more diverse aspects. We need to be careful, however, not to confuse or conflate these different perspectives. We need to respect their different assumptions and logics. We must not try, for example, to make calculations with rights, such as to calculate how much one right of one group of people is worth in comparison to another right of another group. That would be inappropriate for both consequentialism and duty ethics.

Please note that this article focuses on applied ethics. It is based on the authors’ experiences of working in AI development and deployment projects, and it is oriented towards the practices of people who work in such projects. This is how we aim to contribute to responsible innovation in AI development and deployment. We appreciate that this practical focus and orientation cannot do justice to the full depth of these four ethical perspectives. Footnote 2 Below are short characterizations—possibly almost caricatures, for readers who are used to more depth—of the four ethical perspectives, in the ways that people in industry typically work with them, with examples at the three levels of analysis:

Consequentialism looks at the potential positive and negative consequences of a particular technology or application. It typically aims to maximize positive impacts and to minimize negative impacts. A consequentialist perspective can start on the individual level, to look at the pros and cons for individual users; or on the organization level, to discuss the impacts on one particular organization. We can extend the boundaries of the analysis and look at the effects on the level of society, for instance, on how conversational agents can be used to produce misinformation, very quickly and very cheaply; or we can look at the scale of the planet, at the costs for ‘click workers’ on other continents, typically in poor conditions, and at the costs of mining materials to build the hardware, and of producing energy to train the software.

Duty ethics (or deontology) looks at the obligations of organizations that develop or deploy a technology, for example, the obligation to respect privacy, and at the rights of people who use a technology or are at the receiving end of its application, for example, the right to privacy. Such obligations and rights are at play, however, not only on the individual and organizational level, but also on the level of society and internationally. Widespread deployment of conversational agents could, over the years, lead to unemployment in specific sectors. Moreover, with regard to workers, societies, and the natural environment, we can discuss policies and legislation that would be needed to prevent or mitigate such harms.

Relational ethics understands people as fundamentally interdependent (Birhane 2021 ; Coeckelbergh 2020 ). Footnote 3 It is concerned with how technologies shape how people interact, and it can help to look critically at the distribution of power. Relational ethics is immediately relevant on the level of individuals, for instance, when people use conversational agents. Relational ethics is also at play on the organizational level, for instance, when using conversational agents becomes the norm and texts gravitate to a particular style and form. Moreover, relational ethics can help to discuss the (unfair) distribution of power, e.g., the issue that most LLMs, and the various conversational agents based on them, are owned by only a handful of US corporations and a handful of Chinese semi-state-owned companies.

Virtue ethics aims to enable people to cultivate relevant virtues and views technologies as tools that people can use to flourish and to live well together (Vallor 2016 ). It can help to identify virtues that people would need to cultivate. Cultivating a specific virtue entails finding an appropriate form or ‘mean’, between deficiency and excess, given the situation and context. Critically, virtue ethics aims at growth; over time, one can learn to cultivate virtues. On the individual level, we can look at how using a specific technology can either support or hinder people in cultivating specific virtues. Social media can, for instance, corrode people’s self-control, by grabbing their attention. Similarly, conversational agents can erode people’s honesty, when they uncritically use their output. It also plays on the organizational level, for instance, when a service provider deploys a conversational agent. Lastly, widespread adoption of conversational agents can have effects on society. The concept of truth may collapse, because conversational agents are based not on truth, but on statistical probability.

4 Key requirements

Over the years, many frameworks and approaches have been developed to discuss various ethical aspects of AI systems, and to help steer the development and deployment of such systems in directions that are ethically and socially beneficial or preferable (Floridi 2019 ; Floridi et al. 2018 ; Hickok 2020 ; Jobin et al. 2019 ; Morley et al. 2020 ; Sætra and Danaher 2022 ; Van de Poel 2020 ). Jobin et al. ( 2019 ), for example, identified the following recurring topics: transparency, justice, fairness and equity, non-maleficence, responsibility and accountability, and privacy—and beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.

One framework that we have found particularly useful is the European Commission’s High-Level Expert Group (HLEG) on Artificial Intelligence’s (2019) Ethics Guidelines for Trustworthy AI . It identifies seven key requirements for the development and deployment of ‘lawful, ethical and robust’ AI systems (pp. 14–20) and recommendations for practically implementing and evaluating these requirements (‘Trustworthy AI Assessment List’) (pp. 26–31). This framework is especially relevant for industry and for applied research and innovation projects, and for promoting responsible innovation. Furthermore, it has a relatively solid basis in theory; the seven key requirements are discussed in relation to four widely accepted ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability (pp. 9–14). Moreover, the Ethics Guidelines for Trustworthy AI was one of the foundations for the EU’s AI Act , Footnote 4 which is expected to have a wide and international impact. Especially because of its practical orientation, we propose to work with these seven key requirements:

Human agency and oversight , including fundamental rights ; the HLEG proposes the principle of respect for human autonomy (2019, p. 12), which they describe as follows: ‘Humans interacting with AI systems must be able to keep full and effective self-determination over themselves […]. AI systems […] should be designed to augment, complement and empower human cognitive, social and cultural skills.’ Human oversight refers to measures that help ‘ensuring that an AI system does not undermine human autonomy’ (HLEG, 2019, p. 16).

Technical robustness and safety ; this requirement refers to resilience to attacks and other security risks; to having effective fallback plans to promote safety; and to accuracy, reliability, and reproducibility. The evaluation of many of these aspects would require technical tests or experiments. In this article, however, we will only identify and discuss these aspects, and not actually conduct tests or experiments.

Privacy and data governance ; various concerns are at play, notably: that privacy-sensitive information has probably been part of the training corpus of many LLMs; and that users can submit privacy-sensitive data through their prompts, thus submitting these data to the organizations that own these LLMs and the conversational agents built on them. This information can also be used for subsequent finetuning of the model.

Transparency ; the HLEG argues (2019, p. 12) that ‘[e]xplicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions—to the extent possible—explainable to those directly and indirectly affected. […] The degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate.’ It also includes traceability, explainability, and communication. Moreover, it refers not only to the explicability of the AI system itself, but also to the processes in which this AI system is used, the capabilities and purposes of this system, and to communication about these processes, capabilities, and purposes.

Diversity , non-discrimination and fairness ; the HLEG (2019, p. 12) describes fairness as having ‘both a substantive and a procedural dimension. The substantive dimension implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. […] The procedural dimension […] entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them.’ Fairness not only refers narrowly to an application, but also to the processes and organizations in which this application is used (REF removed). Related aspects are: accessibility and universal design, and involving stakeholders in design and deployment.

Societal and environmental well-being ; the HLEG proposes the principle of prevention of harm (2019, p. 12): ‘AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings’; they draw attention to ‘situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens’ and to harms to ‘the natural environment and all living beings.’

Accountability ; the HLEG describes this as ‘the assessment of algorithms, data and design processes’, through either internal or external audits; especially of applications that may affect fundamental rights or safety-critical applications (2019, pp. 19–20). It includes concerns for the auditability of systems and the ability to obtain redress for users; the HLEG recommends ‘accessible mechanisms… that ensure adequate redress’ (2019, p. 20).

5 Case study: ChatGPT

Below, we will illustrate our approach by conducting a case study of ChatGPT. Footnote 5 We chose ChatGPT because it is the most commonly known conversational agent and has become, at least in popular media, almost synonymous with LLMs, or even with AI. Furthermore, we are aware that ChatGPT can have specific ethical issues that other and more recent conversational agents may not have. Nevertheless, we believe that a study of ChatGPT and its ethical aspects can be worthwhile and useful also with regard to other and more recent conversational agents.

We are certainly not the first to discuss the ethics of LLMs or conversational agents. Bender et al. ( 2021 ) discussed the costs to the environment, notably the energy spent on training LLMs, and the risk of bias. In order to reduce some of the negative effects of bias, and to increase and promote accountability, they proposed to compile, curate, and document datasets more carefully than is currently typically done, for example with ‘Datasheets for Datasets’ (Gebru et al. 2021 ). In addition, Stahl and Eke ( 2024 ) provided an overview of various ethical issues, which they grouped into four categories: social justice and rights (democracy, justice, labour, and social solidarity); individual needs (autonomy, informed consent, psychological harm, and ownership and control over data); culture and identity (bias, discrimination and social sorting, cultural differences, and the good life); and environmental impacts (sustainability, pollution and waste, and other environmental harms). Concerning this last category, Crawford ( 2021 ) critically discussed the costs of creating and using AI systems—costs that normally remain invisible or hidden. She discussed the work of people cleaning up data and training models (‘click work’ or ‘ghost work’), often in low-wage countries; the toxic and dangerous working conditions in mines that extract materials like lithium for computer hardware; and the huge amounts of energy and water, for cooling, that go into training and running software in data centres.

Furthermore, Sison et al. ( 2023 ) proposed that a key ethical problem of ChatGPT is that it can be used as a ‘weapon of mass deception’ and proposed technical (e.g., watermarking) and non-technical measures (e.g., terms of use) to mitigate such misuse. In addition, various authors identified various other ethical concerns: Zhou et al. ( 2023 ) describe ChatGPT as a ‘statistical correlation machine’ (good at correlations; bad at causality) and discuss bias, privacy and security, transparency, abuse, and authorship and copyright; Wu et al. ( 2023 ) discuss security, privacy, and concerns like fairness and bias; and Zhuo et al. ( 2023 ) discuss bias, robustness, reliability, and toxicity.

Many of these topics (above) will appear also in our analysis (below). The added value of our analysis, we propose, is that we follow a systematic approach: we follow the HLEG’s seven key requirements (2019, p. 12) and look at these through four ethical perspectives and on three levels of analysis. Please note that we did not always use all four ethical perspectives; only those that are most relevant for that specific requirement. This is also an exercise to explore which ethical perspectives are most relevant to which requirements.
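The systematic structure just described (seven key requirements, examined through four ethical perspectives at three levels of analysis) can be sketched as a simple lookup grid. This is a minimal illustration of the analysis structure, not code from the paper:

```python
from itertools import product

# The seven key requirements (HLEG 2019), four ethical perspectives,
# and three levels of analysis used in the case study.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]
PERSPECTIVES = ["consequentialism", "duty ethics", "relational ethics", "virtue ethics"]
LEVELS = ["individual", "organization", "society"]

def build_grid() -> dict:
    """Empty assessment grid: one cell per (requirement, perspective, level)."""
    return {cell: None for cell in product(REQUIREMENTS, PERSPECTIVES, LEVELS)}

grid = build_grid()
# 7 x 4 x 3 = 84 possible cells; in the analysis below, only the most
# relevant perspectives are filled in for each requirement.
```

The grid makes explicit that not every cell needs to be filled: the exercise is precisely to find out which perspectives are most relevant to which requirements.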

5.1 Human agency and oversight

The requirement for human agency and oversight builds on the principle of respect for human autonomy (above) and calls for measures to promote this. We can think of measures that enable the people involved in building and training LLMs and conversational agents to oversee and control these systems, and measures that enable the people involved in deployment and utilization to oversee and control these systems.

5.1.1 Consequentialism

Through a consequentialist perspective, we can look at the advantages and disadvantages that an application like ChatGPT can bring. On the level of individuals, people such as content creators or journalists can use ChatGPT as a tool to work more efficiently, or to improve their vocabulary, grammar, or style (benefits). On the level of the organization, this increase in efficiency can motivate organizations to cut jobs, so that some of these people may lose their jobs (harms). This can also have negative effects on the level of society.

5.1.2 Duty ethics

We can apply a duty ethics perspective to discuss human dignity and autonomy. Immanuel Kant, a key proponent of this tradition, proposed that we must treat others never merely as means, but always also as ends in themselves. For ChatGPT, this would mean that its use should always aim at empowering people, at augmenting and complementing their capabilities—and not viewing or using people merely as means, as cogs in a larger machine that aims to satisfy other people’s objectives.

What would happen to human dignity and autonomy if ever more organisations used ChatGPT to interact with people in their service provisioning, instead of human-to-human communication? One can envision having to execute some task via a phone with dial-tone menus and voice recognition, or via an online shop’s text chat. If the system works, this can be an empowering experience, for example because it is accessible 24/7. If it does not, however, it can be frustrating, and it can feel like one’s autonomy, or even dignity, is diminished.

5.1.3 Relational ethics

From a relational ethics perspective, we can look, for example, at the deployment of ChatGPT in service provisioning (above) and discuss how that can affect people’s dignity, autonomy, and oversight. We can also look at how the deployment of ChatGPT changes interactions between people and distributions of power. We propose to discuss these aspects under the header of Diversity , non-discrimination and fairness (below).

5.1.4 Virtue ethics

From a virtue ethics perspective, human agency refers to how people can use specific technologies to cultivate and exercise specific virtues. For ChatGPT, we could look at how people can use it as a tool, and then need to find an appropriate ‘mean’, for example, between using ChatGPT slavishly and uncritically (excess) and hesitating to use ChatGPT at all (deficiency). An appropriate ‘mean’ could entail using ChatGPT as an assistant, critically examining its output, exercising agency and discretion, and consciously selecting what to use and what not to use. Over time, one can learn to use ChatGPT in ways that ‘augment, complement and empower’. Virtue ethics is also relevant on the levels of organization and society. We can look at how the deployment of ChatGPT affects how an organization works, for instance, how it serves its customers. Moreover, we can learn from the effects that social media have had: for individuals, social media have corroded self-control—with business models based on advertising, they deploy all sorts of mechanisms to grab and monetize people’s attention; for society, such mechanisms were weaponized to maximize ‘engagement’, which led to fake news, polarization, and the corrosion of democratic processes. We can expect similar, or even worse, effects if tools like ChatGPT are combined with social media.

5.2 Technical robustness and safety

The requirement for technical robustness and safety calls for measures to promote these qualities. One example is the standard type of response that ChatGPT produces when the user’s prompt contains specific words pertaining to sensitive topics, like gender, race, or culture. ChatGPT then switches from a statistical procedure to a rule-based procedure. These rules act as guardrails. Nevertheless, there are various ways in which bad actors may try to invade or attack ChatGPT. One example is prompt hacking or prompt injection, also referred to as jailbreaking, where one gives prompts to ChatGPT with the purpose of circumventing its guardrails. This can make ChatGPT produce harmful or unsafe outputs. Footnote 6 Technical robustness also includes accuracy, reliability, and reproducibility. ‘Accuracy pertains to an AI system’s ability to make correct judgements’. A ‘reliable AI system is one that works properly with a range of inputs and in a range of situations.’ And reproducibility is concerned with ‘whether an AI experiment exhibits the same behaviour when repeated under the same conditions’ (HLEG, 2019, p. 17).
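The switch from a statistical to a rule-based procedure described above can be sketched in a few lines of code. This is a purely illustrative sketch, not OpenAI’s actual implementation; the keyword list, the standard response, and the helper functions are hypothetical stand-ins.

```python
# Illustrative sketch of a rule-based guardrail (hypothetical; not
# OpenAI's actual implementation). Prompts that touch sensitive topics
# bypass the statistical model and receive a fixed, standard response.

SENSITIVE_TOPICS = {"gender", "race", "culture"}  # hypothetical keyword list

STANDARD_RESPONSE = (
    "As an AI language model, I aim to discuss this topic "
    "respectfully and without bias."
)

def generate_statistically(prompt: str) -> str:
    # Stand-in for the LLM's normal, probability-based text generation.
    return f"[model output for: {prompt}]"

def respond(prompt: str) -> str:
    """Switch from the statistical procedure to a rule-based one
    when the prompt contains a sensitive keyword."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    if words & SENSITIVE_TOPICS:
        return STANDARD_RESPONSE
    return generate_statistically(prompt)
```

In this simplified picture, prompt injection amounts to crafting a prompt that expresses a sensitive request without triggering any keyword, which is one reason simple rule-based guardrails are easy to circumvent.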

5.2.1 Consequentialism

From a consequentialist perspective, we can look at technical robustness and safety in terms of finding a balance that maximizes positive consequences, for example a balance between too-wide and too-narrow guardrails. In addition, ChatGPT has no understanding of our physical world, no common sense, and little notion of truth. For example, ChatGPT produced this sentence: ‘The idea of eating glass may seem alarming to some, but it actually has several unique benefits that make it worth considering as a dietary addition’ (Reddit 2022). Clearly, uncritical use of ChatGPT can lead to unsafe situations and serious risks.

5.2.2 Duty ethics

From a duty ethics perspective, technical robustness and safety can be understood in terms of a series of obligations that the organizations and people involved in the production or deployment of ChatGPT need to fulfil, and a series of rights of the organizations and people who use it that need to be respected and protected. OpenAI, which created ChatGPT, needs to fulfil obligations related to robustness and safety; and a person who uses ChatGPT has a right to be protected against harmful or unsafe responses.

5.2.3 Relational ethics and virtue ethics

As alluded to above, technical robustness and safety is a relatively technical issue, and relational ethics and virtue ethics are less directly relevant to its discussion. Of course, some general remarks can be made. For example, low robustness and safety of ChatGPT can negatively affect the quality of interactions between people, for example when one person sends a harmful message created by ChatGPT to another person; or people’s ability to cultivate relevant virtues, for example when one aims to cultivate honesty and ChatGPT produces incorrect information.

5.2.4 Technical analysis

A proper discussion of accuracy, reliability, and reproducibility would also require some technical analysis. Like many conversational agents, ChatGPT is prone to ‘hallucinations’; Footnote 7 it produces outputs that sound plausible but are factually incorrect (e.g., Wu et al. 2023; Zhou et al. 2023; Zhuo et al. 2023). Even human experts can have difficulty detecting such ‘hallucinations’. This risk plays on the individual, organizational, and societal levels. Accuracy can be tested with a testbed of benchmarks, for example Google’s BIG-Bench or Hugging Face’s Open LLM Leaderboard. Footnote 8
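Benchmark-based accuracy testing of the kind mentioned above can be sketched as follows. The three-item benchmark and the `model_answer` function are hypothetical stand-ins; a real harness such as BIG-Bench or the Open LLM Leaderboard uses thousands of items and queries the model’s API.

```python
# Minimal sketch of benchmark-based accuracy testing. The tiny benchmark
# and the canned model answers are hypothetical; a real testbed would
# query the conversational agent itself.

benchmark = [
    {"question": "2 + 2 = ?", "expected": "4"},
    {"question": "Capital of France?", "expected": "Paris"},
    {"question": "Boiling point of water at sea level, in Celsius?", "expected": "100"},
]

def model_answer(question: str) -> str:
    # Stand-in for calling the model; here it 'hallucinates' on one item.
    canned = {"2 + 2 = ?": "4", "Capital of France?": "Paris"}
    return canned.get(question, "Minus forty degrees.")

def accuracy(items) -> float:
    """Fraction of benchmark items the model answers exactly correctly."""
    correct = sum(1 for item in items
                  if model_answer(item["question"]) == item["expected"])
    return correct / len(items)

print(f"accuracy = {accuracy(benchmark):.2f}")  # prints: accuracy = 0.67
```

Exact-match scoring is itself a simplification; plausible-sounding but wrong answers, i.e., hallucinations, are exactly what such tests aim to surface.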

5.3 Privacy and data governance

Regarding privacy, Li et al. (2023) discuss ways in which one can extract personal information about or from people via ChatGPT: with ‘jailbreaking prompts’ that circumvent a standard response and instead access privacy-sensitive information, or with ‘multi-step jailbreaking prompts’, where a user takes ChatGPT through a series of steps to bypass its safety measures. Relatedly, there are concerns regarding the quality, integrity, and protection of data. Training data may contain inaccuracies, errors, and bias (more on bias below). Many LLMs have been trained with data from Common Crawl, Footnote 9 which contains inaccuracies, errors, and bias. Another concern is whether people can access ChatGPT and, for example, change parameters or delete data, so that the system will behave differently. Until now, attacks have been limited to ‘jailbreaking’, but ‘[t]hings could get much worse’ (Burgess 2023).

Interestingly, the HLEG (2019) did not discuss copyright. However, copyright is a key concern for conversational agents and their underlying LLMs. For ChatGPT, tons of texts have been collected online without the prior consent of the copyright holders. Unsurprisingly, some authors were not amused. Recently, two Massachusetts-based writers filed a copyright lawsuit against OpenAI (Brittain 2023). Also, the EU’s AI Act requires organizations to publish summaries of the copyrighted data that they have used for training their models.

5.3.1 Consequentialism

If we look at privacy and data governance from a consequentialist perspective, it is most relevant to look at the negative consequences: the risks and harms of breaches of privacy. These risks can play on the levels of the individual, the organization, and society: individuals can be harmed when their personal information becomes known to others; organizations can be harmed when their confidential information becomes known to others; and such breaches can lead to wider feelings of unsafety in society.

5.3.2 Duty ethics

A duty ethics perspective is relatively close to a legal perspective. We can therefore turn to Article 8 of the European Convention on Human Rights (ECHR): the right to respect for private and family life, home and correspondence. Footnote 10 In the case of ChatGPT, this leads to an obligation, for the companies and people that produce or deploy it, to respect people’s privacy.

5.3.3 Relational ethics

There are various ways to understand privacy. Often, privacy is understood rather narrowly and in a technical sense: as pertaining to the protection of personal data. When we understand privacy more broadly, however, it also becomes relevant to relational ethics and to virtue ethics. We can then understand privacy as a condition for positive interactions between people. Conversely, a lack of privacy can have chilling effects on interactions between people. In such cases, control over people’s privacy can become a source of power over people, for corporations or states (Véliz 2020, pp. 50–55).

5.3.4 Virtue ethics

In this broader understanding of privacy, we can also look at it as a condition for one’s personal development and abilities to live well together with others. People need a degree and type of privacy in order to ‘explore new ideas freely, to make up our own minds’ (Véliz 2020 , p. 3). This is critical for a person’s healthy development, which includes the freedom to cultivate and exercise relevant virtues. For ChatGPT, this broader view on privacy, from a relational ethics or virtue ethics perspective, is relatively new and under-explored.

5.4 Transparency

Transparency or explicability, and associated aspects like traceability and explainability, is partly a technical matter and would also require technical analysis, involving, for example, experiments. For transparency, we need insight into the model’s data and inner workings. For traceability, we need to trace back how the underlying LLM was developed; notably, where the training data came from. Stanford University provided a comprehensive assessment of the transparency of foundation models. Footnote 11 Similarly, Radboud University maintains a ranked list of the openness of various LLMs. Footnote 12 This relates to requirements for data management: the origin of the training data needs to be clear, notably whether the data were acquired legally, whether copyright was respected, and whether they contain synthetic data. The latter constitutes a special concern. When synthetic data are used to train new models, existing biases are propagated, which can result in LLMs with even more bias (Shumailov et al. 2023). Explainability refers to whether the LLM or the conversational agent can provide explanations of how its output came about, in a manner that people can understand.

5.4.1 Consequentialism

We would like to propose that, while a consequentialist perspective is relevant to the requirement of transparency, other ethical perspectives are relatively more relevant. A consequentialist perspective would, in rather general terms, help to evaluate and balance the benefits of making ChatGPT more transparent and the costs of insufficient transparency.

5.4.2 Duty ethics

A duty ethics perspective has some overlap with a legal perspective. We can refer to the EU’s AI Act, which contains transparency requirements for Generative AI, LLMs, and conversational agents: organizations that develop or deploy such systems are required to disclose that content was generated by AI, to prevent the model from generating illegal content, and to publish summaries of the copyrighted data that were used for training. Furthermore, there are the rights of access, rectification, and erasure (‘right to be forgotten’), in GDPR articles 15, 16, and 17, respectively. For ChatGPT, one could ask whether one’s personal data are in the underlying LLM and request rectification or erasure. This, however, has not happened so far as we are aware.

5.4.3 Relational ethics

Besides these relatively technical requirements (traceability and explainability), the HLEG also has guidelines for communication (2019, p. 18): ‘AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system. […] Beyond this, the AI system’s capabilities and limitations should be communicated to AI practitioners or end-users in a manner appropriate to the use case at hand.’ A relational ethics perspective can help to look at how people interact with ChatGPT, and with others, through ChatGPT. Let us look at two potential issues. One is the ELIZA effect. The name refers to the chatbot that Joseph Weizenbaum programmed in the 1960s (Berry 2023). With a relatively small number of lines of code, the chatbot imitated a (Rogerian) therapist: it prompted users to write about their problems and replied with questions that echoed back specific keywords that the user had used. Weizenbaum found that people attributed intelligence and empathy to ELIZA, even after he explained that the software was very basic. With the introduction of ChatGPT, people began to mention the ELIZA effect to discuss how easily people project human qualities onto it. Footnote 13 The other issue is the Reverse Turing Test, a term introduced by Selinger and Frischmann (2015) (also: Frischmann and Selinger 2018, pp. 175–183). The original Turing Test is about computers that imitate people. The Reverse Turing Test is about how people, when they interact with computers or when their communication is mediated by computers, can behave robot-like. If one uses ChatGPT uncritically, one produces ‘predictable’ (literally, because that is what ChatGPT does) and somewhat formulaic texts. This can erode human-to-human communication. Both the ELIZA effect and the Reverse Turing Test highlight the need to communicate honestly what ChatGPT can and cannot do, and how one can use it appropriately.

5.4.4 Virtue ethics

We can turn to virtue ethics to discuss the need for the people involved in the design and application of conversational agents to cultivate virtues that promote transparency, like humility and honesty (see above: to communicate what ChatGPT can and cannot do). Moreover, some might propose that we can also apply virtue ethics to ChatGPT itself and look at the virtues that ChatGPT would need to express. Footnote 14 When a researcher asked ChatGPT about its capabilities for comprehension, it responded: ‘ChatGPT has a form of comprehension based on patterns it learned from the text it was trained on. It doesn’t truly understand concepts in the way humans do, but it can recognize and mimic patterns of language, information, and context present in its training data’ (Floyd 2023) (appropriately in third person, since first person would be false and misleading).

5.4.5 Technical analysis

Some benchmarks exist for the evaluation of transparency, such as BIG-Bench’s show work and causal reasoning tasks. Footnote 15 Another requirement for transparency is that the system adapts its explanations to the stakeholder’s expertise (accommodation to reader). Footnote 16

5.5 Diversity, non-discrimination and fairness

For ChatGPT, issues like fairness and non-discrimination can be problematic. We know that bias in training data can lead to bias, stigmatization, and discrimination in a model’s output; Cathy O’Neil (2016), Eubanks (2017), Noble (2018), Benjamin (2019), and Buolamwini (2023), for example, have written extensively about this. This requirement is relevant for ChatGPT because the training data that went into the underlying LLM contained biases, for example regarding race and gender, and these lead to biases in ChatGPT’s responses.

5.5.1 Consequentialism

We can look at non-discrimination and fairness through a consequentialist perspective. The costs of discrimination are borne by the people who are discriminated against, whereas the benefits mostly go to the companies that develop and deploy ChatGPT. Regarding non-discrimination and fairness, we can also point at issues with accessibility. Which people have access to an application like ChatGPT, and which do not? And, critically, looking ahead, which people will have access to more advanced, more useful, and more powerful versions of ChatGPT or similar applications, and which will not?

5.5.2 Duty ethics

We can also look at non-discrimination and fairness from a duty ethics perspective. Emily Bender et al., in their Stochastic Parrots paper (2021), for example, call for more careful compiling and documenting of datasets. This can be understood as a duty for the organizations that develop and deploy ChatGPT, to act fairly and carefully—which follows from the rights of users to be treated fairly and without discrimination. This duty is codified in Article 14 ECHR, Prohibition of discrimination.

5.5.3 Relational ethics

We can turn to relational ethics to look at the ways in which corporations or states can enhance their power. When people use conversational agents to search for information, the corporations and states that own and deploy these applications can grow their power. We saw how social media were used to influence politics and elections. This can only get worse when Generative AI applications are combined with social media. Furthermore, we can look at the requirements for diversity and participation. The HLEG advocates organizing stakeholder participation: ‘to consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle’ and recommends that ‘[i]t is beneficial to solicit regular feedback even after deployment and set up longer term mechanisms for stakeholder participation’ (2019, p. 19). A relational ethics perspective can help to look at who is (not) included in such involvement and at the role of power in negotiations between different stakeholders.

5.5.4 Virtue ethics

A virtue ethics perspective can look at the ways in which ChatGPT can help, or hinder, people in cultivating specific virtues, and how this has broader effects, in organizations and in society. For example, using ChatGPT can corrode virtues like fairness and honesty: if you use ChatGPT uncritically, it can produce texts that are biased and incorrect. This is similar to how using social media corroded many people’s self-control and civility. In addition, virtue ethics can help to look at the virtues that the people involved in design and application would need to develop. For ChatGPT, this would be, for instance, justice: a sensitivity to (un)fairness and the drive to promote fairness. Interestingly, raising such issues will also require courage: the courage to put a difficult subject on the agenda of a project meeting that is already packed.

5.5.5 Technical analysis

A proper discussion of non-discrimination, fairness, and bias will also require various technical analyses. Ideally, these are conducted in tandem with legal and political analyses—similar to the analyses that were conducted for the (infamous) COMPAS algorithm (Barabas 2020; Binns 2018; Lagioia et al. 2023).

5.6 Societal and environmental well-being

The requirement for societal and environmental wellbeing refers to the aim of promoting benefits for society and the environment, and of preventing and minimizing harms to them.

5.6.1 Consequentialism

A consequentialist perspective can help to look at the various benefits and harms of ChatGPT. Potentially, ChatGPT can help lots of people and lead to more equal opportunities and thus offer benefits—provided, critically, that it is available and accessible to all. Conversely, ChatGPT can bring risks and harms to society and democracy. Organizations and individuals with evil intentions can use ChatGPT to produce tons of disinformation very quickly and very cheaply. We saw how social media were weaponized to distribute fake news and fuel polarization. This can only get worse when they are combined with Generative AI. It is increasingly difficult to spot fake news, especially when it is presented together with synthetic photos or videos. Experts expect that by 2026, no less than 90% of online content will be created or modified with artificial intelligence (AI) (Van der Sloot 2024). We also need to look at the costs to people and to nature that follow from the development and deployment of an application like ChatGPT. ‘OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic’, reported TIME magazine (2023). Tragically, these people worked in unhealthy conditions in order to make ChatGPT healthy for others (‘users’). This is very often the case: behind the shiny surface of so-called ‘artificial’ intelligence systems are millions of people (‘ghost workers’), in low-wage countries, labouring: cleaning data, labelling data, fine-tuning models, and moderating content (Crawford 2021). Moreover, the development and deployment of an LLM requires lots of materials and lots of energy (Crawford 2021). Notoriously, economists refer to such costs and harms as ‘externalities’: as if they fall outside the analysis.

5.6.2 Duty ethics

A duty ethics perspective can help to look at the obligations of companies that develop or deploy an application like ChatGPT, and at the rights of people who use these applications or are affected by them. This perspective has overlap with a legal perspective because many obligations and rights are codified in law. In this respect, it is relevant to note that the EU has created a series of laws to curb corporations’ power and to promote citizens’ rights: General Data Protection Regulation (2018), Data Governance Act (2022), Digital Services Act (2022), Digital Markets Act (2022), and AI Act (2024).

5.6.3 Relational ethics

A relational ethics perspective can help to look at how the deployment of ChatGPT can affect the ways in which people interact with each other and with the natural environment. We can look at some of the aspects that were discussed under the header of consequentialism (above), also from the perspective of relational ethics. This would draw attention to the effects on people’s abilities to connect to each other and to their natural environment, and on the quality of their interactions and relationships; it would also draw attention to unfair distributions of power.

5.6.4 Virtue ethics

Virtue ethics can help to discuss the need to develop and apply ChatGPT in ways that promote societal and environmental wellbeing. Virtue ethics’ aim is to find ways to live well together. Aristotle’s teachings were aimed at the polis, Athens. For us, the polis can be on the level of a country, a group of countries like the EU, or the planet as a whole. Relevant virtues are: justice, for example, to repair existing injustices of (neo)colonization (‘ghost workers’) and exploitation (materials and energy); and care, a disposition to meet the needs of others and to contribute to the ameliorating of suffering (Vallor 2016, p. 138). Cultivating such virtues requires efforts on the levels of both individuals and organizations; the latter is critical: organizations shape the practical contexts that can either help or hinder people to cultivate relevant virtues.

5.6.5 Technical analysis

The costs for workers and for the environment can be discussed, assessed, and evaluated (Bender et al. 2021 ; Crawford 2021 ), for example, in terms of materials and energy used. Footnote 17

5.7 Accountability

Accountability can be understood as dependent on transparency (see above). We propose to understand accountability in pragmatic terms: as one agent’s ability to provide an account about some topic to some other agent, so that this other agent can practically use this information for some purpose (Hayes et al. 2023). This is in line with Goodin’s understanding of accountability ‘of some agent to some other agent for some state of affairs’ (2008, p. 156).

5.7.1 Consequentialism

Similar to our discussion of transparency (above), we propose that other perspectives are more immediately relevant to the requirement of accountability. Nevertheless, a consequentialist perspective can be helpful in an analysis of the benefits of promoting accountability and of the costs of a lack of accountability.

5.7.2 Duty ethics

A duty ethics perspective can look at the obligations of the organizations and people involved in the development and deployment of ChatGPT to promote accountability and to take appropriate measures. Similar to the discussion of transparency (above), we can look at the rights of access, rectification, and erasure (‘right to be forgotten’) (GDPR articles 15, 16, and 17). Furthermore, the HLEG’s phrasing of redress (‘accessible mechanisms… that ensure adequate redress’) (2019, p. 20) implies that mechanisms for redress need to be ‘accessible’ and ‘adequate’. This means that organizations that develop or deploy ChatGPT need to offer mechanisms through which individuals and organizations can ask for and obtain redress when they have suffered harm. Currently, ChatGPT has no such mechanisms.

5.7.3 Relational ethics

Relational ethics can be useful to look at the procedural fairness of accountability. This refers to the accessibility and adequacy (see above) of processes through which individuals or organizations can question the system’s outcomes and obtain redress (REF removed for review). In addition, we can look at processes that need to be in place for the protection of whistle-blowers and for communication to a wider public, for example, about cyberattacks on the underlying LLM. These issues play at both the individual level (whistle-blowers) and the organisational level (audits). It can be challenging to perform technical benchmarks here, due to the variety of organisational circumstances. Furthermore, given the limited public information on such procedural fairness aspects of ChatGPT, its accountability would appear to be rather limited.

5.7.4 Virtue ethics

Finally, we can use virtue ethics to look at accountability around ChatGPT. For the people and organizations involved in its design and application, relevant virtues would be, for example, justice, care, and courage. Individuals can act out of a feeling of justice, out of care for the people who are harmed by the system, and they need courage to speak up. Furthermore, virtues like humility, honesty, and civility are relevant. The people involved need humility and honesty in how they understand and talk about ChatGPT’s abilities and limitations, as well as civility—which refers to the ability ‘to collectively and wisely deliberate about matters of local, national, and global policy and political action… and to work cooperatively towards those goods of technosocial life that we seek and expect to share with others’ (Vallor 2016, p. 141).

6 Discussion

The introduction of Generative AI, LLMs, and conversational agents has changed our views on both the benefits and the harms that such systems can bring. We proposed to organize a careful and systematic approach to reflect on the ethical aspects involved in the design and application of such systems. We took the seven key requirements for ‘Trustworthy AI’ of the European Commission’s High Level Expert Group (2019) as a basis for our approach. These seven key requirements are broadly endorsed and have been a basis for the EU’s AI Act (2024). Furthermore, we proposed to look at these requirements from four different ethical perspectives, and on different levels of analysis. Moreover, we proposed to embed this approach in an iterative and participatory approach: iterative, because some ethical aspects will only become clear when the system is being developed, for example, as a ‘minimal viable product’, in an agile development process; and participatory, because different stakeholders need to be involved, so they can express their concerns and considerations (REF removed for review).

To demonstrate and illustrate this approach, we applied it to ChatGPT. One objective was also to explore how different ethical perspectives are more or less relevant to the different requirements.

In Table  1 , we report the respective contributions of the four ethical perspectives, and of technical analyses (columns), in relation to the seven key requirements (rows), in our study of ChatGPT:

Consequentialism is useful for many of the requirements: to assess benefits and harms; to maximize benefits and to minimize or prevent harms and risks; and also, for example, to discuss the distribution of benefits and harms over different groups in society.

Duty ethics (deontology) is useful for all requirements, notably to discuss developers’ obligations and users’ rights. This is not entirely surprising because the HLEG (2019), the requirements’ authors, drew from the field of law, which has overlap with duty ethics.

Relational ethics is especially useful for requirements that deal with interactions between people: privacy; transparency, especially communication to the public; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.

Virtue ethics is useful for most requirements, to discuss how technology can enable people to cultivate relevant virtues: human agency, privacy, transparency, fairness, societal and environmental wellbeing, and accountability—typically for both users and developers.

Importantly, we have seen that the combination of the different ethical perspectives can be worthwhile. In our discussion (above), we saw that the different perspectives can provide complementary insights when they are used in parallel. Furthermore, based on observations in industry projects, we found that (some form of) consequentialism and duty ethics are relatively prevalent in industry, whereas relational ethics and virtue ethics, especially when they refer to less technical, people-related aspects, such as communication between people or people’s virtues and flourishing, are much less prevalent. One potential added value of our approach is to bring these latter two perspectives more to the fore.

Clearly, when it comes to ethics, our current study is far from complete. Moreover, we would propose that completeness is not a realistic goal in the context of industry. The four ethical perspectives, however, can help stakeholders to discuss the seven key requirements from different angles. This can enable them to aim for benefits and prevent harms, to find balances between duties and rights, to promote human-to-human communication, and to enable people to cultivate relevant virtues. The key benefit of this approach is that people can make their reflection and deliberation more explicit, careful, and systematic. Still, they cannot, of course, foresee all future consequences, due to the rapid development of AI systems and due to diverse forces in the marketplace and in society. It is therefore critical to organize reflection and deliberation as a continuous process. Moreover, different organisations can choose to focus on specific requirements or on specific ethical perspectives, depending on the products or services that they are working on. One organization can, for example, focus on promoting privacy, fairness, or transparency, and position its products accordingly. Overall, we hope that our approach can contribute to the design and deployment of trustworthy AI systems. It is both very necessary and entirely possible to organize reflection and deliberation on ethical aspects during the design and deployment of AI systems.

In a recent paper, Gabriel et al. (2024), also from Google/DeepMind, use similar categories: value alignment, safety, and misuse (which correspond to Capability); human-assistant interaction (influence, anthropomorphism, appropriate relationships, trust, privacy) (which corresponds to Human interaction); and assistant and society (cooperation, access and opportunity, misinformation, economic impact, environmental impact) (which corresponds to Systemic impact).

Indeed, one could write an entire article, or book, on each of the 28 combinations in our framework: seven requirements × four ethical perspectives; see, e.g., Van der Sloot's 2017 dissertation on privacy from a virtue ethics perspective.
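The size of this design space can be made concrete with a short sketch. The requirement names below are taken from the EU High-Level Expert Group's Ethics Guidelines for Trustworthy AI (cited in the references); the enumeration itself is only illustrative, not part of the article's method:

```python
from itertools import product

# The seven key requirements from the EU High-Level Expert Group's
# Ethics Guidelines for Trustworthy AI (2019).
requirements = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

# The four ethical perspectives used in this article.
perspectives = [
    "consequentialism",
    "duty ethics",
    "relational ethics",
    "virtue ethics",
]

# Each (requirement, perspective) pair is one cell of the framework.
framework = list(product(requirements, perspectives))
print(len(framework))  # 28
```

Each of the 28 pairs, e.g. ("transparency", "virtue ethics"), marks one angle from which stakeholders can discuss a requirement.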

We use the term relational ethics to refer to several different approaches, notably: care ethics (Held, 2006), feminist ethics (e.g., Carol Gilligan, Nel Noddings), and various 'non-western' perspectives, such as Confucianism (Wong and Wang 2021), Ubuntu (Mhlambi 2020), and diverse Indigenous cultures (REF removed). Although these approaches are indeed very diverse, they share an understanding of the human condition as fundamentally relational, rather than viewing people as separate individuals, a view that can be seen as a product of the European Enlightenment (REF removed). In that sense, relational ethics seeks to remedy some of the shortcomings of those ethical perspectives that were developed in the European Enlightenment: consequentialism (Bentham) and deontology (Kant). Currently, relational ethics is being explored and applied in the context of technology development (e.g., Birhane 2021; Coeckelbergh 2020; REF removed).

Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (passed European Parliament on 13 March 2024, approved by EU Council on 21 May 2024); see: Preamble art. 7, 27, and 165.

We do not discuss the technology underlying ChatGPT. For our current article, a basic understanding of LLMs is sufficient: an LLM is based on large amounts of text, collected online, often without permission; an LLM is a statistical model, based on an Artificial Neural Network, with trillions of parameters; when a user types a prompt into the conversational agent ChatGPT, it returns text based on probability ( https://help.openai.com/en/articles/6783457-what-is-chatgpt ).
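The phrase "returns text based on probability" can be illustrated with a toy sketch. The two-word context and the probability table below are invented for illustration only and bear no relation to ChatGPT's actual model, which derives such distributions from trillions of parameters:

```python
import random

# Hypothetical next-token probabilities for a single two-word context.
# A real LLM computes such a distribution for any context, using a
# neural network; here it is simply a hand-written lookup table.
NEXT_TOKEN_PROBS = {
    ("the", "reference"): {"looks": 0.7, "is": 0.2, "was": 0.1},
}

def sample_next_token(context, rng):
    """Sample one continuation token, weighted by its probability."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(("the", "reference"), random.Random(42)))
```

The point of the sketch is that the output is statistically probable text, not a claim that has been checked against the world, which is also why fabricated-but-plausible output (see the note on 'hallucination' below) is an expected failure mode.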

https://www.jailbreakchat.com .

The term ‘hallucination’ is problematic; ChatGPT has no mind and therefore cannot hallucinate. Moreover, in such instances, it actually does what it is programmed to do: produce texts that are statistically probable and that look plausible. A non-existent (‘hallucinated’) literature reference in a scientific article, for example, will have an author name, a title, and a journal volume, issue, and page numbers, and will thus look very plausible. Fabrication could be a more appropriate term.

https://github.com/google/BIG-bench ; https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard .

https://commoncrawl.org .

The ECHR is immediately relevant for the 46 member states of the Council of Europe. It is also relevant beyond these countries because many other countries have similar legislation to protect human rights.

https://crfm.stanford.edu/fmti/ .

https://opening-up-chatgpt.github.io/ ; openness refers to a specific aspect of transparency: the availability of the model, that is, of its data, code, and weights, as well as documentation and access.

https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai ; see also: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ .

Most, however, would argue that the cultivation of virtues applies only to people, not to machines.

https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#show-work and https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#causal-reasoning .

https://github.com/google/BIG-bench/blob/main/bigbench/benchmark_tasks/keywords_to_tasks.md#accommodation-to-reader .

https://www.theverge.com/24066646/ai-electricity-energy-watts-generative-consumption .

Alfano, M.: Moral Psychology: An Introduction. Polity (2016)

Barabas, C.: Beyond Bias: Ethical AI in Criminal Law. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford Handbook of Ethics of AI. Oxford University Press (2020)

Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. (2021)

Benjamin, R.: Race after technology: Abolitionist tools for the new Jim Code . Polity (2019)

Berry, D.M.: The limits of computation: Joseph Weizenbaum and the ELIZA Chatbot. Weizenbaum J. Digit. Soc. 3 (3) (2023). https://doi.org/10.34669/WI.WJDS/3.3.2

Binns, R.: Fairness in Machine Learning: Lessons from Political Philosophy. Proc. Mach. Learn. Res. 81 , 149–159 (2018)

Birhane, A.: Algorithmic injustice: A relational ethics approach. Patterns. 2 (2), 100205 (2021). https://doi.org/10.1016/j.patter.2021.100205

Brittain, B.: Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot. Reuters, 29 June (2023)

Buolamwini, J.: Unmasking AI: My Mission to Protect what is Human in a World of Machines. Penguin Random House (2023)

Burgess, M.: The Hacking of ChatGPT Is Just Getting Started. Wired, 13 April (2023). https://www.wired.com/story/chatgpt-jailbreak-generative-ai-hacking/

Coeckelbergh, M.: Artificial Intelligence, responsibility attribution, and a relational justification of Explainability. Sci Eng. Ethics. 26 (4), 2051–2068 (2020). https://doi.org/10.1007/s11948-019-00146-8

Crawford, K.: Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press (2021)

Dwivedi, Y.K., Kshetri, N., Hughes, L., Slade, E.L., Jeyaraj, A., Kar, A.K., Baabdullah, A.M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M.A., Al-Busaidi, A.S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., Wright, R.: Opinion Paper: So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71 , 102642 (2023). https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eubanks, V.: Automating Inequality. St. Martin’s (2017)

Floridi, L.: Translating principles into Practices of Digital Ethics: Five risks of being unethical. Philos. Technol. 32 (2), 185–193 (2019). https://doi.org/10.1007/s13347-019-00354-x

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., Vayena, E.: AI4People—An ethical Framework for a good AI society: Opportunities, risks, principles, and recommendations. Mind. Mach. 28 , 689–707 (2018)

Floyd, C.: From Joseph Weizenbaum to ChatGPT: Critical encounters with dazzling AI technology. Weizenbaum J. Digit. Soc. 3 (3) (2023). https://doi.org/10.34669/WI.WJDS/3.3.3

Frischmann, B., Selinger, E.: Re-engineering Humanity. Cambridge University Press (2018)

Gabriel, I., et al.: The Ethics of Advanced AI Assistants. arXiv (2024). https://doi.org/10.48550/arXiv.2404.16244

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé, H. III, Crawford, K.: Datasheets for datasets. Commun. ACM. 64 (12), 86–92 (2021)

Goodin, R.E.: Innovating Democracy: Democratic Theory and Practice after the Deliberative turn. Oxford University Press (2008)

Hayes, P., Van de Poel, I., Steen, M.: Moral transparency of and concerning algorithmic tools. AI Ethics. 3 , 585–600 (2023). https://doi.org/10.1007/s43681-022-00190-4

Hickok, M.: Lessons learned from AI ethics principles for future actions. AI Ethics. (2020). https://doi.org/10.1007/s43681-020-00008-1

High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI. European Commission (2019)

Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1 (9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

Lagioia, F., Rovatti, R., Sartor, G.: Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. AI Soc. 38 , 459–478 (2023). https://doi.org/10.1007/s00146-022-01441-y

Li, H., Guo, D., Fan, W., Xu, M., Huang, J., Meng, F., Song, Y.: Multi-step Jailbreaking Privacy Attacks on ChatGPT. arXiv (2023)

Martínez-Plumed, F., Contreras-Ochando, L., Ferri, C., Hernández-Orallo, J., Kull, M., Lachiche, N., Ramírez-Quintana, M.J., Flach, P.: CRISP-DM Twenty Years Later: From Data Mining Processes to Data Science Trajectories. IEEE Trans. Knowl. Data Eng. 33 (8), 3048–3061 (2021). https://doi.org/10.1109/TKDE.2019.2962680

Meadows, D.H.: Thinking in Systems: A Primer. Chelsea Green Publishing (2008)

Mhlambi, S.: From rationality to relationality: Ubuntu as an ethical and human rights framework for Artificial Intelligence governance. Carr Center Discussion Paper Series (2020). https://carrcenter.hks.harvard.edu/files/cchr/files/ccdp_2020-009_sabelo_b.pdf

Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: An initial review of publicly available AI Ethics Tools, methods and research to Translate principles into practices. Sci Eng. Ethics. 26 , 2141–2168 (2020). https://doi.org/10.1007/s11948-019-00165-5

Noble, S.U.: Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press (2018)

O’Neil, C.: Weapons of Math Destruction. Penguin (2016)

Oudshoorn, N., Pinch, T.: How Users Matter: The co-construction of Users and Technology. MIT Press (2003)

Reddit: On the benefits of eating glass (Why you can never trust anything you read online, ever again) (2022)

Reijers, W., Wright, D., Brey, P., Weber, K., Rodrigues, R., O’Sullivan, D., Gordijn, B.: Methods for Practising Ethics in Research and Innovation: A literature review, critical analysis and recommendations. Sci Eng. Ethics. 24 (5), 1437–1481 (2018). https://doi.org/10.1007/s11948-017-9961-8

Sætra, H.S., Danaher, J.: To each technology its own Ethics: The problem of ethical proliferation. Philos. Technol. 35 (4), 93 (2022). https://doi.org/10.1007/s13347-022-00591-7

Selinger, E., Frischmann, B.: Will the internet of things result in predictable people? The Guardian, 10 August (2015). https://www.theguardian.com/technology/2015/aug/10/internet-of-things-predictable-people

Shearer, C.: The CRISP-DM model: The new blueprint for data mining. J. data Warehous. 5 , 13–22 (2000)

Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., Anderson, R.: The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv (2023). https://doi.org/10.48550/arXiv.2305.17493

Sison, A.J.G., Daza, M.T., Gozalo-Brizuela, R., Garrido-Merchán, E.C.: ChatGPT: More than a Weapon of Mass Deception: Ethical challenges and responses from the human-centered Artificial Intelligence (HCAI) perspective. Int. J. Human-Computer Interact. 1–20 (2023). https://doi.org/10.1080/10447318.2023.2225931

Stahl, B.C., Eke, D.: The ethics of ChatGPT– exploring the ethical issues of an emerging technology. Int. J. Inf. Manag. 74 , 102700 (2024). https://doi.org/10.1016/j.ijinfomgt.2023.102700

TIME: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. TIME, 18 January (2023). https://time.com/6247678/openai-chatgpt-kenya-workers/

Vallor, S.: Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press (2016)

Van de Poel, I.: Embedding values in Artificial Intelligence (AI) systems. Mind. Mach. 30 (3), 385–409 (2020). https://doi.org/10.1007/s11023-020-09537-4

Van de Poel, I., Royakkers, L.: Ethics, Technology, and Engineering: An Introduction. Wiley (2011)

Van der Sloot, B.: Privacy as Virtue: Moving Beyond the Individual in the Age of Big Data (Vol. 81). Intersentia. (2017)

Van der Sloot, B.: Regulating the synthetic society: Generative AI, legal questions, and societal challenges . Bloomsbury (2024)

Véliz, C.: Privacy is Power: Why and How You Should Take Back Control of Your Data. Transworld Publishers (2020)

Weidinger, L., Rauh, M., Marchal, N., Manzini, A., Hendricks, L.A., Mateos-Garcia, J., Bergman, S., Kay, J., Griffin, C., Bariach, B., Gabriel, I., Rieser, V., Isaac, W.: Sociotechnical Safety Evaluation of Generative AI Systems. arXiv (2023). https://doi.org/10.48550/arXiv.2310.11986

Wong, P.-H., Wang, T.X. (eds.): Harmonious Technology: A Confucian Ethics of Technology. Routledge (2021)

Wu, X., Duan, R., Ni, J.: Unveiling Security, Privacy, and Ethical Concerns of ChatGPT. arXiv (2023). https://doi.org/10.48550/arXiv.2307.14192

Zhou, J., Müller, H., Holzinger, A., Chen, F.: Ethical ChatGPT: Concerns, Challenges, and Commandments. arXiv (2023). https://doi.org/10.48550/arXiv.2305.10646

Zhuo, T.Y., Huang, Y., Chen, C., Xing, Z.: Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity. arXiv (2023). https://doi.org/10.48550/arXiv.2301.12867

Author information

Authors and Affiliations

TNO, The Hague, Netherlands

Marc Steen, Joachim de Greeff, Maaike de Boer & Cor Veenman

Corresponding author

Correspondence to Marc Steen.

Ethics declarations

Competing interests.

The authors declare that they have no competing interests in relation to the current paper.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Steen, M., de Greeff, J., de Boer, M., et al.: Ethical aspects of ChatGPT: An approach to discuss and evaluate key requirements from different ethical perspectives. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00571-x

Received : 19 March 2024

Accepted : 24 August 2024

Published : 10 September 2024

DOI : https://doi.org/10.1007/s43681-024-00571-x

  • Consequentialism
  • Duty ethics
  • Relational ethics
  • Virtue ethics
