
Case Studies in Social and Ethical Responsibilities of Computing

The MIT Case Studies in Social and Ethical Responsibilities of Computing (SERC) aims to advance new efforts within and beyond the Schwarzman College of Computing. The specially commissioned and peer-reviewed cases are brief and intended to be effective for undergraduate instruction across a range of classes and fields of study, and may also be of interest for computing professionals, policy specialists, and general readers.

The series editors interpret “social and ethical responsibilities of computing” broadly. Some cases focus closely on particular technologies, others on trends across technological platforms. Others examine social, historical, philosophical, legal, and cultural facets that are essential for thinking critically about present-day efforts in computing activities. Special efforts are made to solicit cases on topics ranging beyond the United States and that highlight perspectives of people who are affected by various technologies, in addition to perspectives of designers and engineers.

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year and made available via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to re-use and re-publish their work in more specialized scholarly publications.

If you have suggestions for a new case study or comments on a published case, the series editors would like to hear from you! Please reach out to [email protected] .

Winter 2024

Integrals and Integrity: Generative AI Tries to Learn Cosmology

How Interpretable Is “Interpretable” Machine Learning?

AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia

Past Issues

Summer 2023

Pretrial Risk Assessment on the Ground: Algorithms, Judgments, Meaning, and Policy by Cristopher Moore, Elise Ferguson, and Paul Guerin

To Search and Protect? Content Moderation and Platform Governance of Explicit Image Material by Mitali Thakor, Sumaiya Sabnam, Ransho Ueno, and Ella Zaslow

Winter 2023

Emotional Attachment to AI Companions and European Law by Claire Boine

Algorithmic Fairness in Chest X-ray Diagnosis: A Case Study by Haoran Zhang, Thomas Hartvigsen, and Marzyeh Ghassemi

The Right to Be an Exception to a Data-Driven Rule by Sarah H. Cen and Manish Raghavan

Twitter Gamifies the Conversation by C. Thi Nguyen, Meica Magnani, and Susan Kennedy

Summer 2022

“Porsche Girl”: When a Dead Body Becomes a Meme by Nadia de Vries

Patenting Bias: Algorithmic Race and Ethnicity Classifications, Proprietary Rights, and Public Data by Tiffany Nichols

Privacy and Paternalism: The Ethics of Student Data Collection by Kathleen Creel and Tara Dixit

Winter 2022

Differential Privacy and the 2020 US Census by Simson Garfinkel

The Puzzle of the Missing Robots by Suzanne Berger and Benjamin Armstrong

Protections for Human Subjects in Research: Old Models, New Needs? by Laura Stark

The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage by Steven Gonzalez Monserrate

Algorithmic Redistricting and Black Representation in US Elections by Zachary Schutzman

Summer 2021

Hacking Technology, Hacking Communities: Codes of Conduct and Community Standards in Open Source by Christina Dunbar-Hester

Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle by Harini Suresh and John Guttag

Identity, Advertising, and Algorithmic Targeting: Or How (Not) to Target Your “Ideal User” by Tanya Kant

Wrestling with Killer Robots: The Benefits and Challenges of Artificial Intelligence for National Security by Erik Lin-Greenberg

Public Debate on Facial Recognition Technologies in China by Tristan G. Brown, Alexander Statman, and Celine Sui

Winter 2021

The Case of the Nosy Neighbors by Johanna Gunawan and Woodrow Hartzog

Who Collects the Data? A Tale of Three Maps by Catherine D’Ignazio and Lauren Klein

The Bias in the Machine: Facial Recognition Technology and Racial Disparities by Sidney Perkowitz

The Dangers of Risk Prediction in the Criminal Justice System by Julia Dressel and Hany Farid

Stanford Computer Ethics Case Studies and Interviews

Case Studies

  • Algorithmic Decision-Making and Accountability
  • Autonomous Vehicles
  • Facial Recognition
  • Power of Private Platforms
  • Joshua Browder interview

Fostering ethical thinking in computing


Traditional computer scientists and engineers are trained to develop solutions for specific needs, but aren’t always trained to consider their broader implications. Each new technology generation, and particularly the rise of artificial intelligence, leads to new kinds of systems, new ways of creating tools, and new forms of data, for which norms, rules, and laws frequently have yet to catch up. The kinds of impact that such innovations have in the world have often not been apparent until many years later.

As part of the efforts in Social and Ethical Responsibilities of Computing (SERC) within the MIT Stephen A. Schwarzman College of Computing, a new case studies series examines social, ethical, and policy challenges of present-day efforts in computing with the aim of facilitating the development of responsible “habits of mind and action” for those who create and deploy computing technologies.

“Advances in computing have undeniably changed much of how we live and work. Understanding and incorporating broader social context is becoming ever more critical,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “This case study series is designed to be a basis for discussions in the classroom and beyond, regarding social, ethical, economic, and other implications so that students and researchers can pursue the development of technology across domains in a holistic manner that addresses these important issues.”

A modular system

By design, the case studies are brief and modular to allow users to mix and match the content to fit a variety of pedagogical needs. Series editors David Kaiser and Julie Shah, who are the associate deans for SERC, structured the cases primarily to be appropriate for undergraduate instruction across a range of classes and fields of study.

“Our goal was to provide a seamless way for instructors to integrate cases into an existing course or cluster several cases together to support a broader module within a course. They might also use the cases as a starting point to design new courses that focus squarely on themes of social and ethical responsibilities of computing,” says Kaiser, the Germeshausen Professor of the History of Science and professor of physics.

Shah, an associate professor of aeronautics and astronautics and a roboticist who designs systems in which humans and machines operate side by side, expects that the cases will also be of interest to those outside of academia, including computing professionals, policy specialists, and general readers. In curating the series, Shah says that “we interpret ‘social and ethical responsibilities of computing’ broadly to focus on perspectives of people who are affected by various technologies, as well as focus on perspectives of designers and engineers.”

The cases are not limited to a particular format and can take various forms — from magazine-like feature articles and Socratic dialogues to choose-your-own-adventure stories and role-playing games grounded in empirical research. Each case study is brief, but includes accompanying notes and references to facilitate more in-depth exploration of a given topic. Multimedia projects will also be considered. “The main goal is to present important material — based on original research — in engaging ways to broad audiences of non-specialists,” says Kaiser.

The SERC case studies are specially commissioned and written by scholars who conduct research centrally on the subject of the piece. Kaiser and Shah approached researchers from within MIT as well as from other academic institutions to bring in a mix of diverse voices on a spectrum of topics. Some cases focus on a particular technology or on trends across platforms, while others assess social, historical, philosophical, legal, and cultural facets that are relevant for thinking critically about current efforts in computing and data sciences.

The cases published in the inaugural issue place readers in various settings that challenge them to consider the social and ethical implications of computing technologies, such as how social media services and surveillance tools are built; the racial disparities that can arise from deploying facial recognition technology in unregulated, real-world settings; the biases of risk prediction algorithms in the criminal justice system; and the politicization of data collection.

"Most of us agree that we want computing to work for social good, but which good? Whose good? Whose needs and values and worldviews are prioritized and whose are overlooked?” says Catherine D’Ignazio, an assistant professor of urban science and planning and director of the Data + Feminism Lab at MIT.

D’Ignazio’s case for the series, co-authored with Lauren Klein, an associate professor in the English and Quantitative Theory and Methods departments at Emory University, introduces readers to the idea that while data are useful, they are not always neutral. “These case studies help us understand the unequal histories that shape our technological systems as well as study their disparate outcomes and effects. They are an exciting step towards holistic, sociotechnical thinking and making.”

Rigorously reviewed

Kaiser and Shah formed an editorial board composed of 55 faculty members and senior researchers associated with 19 departments, labs, and centers at MIT, and instituted a rigorous peer-review process of the kind commonly adopted by specialized journals. Members of the editorial board will also help commission topics for new cases and help identify authors for a given topic.

For each submission, the series editors collect four to six peer reviews, with reviewers mostly drawn from the editorial board. For each case, half the reviewers come from fields in computing and data sciences and half from fields in the humanities, arts, and social sciences, to ensure balance of topics and presentation within a given case study and across the series.

“Over the past two decades I’ve become a bit jaded when it comes to the academic review process, and so I was particularly heartened to see such care and thought put into all of the reviews,” says Hany Farid, a professor at the University of California at Berkeley with a joint appointment in the Department of Electrical Engineering and Computer Sciences and the School of Information. “The constructive review process made our case study significantly stronger.”

Farid’s case, “The Dangers of Risk Prediction in the Criminal Justice System,” which he penned with Julia Dressel, a recent computer science graduate of Dartmouth College, is one of the four commissioned pieces featured in the inaugural issue.

Cases are additionally reviewed by undergraduate volunteers, who help the series editors gauge each submission for balance, accessibility for students in multiple fields of study, and possibilities for adoption in specific courses. The students also work with them to create original homework problems and active learning projects to accompany each case study, to further facilitate adoption of the original materials across a range of existing undergraduate subjects.

“I volunteered to work with this group because I believe that it’s incredibly important for those working in computer science to include thinking about ethics not as an afterthought, but integrated into every step and decision that is made,” says Annie Snyder, a mathematical economics sophomore and a member of the MIT Schwarzman College of Computing’s Undergraduate Advisory Group. “While this is a massive issue to take on, this project is an amazing opportunity to start building an ethical culture amongst the incredibly talented students at MIT who will hopefully carry it forward into their own projects and workplace.”

New sets of case studies, produced with support from the MIT Press’ Open Publishing Services program, will be published twice a year via the Knowledge Futures Group’s PubPub platform. The SERC case studies are made available for free on an open-access basis, under Creative Commons licensing terms. Authors retain copyright, enabling them to reuse and republish their work in more specialized scholarly publications.

“It was important to us to approach this project in an inclusive way and lower the barrier for people to be able to access this content. These are complex issues that we need to deal with, and we hope that by making the cases widely available, more people will engage in social and ethical considerations as they’re studying and developing computing technologies,” says Shah.

Ethics, Society, & Technology Case Studies

At the McCoy Family Center for Ethics in Society, we believe that researchers, founders, and technologists—present and future—should be expected to confront and gain a deeper understanding of the ethical, social, and political dimensions of technologies. We aim to prepare the next generation of leaders to take on these challenges by integrating ethical and societal implications into the development, deployment, and governance of technology. 

To support this mission, the Ethics, Society, and Technology Initiatives are producing high-quality, open-source ethics, policy, and technology case studies for use in university and industry settings. Through the Ethics, Society, and Technology (EST) Case Study Program, a talented team of case writers develops and prototypes case studies that address pressing ethical and sociotechnical issues in the technology field, drawing on primary and secondary research materials.

EST case studies are written stories that wrestle with business decisions, design and technical implementation, regulatory compliance and limitations, and individual participation in the technology industry and workforce. These stories range from real-life scenarios to fictionalized ones based on composite experiences.

Previous and current projects include: 

  • Juul Labs: A Design & Marketing Case Study 
  • Generative AI: Navigating the Technology Industry 
  • Fizz: Social Media Growth
  • Facebook: Social Media and Youth Mental Health (coming soon!)
  • Algorithmic Decision-Making and Accountability
  • Autonomous Vehicles
  • Facial Recognition
  • Private Platforms

If you use any of these case studies and have feedback on them, please feel free to send the case writing team an email at estinitiatives [at] stanford.edu.


Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality

  • Original Research/Scholarship
  • Open access
  • Published: 08 March 2021
  • Volume 27, article number 16 (2021)

Mark Ryan (ORCID: orcid.org/0000-0003-4850-0111), Josephina Antoniou, Laurence Brooks, Tilimbe Jiya, Kevin Macnish, and Bernd Stahl

This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD + AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues, using qualitative tools to analyse findings from ten targeted case studies across a range of domains. The analysis coalesces the singular ethical issues identified in the literature into clusters, to offer a comparison with the classification proposed in the literature. The results show that despite the variety of social domains, fields, and applications of AI, there is overlap and correlation between the organisations’ ethical concerns. This more detailed understanding of the ethics of AI + BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and can succeed in mitigating the pertinent ethical issues that are often discussed in the literature.


Introduction

Big Data and Artificial Intelligence (BD + AI) are emerging technologies that offer great potential for business, healthcare, the public sector, and development agencies alike. The increasing impact of these two technologies, and their combined potential in these sectors, can be seen in diverse organisational aspects such as the customisation of organisational processes and automated decision-making. The combination of Big Data and AI, often in the form of machine learning applications, can better exploit the granularity of data and analyse it to offer better insights into behaviours, incidents, and risk, eventually aiming at positive organisational transformation.

Big Data offers fresh and interesting insights into structural patterns, anomalies, and decision-making in a broad range of different applications (Cuquet & Fensel, 2018), while AI provides predictive foresight, intelligent recommendations, and sophisticated modelling. The integration and combination of AI + BD offer phenomenal potential for correlating, predicting, and prescribing recommendations in insurance, human resources (HR), agriculture, and energy, as well as many other sectors. While BD + AI provide a wide range of benefits, they also pose risks to users, including but not limited to privacy infringements, threats of unemployment, discrimination, security concerns, and increasing inequalities (O’Neil, 2016). Adequate and timely policy needs to be implemented to prevent many of these risks from occurring.

One of the main limitations preventing key decision-making for ethical BD + AI use is that there are few rigorous empirical studies of the ethical implications of these technologies across multiple application domains. This makes it difficult for policymakers and developers to identify whether ethical issues resulting from BD + AI use are relevant only for isolated domains and applications, or whether there are repeated, universal concerns that can be seen across different sectors. And while the field lacks literature evaluating ethical issues 'on the ground', multi-case evaluations are rarer still.

This paper provides a cohesive multi-case study analysis across ten different application domains, including government, agriculture, insurance, and the media. It reviews ethical concerns found within these case studies to establish cross-cutting thematic issues arising from the implementation and use of BD + AI. The paper collects relevant literature and proposes a simple classification of ethical issues (short term, medium term, long term), which is then juxtaposed with the ethical concerns highlighted by the multiple-case study analysis. This multiple-case study analysis of BD + AI offers an understanding of current organisational practices.

The work described in this paper makes an important contribution to the literature, based on its empirical findings. By presenting the ethical issues across an array of application areas, the paper provides much-needed rigorous empirical insight into the social and organisational reality of the ethics of AI + BD. Our empirical research brings together a collection of domains that gives a broad overview of the issues that underpin the implementation of AI. Through its empirical insights the paper provides a basis for a broader discussion of how these issues can and should be addressed.

This paper is structured in six main sections: this introduction is followed by a literature review, which allows for an integrated review of ethical issues, contrasting them with those found in the cases. This provides the basis for a categorisation or classification of ethical issues in BD + AI. The third section describes the interpretivist qualitative case study methodology used in this paper. The subsequent section provides an overview of the organisations participating in the cases to contrast similarities and divisions, while also comparing the diversity of their uses of BD + AI. The fifth section provides a detailed analysis of the ethical issues derived from using BD + AI, as identified in the cases. The concluding section analyses the differences between the theoretical and empirical work and spells out implications and further work.

Literature Review

An initial challenge that any researcher faces when investigating ethical issues of AI + BD is that, due to the popularity of the topic, there is a vast and rapidly growing literature to be considered. Ethical issues of AI + BD are covered by a number of academic venues, including some specific ones such as the AAAI/ACM Conference on AI, Ethics, and Society ( https://dl.acm.org/doi/proceedings/10.1145/3306618 ), policy initiatives, and many publicly and privately financed research reports (Whittlestone, Nyrup, Alexandrova, Dihal, & Cave, 2019). Initial attempts to provide overviews of the area have been published (Jobin, 2019; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016), but there is no settled view on what counts as an ethical issue and why. In this paper we aim to provide a broad overview of issues found through the case studies. We put forward what are commonly perceived to be ethical issues within the literature, or concerns that have ethical impacts and repercussions. We explicitly do not apply a particular philosophical framework of ethics but accept as ethical issues those issues that we encounter in the literature. This review is based on the authors' understanding of the current state of the literature. It is not a structured review and does not claim comprehensive coverage, but it does share some interesting insights.

To be able to undertake the analysis of ethical issues in our case studies, we sought to categorise the ethical issues found in the literature. There are potentially numerous ways of doing so and our suggestion does not claim to be authoritative. Our suggestion is to order ethical issues in terms of their temporal horizon, i.e., the amount of time it is likely to take to be able to address them. Time is a continuous variable, but we suggest that it is possible to sort the issues into three clusters: short term, medium term, and long term (see Fig. 1).

Figure 1: Temporal horizon for addressing ethical issues

As suggested by Baum (2017), it is best to acknowledge that there will be ethical issues and related mitigating activities that cannot exclusively fit in as short, medium, or long term.

Rather than seeing it as an authoritative classification, we see this as a heuristic that reflects aspects of the current discussion. One reason this categorisation is useful is that the temporal horizon of ethical issues is a potentially informative variable: companies are often accused of favouring short-term gains over long-term benefits. Similarly, short-term issues must be addressable at the local level if short-term fixes are to work.

Short-term issues

These are issues that can reasonably be assumed to be addressable in the short term. We do not wish to quantify what exactly counts as short term, as any definition put forward will be contentious when analysing the boundaries and transition periods. A better definition of short term might therefore be that such issues can be expected to be successfully addressed in technical systems that are currently in operation or development. Many of the issues we discuss under this heading are directly linked to some of the key technologies driving the current AI debate, notably machine learning and some of its enabling techniques and approaches, such as neural networks and reinforcement learning.

Many of the advantages promised by BD + AI involve the use of personal data, data which can be used to identify individuals. This includes health data; customer data; ANPR (Automated Number Plate Recognition) data; bank data; and even data about farmers’ land, livestock, and harvests. Issues surrounding privacy and control of data are widely discussed and recognised as major ethical concerns that need to be addressed (Boyd & Crawford, 2012; Tene & Polonetsky, 2012, 2013; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Jain, Gyanchandani, & Khare, 2016; Mai, 2016; Macnish, 2018). The concern surrounding privacy can be put down to a combination of a general level of awareness of privacy issues and the recently introduced General Data Protection Regulation (GDPR). Closely aligned with privacy issues are those relating to the transparency of processes dealing with data, whose opaqueness can be classified as internal, external, or deliberate (Burrell, 2016; Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016).

The Guidelines for Trustworthy AI were released in 2018 by the High-Level Expert Group on Artificial Intelligence (AI HLEG), and address the need for technical robustness and safety, including accuracy, reproducibility, and reliability. Reliability is further linked to the requirements of diversity, fairness, and social impact because it addresses freedom from bias from a technical point of view. The concept of reliability, when it comes to BD + AI, refers to the capability to verify the stability or consistency of a set of results (Bush, 2012; Ferraggine, Doorn, & Rivera, 2009; Meeker & Hong, 2014).

If a technology is unreliable, error-prone, and unfit for purpose, adverse ethical issues may result from decisions made by the technology. The accuracy of recommendations made by BD + AI is a direct consequence of the degree of reliability of the technology (Barolli, Takizawa, Xhafa, & Enokido, 2019). Bias and discrimination in algorithms may be introduced consciously or unconsciously by those employing the BD + AI, or because the algorithms reflect pre-existing biases (Barocas & Selbst, 2016). Documented examples of bias often reflect “an imbalance in socio-economic or other ‘class’ categories—ie, a certain group or groups are not sampled as much as others or at all” (Panch et al., 2019). Such biases have the potential to affect levels of inequality and discrimination, and if they are not corrected these systems can reproduce existing patterns of discrimination and inherit the prejudices of prior decision makers (Barocas & Selbst, 2016, p. 674). An example of inherited prejudice is documented in the United States, where African-American citizens have often been given longer prison sentences than Caucasians for the same crime.

Medium-term issues

Medium-term issues are not clearly linked to a particular technology but typically arise from the integration of AI techniques, including machine learning, into larger socio-technical systems and contexts. They are thus related to the way life in modern societies is affected by new technologies. These can be based on the specific issues listed above but have their main impact on the societal level. The use of BD + AI may allow individuals’ behaviour to be put under scrutiny and surveillance, leading to infringements on privacy, freedom, autonomy, and self-determination (Wolf, 2015). There is also the possibility that the increased use of algorithmic methods for societal decision-making may create a type of technocratic governance (Couldry & Powell, 2014; Janssen & Kuk, 2016), which could infringe on people’s decision-making processes (Kuriakose & Iyer, 2018). For example, because of the high levels of public data retrieval, BD + AI may harm people’s freedom of expression, association, and movement, through fear of surveillance and chilling effects (Latonero, 2018).

Corporations have a responsibility to the end-user to ensure compliance, accountability, and transparency of their BD + AI (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). However, when the source of a problem is difficult to trace, owing to issues of opacity, it becomes challenging to identify who is responsible for the decisions made by the BD + AI. It is worth noting that a large-scale survey in Australia in 2020 indicated that 57.9% of end-users are not at all confident that most companies take adequate steps to protect user data. The significance of understanding and employing responsibility is an issue targeted in many studies (Chatfield et al., 2017; Fothergill et al., 2019; Jirotka et al., 2017; Pellé & Reber, 2015). The issue of trust in, and control over, BD + AI is reiterated by a recent ICO report demonstrating that most UK citizens do not trust organisations with their data (ICO, 2017).

Justice is a central concern in BD + AI (Johnson, 2014, 2018). As a starting point, justice consists in giving each person his or her due, or treating people equitably (De George, p. 101). A key concern is that benefits will be reaped by powerful individuals and organisations, while the burden falls predominantly on poorer members of society (Taylor, 2017). BD + AI can also reflect human intentionality, deploying patterns of power and authority (Portmess & Tower, 2015, p. 1). The knowledge offered by BD + AI is often in the hands of a few powerful corporations (Wheeler, 2016). Power imbalances are heightened because companies and governments can deploy BD + AI for surveillance, privacy invasions, and manipulation, through personalised marketing efforts and social control strategies (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017, p. 11). They play a role in the ascent of datafication, especially when specific groups (such as corporate, academic, and state institutions) have greater unrestrained access to big datasets (van Dijck, 2014, p. 203).

Discrimination, in BD + AI use, can occur when individuals are profiled based on their online choices and behaviour, but also on their gender, ethnicity, and membership of specific groups (Calders, Kamiran, & Pechenizkiy, 2009; Cohen et al., 2014; Danna & Gandy, 2002). Data-driven algorithmic decision-making may lead to discrimination that is then adopted by decision-makers and those in power (Lepri, Staiano, Sangokoya, Letouzé, & Oliver, 2017, p. 4). Biases and discrimination can contribute to inequality. Some groups that are already disadvantaged may face worse inequalities, especially if those belonging to historically marginalised groups have less access and representation (Barocas & Selbst, 2016, p. 685; Schradie, 2017). Inequality-enhancing biases can be reproduced in BD + AI, such as the use of predictive policing to target neighbourhoods of largely ethnic minorities or historically marginalised groups (O’Neil, 2016).

BD + AI offer great potential for increasing profit, reducing physical burdens on staff, and employing innovative sustainability practices (Badri, Boudreau-Trudel, & Souissi, 2018). They offer the potential to bring about improvements in innovation, science, and knowledge, allowing organisations to progress, expand, and economically benefit from their development and application (Crawford et al., 2014). BD + AI are heralded as monumental for the economic growth and development of a wide diversity of industries around the world (Einav & Levin, 2014). The economic benefits accrued from BD + AI may be the strongest driver for their use, but BD + AI also hold the potential to cause economic harm to citizens and businesses or create other adverse ethical issues (Newman, 2013).

However, some in the literature view the prospect that employment and automation will develop in tandem as somewhat naïve (Zuboff, 2015). BD + AI companies may benefit from a ‘post-labour’ automation economy, which may have a negative impact on the labour market (Bossman, 2016), replacing up to 47% of all US jobs within the next 20 years (Frey & Osborne, 2017). The professions whose employment is most at risk correlate with three of our case studies: farming, administrative support, and the insurance sector (Frey & Osborne, 2017).

Long-term issues

Long-term issues are those pertaining to fundamental aspects of the nature of reality, society, or humanity: for example, the prospect that AI will develop capabilities far exceeding those of human beings (Kurzweil, 2006). At this point, sometimes called the ‘singularity’, machines that achieve human intelligence are expected to be able to improve on themselves, thereby surpassing human intelligence and becoming superintelligent (Bostrom, 2016). If this were to happen, it might have dystopian consequences for humanity, as often depicted in science fiction. It also stands to reason that superintelligent, or even just normally intelligent, machines may acquire a moral status.

It should be clear that these expectations are not universally shared. They refer to what is often called ‘artificial general intelligence’ (AGI), a set of technologies that emulate human reasoning capacities more broadly.

Furthermore, humans may acquire new capabilities, e.g. by using technical implants to enhance human nature. The resulting being might be called a transhuman, the next step of human evolution or development. Again, it is important to underline that this is a contested idea (Livingstone, 2015), but one that has increasing traction in public discourse and popular science accounts (Harari, 2017).

We chose this division into three groups of issues to help understand how mitigation strategies within organisations can be contextualised. We concede that this is one reading of the literature and that many others are possible. In this account of the literature we have tried to make sense of the current discourse, to allow us to understand our empirical findings, which are introduced in the following sections.

Case Study Methodology

Despite the impressive amount of research undertaken on the ethical issues of AI + BD (e.g. Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Zwitter, 2014), there are few case studies exploring such issues. This paper builds upon this research and employs an interpretivist methodology to do so, focusing on the how, what, and why questions relevant to the ethical use of BD + AI (Walsham, 1995a, b). The primary research question for the case studies was: How do organisations perceive ethical concerns related to BD + AI, and in what ways do they deal with them?

We sought to elicit insights from interviews, rather than attempting to reach an objective truth about the ethical impacts of BD + AI. The interpretivist case study approach (Stake, 2003) allowed the researchers ‘to understand “reality” as the blending of the various (and sometimes conflicting) perspectives which coexist in social contexts, the common threads that connect the different perspectives and the value systems that give rise to the seeming contradictions and disagreements around the topics discussed. Whether one sees this reality as static (social constructivism) or dynamic (social constructionism) was also a point of consideration, as they both belong in the same “family” approach where methodological flexibility is as important a value as rigour’ (XXX).

Through extensive brainstorming within the research team, and evaluations of relevant literature, 16 social application domains were established as topics for case study analysis. The project focused on ten of these application domains, in accordance with the partners’ competencies. The case studies covered ten domains, and each had its own unique focus, specifications, and niches, which added to the richness of the evaluations (Table 1).

The qualitative analysis approach adopted in this study focused on these ten standalone operational case studies, which were directly related to the application domains presented in Table 1. These individual case studies provide valuable insights (Yin, 2014, 2015); however, a multiple-case study approach offers a more comprehensive analysis of ethical issues related to BD + AI use (Herriott & Firestone, 1983). Thus, this paper adopts a multiple-case study methodology to identify what insights can be obtained from the ten cases, to determine whether any generalisable understandings can be retrieved, and to evaluate how different organisations deal with issues pertaining to BD + AI development and use. In line with the principles of interpretive research, the paper does not attempt to derive universal findings from this analysis, but rather to gain an in-depth understanding of the implications of selected BD + AI applications.

The data collection was guided by specific research questions identified through each case, including five desk research questions (see Appendix 1); 24 interview questions (see Appendix 2); and a checklist of 17 potential ethical issues developed by the project leader (see Appendix 3). A thematic analysis framework was used to ‘highlight, expose, explore, and record patterns within the collected data. The themes were patterns across data sets that were important to describe several ethical issues which arise through the use of BD + AI across different types of organisations and application domains’ (XXX).

A workshop was then held after the interviews were carried out, bringing together the experts in the case study team to discuss their findings. This culminated in 26 ethical issues that were inductively derived from the data collected throughout the interviews (see Fig. 2 and Table 3). In order to ensure consistency and rigour in the multiple-case study approach, researchers followed a standardised case study protocol (Yin, 2014).

Figure 2: The Prevalence of Ethical Issues in the Case Studies

Thirteen different organisations were interviewed across the ten case studies, amounting to 22 interviews in total. Interviews lasted from 30 minutes to one and a half hours and were conducted in person or via Skype. The participants selected for interviews represented a very broad range of application domains and organisations that use BD + AI. The case study organisations were selected according to their relevance to the overall case study domains, considering their fit with the domains and their likelihood of providing interesting insights. The interviewees were then selected according to their ability to explain their BD + AI and its role in their organisation. In addition to the interviews, a document review provided supporting information about each organisation: websites and published material were used to provide background to the research.

Findings: Ten Case Studies

This section gives a brief overview of the cases, before analysing their similarities and differences. It also highlights the different types of BD + AI being used, and the types of data used by the BD + AI in the case study organisations, before conducting an ethical analysis of the cases. Table 2 presents an overview of the 10 cases to show the roles of the interviewees, the focus of the technologies being used, and the data retrieved by each organisation’s BD + AI. All interviews were conducted in English.

The types of organisations that were used in the case studies varied extensively. They included start-ups (CS10), niche software companies (CS1), national health insurers (Organisation X in CS6), national energy providers (CS7), a chemical/agricultural multinational (CS3), and national (CS9) and international (CS8) telecommunications providers. The case studies also included public (CS2, Organisations 1 and 4 in CS4) and semi-public (Organisation 2 in CS4) organisations, as well as a large scientific research project (CS5).

The types of individuals interviewed also varied extensively. For example, CS6 and CS7 did not have anyone with a specific technical background, which limited the possibility of analysing issues related to the technology itself. Some case studies only had technology experts (such as CS1, CS8, and CS9), who mostly concentrated on technical issues, with much less of a focus on ethical concerns. Other case studies had a combination of both technical and policy-focused experts (i.e. CS3, CS4, and CS5).

It must therefore be made clear that we are not proposing that all of the interviewees were authorities in the field, or that even collectively they represent a unified authority on the matter. Instead, we hope to show what those currently working with AI on the ground perceive as ethical concerns. While the paper presents the ethical concerns found within an array of domains, we do not claim that any individual case study is representative of its entire industry; rather, our intent was to capture a wide diversity of viewpoints, domains, and applications of AI, encompassing a broad amalgamation of concerns. This is not a shortcoming of the study but the approach that social science commonly takes.

The diversity of organisations and their application focus areas also varied. Some organisations focused more on the Big Data component of their AI, others more strictly on the AI programming and analytics. Even when organisations concentrated on a specific component, such as Big Data, its use varied immensely, including retrieval (CS1), analysis (CS2), predictive analytics (CS10), and transactional value (Organisation 2 in CS4). Some domains adopted BD + AI earlier and more emphatically than others (such as communications, healthcare, and insurance). Also, the size, investment, and type of organisation played a part in the level of BD + AI innovation (for example, the two large multinationals in CS3 and CS8 had well-developed BD + AI).

The maturity level of BD + AI was also determined by how it was integrated, and its importance, within an organisation. For instance, in organisations where BD + AI were fundamental for the success of the business (e.g. CS1 and CS10), they played a much more important role than in companies where there was less of a reliance (e.g. CS7). In some organisations, even when BD + AI was not central to success, the level of development was still quite advanced because of economic investment capabilities (e.g. CS3 and CS8).

These differences provided important questions to ask throughout this multi-case study analysis, such as: Do certain organisations respond to ethical issues relating to BD + AI in a certain way? Does the type of interviewee affect the ethical issues discussed—e.g. case studies without technical experts, those that only had technical experts, and those that had both? Does the type of BD + AI used impact the types of ethical issues discussed? What significance does the type of data retrieved have on ethical issues identified by the organisations? These inductive ethical questions provided a template for the qualitative analysis in the following section.

Ethical Issues in the Case Studies

Based on the interview data, the ethical issues identified in the case studies were grouped into six thematic sections to provide a more cohesive, concise, and pragmatic analysis. Those six sections are: control of data, reliability of data, justice, economic issues, the role of organisations, and individual freedoms. Of the 26 ethical issues, privacy was the only one addressed in all ten case studies, which was not surprising given the attention it has recently received because of the GDPR. Security, transparency, and algorithmic bias are also regularly discussed in the literature, so we expected them to be significant issues across many of the cases. However, several issues that receive less attention in the literature—such as access to BD + AI, trust, and power asymmetries—were discussed frequently in the interviews. In contrast, some ethical issues that are heavily discussed in the literature received far less attention in the interviews, such as employment, autonomy, and criminal or malicious use of BD + AI (Fig. 2).

The ethical analysis was conducted using a combination of literature reviews and interviews carried out with stakeholders. The purpose of the interviews was to ensure that there were no obvious ethical issues faced by stakeholders in their day-to-day activities which had been missed in the academic literature. As such, the starting point was not an overarching normative theory, which might have meant that we looked for issues which fit well with the theory but ignored anything that fell outside of that theory. Instead the combined approach led to the identification of the 26 ethical issues, each labelled based on particular words or phrases used in the literature or by the interviewees. For example, the term "privacy" was used frequently and so became the label for references to and instances of privacy-relevant concerns. In this section we have clustered issues together based on similar problems faced (e.g. accuracy of data and accuracy of algorithms within the category of ‘reliability of data’).

To highlight similar ethical issues and better capture similar perspectives in the analysis, the research team used clustering, a technique often used in data mining to group similar elements together efficiently. Through discussion in the research team, and bearing in mind that the purpose of the clustering process was to form clusters that would enhance understanding of the impact of these ethical issues, we arrived at the following six clusters: the control of data (covering privacy, security, and informed consent); the reliability of data (accuracy of data and accuracy of algorithms); justice (power asymmetries, justice, discrimination, and bias); economic issues (economic concerns, sustainability, and employment); the role of organisations (trust and responsibility); and human freedoms (autonomy, freedom, and human rights). Both the titles and the precise composition of each cluster are the outcome of a reasoned agreement within the research team. It should be clear that we could have used different titles and different clusterings: the point is not that each cluster forms a distinct group of ethical issues, independent of the others. Rather, the ethical issues overlap and play into one another, but to present them in a manageable format we have opted for this bottom-up clustering approach.
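The clustering described here was a reasoned, qualitative exercise carried out by the research team rather than a computational one, but the data-mining technique the text alludes to can be sketched briefly. In the Python fragment below, the issue names and the 0/1 occurrence matrix are invented for illustration and are not the paper's data; issues that tend to be raised in the same case studies are grouped by hierarchical clustering:

```python
# Illustrative sketch: grouping ethical issues by co-occurrence across cases.
# The issue list and the 0/1 matrix are hypothetical, for illustration only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

issues = ["privacy", "security", "informed consent",
          "accuracy of data", "accuracy of algorithms",
          "trust", "responsibility"]

# Rows = issues, columns = ten case studies; 1 = issue raised in that case.
occurrence = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # privacy (raised in all ten cases)
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # security
    [1, 0, 1, 1, 0, 1, 0, 1, 0, 1],  # informed consent
    [0, 1, 1, 0, 1, 1, 0, 0, 1, 0],  # accuracy of data
    [0, 1, 1, 0, 1, 1, 0, 0, 1, 1],  # accuracy of algorithms
    [1, 0, 0, 1, 0, 1, 1, 0, 1, 1],  # trust
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 1],  # responsibility
], dtype=bool)

# Jaccard distance: issues that co-occur in the same cases end up close together.
distances = pdist(occurrence, metric="jaccard")
tree = linkage(distances, method="average")

# Cut the dendrogram into three clusters (the paper's six clusters were formed
# by reasoned agreement within the team, not by an algorithm like this one).
labels = fcluster(tree, t=3, criterion="maxclust")
for issue, label in sorted(zip(issues, labels), key=lambda pair: pair[1]):
    print(f"cluster {label}: {issue}")
```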

Human Freedoms

An interviewee from CS10 stated that they were concerned about human rights because human rights were an integral part of the company’s ethics framework. This was beneficial to their business because they were required to incorporate human rights considerations in order to receive public funding from the Austrian government. The company ensured that they would not grant ‘full exclusivity on generated social unrest event data to any single party, unless the data is used to minimise the risk of suppression of unrest events, or to protect the violation of human rights’ (XXX). The company demonstrates that while BD + AI has been criticised in the literature for infringing upon human rights, it also offers the opportunity to identify and prevent human rights abuses. The company’s moral framework stemmed from regulatory and funding requirements, which speaks to the effectiveness of top-down ethical approaches. This is a divisive topic in the literature, with diverging views about whether top-down or bottom-up approaches are better options for improving AI ethics.

Trust & Responsibility

Responsibility was a concern in five of the case studies, confirming the importance it is given in the literature (see Sect. 3). Trust appeared in seven of the case studies. The cases focused on concerns found in the literature, such as BD + AI use in policy development, public distrust of automated decision-making, and the integrity of corporations utilising datafication methods (van Dijck, 2014).

Trust in and control over BD + AI were issues throughout the case studies. The organisation from the predictive intelligence case study (CS10) identified that their use of social media data raised trust issues. They converged with perspectives found in the literature that when people feel disempowered to use, or be part of, the BD + AI development process, they tend to lose trust in the BD + AI (Accenture, 2016, 2017). In CS6, stakeholders (health insurers) trusted the decisions made by BD + AI when they were engaged and empowered to give feedback on how their data was used. Trust is enhanced when users can refuse the use of their data (CS7), which correlates with the literature. Companies discussed the benefits of establishing trustworthy relationships. For example, in CS9, they have “been trying really hard to avoid the existence of fake [mobile phone] base stations, because [these raise] an issue with the trust that people put in their networks” (XXX).

Corporations need to determine the objective of the data analysis (CS3), what data is required for the BD + AI to work (CS2), and accountability for when it does not work as intended or causes undesirable outcomes (CS4). The issue here is whether the organisation takes direct responsibility for these outcomes, or whether, if informed consent has been given, responsibility can be shared with the granter of consent (CS3). The cases also raised the question of ‘responsible to whom’: the person whose data is being used, or the proxy organisation that has provided the data (CS6)? For example, in the insurance case study, the company stated that they only had a responsibility towards the proxy organisation and not the sources of the data. All these issues are covered extensively in the literature in most application domains.

Control of Data

Concerns surrounding the control of data for privacy reasons can be put down to a general awareness of privacy issues in the press, reinforced by the recently-introduced GDPR. This was supported in the cases, where interviewees expressed the opinion that the GDPR had raised general awareness of privacy issues (CS1, CS9) or that it had lent weight to arguments concerning the importance of privacy (CS8).

The discussion of privacy ranged from some interviewees stressing that it was not an issue, because there was no personal information in the data they used (CS4), to others for whom it was an issue, but one which was being dealt with (CS2 and CS8). One interviewee (CS5) expressed apprehension that privacy concerns conflicted with scientific innovation, introducing hitherto unforeseen costs. This view is not uncommon in scientific and medical innovation, where harms arising from the use of anonymised medical data are often seen as minimal and the potential benefits significant (Manson & O’Neill, 2007). In other cases (CS1), there was confusion between anonymisation (data which cannot be traced back to the originating source) and pseudonymisation (where data can be traced back, albeit with difficulty) of users’ data. A common response from the cases was that providing informed consent for the use of personal data waived some of the user’s rights to privacy.
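The distinction the interviewees blurred matters in practice: under the GDPR, pseudonymised data remains personal data, while truly anonymised data does not. A minimal sketch of the difference follows; the record, field names, and key are hypothetical and none of this code comes from the case organisations:

```python
# Illustrative sketch (not from the case studies): a pseudonymised record can
# be re-identified by whoever holds the key or a lookup table; an anonymised
# record retains no route back to the individual.
import hmac
import hashlib

SECRET_KEY = b"held-separately-under-access-control"  # hypothetical key

def pseudonymise(user_id: str) -> str:
    """Replace an identifier with a keyed hash. Anyone holding the key (or a
    table mapping id -> token) can re-identify the record, so the data is
    still 'personal data' in the GDPR sense."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def anonymise(record: dict) -> dict:
    """Drop direct identifiers entirely. True anonymity also requires that
    the remaining fields are not indirectly identifying, e.g. through rare
    combinations of attributes."""
    return {k: v for k, v in record.items()
            if k not in {"user_id", "name", "email"}}

record = {"user_id": "u-1042", "name": "A. Example", "email": "a@example.com",
          "age_band": "30-39", "diagnosis_code": "J45"}

pseudonymised = dict(anonymise(record), token=pseudonymise(record["user_id"]))
anonymised = anonymise(record)
print(pseudonymised)  # re-identifiable with the key: pseudonymous, not anonymous
print(anonymised)     # no retained link back to the individual
```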

Consent may come in the form of a company contract (Footnote 14) or an individual agreement (Footnote 15). In the former, the company often has the advantage of legal support prior to entering the contract and so should be fully aware of the information provided. In individual agreements, though, the individual is less likely to be legally supported, and so may be at risk of exploitation through not reading the information sufficiently (CS3) or responding without adequate understanding (CS9). In one case (CS5), referring to anonymised data, consent was implied rather than given: the interviewee suggested that those involved in the project may have contributed data without giving clear informed consent. The interviewee also noted that some data may have been shared without the permission, or indeed knowledge, of the contributing individuals, and acknowledged this as a potential issue.

In one case (CS6), data were used without informed consent for fraud detection purposes. The interviewees noted that their organisation was working within the parameters of national and EU legislation, which allows for non-consensual use of data for these ends. One interviewee in this case stated that informed consent was sought for every novel use of the data they held. However, this was sought from the perceived owner of the data (an insurance company) rather than from the originating individuals. This case demonstrates how people may hold expectations about how their data will be used without having a full understanding of the legal framework under which the data are collected. For example, data relating to individuals may legally be accessed for fraud detection without notifying the individuals and without relying on their consent.

This use of personal data for fraud detection in CS6 also led to concerns regarding opacity. In both CS6 and CS10 there was transparency within the organisations (a shared understanding among staff as to the various uses of the data), but this did not extend to the public outside those organisations. In some cases (CS5) this internal transparency/external opacity meant that those responsible for developing BD + AI were often hard to reach. Of those who were interviewed in CS5, many did not know the provenance of the data or of the algorithms they were using. Equally, some organisations saw external opacity as integral to the business environment in which they were operating (CS9, CS10), for reasons of commercial advantage. The interviewee in CS9 cautioned that this approach, coupled with a lack of public education and the speed of transformation within the industry, would challenge any meaningful level of public accountability, rendering processes effectively opaque to the public even where they are transparent to experts.

Reliability of Data

There can be multiple sources of unreliability in BD + AI. Unreliability originating from faults in the technology can lead to algorithmic bias, which can cause ethical issues such as unfairness, discrimination, and broader negative social impact (CS3 and CS6). Considering algorithmic bias as a key input to data reliability, there are two types of issue that may need to be addressed. First, bias may stem from the input (training) data, if such data do not adequately represent the world, e.g. gender-biased datasets (CS6). Second, an inadequate representation of the world may be the result of a lack of data: a correctly designed algorithm intended to learn about and predict a rare disease, for example, may not have sufficient representative data to achieve correct predictions (CS5). In either case the input data are biased and may result in inaccurate decision-making and recommendations.
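The first failure mode is easy to reproduce in miniature. The sketch below is ours, not the paper’s, and assumes only NumPy: a single decision threshold is tuned on pooled data in which one group supplies 95% of the examples, and the under-represented group ends up with a markedly higher error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift):
    # Labels are 0/1; the feature is centred at `shift` for label 0
    # and at `shift + 2` for label 1.
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=2.0 * y + shift, scale=1.0)
    return x, y

# Group A dominates the training data; group B's feature distribution is
# shifted, so a threshold tuned on the pooled data suits A rather than B.
x_a, y_a = sample(9500, shift=0.0)
x_b, y_b = sample(500, shift=1.0)
x, y = np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b])

# Choose the threshold that minimises overall (pooled) training error.
thresholds = np.linspace(-2.0, 5.0, 701)
errors = [np.mean((x > t) != y) for t in thresholds]
t_star = thresholds[int(np.argmin(errors))]

for name, xs, ys in [("A", x_a, y_a), ("B", x_b, y_b)]:
    print(f"group {name}: error rate {np.mean((xs > t_star) != ys):.1%}")
# The pooled threshold sits near group A's optimum, so group B's error
# rate comes out roughly twice as high as group A's.
```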

Issues of data reliability stemming from data accuracy and/or algorithmic bias may escalate depending on their use, for example in predictive or risk-assessment algorithms (CS10). Consider the risks of unreliable data in employee monitoring (CS1), detecting pests and diseases in agriculture (CS3), human brain research (CS5), or cybersecurity applications (CS8). Such issues are not singular in nature but closely linked to other ethical issues such as information asymmetries, trust, and discrimination. Consequently, the umbrella issue of data reliability must be approached from different perspectives to ensure the validity of BD + AI decision-making processes.

Data may over-represent some people or social groups who are likely to be already privileged, or under-represent disadvantaged and vulnerable groups (CS3). Furthermore, people who are better positioned to gain access to data, and who have the expertise to interpret them, may have an unfair advantage over people devoid of such competencies. In addition, BD + AI can work as a tool of disciplinary power, used to evaluate people’s conformity to norms representing the standards of disciplinary systems (CS5). We focus on the following aspects of justice in our case study analysis: power asymmetries, discrimination, inequality, and access.

Our case studies showed that issues of power can arise in public as well as private organisations. The smart city case (CS4) showed that the public organisations were aware of potential problems arising from companies using public data and were trying to put legal safeguards in place to avoid such misuse; without such safeguards, cities, or the companies with which they contract, might use data in harmful or discriminatory ways. Our case study on the use of BD + AI in scientific research showed that the interviewees were acutely aware of the potential for discrimination (CS10). They stated that biases in the data may not be easy to identify and may lead to misclassification or misinterpretation of findings, which may in turn skew results. Discrimination refers to the recognition of difference, but it may also refer to the unjust treatment of different categories of people based on their gender, sex, religion, race, class, or disability. BD + AI are often employed to distinguish between different cases, e.g. between normal and abnormal behaviour in cybersecurity. Determining whether such classification entails discrimination in the latter sense can be difficult, due to the nature of the data and algorithms involved.

Examples of potential inequality based on BD + AI could be seen in several case studies. The agricultural case (CS3) highlighted the power differential between farmers and companies, with potential implications for inequality, but also the global inequality between farmers, linked to farming practices in different countries. Subsistence farmers in developing countries, for example, might find it more difficult to benefit from these technologies than large agro-businesses. Diverging levels of access to BD + AI entail different levels of ability to benefit from them and to counteract possible disadvantages (CS3). Some companies restrict access to their data entirely, others sell access for a fee, and still others offer small datasets to university-based researchers (Boyd & Crawford, 2012, p. 674).

Economic Issues

One economic concern outlined in the agriculture case study (CS3) was whether these technologies, and their ethical implementation, were economically affordable. If BD + AI could not improve economic efficiency, they would be rejected by end-users, even if they were the more productive, sustainable, and ethical option. This is striking, as it raises a serious challenge for the AI ethics literature and industry: no matter how well intentioned and principled AI ethics guidelines and charters are, unless they can be implemented in an economically viable way, they will be challenged and resisted by those footing the bill.

The telecommunications case study (CS9) focused on how the GDPR may economically impact businesses using BD + AI by creating disparities in competitiveness between EU and non-EU companies developing BD + AI. Owing to the larger data pools of the latter, their BD + AI may prove more effective than European alternatives, which cannot bypass the ethical boundaries of European law in the same way (CS8). This is also being addressed in the literature and is a serious concern for the future profitability and development of AI in Europe (Wallace & Castro, 2018). The literature notes additional issues in this area that were not covered in the cases: the GDPR may increase the costs of European AI companies, which must manually review algorithmic decision-making; the right to explanation could reduce AI accuracy; and the right to erasure could damage AI systems (Wallace & Castro, 2018, p. 2).

One interviewee stated that public–private BD + AI projects should be conducted in a collaborative manner, rather than as a sale of services (CS4). However, such harmonious partnerships are often not possible. Another interviewee discussed the tension between public and private interests on their project: while the municipality tried to focus on citizen value, the ICT company focused on the project’s economic success. The interviewee stated that the project would have been terminated earlier had it been the company’s decision, because it was unprofitable (CS4). This is a major concern in the literature: private interests may cloud, influence, and damage public decision-making within the city because of the parties’ sometimes incompatible goals (citizen value vs. economic growth) (Sadowski & Pasquale, 2015). One interviewee said that the municipality officials were aware of the problems of corporate influence and were thus attempting to implement an approach of ‘data sovereignty’ (CS2).

During our interviews, some viewed BD + AI as complementary to human employment (CS3), collaborative with it (CS4), or as a replacement for it (CS6). The interviewees from the agriculture case study (CS3) stated that their BD + AI were not sufficiently advanced to replace humans and were meant to complement agronomists rather than replace them. However, they did not indicate what would happen once the technology is advanced enough that replacing the agronomist becomes profitable. The insurance company interviewee (CS6) stated that they use BD + AI to reduce flaws in personal judgement. The literature supports this viewpoint: BD + AI are seen to offer the potential to evaluate cases impartially, which is beneficial to the insurance industry (Belliveau, Gray, & Wilson, 2019) (Footnote 16). The interviewee reiterated this and also stated that BD + AI would reduce the number of people required to work on fraud cases. The interviewee stated that BD + AI are designed to replace these individuals, but did not indicate whether their jobs were secure or whether they would be retrained for different positions, highlighting a concern found in the literature about the replacement and unemployment of workers by AI (Bossman, 2016). In contrast, a municipality interviewee from CS4 stated that their chatbots are used in a collaborative way to assist customer service agents, allowing them to concentrate on higher-level tasks, and that clear policies are in place to protect their jobs.

Sustainability was explicitly discussed in only two interviews (CS3 and CS4). The agriculture interviewees stated that they wanted to be the ‘first’ to incorporate sustainability metrics into agricultural BD + AI, indicating a competitive and innovative rationale for their company (CS3). The interviewee from the sustainable development case study (CS4), by contrast, stated that their goal in using BD + AI was to reduce CO2 emissions and improve energy use and air quality. He stated that there are often tensions between ecological and economic goals and that these tensions tend to slow down the efforts of BD + AI public–private projects, an observation also supported by the literature (Keeso, 2014). This tension between public and private interests in BD + AI projects was a recurring issue throughout the cases, and is the focus of the next section on the role of organisations.

Discussion and Conclusion

The motivation behind this paper is to come to a better understanding of ethical issues related to BD + AI based on a rich empirical basis across different application domains. The exploratory and interpretive approach chosen for this study means that we cannot generalise from our research to all possible examples of BD + AI, but it does allow us to generalise to theory and rich insights (Walsham, 1995a , b , 2006 ). These theoretical insights can then provide the basis for further empirical research, possibly using other methods to allow an even wider set of inputs to move beyond some of the limitations of the current study.

Organisational Practice and the Literature

The first point worth stating is that there is a high level of consistency, both among the case studies and between the cases and the literature. Many of the ethical issues identified cut across the cases and are interpreted in similar ways by different stakeholders. The frequency distribution of ethical issues indicates that very few, if any, issues are relevant to all cases, but many, such as privacy, are highly prevalent. Despite appearing in all case studies, privacy was not seen as overly problematic and could be dealt with in the context of current regulatory principles (GDPR). Most of the issues that we found in the literature (see Sect. 2) were also present in the case studies. In addition to privacy and data protection, these included accuracy, reliability, economic and power imbalances, justice, employment, discrimination and bias, autonomy, and human rights and freedoms.

Beyond the general confirmation of the relevance of topics discussed in the literature, though, the case studies provide some further interesting insights. From the perspective of an individual case, some societal factors are taken for granted and lie outside the control of individual actors. For example, intellectual property regimes have significant and well-recognised consequences for justice, as demonstrated in the literature. However, there is often little that individuals or organisations can do about them. Even where individuals may be able to make a difference and the problem is clear, it is not always obvious how to proceed. Some well-publicised discrimination cases may be easy to recognise, for example where an HR system discriminates against women or a facial recognition system discriminates against black people. But in many cases it may be exceedingly difficult to recognise discrimination, because it is not clear how a person is being discriminated against. If, for example, an image-based medical diagnostic system leads to disadvantages for people with certain genetic profiles, this may not be easy to identify.

With regard to the classification of the literature suggested in Sect. 2 along the temporal dimension, we can see that the attention of the case study respondents seems to be correlated with the temporal horizon of the issues. The issues we see as short-term figure most prominently, whereas the medium-term issues, while still relevant and recognisable, appear less pronounced. The long-term questions are least visible in the cases. This is not very surprising, as the short-term issues are those that are at least potentially capable of being addressed relatively quickly and thus must be accessible at the local level. Organisations deploying or using AI are therefore likely to have a responsibility to address these issues, and our case studies have shown that they are aware of this and are putting measures in place. This is clearly true for data protection and security issues. The medium-term issues that are less likely to find local resolutions still figure prominently, even though an individual organisation has less influence on how they can be addressed. Examples would be questions of unemployment, justice, or fairness. There was little reference to what we call long-term issues, which can partly be explained by the fact that the type of AI user organisations we investigated have very limited influence on how such issues are perceived and how they may be addressed.

Interpretative Differences on Ethical Issues

Despite general agreement on the terminology used to describe ethical issues, there are often important differences in interpretation and understanding. In the first ethics theme, control of data, perceptions of privacy ranged from ‘not an issue’ to an issue that was being dealt with. Some of this arose from the question of informed consent and the GDPR. However, a reliance on legislation such as the GDPR, without full knowledge of the intricacies of its details (i.e. that informed consent is only one of several legal bases for lawful data processing), may give rise to a false sense of security about people’s privacy. This was also linked to the issue of transparency (of processes dealing with data), which may be external to the organisation (do people outside understand how an organisation holds and processes their data?) or internal (how well does the organisation understand the algorithms developed internally?), and sometimes involves deliberate opacity (used in specific contexts where it is perceived as necessary, such as in monitoring political unrest and its possible consequences). A clearer and more nuanced understanding of privacy and the other ethical terms raised here might therefore be useful, albeit tricky to derive in a public setting (for an example of the complications in defining privacy, see Macnish, 2018).

Some issues from the literature were not mentioned in the cases, such as warfare. This can easily be explained by our choice of case studies, none of which drew on work done in this area. It indicates that even a set of 10 case studies falls short of covering all issues.

A further empirical insight is in the category we called ‘role of organisations’, which covers trust and responsibility. Trust is a key term in the discussion of the ethics of AI, prominently highlighted by the focus on trustworthy AI by the EU’s High-Level Expert Group, among others. We put this into the ‘role of organisations’ category because our interaction with the case study respondents suggested that they felt it was part of the role of their organisations to foster trust and establish responsibilities. But we are open to the suggestion that these are concepts on a slightly different level that may provide the link between specific issues in applications and broader societal debate.

Next Steps: Addressing the Ethics of AI and Big Data

This paper is predominantly descriptive, and it aims to provide a theoretically sound and empirically rich account of ethical concerns in AI + BD. While we hope that it proves to be insightful, it is only a first step in the broader journey towards addressing and resolving these issues. The categorisation suggested here gives an initial indication of which type of actor may be called upon to address which type of issue. The distinction between micro-, meso-, and macro-perspectives suggested by Haenlein and Kaplan (2019) resonates to some degree with our categorisation of issues.

This points to the question of what can be done to address these ethical issues, and by whom. We have not touched on this question in the theoretical or empirical parts of the paper, but the question of mitigation is the motivating force behind much of the AI + BD ethics research. The purpose of understanding these ethical questions is to find ways of addressing them.

This calls for a more detailed investigation of the ethical nature of the issues described here. As indicated earlier, we did not begin with a specific ethical theoretical framework imposed onto the case studies, but we did have some derived ethics concepts, which we explored within the context of the cases while allowing others to emerge over the course of the interviews. One issue is the philosophical question of whether the different ethical issues discussed here are of a similar or comparable nature, and what characterises them as ethical issues. This is not only a philosophical question but also a practical one for policymakers and decision makers. We have alluded to the idea that privacy and data protection are ethical issues, but they also have strong legal implications and can be human rights issues as well. It would therefore be beneficial to undertake a further analysis to investigate which of these ethical issues are already regulated, to what degree current regulation covers BD + AI, and how this varies across EU nations and beyond.

Another step could be to expand an investigation like the one presented here to cover the ethics of AI + BD debate with a focus on suggested resolutions and policies. This could be achieved by adopting the categorisation and structure presented here and extending it to the currently discussed options for addressing the ethical issues. These include individual and collective activities, ranging from technical measures to detect bias in data, or individual professional guidance, to standardisation, legislation, the creation of a specific regulator, and many more. It will be important to understand how these measures are conceptualised, as well as which ones are already used and to what effect. Any such future work, however, will need to be based on a sound understanding of the issues themselves, to which this paper contributes. The key contribution of the paper, namely the presentation of empirical findings from 10 case studies, shows in more detail how ethical issues play out in practice. While this work can and should be expanded by including an even broader variety of cases, and could be supplemented by other empirical research methods, it marks an important step in the development of our understanding of these ethical issues. This should form part of the broader societal debate about what these new technologies can and should be used for and how we can ensure that their consequences are beneficial for individuals and society.

Throughout the paper, XXX is used to anonymise text that might identify the authors, whether through the project and/or publications resulting from the individual case studies. All case studies have been published individually. Several of the XXX references in the findings refer to these individual publications, which provide more detail on the cases than can be provided in this cross-case analysis.

The ethical issues that we discuss throughout the case studies refer to issues broadly construed as ethical, or issues that have ethical significance. While it may not be directly obvious how some of these are ethical issues, they may give rise to significant harms relevant to ethics. For example, accuracy of data may not explicitly be an ethical issue, but if inaccurate data are used in algorithms, this may lead to discrimination, unfair bias, or harms to individuals.

Such as chat-bots, natural language processing AI, IoT data retrieval, predictive risk analysis, cybersecurity machine-learning, and large dataset exchanges.

https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1 .

https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence .

The type of AI currently in vogue, as outlined earlier, is based on machine learning, typically employing artificial neural networks for big data analysis. This is typically seen as ‘narrow AI’ and it is not clear whether there is a way from narrow to general AI, even if one were to accept that achieving general AI is fundamentally possible.

The 16 social domains were: Banking and securities; Healthcare; Insurance; Retail and wholesale trade; Science; Education; Energy and utilities; Manufacturing and natural resources; Agriculture; Communications, media and entertainment; Transportation; Employee monitoring and administration; Government; Law enforcement and justice; Sustainable development; and Defence and national security.

This increased to 26 ethical issues following a group brainstorming session at the case study workshop.

The additional ethical issues beyond the initial 17 drafted by the project leader included: human rights, transparency, responsibility, ownership of data, algorithmic bias, integrity, human contact, and accuracy of data.

The additional ethical issues were access to BD + AI, accuracy of data, accuracy of recommendations, algorithmic bias, economic, human contact, human rights, integrity, ownership of data, responsibility, and transparency. Two of the initial ethical concerns were removed (inclusion of stakeholders and environmental impact). The issues raised concerning inclusion of stakeholders were deemed to be sufficiently included in access to BD + AI, and those relating to environmental impact were felt to be sufficiently covered by sustainability.

The three appendices attached in this paper comprise much of this case study protocol.

CS4 evaluated four organisations, but one of these organisations was also part of CS2 – Organisation 1. CS6 analysed two insurance organisations.

Starting out, we aimed to interview both policy/ethics-focused experts within the organisation and individuals who could speak with us about the technical aspects of the organisation’s BD + AI. However, this was often not possible, due to availability, organisations’ inability to free up resources (e.g. employees’ time) for interviews, or a lack of designated experts in those areas.

For example, in CS1, CS6, and CS8.

For example, in CS2, CS3, CS4, CS5, CS6, and CS9.

As is discussed elsewhere in this paper, algorithms also hold the possibility of reinforcing our prejudices and biases or creating new ones entirely.

Accenture. (2016). Building digital trust: The role of data ethics in the digital age. Retrieved December 1, 2020 from https://www.accenture.com/t20160613T024441__w__/us-en/_acnmedia/PDF-22/Accenture-Data-Ethics-POV-WEB.pdf .

Accenture. (2017). Embracing artificial intelligence. Enabling strong and inclusive AI driven growth. Retrieved December 1, 2020 from https://www.accenture.com/t20170614T130615Z__w__/us-en/_acnmedia/Accenture/next-gen-5/event-g20-yea-summit/pdfs/Accenture-Intelligent-Economy.pdf .

Antoniou, J., & Andreou, A. (2019). Case study: The Internet of Things and Ethics. The Orbit Journal, 2 (2), 67.


Badri, A., Boudreau-Trudel, B., & Souissi, A. S. (2018). Occupational health and safety in the industry 4.0 era: A cause for major concern? Safety Science, 109, 403–411. https://doi.org/10.1016/j.ssci.2018.06.012


Barolli, L., Takizawa, M., Xhafa, F., & Enokido, T. (ed.) (2019). Web, artificial intelligence and network applications. In Proceedings of the workshops of the 33rd international conference on advanced information networking and applications , Springer.

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104 (671), 671–732. https://doi.org/10.15779/Z38BG31

Baum, S. D. (2017). Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Society, 2018 (33), 565–572.

Belliveau, K. M., Gray, L. E., & Wilson, R. J. (2019). Busting the black box: Big data employment and privacy. IADC Law. https://www.iadclaw.org/publications-news/defensecounseljournal/busting-the-black-box-big-data-employment-and-privacy/. Accessed 10 May 2019.

Bossman, J. (2016). Top 9 ethical issues in artificial intelligence. World Economic Forum . https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/ . Accessed 10 May 2019.

Bostrom, N. (2016). Superintelligence: Paths, dangers, strategies. OUP Oxford.

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication and Society, 15 (5), 662–679. https://doi.org/10.1080/1369118X.2012.678878

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data and Society, 3 (1), 2053951715622512.

Bush, T. (2012). Authenticity in research: Reliability, validity and triangulation. Chapter 6 in Research methods in educational leadership and management. SAGE Publications.

Calders, T., Kamiran, F., & Pechenizkiy, M. (2009). Building classifiers with independency constraints. In IEEE international conference data mining workshops , ICDMW’09, Miami, USA.

Chatfield, K., Iatridis, K., Stahl, B. C., & Paspallis, N. (2017). Innovating responsibly in ICT for ageing: Drivers, obstacles and implementation. Sustainability, 9 (6), 971. https://doi.org/10.3390/su9060971 .

Cohen, I. G., Amarasingham, R., Shah, A., et al. (2014). The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Affairs, 33 (7), 1139–1147.

Couldry, N., & Powell, A. (2014). Big Data from the bottom up. Big Data and Society, 1 (2), 205395171453927. https://doi.org/10.1177/2053951714539277

Crawford, K., Gray, M. L., & Miltner, K. (2014). Big data| critiquing big data: Politics, ethics, epistemology | special section introduction. International Journal of Communication, 8, 10.

Cuquet, M., & Fensel, A. (2018). The societal impact of big data: A research roadmap for Europe. Technology in Society, 54, 74–86.

Danna, A., & Gandy, O. H., Jr. (2002). All that glitters is not gold: Digging beneath the surface of data mining. Journal of Business Ethics, 40 (4), 373–438.

European Convention for the Protection of Human Rights and Fundamental Freedoms, pmbl., Nov. 4, 1950, 213 UNTS 221.

Herriott, E. R., & Firestone, W. (1983). Multisite qualitative policy research: Optimizing description and generalizability. Educational Researcher, 12, 14–19. https://doi.org/10.3102/0013189X012002014

Einav, L., & Levin, J. (2014). Economics in the age of big data. Science, 346 (6210), 1243089. https://doi.org/10.1126/science.1243089

Ferraggine, V. E., Doorn, J. H., & Rivera, L. C. (2009). Handbook of research on innovations in database technologies and applications: Current and future trends (pp. 1–1124). IGI Global.

Fothergill, B. T., Knight, W., Stahl, B. C., & Ulnicane, I. (2019). Responsible data governance of neuroscience big data. Frontiers in Neuroinformatics, 13 . https://doi.org/10.3389/fninf.2019.00028

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61 (4), 5–14.

Harari, Y. N. (2017). Homo deus: A brief history of tomorrow (1st ed.). Vintage.


ICO. (2017). Big data, artificial intelligence, machine learning and data protection. Retrieved December 1, 2020 from Information Commissioner’s Office website: https://iconewsblog.wordpress.com/2017/03/03/ai-machine-learning-and-personal-data/ .

Ioannidis, J. P. (2013). Informed consent, big data, and the oxymoron of research that is not research. The American Journal of Bioethics., 2, 15.

Jain, P., Gyanchandani, M., & Khare, N. (2016). Big data privacy: A technological perspective and review. Journal of Big Data, 3 (1), 25.

Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33 (3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011

Jirotka, M., Grimpe, B., Stahl, B., Hartswood, M., & Eden, G. (2017). Responsible research and innovation in the digital age. Communications of the ACM, 60 (5), 62–68. https://doi.org/10.1145/3064940

Jiya, T. (2019). Ethical Implications Of Predictive Risk Intelligence. ORBIT Journal, 2 (2), 51.

Jiya, T. (2019). Ethical reflections of human brain research and smart information systems. The ORBIT Journal, 2 (2), 1–24.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 4 (16), 263–274.

Johnson, J. A. (2018). Open data, big data, and just data. In J. A. Johnson (Ed.), Toward information justice (pp. 23–49). Berlin: Springer.


Kancevičienė, N. (2019). Insurance, smart information systems and ethics: a case study. The ORBIT Journal, 2 (2), 1–27.

Keeso, A. (2014). Big data and environmental sustainability: A conversation starter.

Kuriakose, F., & Iyer, D. (2018). Human Rights in the Big Data World (SSRN Scholarly Paper No. ID 3246969). Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3246969 . Accessed 13 May 2019.

Kurzweil, R. (2006). The singularity is near . Gerald Duckworth & Co Ltd.

Latonero, M. (2018). Big data analytics and human rights. New Technologies for Human Rights Law and Practice. https://doi.org/10.1017/9781316838952.007

Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E., & Oliver, N. (2017). The tyranny of data? the bright and dark sides of data-driven decision-making for social good. In Transparent data mining for big and small data (pp. 3–24). Springer.

Livingstone, D. (2015). Transhumanism: The history of a dangerous idea . CreateSpace Independent Publishing Platform.

Macnish, K. (2018). Government surveillance and why defining privacy matters in a post-snowden world. Journal of Applied Philosophy, 35 (2), 417–432.

Macnish, K., & Inguanzo, A. (2019). Case study-customer relation management, smart information systems and ethics. The ORBIT Journal, 2 (2), 1–24.

Macnish, K., Inguanzo, A. F., & Kirichenko, A. (2019). Smart information systems in cybersecurity. ORBIT Journal, 2 (2), 15.

Mai, J. E. (2016). Big data privacy: The datafication of personal information. The Information Society, 32 (3), 192–199.

Manson, N. C., & O’Neill, O. (2007). Rethinking informed consent in bioethics . Cambridge University Press.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3 (2), 2053951716679679.

Meeker, W. Q., & Hong, Y. (2014). Reliability meets big data: Opportunities and challenges. Quality Engineering, 26(1), 102–116.

Newman, N. (2013). The costs of lost privacy: Consumer harm and rising economic inequality in the age of google (SSRN Scholarly Paper No. ID 2310146). Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=2310146 . Accessed 10 May 2019.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy . Crown Publishers.

Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: implications for health systems. Journal of global health, 9 (2).

Pellé, S., & Reber, B. (2015). Responsible innovation in the light of moral responsibility. Journal on Chain and Network Science, 15 (2), 107–117. https://doi.org/10.3920/JCNS2014.x017

Portmess, L., & Tower, S. (2015). Data barns, ambient intelligence and cloud computing: The tacit epistemology and linguistic representation of Big Data. Ethics and Information Technology, 17 (1), 1–9. https://doi.org/10.1007/s10676-014-9357-2

Ryan, M. (2019). Ethics of public use of AI and big data. ORBIT Journal, 2 (2), 15.

Ryan, M. (2019). Ethics of using AI and big data in agriculture: The case of a large agriculture multinational. The ORBIT Journal, 2 (2), 1–27.

Ryan, M., & Gregory, A. (2019). Ethics of using smart city AI and big data: The case of four large European cities. The ORBIT Journal, 2 (2), 1–36.

Sadowski, J., & Pasquale, F. A. (2015). The spectrum of control: A social theory of the smart city. First Monday, 20 (7), 16.

Schradie, J. (2017). Big data is too small: Research implications of class inequality for online data collection. In D. June & P. Andrea (Eds.), Media and class: TV, film and digital culture . Abingdon: Taylor and Francis.

Taylor, L. (2017). ‘What is data justice? The case for connecting digital rights and freedoms globally’ In Big data and society (pp. 1–14). https://doi.org/10.1177/2053951717736335 .

Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. The Northwestern Journal of Technology and Intellectual Property, 11, 10.

Tene, O., & Polonetsky, J. (2013). A theory of creepy: technology, privacy and shifting social norms. Yale JL and Technology, 16, 59.

Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1 (1), 2–14.

Voinea, C., & Uszkai, R. (n.d.). An assessment of algorithmic accountability methods.

Walsham, G. (1995). Interpretive case studies in IS research: nature and method. European Journal of Information Systems, 4 (2), 74–81.

Wallace, N., & Castro, D. (2018). The impact of the EU’s new data protection regulation on AI. Center for Data Innovation.


Walsham, G. (2006). Doing interpretive research. European Journal of Information Systems, 15 (3), 320–330.

Wheeler, G. (2016). Machine epistemology and big data. In L. McIntyre & A. Rosenburg (Eds.), Routledge Companion to Philosophy of Social Science . Routledge.

Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: A roadmap for research. https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf .

Wolf, B. (2015). Big data, small freedom? Radical Philosophy. https://www.radicalphilosophy.com/commentary/big-data-small-freedom. Accessed 13 May 2019.

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). SAGE.

Yin, R. K. (2015). Qualitative research from start to finish . Guilford Publications.

Zwitter, A. (2014). Big data ethics. Big Data and Society, 1 (2), 51.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization (April 4, 2015). Journal of Information Technology, 2015 (30), 75–89. https://doi.org/10.1057/jit.2015.5

Download references

Acknowledgements

This SHERPA Project has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 786641. The author(s) acknowledge the contribution of the consortium to the development and design of the case study approach.

Author information

Authors and affiliations

Wageningen Economic Research, Wageningen University and Research, Wageningen, The Netherlands

Mark Ryan

UCLan Cyprus, Larnaka, Cyprus

Josephina Antoniou

De Montfort University, Leicester, UK

Laurence Brooks & Bernd Stahl

Northampton University, Northampton, UK

Tilimbe Jiya

The University of Twente, Enschede, The Netherlands

Kevin Macnish


Corresponding author

Correspondence to Mark Ryan .

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1: Desk Research Questions


In which sector is the organisation located (e.g. industry, government, NGO, etc.)?

What is the name of the organisation?

What is the geographic scope of the organisation?

What is the name of the interviewee?

What is the interviewee’s role within the organisation?

Appendix 2: Interview Research Questions


What involvement has the interviewee had with BD + AI within the organisation?

What type of BD + AI is the organisation using? (e.g. IBM Watson, Google Deepmind)

What is the field of application of the BD + AI? (e.g. administration, healthcare, retail)

Does the BD + AI work as intended or are there problems with its operation?

What are the innovative elements introduced by the BD + AI? (e.g. what has the technology enabled within the organisation?)

What is the level of maturity of the BD + AI? (i.e. has the technology been used for long at the organisation? Is it a recent development or an established approach?)

How does the BD + AI interact with other technologies within the organisation?

What are the parameters/inputs used to inform the BD + AI? (e.g. which sorts of data are input, how is the data understood within the algorithm?) Does the BD + AI collect and/or use data which identifies or can be used to identify a living person (personal data)? Does the BD + AI collect personal data without the consent of the person to whom those data relate?

What are the principles informing the algorithm used in the BD + AI? (e.g. does the algorithm assume that people walk in similar ways, does it assume that loitering involves not moving outside a particular radius in a particular time frame?) Does the BD + AI classify people into groups? If so, how are these groups determined? Does the BD + AI identify abnormal behaviour? If so, what is abnormal behaviour to the BD + AI?

Are there policies in place governing the use of the BD + AI?

How transparent is the technology to administrators and to users within the organisation?

Who are the stakeholders in the organisation?

What has been the impact of the BD + AI on stakeholders?

How transparent is the technology to people outside the organisation?

Are those stakeholders engaged with the BD + AI? (e.g. are those affected aware of the BD + AI, do they have any say in its operation?) If so, what is the nature of this engagement? (focus groups, feedback, etc.)

In what way are stakeholders impacted by the BD + AI? (e.g. what is the societal impact: are there issues of inequality, fairness, safety, filter bubbles, etc.?)

What are the costs of using the BD + AI to stakeholders? (e.g. potential loss of privacy, loss of potential to sell information, potential loss of reputation)

What is the expected longevity of this impact? (e.g. is this expected to be temporary or long-term?)


Appendix 3: Checklist of Ethical Issues

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Ryan, M., Antoniou, J., Brooks, L. et al. Research and Practice of AI Ethics: A Case Study Approach Juxtaposing Academic Discourse with Organisational Reality. Sci Eng Ethics 27 , 16 (2021). https://doi.org/10.1007/s11948-021-00293-x


Received : 26 August 2019

Accepted : 10 February 2021

Published : 08 March 2021

DOI : https://doi.org/10.1007/s11948-021-00293-x


Keywords
  • Smart information systems
  • Big data analytics
  • Artificial intelligence ethics
  • Multiple-case study analysis
  • Philosophy of technology


Princeton Dialogues on AI and Ethics

Princeton University

Case Studies

Princeton Dialogues on AI and Ethics Case Studies

The development of artificial intelligence (AI) systems and their deployment in society gives rise to ethical dilemmas and hard questions. By situating ethical considerations in terms of real-world scenarios, case studies facilitate in-depth and multi-faceted explorations of complex philosophical questions about what is right, good and feasible. Case studies provide a useful jumping-off point for considering the various moral and practical trade-offs inherent in the study of practical ethics.

Case Study PDFs: The Princeton Dialogues on AI and Ethics has released six long-format case studies exploring issues at the intersection of AI, ethics and society. Three additional case studies are scheduled for release in spring 2019.

Methodology: The Princeton Dialogues on AI and Ethics case studies are unique in their adherence to five guiding principles: 1) empirical foundations, 2) broad accessibility, 3) interactiveness, 4) multiple viewpoints and 5) depth over brevity.


The Ethical Dilemma at the Heart of Big Tech Companies

  • Emanuel Moss
  • Jacob Metcalf


Doing the right thing can be hard when it’s bad for business.

The central challenge ethics owners in tech companies are grappling with is negotiating between external pressures to respond to ethical crises and the need to be responsive to the internal processes of their companies and the industry. On the one hand, external criticism pushes them toward challenging core business practices and priorities. On the other hand, there are pressures to establish or restore predictable processes and outcomes that serve the bottom line. This ratchets up the pressure to fit in and ratchets down the capacity to object to ethically questionable products, which makes it all the more difficult to distinguish between success and failure: moral victories can look like punishment while ethically questionable products earn big bonuses. The tensions that arise from this must be worked through, with one eye on process, but also with the other eye squarely focused on outcomes for the broader society.

If it seems like every week there’s a new scandal about ethics and the tech industry, it’s not your imagination. Even as the tech industry is trying to establish concrete practices and institutions around tech ethics, hard lessons are being learned about the wide gap between the practices of “doing ethics” and what people think of as “ethical”. This helps explain, in part, why it raises eyebrows when Google dissolves its short-lived AI ethics advisory board in the face of public outcry about including a controversial alumnus of the Heritage Foundation on it, or when organized pressure from Google’s engineering staff results in the cancellation of military contracts.


  • Emanuel Moss is an ethnographic researcher specializing in the social dimensions of AI systems at the Data & Society Research Institute and a PhD candidate in Anthropology at the CUNY Graduate Center.
  • Jacob Metcalf is a technology ethics researcher specializing in data analytics and artificial intelligence at Data & Society Research Institute and a consultant at Ethical Resolve.



AI Ethics Case Studies & AI Incident Registries

AIAAIC Repository: AI, algorithmic and automation incidents collected, dissected, examined, and divulged

AI incident database

AI litigation database

Algorithm Tips: Resources and leads for investigating algorithms in society

Awful AI: Curated list to track current scary usages of AI

Berkeley Haas Center for Equity, Gender, and Leadership: Bias in AI Examples Tracker

CDEI: Review into bias in algorithmic decision-making

Digital Europe: Case Studies on Artificial Intelligence

Eticas Foundation: Observatory of Algorithms with Social Impact

Fiesler, Casey & Garrett, Natalie & Beard, Nate. (2020). What Do We Teach When We Teach Tech Ethics?: A Syllabi Analysis.

Garrett, Natalie & Beard, Nate & Fiesler, Casey. (2020). More Than "If Time Allows": The Role of Ethics in AI Education.

Jeffrey Saltz, Michael Skirpan, Casey Fiesler, Micha Gorelick, Tom Yeh, Robert Heckman, Neil Dewar, and Nathan Beard. 2019. Integrating Ethics within Machine Learning Courses. ACM Trans. Comput. Educ. 19, 4, Article 32 (August 2019).

Harvard University: Justice case studies with Michael Sandel

IEEE - ECPAIS: Use Case - Criteria for Addressing Ethical Challenges in Transparency, Accountability, and Privacy of Contact Tracing - Draft

Illinois Institute of Technology - Center for the Study of Ethics in the Professions: Ethics Case Studies library, Case Study Collection

MIT Media Lab: Moral Machine

National Center for Case Study Teaching in Science, University at Buffalo: Case Collection

Santa Clara University, Markkula Center for Applied Ethics: Technology Ethics Cases

Open Roboethics Institute: Scenario-based cases

Peltarion: Deep Learning Opportunities and Best Practice Report

Princeton University, Dialogues on AI & Ethics: Case study PDFs

"Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits": A continually-updated list of studies from the CSCW 2021 paper

Project Sherpa: Case study future scenarios

University of Washington Tech Policy Lab: Designing Tech Policy: Instructional Case Studies for Technologists and Policymakers

Case Study: The Distilling of a Biased Algorithmic Decision System through a Business Lens

McCombs School of Business



Case Study

Cyber Harassment

After a student defames a middle school teacher on social media, the teacher confronts the student in class and posts a video of the confrontation online.


In many ways, social media platforms have created great benefits for our societies by expanding and diversifying the ways people communicate with each other, and yet these platforms also have the power to cause harm. Posting hurtful messages about other people is a form of harassment known as cyberbullying. Some acts of cyberbullying may not only be considered slanderous, but also lead to serious consequences. In 2010, Rutgers University student Tyler Clementi jumped to his death a few days after his roommate used a webcam to observe and tweet about Tyler’s sexual encounter with another man. Jane Clementi, Tyler’s mother, stated:

“In this digital world, we need to teach our youngsters that their actions have consequences, that their words have real power to hurt or to help. They must be encouraged to choose to build people up and not tear them down.”

In 2013, Idalia Hernández Ramos, a middle school teacher in Mexico, was a victim of cyber harassment. After discovering that one of her students had tweeted that the teacher was a “bitch” and a “whore,” Hernández confronted the girl during a lesson on social media etiquette. When Hernández asked why she would post such hurtful messages that could harm the teacher’s reputation, the student meekly replied that she was upset at the time. The teacher responded that she was very upset by the student’s actions. Demanding a public apology in front of the class, Hernández stated that she would not allow “young brats” to call her those names. Hernández uploaded a video of this confrontation online, attracting much attention.

While Hernández was subject to cyber harassment, some felt she went too far by confronting the student in the classroom and posting the video for the public to see, raising concerns over the privacy and rights of the student. Sameer Hinduja, who writes for the Cyberbullying Research Center, notes, “We do need to remain gracious and understanding towards teens when they demonstrate immaturity.” Confronting instances of a teenager venting her anger may infringe upon her basic rights to freedom of speech and expression. Yet, as Hinduja explains, teacher and student were both perpetrators and victims of cyber harassment. All the concerns of both parties must be considered and, as Hinduja wrote, “The worth of one’s dignity should not be on a sliding scale depending on how old you are.”

Discussion Questions

1. In trying to teach the student a lesson about taking responsibility for her actions, did the teacher go too far and become a bully? Why or why not? Does she deserve to be fired for her actions?

2. What punishment does the student deserve? Why?

3. Who is the victim in this case? The teacher or the student? Was one victimized more than the other? Explain.

4. Do victims have the right to defend themselves against bullies? What if they go through the proper channels to report bullying and it doesn’t stop?

5. How should compassion play a role in judging other’s actions?

6. How are factors like age and gender used to “excuse” unethical behavior? (ie. “Boys will be boys” or “She’s too young/old to understand that what she did is wrong”) Can you think of any other factors that are sometimes used to excuse unethical behavior?

7. How is cyberbullying similar or different from face-to-face bullying? Is one more harmful than the other? Explain.

8. Do you know anyone who has been the victim of cyber-bullying? What types of harm did this person experience?

Related Videos

Causing Harm

Causing harm explores the types of harm that may be caused to people or groups and the potential reasons we may have for justifying these harms.

Bibliography

Teacher suspended after giving student a twitter lesson http://www.cnn.com/2013/09/12/world/americas/mexico-teacher-twitter/index.html

Pros and Cons of Social Media in the Classroom http://campustechnology.com/Articles/2012/01/19/Pros-and-Cons-of-Social-Media-in-the-Classroom.aspx?Page=1

How to Use Twitter in the Classroom http://thenextweb.com/twitter/2011/06/23/how-to-use-twitter-in-the-classroom/

Twitter is Turning Into a Cyberbullying Playground http://www.takepart.com/article/2012/08/08/twitter-turning-cyberbullying-playground

Can Social Media and School Policies be “Friends”? http://www.ascd.org/publications/newsletters/policy-priorities/vol17/num04/Can-Social-Media-and-School-Policies-be-%C2%A3Friends%C2%A3%C2%A2.aspx

What Are the Free Expression Rights of Students In Public Schools Under the First Amendment? http://www.firstamendmentschools.org/freedoms/faq.aspx?id=12991

Teacher Shames Student in Classroom After Student Bullies Teacher on Twitter http://cyberbullying.us/teacher-shames-student-in-classroom-after-student-bullies-teacher-on-twitter/

  • 23 Apr 2024
  • Cold Call Podcast

Amazon in Seattle: The Role of Business in Causing and Solving a Housing Crisis

In 2020, Amazon partnered with a nonprofit called Mary’s Place and used some of its own resources to build a shelter for women and families experiencing homelessness on its campus in Seattle. Yet critics argued that Amazon’s apparent charity was misplaced and that the company was actually making the problem worse. Paul Healy and Debora Spar explore the role business plays in addressing homelessness in the case “Hitting Home: Amazon and Mary’s Place.”

  • 15 Apr 2024

Struggling With a Big Management Decision? Start by Asking What Really Matters

Leaders must face hard choices, from cutting budgets to adopting growth strategies. To make the right call, they should start by following their own “true moral compass,” says Joseph Badaracco.

  • 26 Mar 2024

How Do Great Leaders Overcome Adversity?

In the spring of 2021, Raymond Jefferson (MBA 2000) applied for a job in President Joseph Biden’s administration. Ten years earlier, false allegations were used to force him to resign from his prior US government position as assistant secretary of labor for veterans’ employment and training in the Department of Labor. Two employees had accused him of ethical violations in hiring and procurement decisions, including pressuring subordinates into extending contracts to his alleged personal associates. The Deputy Secretary of Labor gave Jefferson four hours to resign or be terminated. Jefferson filed a federal lawsuit against the US government to clear his name, which he pursued for eight years at the expense of his entire life savings. Why, after such a traumatic and debilitating experience, would Jefferson want to pursue a career in government again? Harvard Business School Senior Lecturer Anthony Mayo explores Jefferson’s personal and professional journey from upstate New York to West Point to the Obama administration, how he faced adversity at several junctures in his life, and how resilience and vulnerability shaped his leadership style in the case, "Raymond Jefferson: Trial by Fire."

  • 02 Jan 2024

Should Businesses Take a Stand on Societal Issues?

Should businesses take a stand for or against particular societal issues? And how should leaders determine when and how to engage on these sensitive matters? Harvard Business School Senior Lecturer Hubert Joly, who led the electronics retailer Best Buy for almost a decade, discusses examples of corporate leaders who had to determine whether and how to engage with humanitarian crises, geopolitical conflict, racial justice, climate change, and more in the case, “Deciding When to Engage on Societal Issues.”

  • 12 Dec 2023

Can Sustainability Drive Innovation at Ferrari?

When Ferrari, the Italian luxury sports car manufacturer, committed to achieving carbon neutrality and to electrifying a large part of its car fleet, investors and employees applauded the new strategy. But among the company’s suppliers, the reaction was mixed. Many were nervous about how this shift would affect their bottom lines. Professor Raffaella Sadun and Ferrari CEO Benedetto Vigna discuss how Ferrari collaborated with suppliers to work toward achieving the company’s goal. They also explore how sustainability can be a catalyst for innovation in the case, “Ferrari: Shifting to Carbon Neutrality.” This episode was recorded live December 4, 2023 in front of a remote studio audience in the Live Online Classroom at Harvard Business School.

  • 11 Dec 2023
  • Research & Ideas

Doing Well by Doing Good? One Industry’s Struggle to Balance Values and Profits

Few companies wrestle with their moral mission and financial goals like those in journalism. Research by Lakshmi Ramarajan explores how a disrupted industry upholds its values even as the bottom line is at stake.

  • 27 Nov 2023

Voting Democrat or Republican? The Critical Childhood Influence That's Tough to Shake

Candidates might fixate on red, blue, or swing states, but the neighborhoods where voters spend their teen years play a key role in shaping their political outlook, says research by Vincent Pons. What do the findings mean for the upcoming US elections?

  • 21 Nov 2023

The Beauty Industry: Products for a Healthy Glow or a Compact for Harm?

Many cosmetics and skincare companies present an image of social consciousness and transformative potential, while profiting from insecurity and excluding broad swaths of people. Geoffrey Jones examines the unsightly reality of the beauty industry.

  • 09 Nov 2023

What Will It Take to Confront the Invisible Mental Health Crisis in Business?

The pressure to do more, to be more, is fueling its own silent epidemic. Lauren Cohen discusses the common misperceptions that get in the way of supporting employees' well-being, drawing on case studies about people who have been deeply affected by mental illness.

  • 07 Nov 2023

How Should Meta Be Governed for the Good of Society?

Julie Owono is executive director of Internet Sans Frontières and a member of the Oversight Board, an outside entity with the authority to make binding decisions on tricky moderation questions for Meta’s companies, including Facebook and Instagram. Harvard Business School visiting professor Jesse Shapiro and Owono break down how the Board governs Meta’s social and political power to ensure that it’s used responsibly, and discuss the Board’s impact, as an alternative to government regulation, in the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.”

  • 24 Oct 2023

From P.T. Barnum to Mary Kay: Lessons From 5 Leaders Who Changed the World

What do Steve Jobs and Sarah Breedlove have in common? Through a series of case studies, Robert Simons explores the unique qualities of visionary leaders and what today's managers can learn from their journeys.

  • 03 Oct 2023
  • Research Event

Build the Life You Want: Arthur Brooks and Oprah Winfrey Share Happiness Tips

"Happiness is not a destination. It's a direction." In this video, Arthur C. Brooks and Oprah Winfrey reflect on mistakes, emotions, and contentment, sharing lessons from their new book.

  • 12 Sep 2023

Successful, But Still Feel Empty? A Happiness Scholar and Oprah Have Advice for You

So many executives spend decades reaching the pinnacles of their careers only to find themselves unfulfilled at the top. In the book Build the Life You Want, Arthur Brooks and Oprah Winfrey offer high achievers a guide to becoming better leaders—of their lives.

  • 10 Jul 2023
  • In Practice

The Harvard Business School Faculty Summer Reader 2023

Need a book recommendation for your summer vacation? HBS faculty members share their reading lists, which include titles that explore spirituality, design, suspense, and more.

  • 01 Jun 2023

A Nike Executive Hid His Criminal Past to Turn His Life Around. What If He Didn't Have To?

Larry Miller committed murder as a teenager, but earned a college degree while serving time and set out to start a new life. Still, he had to conceal his record to get a job that would ultimately take him to the heights of sports marketing. A case study by Francesca Gino, Hise Gibson, and Frances Frei shows the barriers that formerly incarcerated Black men are up against and the potential talent they could bring to business.

  • 04 Apr 2023

Two Centuries of Business Leaders Who Took a Stand on Social Issues

Executives going back to George Cadbury and J. N. Tata have been trying to improve life for their workers and communities, according to the book Deeply Responsible Business: A Global History of Values-Driven Leadership by Geoffrey Jones. He highlights three practices that deeply responsible companies share.

  • 14 Mar 2023

Can AI and Machine Learning Help Park Rangers Prevent Poaching?

Globally, there are too few park rangers to prevent poaching and the illegal cross-border trade in wildlife. In response, a coalition of conservation organizations created the Spatial Monitoring and Reporting Tool (SMART), which turns historical data into geospatial mapping tools that enable more efficient deployment of rangers. SMART demonstrated significant improvements in patrol coverage, with some observed reductions in poaching. Then a new predictive analytics tool, the Protection Assistant for Wildlife Security (PAWS), was created to use artificial intelligence (AI) and machine learning (ML) to try to predict where poachers would be likely to strike. Jonathan Palmer, Executive Director of Conservation Technology for the Wildlife Conservation Society, already had a good data analytics tool to help park rangers manage their patrols. Would adding an AI- and ML-based tool improve outcomes or introduce new problems? Harvard Business School senior lecturer Brian Trelstad discusses the importance of focusing on the use case when determining the value of adding a complex technology solution in his case, “SMART: AI and Machine Learning for Wildlife Conservation.”

  • 14 Feb 2023

Does It Pay to Be a Whistleblower?

In 2013, soon after the US Securities and Exchange Commission (SEC) had started a massive whistleblowing program with the potential for large monetary rewards, two employees of a US bank’s asset management business debated whether to blow the whistle on their employer after completing an internal review that revealed undisclosed conflicts of interest. The bank’s asset management business disproportionately invested clients’ money in its own mutual funds over funds managed by other banks, letting it collect additional fees—and the bank had not disclosed this conflict of interest to clients. Both employees agreed that failing to disclose the conflict was a problem, but beyond that, they saw the situation very differently. One employee, Neel, perceived the internal review as a good-faith effort by senior management to identify and address the problem. The other, Akash, thought that the entire business model was problematic, even with a disclosure, and believed that the bank may have even broken the law. Should they escalate the issue internally or report their findings to the SEC? Harvard Business School associate professor Jonas Heese discusses the potential risks and rewards of whistleblowing in his case, “Conflicts of Interest at Uptown Bank.”

  • 17 Jan 2023

Good Companies Commit Crimes, But Great Leaders Can Prevent Them

It's time for leaders to go beyond "check the box" compliance programs. Through corporate cases involving Walmart, Wells Fargo, and others, Eugene Soltes explores the thorny legal issues executives today must navigate in his book Corporate Criminal Investigations and Prosecutions.

  • 29 Nov 2022

How Will Gamers and Investors Respond to Microsoft’s Acquisition of Activision Blizzard?

In January 2022, Microsoft announced its acquisition of the video game company Activision Blizzard for $68.7 billion. The deal would make Microsoft the world’s third largest video game company, but it also exposes the company to several risks. First, the all-cash deal would require Microsoft to use a large portion of its cash reserves. Second, the acquisition was announced as Activision Blizzard faced gender pay disparity and sexual harassment allegations. That opened Microsoft up to potential reputational damage, employee turnover, and lost sales. Do the potential benefits of the acquisition outweigh the risks for Microsoft and its shareholders? Harvard Business School associate professor Joseph Pacelli discusses the ongoing controversies around the merger and how gamers and investors have responded in the case, “Call of Fiduciary Duty: Microsoft Acquires Activision Blizzard.”

Society of Professional Journalists

Ethics Case Studies

The SPJ Code of Ethics is voluntarily embraced by thousands of journalists, regardless of place or platform, and is widely used in newsrooms and classrooms as a guide for ethical behavior. The code is intended not as a set of "rules" but as a resource for ethical decision-making. It is not — nor can it be under the First Amendment — legally enforceable.

For journalism instructors and others interested in presenting ethical dilemmas for debate and discussion, SPJ has a useful resource. We've been collecting a number of case studies for use in workshops. The Ethics AdviceLine operated by the Chicago Headline Club and Loyola University also has provided a number of examples. There seems to be no shortage of ethical issues in journalism these days. Please feel free to use these examples in your classes, speeches, columns, workshops or other modes of communication.

Kobe Bryant’s Past: A Tweet Too Soon? On January 26, 2020, Kobe Bryant died at the age of 41 in a helicopter crash in the Los Angeles area. While most social media reaction praised Bryant after his death, within a few hours after the story broke, Felicia Sonmez, a reporter for The Washington Post, tweeted a link to a 2003 article about the allegations of sexual assault against Bryant. The question: Is there a limit to truth-telling? How long (if at all) should a journalist wait after a person’s death before resurfacing sensitive information about their past?

A controversial apology After photographs of a speech and protests at Northwestern University appeared on the university newspaper’s website, some of the participants contacted the newspaper to complain. It became a “firestorm” — first from students who felt victimized and then, after the newspaper apologized, from journalists and others who accused the newspaper of apologizing for simply doing its job. The question: Is an apology the appropriate response? Is there something else the student journalists should have done?

Using the ‘Holocaust’ Metaphor People for the Ethical Treatment of Animals, or PETA, is a nonprofit animal rights organization known for its controversial approach to communications and public relations. In 2003, PETA launched a new campaign, “Holocaust on Your Plate,” that compared the slaughter of animals for human use to the murder of 6 million Jews in World War II. The question: Is “Holocaust on Your Plate” ethically wrong or a truthful comparison?

Aaargh! Pirates! (and the Press) When collections of songs, whether studio recordings from an upcoming album or merely unreleased demos, leak online, music outlets cover the leak with a breaking story or a blog post. But they don’t stop there: Rolling Stone and Billboard often also include a link within the story to listen to the songs that were leaked. The question: If Billboard and Rolling Stone are essentially pointing readers in the right direction, to the leaked music, are they not helping the Internet community find the material and consume it?

Reigning on the Parade Frank Whelan, a features writer who also wrote a history column for the Allentown, Pennsylvania, Morning Call, took part in a gay rights parade in June 2006 and stirred up a classic ethical dilemma. The situation raises any number of questions about what is and isn’t a conflict of interest. The question: What should the “consequences” be for Frank Whelan?

Controversy over a Concert Three former members of the Eagles rock band came to Denver during the 2004 election campaign to raise money for a U.S. Senate candidate, Democrat Ken Salazar. John Temple, editor and publisher of the Rocky Mountain News, advised his reporters not to go to the fundraising concerts. The question: Is it fair to ask newspaper staffers — or employees at other news media, for that matter — not to attend events that may have a political purpose? Are the rules different for different jobs at the news outlet?

Deep Throat, and His Motive The Watergate story is considered perhaps American journalism’s defining accomplishment. Two intrepid young reporters for The Washington Post, carefully verifying and expanding upon information given to them by sources they went to great lengths to protect, revealed brutally damaging information about one of the most powerful figures on Earth, the American president. The question: Is protecting a source more important than revealing all the relevant information about a news story?

When Sources Won’t Talk The SPJ Code of Ethics offers guidance on at least three aspects of this dilemma. “Test the accuracy of information from all sources and exercise care to avoid inadvertent error.” A single source was not sufficient to confirm this information. The question: How could the editors maintain credibility and remain fair to both sides yet find solid sources for a news tip with inflammatory allegations?

A Suspect “Confession” John Mark Karr, 41, was arrested in mid-August in Bangkok, Thailand, at the request of Colorado and U.S. officials. During questioning, he confessed to the murder of JonBenet Ramsey. Karr was arrested after Michael Tracey, a journalism professor at the University of Colorado, alerted authorities to information he had drawn from e-mails Karr had sent him over the past four years. The question: Do you break a confidence with your source if you think it can solve a murder — or protect children half a world away?

Who’s the “Predator”? “To Catch a Predator,” the ratings-grabbing series on NBC’s Dateline, appeared to catch on with the public. But it also raised serious ethical questions for journalists. The question: If your newspaper or television station were approached by Perverted Justice to participate in a “sting” designed to identify real and potential perverts, should you go along, or say, “No thanks”? Was NBC reporting the news or creating it?

The Media’s Foul Ball The Chicago Cubs in 2003 were five outs from advancing to the World Series for the first time since 1945 when a 26-year-old fan tried to grab a foul ball, preventing outfielder Moises Alou from catching it. The hapless fan's identity was unknown. But he became recognizable through televised replays as the young baby-faced man in glasses, a Cubs baseball cap and earphones who bobbled the ball and was blamed for costing the Cubs a trip to the World Series. The question: Given the potential danger to the man, should he be identified by the media?

Publishing Drunk Drivers’ Photos When readers of The Anderson News picked up the Dec. 31, 1997, issue of the newspaper, stripped across the top of the front page was a New Year’s greeting and a warning. “HAVE A HAPPY NEW YEAR,” the banner read. “But please don’t drink and drive and risk having your picture published.” Readers were referred to the editorial page, where editor White explained that starting in January 1998 the newspaper would publish photographs of all persons convicted of drunken driving in Anderson County. The question: Is this an appropriate policy for a newspaper?

Naming Victims of Sex Crimes On January 8, 2007, 13-year-old Ben Ownby disappeared while walking home from school in Beaufort, Missouri. A tip from a school friend led police on a frantic four-day search that ended unusually happily: the police discovered not only Ben, but another boy as well—15-year-old Shawn Hornbeck, who, four years earlier, had disappeared while riding his bike at the age of 11. Media scrutiny on Shawn’s years of captivity became intense. The question: Should children who are thought to be the victims of sexual abuse ever be named in the media? What should be done about the continued use of names of kidnap victims who are later found to be sexual assault victims? Should use of their names be discontinued at that point?

A Self-Serving Leak San Francisco Chronicle reporters Mark Fainaru-Wada and Lance Williams were widely praised for their stories about sports figures involved with steroids. They turned their investigation into a very successful book, Game of Shadows. And they won the admiration of fellow journalists because they were willing to go to prison to protect the source who had leaked testimony to them from the grand jury investigating the BALCO sports-and-steroids case. Their source, however, was not quite so noble. The question: Should the two reporters have continued to protect this key source even after he admitted to lying? Should they have promised confidentiality in the first place?

The Times and Jayson Blair Jayson Blair advanced quickly during his tenure at The New York Times, where he was hired as a full-time staff writer after his internship there and others at The Boston Globe and The Washington Post. Even accusations of inaccuracy and a series of corrections to his reports on Washington, D.C.-area sniper attacks did not stop Blair from moving on to national coverage of the war in Iraq. But when suspicions arose over his reports on military families, an internal review found that he was fabricating material and communicating with editors from his Brooklyn apartment — or within the Times building — rather than from outside New York. The question: How does the Times investigate problems and correct policies that allowed the Blair scandal to happen?

Cooperating with the Government It began on Jan. 18, 2005, and ended two weeks later: the longest prison standoff in recent U.S. history. The question: Should your media outlet go along with the state’s request not to release the information?

Offensive Images Caricatures of the Prophet Muhammad didn’t cause much of a stir when they were first published in September 2005. But when they were republished in early 2006, after Muslim leaders called attention to the 12 images, it set off rioting throughout the Islamic world. Embassies were burned; people were killed. After the rioting and killing started, it was difficult to ignore the cartoons. The question: Do we publish the cartoons or not?

The Sting Perverted-Justice.com is a Web site that can be very convenient for a reporter looking for a good story. But the tactic raises some ethical questions. The Web site scans Internet chat rooms looking for men who can be lured into sexually explicit conversations with invented underage correspondents. Perverted-Justice posts the men’s pictures on its Web site. Is it ethically defensible to employ such a sting tactic? Should you buy into the agenda of an advocacy group — even if it’s an agenda as worthy as this one?

A Media-Savvy Killer Since his first murder in 1974, the “BTK” killer — his own acronym, for “bind, torture, kill” — has sent the Wichita Eagle four letters and one poem. How should a newspaper, or other media outlet, handle communications from someone who says he’s guilty of multiple sensational crimes? And how much should it cooperate with law enforcement authorities?

A Congressman’s Past The (Portland) Oregonian learned that a Democratic member of the U.S. Congress, up for re-election to his fourth term, had been accused by an ex-girlfriend of a sexual assault some 28 years previously. But criminal charges never were filed, and neither the congressman, David Wu, nor his accuser wanted to discuss the case now, only weeks before the 2004 election. The question: Should The Oregonian publish this story?

Using this Process to Craft a Policy It used to be that a reporter would absolutely NEVER let a source check out a story before it appeared. But there has been growing acceptance of the idea that it’s more important to be accurate than to be independent. Do we let sources see what we’re planning to write? And if we do, when?

Post Office scandal exposes ethical dilemmas of general counsel

Rafe Uddin in London

Post Office executives played a leading role in publicly defending their organisation over the hundreds of prosecutions it brought against the sub-postmasters who ran its branches, based on the flawed Horizon accounting system.

But, behind the scenes, it was in-house lawyers who took on the task of briefing senior executives on the robustness of its Horizon software. They were also responsible for commissioning relevant audits and setting out the UK state-owned organisation’s approach to litigation. 

More than 900 people were convicted of a range of offences, including theft and false accounting, in cases involving data from Fujitsu’s flawed Horizon system, which was introduced in 1999. More than 700 prosecutions were brought by the Post Office itself.

However, it was another lawyer — James Hartley, partner and head of dispute resolution at law firm Freeths — who represented 555 of the sub-postmasters in a landmark 2019 High Court case in which the extent of the IT scandal emerged. The judge ruled that several “bugs, errors and defects” meant there was a “material risk” that the Horizon system was to blame for faulty data used in the Post Office prosecutions.

“It’s quite a complex web of obligation, responsibility and culpability,” says Hartley, reflecting on the reach of the affair into the legal profession. “Somewhere along the way, lawyers have stepped over the red line.”

Now, a public inquiry into the scandal is gaining momentum as it takes evidence from senior Post Office executives, government ministers and figures from Fujitsu, ahead of its conclusion this summer.

In the coming months, the inquiry will hear testimony from several former general counsel at the Post Office, each of whom will give evidence against the backdrop of a debate about whether the role of an in-house lawyer needs to be more strictly regulated.

Susan Crichton, the Post Office’s general counsel between 2010 and 2013, will appear today at Aldwych House in London to respond to claims that, under her watch, the business brought prosecutions against sub-postmasters despite concerns surrounding Horizon.

Audio recordings shared with the inquiry, of conversations between Crichton and forensic accountants Second Sight in 2013, suggest she briefed the company’s chief executive that claims made by accused sub-postmasters about the Horizon system were, in fact, true.

Their discussions include the detail, long denied by the Post Office, that third parties could access systems remotely and alter transaction data. Sub-postmasters successfully argued in court that they could not be held solely responsible for any shortfalls because of this third-party access.

Crichton’s evidence is also expected to spell out some of the difficulties general counsel faced in raising concerns, particularly when executives failed to act in response.

Chris Aujard, Crichton’s successor, is scheduled to appear at the inquiry tomorrow. Jane MacLeod, who succeeded Aujard, is due to appear in June, shortly after current counsel Ben Foat takes the stand.

Contemporaneous documents suggest that there may have been opportunities for the Post Office to prevent litigation.

The Post Office’s general counsel were involved in commissioning half a dozen reports and reviews by external auditors and consultants, including BAE Systems, Deloitte, EY, and Second Sight, in the decade leading up to the 2019 High Court case.

Some of these reports found faults with internal systems and how they were managed. In 2013, external lawyers warned the Post Office that the business risked breaching its obligations as a prosecutor through improper practice if it chose to shred documents in order to prevent their disclosure.

Richard Moorhead, a professor of law and professional ethics at the University of Exeter, says matters should be reported “up the ladder” and that general counsel need to act as a “moral compass” within an organisation. “They need to speak up if they think things are being done which are improper and ensure the client hears those things,” he says.

Moorhead, who sits on the government-appointed Horizon Compensation Advisory Board, is a vocal critic of the lawyers involved in the Post Office Horizon scandal.

He adds that there were occasions when in-house lawyers at the Post Office should have sought to “blow the whistle” once it became obvious that errors in the Horizon system could account for shortfalls.

General counsel play a prominent role in shaping the legal strategy of a company or organisation and advising executives on the best approach to compliance and handling legal risk. But there is sometimes tension between serving the business and acting in the public’s interest. 

In the aftermath of the Enron and WorldCom fraud scandals in the early 2000s, US regulators introduced new securities laws that required general counsel to report adverse information to audit committees, directors and other officials when senior leadership was unresponsive.

Brian Cheffins, a professor of corporate law at the University of Cambridge, says the new rules produced a playbook for in-house lawyers who had been “stonewalled internally”, particularly as these individuals could find themselves in “deep water” when misgovernance became evident.

But Cheffins is opposed to plans to set out general counsel’s obligations formally, and warns that doing so risks duplicating duties that already exist elsewhere.

General counsel in the UK operate under the same rules as any solicitor or barrister advising a client, rules that require them to act with integrity and to ensure that senior figures are briefed on unpalatable information. The Horizon affair has reminded lawyers of their duties when advising executives.

Hartley says: “In-house lawyers need to recalibrate their thinking on where that red line is so they know when to turn around to the person they’re advising and say, ‘No, we cannot do that’.”

Post Office general counsel: in the spotlight

Susan Crichton: In 2012-2013 she was involved in instructing Second Sight to conduct an independent investigation into Horizon. The forensic accountants raised concerns, but these were not actioned by the business despite executives being briefed. Crichton left the Post Office to take on a similar role at TSB Bank in 2013; she retired in 2018.

Chris Aujard: After becoming general counsel in 2013, he was tasked with winding down a mediation scheme set up for affected sub-postmasters and removing Second Sight from its role investigating the Post Office. Meeting minutes from 2014 show he was present when executives discussed setting aside £1mn in “token payments” to mitigate any reputational damage.

Jane MacLeod: General counsel when 555 sub-postmasters brought suit against the Post Office, MacLeod was responsible for overseeing the business’s initial response. The public inquiry will explore her handling of disclosure and the response to litigation when she gives evidence in June. She resigned from the Post Office in 2019.

Ben Foat: Appointed general counsel in 2019, Foat previously served as the business’s legal director. He appeared at the inquiry in the middle of last year after widespread disclosure failures resulted in weeks of delays to evidence. Sir Wyn Williams, chair of the inquiry, has since threatened officials with criminal penalties if such problems recur.

A case study to engage students in the research design and ethics of high-throughput metagenomics

  • PMID: 38661414
  • PMCID: PMC11044643
  • DOI: 10.1128/jmbe.00074-23

Case studies present students with an opportunity to learn and apply course content through problem solving and critical thinking. Supported by the High-throughput Discovery Science & Inquiry-based Case Studies for Today's Students (HITS) Research Coordination Network, our interdisciplinary team designed, implemented, and assessed two case study modules entitled "You Are What You Eat." Collectively, the case study modules present students with an opportunity to engage in experimental research design and the ethical considerations regarding microbiome research and society. In this manuscript, we provide instructors with tools for adopting or adapting the research design and/or the ethics modules. To date, the case has been implemented using two modalities (remote and in-person) in three courses (Microbiology, Physiology, and Neuroscience), engaging over 200 undergraduate students. Our assessment data demonstrate gains in content knowledge and students' perception of learning following case study implementation. Furthermore, when reflecting on our experiences and student feedback, we identified ways in which the case study could be modified for different settings. In this way, we hope that the "You Are What You Eat" case study modules can be implemented widely by instructors to promote problem solving and critical thinking in the traditional classroom or laboratory setting when discussing next-generation sequencing and/or metagenomics research.

Keywords: active learning; case study; ethics; high-throughput discovery science; metagenomics; microbiome; next-generation sequencing; research design; undergraduate life science education.
