AI and Ethics: Shedding Light on the Black Box

2020 ◽  
Vol 28 ◽  
Author(s):  
Katrina Ingram

Artificial Intelligence (AI) is playing an increasingly prevalent role in our lives. Whether it's landing a job interview, getting a bank loan, or accessing a government program, organizations are using automated systems informed by AI-enabled technologies in ways that have significant consequences for people. At the same time, there is a lack of transparency around how AI technologies work and whether they are ethical, fair, or accurate. This paper examines a body of literature related to the ethical considerations surrounding the use of artificial intelligence and the role of ethical codes. It identifies and explores core issues including bias, fairness, and transparency, and looks at who is setting the agenda for AI ethics in Canada and globally. Lastly, it offers some suggestions for next steps towards a more inclusive discussion.

2019 ◽  
Vol 162 (1) ◽  
pp. 38-39
Author(s):  
Alexandra M. Arambula ◽  
Andrés M. Bur

Artificial intelligence (AI) is quickly expanding within the sphere of health care, offering the potential to enhance the efficiency of care delivery, diminish costs, and reduce diagnostic and therapeutic errors. As the field of otolaryngology also explores use of AI technology in patient care, a number of ethical questions warrant attention prior to widespread implementation of AI. This commentary poses many of these ethical questions for consideration by the otolaryngologist specifically, using the 4 pillars of medical ethics—autonomy, beneficence, nonmaleficence, and justice—as a framework, advocating both for the assistive role of AI in health care and for a shared decision-making, empathic approach to patient care.


2020 ◽  
Vol 12 (12) ◽  
pp. 226
Author(s):  
Laith T. Khrais

The advent and incorporation of technology in business have reshaped operations across industries. Notably, major technical shifts in e-commerce aim to influence customer behavior in favor of particular products and brands. Artificial intelligence (AI) has emerged as an essential tool for personalization and for customizing products to meet specific demands. This research finds that, despite the contribution of AI systems to e-commerce, their ethical soundness remains contentious, especially regarding the concept of explainability. The study used word cloud analysis, voyance analysis, and concordance analysis to gain a detailed understanding of how the idea of explainability has been used by researchers in the context of AI. Motivated by this corpus analysis, the research lays the groundwork for a unified approach to formulating Explainable Artificial Intelligence (XAI) models. XAI is a machine learning field that inspects and tries to understand how the black-box decisions of AI systems are made; it provides insight into the decision points, variables, and data used to reach a recommendation. The study suggests that, to deploy XAI systems, ML models should be improved to make them interpretable and comprehensible.
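As a rough illustration of the kind of corpus analysis the study describes, the following Python sketch computes word frequencies (the raw material of a word cloud) and a simple keyword-in-context concordance for the term "explainability". The mini-corpus and function names are hypothetical stand-ins, not the authors' actual pipeline.

import re
from collections import Counter

# Hypothetical mini-corpus standing in for the literature the study analyzed.
corpus = [
    "Explainability is central to trustworthy AI in e-commerce.",
    "Black box recommendations limit explainability and customer trust.",
    "XAI models expose the variables behind each recommendation.",
]

def tokenize(text):
    """Lowercase the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Word-frequency counts: what a word cloud visualizes by font size.
frequencies = Counter(tok for doc in corpus for tok in tokenize(doc))
print(frequencies.most_common(5))

def concordance(term, docs, window=3):
    """Return each occurrence of `term` with `window` words of context."""
    hits = []
    for doc in docs:
        tokens = tokenize(doc)
        for i, tok in enumerate(tokens):
            if tok == term:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                hits.append(f"... {left} [{term}] {right} ...")
    return hits

for line in concordance("explainability", corpus):
    print(line)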


2021 ◽  
Vol 13 (4) ◽  
pp. 1974
Author(s):  
Alfred Benedikt Brendel ◽  
Milad Mirbabaie ◽  
Tim-Benjamin Lembcke ◽  
Lennart Hofeditz

With artificial intelligence (AI) becoming increasingly capable of handling highly complex tasks, many AI-enabled products and services are granted greater autonomy in decision-making, potentially exercising diverse influences on individuals and societies. While organizations and researchers have repeatedly shown the blessings of AI for humanity, serious AI-related abuses and incidents have raised pressing ethical concerns. Consequently, researchers from different disciplines widely acknowledge the need for an ethical discourse on AI. However, managers—eager to spark ethical considerations throughout their organizations—receive limited support on how they may establish and manage AI ethics. Although research has addressed technology-related ethics in organizations, research on the ethical management of AI remains limited. Against this background, the goals of this article are to provide a starting point for research on AI-related ethical concerns and to highlight future research opportunities. We propose an ethical management of AI (EMMA) framework, focusing on three perspectives: managerial decision-making, ethical considerations, and macro- as well as micro-environmental dimensions. With the EMMA framework, we provide researchers with a starting point for addressing the management of the ethical aspects of AI.


Author(s):  
Garret Merriam

Artificial Emotional Intelligence research has focused on emotions in a limited "black box" sense, concerned only with emotions as 'inputs/outputs' for the system, disregarding the processes and structures that constitute the emotion itself. We're teaching machines to act as if they can feel emotions without the capacity to actually feel emotions. Serious moral and social problems will arise if we stick with the black box approach. As A.I.s become more integrated with our lives, humans will require more than mere emulation of emotion; we'll need them to have 'the real thing.' Moral psychology suggests emotions are necessary for moral reasoning and moral behavior. Socially, the role of 'affective computing' foreshadows the intimate ways humans will expect emotional reciprocity from their machines. Three objections are considered and responded to: that giving machines genuine emotions is (1) not possible, (2) not necessary, and (3) too dangerous.


2020 ◽  
Vol 7 (2) ◽  
pp. 205395172093670
Author(s):  
Nicole Dewandre

In The Black Box Society, Frank Pasquale develops a critique of asymmetrical power: corporations’ secrecy is highly valued by legal orders, but persons’ privacy is continually invaded by these corporations. This response proceeds in three stages. I first highlight important contributions of The Black Box Society to our understanding of political and legal relationships between persons and corporations. I then critique a key metaphor in the book (the one-way mirror, Pasquale’s image of asymmetrical surveillance), and the role of transparency and ‘watchdogging’ in its primary policy prescriptions. I then propose ‘relational selfhood’ as an important new way of theorizing interdependence in an era of artificial intelligence and Big Data, and promoting optimal policies in these spheres.


2019 ◽  
Vol 28 (01) ◽  
pp. 035-040 ◽  
Author(s):  
Craig Kuziemsky ◽  
Anthony J. Maeder ◽  
Oommen John ◽  
Shashi B. Gogia ◽  
Arindam Basu ◽  
...  

Objectives: This paper discusses the potential scope of applicability of Artificial Intelligence methods within the telehealth domain. These methods are focussed on clinical needs and provide some insight into current directions, based on reports of recent advances. Methods: Examples of telehealth innovations involving Artificial Intelligence to support or supplement remote health care delivery were identified from recent literature by the authors, on the basis of expert knowledge. Observations from the examples were synthesized to yield an overview of contemporary directions for the perceived role of Artificial Intelligence in telehealth. Results: Two major focus areas for related contemporary directions were established: first, quality improvement for existing clinical practice and service delivery, and second, the development and support of new models of care. Case studies from each focus area were chosen for illustration. Conclusion: Examples of the role of Artificial Intelligence in delivering health care remotely include tele-assessment, tele-diagnosis, tele-interactions, and tele-monitoring. Further development of the underlying algorithms and validation of the methods will be required for wider adoption. Certain key social and ethical considerations also warrant attention more generally in the health system as Artificial Intelligence-enabled telehealth becomes more commonplace.


AI & Society ◽  
2020 ◽  
Vol 35 (4) ◽  
pp. 917-926 ◽  
Author(s):  
Karl de Fine Licht ◽  
Jenny de Fine Licht

Abstract The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the "black box" of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public's perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient grounds for perceived legitimacy without producing the harms full transparency would bring.


2021 ◽  
Vol 9 ◽  
Author(s):  
Eduardo Eiji Maeda ◽  
Päivi Haapasaari ◽  
Inari Helle ◽  
Annukka Lehikoinen ◽  
Alexey Voinov ◽  
...  

Modeling is essential to modern science, and science-based policies are directly affected by the reliability of model outputs. Artificial intelligence has improved the accuracy and capability of model simulations, but often at the expense of a rational understanding of the systems involved. The lack of transparency in black box models, including those based on artificial intelligence, can potentially undermine trust in science-driven policy making. Here, we suggest that a broader discussion is needed to address the implications of black box approaches for the reliability of scientific advice used in policy making. We argue that participatory methods can bridge the gap between increasingly complex scientific methods and the people affected by their interpretations.


2021 ◽  
pp. 355-376
Author(s):  
Tjerk Timan ◽  
Charlotte van Oirsouw ◽  
Marissa Hoekstra

Abstract In recent debates around the regulation of artificial intelligence, its foundation, data, is often overlooked. For AI to succeed, and for it to become transparent, explainable, and auditable where needed, the data regulation and data governance around it must meet the highest quality standards in relation to the application domain. One of the challenges is that AI regulation might, and needs to, rely heavily on data regulation, yet data regulation is highly complex. This is both a strategic problem for Europe and a practical one: people, institutions, governments, and companies will increasingly need and want data for AI, and the two will affect each other technically, socially, and in regulatory terms. At the moment, there is an enormous disconnect between the regulation of AI, which happens mainly through ethical frameworks, and concrete data regulation. The role of data regulation is largely ignored in the AI ethics debate, Article 22 GDPR being perhaps the only exception. In this chapter, we provide an overview of current data regulations that serve as inroads to filling this gap.


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
M Afnan ◽  
Y Liu ◽  
V Conitzer ◽  
C Rudin ◽  
A Mishra ◽  
...  

Abstract Study question: What are the epistemic and ethical considerations of clinically implementing Artificial Intelligence (AI) algorithms in embryo selection?

Summary answer: AI embryo selection algorithms used to date are "black-box" models with significant epistemic and ethical issues, and there are no trials assessing their clinical effectiveness.

What is known already: The innovation of time-lapse imaging offers the potential to generate vast quantities of data for embryo assessment. Computer vision allows image data to be analysed using algorithms developed via machine learning, which learn and adapt as they are exposed to more data. Most algorithms are developed using neural networks and are uninterpretable (or "black box"). Uninterpretable models are either too complicated to understand or proprietary, in which case comprehension is impossible for outsiders. In the IVF context, these outsiders include doctors, embryologists, and patients, which raises ethical questions about their use in embryo selection.

Study design, size, duration: We performed a scoping review of articles evaluating AI for embryo selection in IVF and considered the epistemic and ethical implications of current approaches.

Participants/materials, setting, methods: We searched Medline, Embase, ClinicalTrials.gov, and the EU Clinical Trials Register for full-text papers evaluating AI for embryo selection using the following key words: artificial intelligence* OR AI OR neural network* OR machine learning OR support vector machine OR automatic classification AND IVF OR in vitro fertilisation OR embryo*, as well as relevant MeSH and Emtree terms for Medline and Embase, respectively.

Main results and the role of chance: We found no trials evaluating clinical effectiveness, either published or registered. We found efficacy studies that looked at two types of outcomes: accuracy for predicting pregnancy or live birth, and agreement with embryologist evaluation. Some algorithms were shown to broadly differentiate well between "good-" and "poor-" quality embryos, but not between embryos of similar quality, which is the clinical need. Almost universally, the AI models were opaque ("black box") in that at least some part of the process was uninterpretable. "Black box" models are problematic for epistemic and ethical reasons. Epistemic concerns include information asymmetries between algorithm developers and doctors, embryologists, and patients; the risk of biased prediction caused by known and/or unknown confounders during the training process; difficulties in real-time error checking due to limited interpretability; the economics of buying into commercial proprietary models that are brittle to variation in the treatment process; and an overall difficulty in troubleshooting. Ethical pitfalls include the risk of misrepresenting patient values; concern for the health and well-being of future children; the risk of disvaluing disability; possible societal implications; and a responsibility gap in the event of adverse events.

Limitations, reasons for caution: Our search was limited to the two main medical research databases. Although we checked article references for further publications, we were less likely to identify studies not indexed in Medline or Embase, especially if they were not cited in studies identified by our search.

Wider implications of the findings: It is premature to implement AI for embryo selection outside of a clinical trial. AI for embryo selection is potentially useful, but it must be deployed carefully and transparently, as the epistemic and ethical issues are significant. We advocate the use of interpretable AI models to overcome these issues.

Trial registration number: Not applicable.
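To make the contrast between black-box and interpretable models concrete, the Python sketch below fits a logistic regression, a classically interpretable model, on synthetic data and prints the weight attached to each input. The feature names and data are invented for illustration only; they are not drawn from the reviewed studies, and real embryo assessment would use validated clinical measurements.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical, synthetic embryo-style features for illustration only.
feature_names = ["expansion_grade", "cell_symmetry", "fragmentation_pct"]
X = rng.normal(size=(200, 3))

# Synthetic labels generated from a known linear rule plus noise.
logits = 1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.5 * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

# Unlike a deep "black box", each prediction here decomposes into
# per-feature contributions that a clinician or embryologist can audit.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

The design point is that the learned coefficients expose the decision logic directly, which is the kind of real-time error checking and troubleshooting the abstract argues black-box models preclude.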

