The Long Road Toward Tracking the Trackers and De-biasing: A Consensus on Shaking the Black Box and Freeing From Bias

2019 ◽  
Vol 11 (1) ◽  
pp. 27
Author(s):  
George Bouchagiar

Automated decision making is both promising and threatening. Processing data at the largest possible scale may lead to societal advances but may also violate human rights. There is, then, an acute need to protect individuals without impeding major benefits. Non-human agents may be biased, and they may not lend themselves to easy explanation. Instead of focusing on interpreting models, there seems to be a shift toward a concept of risk assessment. Opaque systems are aimed at predicting, or forecasting, future situations. This challenges human values and ethical principles. Even though incorporating ethics in machines is an old subject of legal discussion, consensus has not yet been reached, for theories and values may be controversial. This paper examines whether there could be agreement on fundamental principles. A commonly understood basis could allow for fair and proportionate mechanisms to address crucial aspects of partiality and opacity in automated decision making. It could trigger a shift toward a concept of ‘tracking the trackers’ and a discussion on a ‘right to an unbiased decision maker’.

2021 ◽  
pp. 45-64
Author(s):  
Petra Molnar

Abstract People on the move are often left out of conversations around technological development and become guinea pigs for testing new surveillance tools before they are brought to the wider population. These experiments range from big data predictions about population movements in humanitarian crises, to automated decision making in immigration and refugee applications, to AI lie detectors at European airports. The Covid-19 pandemic has seen an increase in technological solutions presented as viable ways to stop its spread. Governments’ turn toward biosurveillance has expanded tracking, automated drones, and other technologies that purport to manage migration. However, refugees and people crossing borders are disproportionately targeted, with far-reaching impacts on various human rights. Drawing on interviews with affected communities in Belgium and Greece in 2020, this chapter explores how technological experiments on refugees are often discriminatory, breach privacy, and endanger lives. The lack of regulation of such technological experimentation and a pre-existing opaque decision-making ecosystem create a governance gap that leaves room for far-reaching human rights impacts in this time of exception, with private sector interest setting the agenda. Blanket technological solutions do not address the root causes of displacement, forced migration, and economic inequality – all factors exacerbating the vulnerabilities communities on the move face in these pandemic times.


2019 ◽  
Vol 16 (1) ◽  
pp. 130-141
Author(s):  
Karen E Smith

Abstract Foreign policy analysis (FPA) opens the “black box” of the state and provides explanations of how and why foreign policy decisions are made, which puts individuals and groups (from committees to ministries) at the center of analysis. Yet the sex of the decision-maker and the gendered nature of the decision-making process have generally been left out of the picture. FPA has not addressed questions regarding the influence of women in foreign policy decision-making processes or the effects of gender norms on decision-making; indeed, FPA appears to be almost entirely gender-free. This article argues that “gendering” FPA is long overdue and that incorporating gender into FPA frameworks can provide a richer and more nuanced picture of foreign policy–making.


2021 ◽  
Author(s):  
Joanna Mazur

The author verifies the hypothesis concerning the possibility of treating algorithms – as applied in automated decision making in the public sector – as information subject to the laws governing the right to access information or the right to access official documents in European law. She discusses problems caused by the approach to these laws in the European Union, as well as the lack of conformity between the jurisprudence of the Court of Justice of the European Union and that of the European Court of Human Rights.


Author(s):  
Joanna Mazur

ABSTRACT Due to the concerns raised regarding the impact of automated decision-making (ADM) on transparency and its potentially discriminatory character, it is worth examining the possibility of applying legal measures which could serve to increase the transparency of ADM systems. The article explores the possibility of considering algorithms used in ADM systems as documents subject to the right to access documents in European Union (EU) law. It focuses on contrasting and comparing the approach based on the right to access public documents developed by the Court of Justice of the European Union (CJEU) with the approach to the right to access public information as interpreted by the European Court of Human Rights (ECtHR). The analysis shows discrepancies between the perspectives presented by these Courts, which result in a limited scope of the right to access public documents in EU law. Pointing out these differences may provide a motivation to clarify the meaning of the right to access information in EU law, as the CJEU’s approach remains for now incoherent. The article presents the arguments for, and ways of, bringing together the approaches of the CJEU and the ECtHR in light of the decreasing level of transparency resulting from the use of ADM in the public sector. It shows that in order to ensure compliance with EU law, it is necessary to rethink the role which the right to access information plays in the human rights catalogue.


Author(s):  
Mark Elliott ◽  
Jason Varuhas

This chapter examines the notions of impartiality (and bias) and independence. It first provides an overview of the scope and rationale of the rule against bias before discussing the connection between impartiality and procedural fairness. It then reviews the ‘automatic disqualification rule’ by which a decision-maker can be disqualified if he/she has a sufficient financial interest in the outcome of the decision-making process. It also explores the apprehension of bias and the ‘fair-minded observer rule’, along with the political dimensions of the rule against bias. Finally, it considers Article 6 of the European Convention on Human Rights in an administrative context and when Article 6(1) applies to administrative decision-making. A number of relevant cases are cited throughout the chapter, including R v. Sussex Justices, ex parte McCarthy [1924] 1 KB 256.


2018 ◽  
Author(s):  
Susan D. Franck ◽  
Anne van Aaken ◽  
James Freda ◽  
Chris Guthrie ◽  
Jeffrey J. Rachlinski

66 Emory Law Journal 1115 (2017)

Arbitrators are lead actors in global dispute resolution. They are to global dispute resolution what judges are to domestic dispute resolution. Despite its global significance, arbitral decision making is a black box. This Article is the first to use original experimental research to explore how international arbitrators decide cases. We find that arbitrators often make intuitive and impressionistic decisions, rather than fully deliberative decisions. We also find evidence that casts doubt on the conventional wisdom that arbitrators render “split the baby” decisions. Although direct comparisons are difficult, we find that arbitrators generally perform at least as well as, but never demonstrably worse than, national judges analyzed in earlier research. There may be reasons to prefer judges to international arbitrators, but the quality of judgment and decision making, at least as measured in these experimental studies, is not one of them. Thus, normative debates about global dispute resolution should focus on using structural safeguards and legal protections to enhance the quality of decision making, regardless of decision maker identity or title.


Legal Studies ◽  
2021 ◽  
pp. 1-20
Author(s):  
Rebecca Schmidt ◽  
Colin Scott

Abstract Discretion gives decision makers choices as to how resources are allocated, or how other aspects of state largesse or coercion are deployed. Discretionary state power challenges aspects of the rule of law, first by transferring decisions from legislators to departments, agencies and street-level bureaucrats, and secondly by risking the uniform application of key fairness and equality norms. The search for alternative and decentred forms of regulation gave rise to new types of regulation, sometimes labelled ‘regulatory capitalism’. Regulatory capitalism highlights the roles of a wider range of actors exercising powers and a wider range of instruments. It also includes new forms of discretion, for example over automated decision-making processes, over the formulation and dissemination of league tables, or over the use of behavioural measures. This paper takes a novel approach by linking and extending the significant literature on these changing patterns of regulatory administration with consideration of the changing modes of deployment of discretion. Using this specific lens, we observe two potentially contradictory trends: an increase in determining and structuring administrative decisions, leading to a more transparent use of discretion; and the increased use of automated decision-making processes, which have the potential to produce a less transparent, black box scenario.


Lex Russica ◽  
2020 ◽  
Vol 73 (6) ◽  
pp. 139-148
Author(s):  
D. L. Kuteynikov ◽  
O. A. Izhaev ◽  
S. S. Zenin ◽  
V. A. Lebedev

The paper examines the European and American legal approaches based on legislation regulating the use of computer algorithms, i.e. systems for the automated making of legally significant decisions. It is established that these jurisdictions apply essentially different concepts.

The European approach provides for regulating the use of automated decision-making systems through legislation on personal data. The authors conclude that the General Data Protection Regulation does not impose a legal obligation on controllers to disclose technical information, i.e. to open a "black box", to the subject of personal data in respect of whom the algorithm makes a decision. This may happen in the future, when the legislative authorities specify the provisions of this Regulation, according to which the controller must provide the subject of personal data with meaningful information about the logic of decisions taken in relation to them.

In the United States, issues of transparency and accountability of algorithms are regulated by various antidiscrimination acts governing certain areas of human activity. At the same time, they are fragmentary, and taken together they do not represent a complex, interconnected system of regulatory legal acts. In practice, legal regulation is carried out ad hoc, with reference to certain legal provisions prohibiting the processing of sensitive types of personal data.

The paper states that the legal regulation of algorithmic transparency and accountability is in its infancy in Russia. The existing legislation on personal data suggests that the domestic approach to solving the "black box" problem is close to the European one. When developing and adopting relevant regulatory legal acts, it is necessary to proceed from the fact that the subject of personal data should have the right to receive, in an accessible form, information explaining the logic of the decision made in relation to them.


Author(s):  
Yu. S. Kharitonova ◽  
V. S. Savina ◽  
F. Pagnini ◽  
...  

Introduction: this paper focuses on the legal problems of applying the artificial intelligence technology when solving socio-economic problems. The convergence of two disruptive technologies – Artificial Intelligence (AI) and Data Science – has created a fundamental transformation of social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multiagent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, and also deep learning and cognitive computing. The mentioned AI and Big Data technologies are used in various business spheres to simplify and accelerate decision-making of different kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between participants in circulation, lead to discrimination of all kinds due to algorithmic bias. Purpose: to define the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence from the legal perspective, based on the analysis of Russian and foreign scientific concepts. Methods: empirical methods of comparison, description, interpretation; theoretical methods of formal and dialectical logic; special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms. 
Results: artificial intelligence has many advantages (it can improve creativity, services, and lifestyle, enhance security, and help solve various problems), but at the same time it causes numerous concerns due to harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: even in the absence of such information, through thorough analysis of the similarities between products and users, the algorithm may recommend a product to a very homogeneous set of users. The identified problems and risks of AI bias should be taken into consideration by lawyers and developers and should be mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees the opportunity to solve the problem of algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems. Conclusions: if left unaddressed, biased algorithms could lead to decisions that would have a disparate collective impact on specific groups of people even without the programmer’s intent to make a distinction. The study of the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations. Solving the issues of algorithmic bias by technical means alone will not lead to the desired results. The world community recognizes the need to introduce standardization and develop ethical principles, which would ensure proper decision-making with the application of artificial intelligence.
It is necessary to create special rules that would restrict algorithmic bias. Regardless of the areas where such violations are revealed, they have the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Minimizing algorithmic bias is possible by requiring that data enter circulation in a form that does not allow explicit or implicit segregation of various groups of society, i.e. it should become possible to analyze only data without any explicit group attributes, while keeping the data in their full diversity. As a result, the AI model would be built on the analysis of data from all socio-legal groups of society.
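The abstract’s claim that a recommender can steer products toward a homogeneous set of users even when it never receives demographic input can be illustrated with a toy sketch. All data, names, and the similarity measure below are hypothetical, chosen only to show the mechanism: group membership is correlated with purchase history, so a purely behavioral recommender reproduces the group split.

```python
# Toy illustration of proxy bias in a collaborative-filtering recommender.
# The model never sees group labels, yet its recommendations segregate
# users, because purchase histories act as a proxy for group membership.

# Purchase histories over 6 products. Users u0-u2 belong (unobserved) to
# group A and buy among products 0-2; u3-u5 belong to group B and buy
# among products 3-5. The recommender only ever sees these vectors.
users = {
    "u0": [1, 1, 0, 0, 0, 0], "u1": [0, 1, 1, 0, 0, 0], "u2": [1, 0, 1, 0, 0, 0],
    "u3": [0, 0, 0, 1, 1, 0], "u4": [0, 0, 0, 0, 1, 1], "u5": [0, 0, 0, 1, 0, 1],
}

def similarity(a, b):
    """Crude similarity: number of co-purchased products."""
    return sum(x * y for x, y in zip(a, b))

def recommend(user, users):
    """Recommend the unowned product most popular among similar users."""
    scores = [0] * 6
    for other, history in users.items():
        if other == user:
            continue
        weight = similarity(users[user], history)
        for i, bought in enumerate(history):
            scores[i] += weight * bought
    # Only consider products the user does not already own.
    candidates = [i for i in range(6) if users[user][i] == 0]
    return max(candidates, key=lambda i: scores[i])

# Every group-A user is recommended another group-A product (0-2),
# every group-B user another group-B product (3-5): the demographic
# split re-emerges without any demographic feature in the input.
print([recommend(u, users) for u in ["u0", "u1", "u2"]])
print([recommend(u, users) for u in ["u3", "u4", "u5"]])
```

The point of the sketch is that "removing the sensitive attribute" is not enough: any feature correlated with group membership (here, purchase history) lets the segregation re-emerge, which is why the authors argue for analyzing data without explicit or implicit group attributes.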

