Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation

2017 ◽  
Vol 7 (4) ◽  
pp. 243-265 ◽  
Author(s):  
Gianclaudio Malgieri ◽  
Giovanni Comandé


2020 ◽ 
Vol 1 (1) ◽  
Author(s):  
Marta Choroszewicz ◽  
Beata Mäihäniemi

This article uses a sociolegal perspective to address current problems surrounding data protection and the experimental use of automated decision-making systems. It outlines and discusses the hard law regarding national adaptations of the European General Data Protection Regulation, other regulations, and the use of automated decision-making in the public sector in six European countries (Denmark, Sweden, Germany, Finland, France, and the Netherlands). Despite its limitations, the General Data Protection Regulation has impacted the geopolitics of the global data market by empowering citizens and data protection authorities to voice their complaints and conduct investigations regarding data breaches. We draw on the Esping-Andersen welfare state typology to advance our understanding of how states’ approaches to citizens’ data protection and to data use for automated decision-making differ between countries in the Nordic regime and the Conservative-Corporatist regime. Our study clearly indicates a need for additional legislation on the use of citizens’ data for automated decision-making and for regulation of automated decision-making itself. Our results also indicate that legislation in Finland, Sweden, and Denmark draws upon the mutual trust between public administrations and citizens and thus offers only general guarantees regarding the use of citizens’ data. In contrast, Germany, France, and the Netherlands have enacted a combination of general and sectoral regulations to protect and restrict citizens’ rights. We also identify some problematic national policy responses to the General Data Protection Regulation that empower governments and related institutions to hold citizens accountable to stricter obligations and tougher sanctions. The article contributes to the discussion on the current phase of the developing digital welfare state in Europe and the role of new technologies (i.e., automated decision-making) in this phase. We argue that states and public institutions should play a central role in strengthening the social norms associated with data privacy and protection as well as citizens’ right to social security.


2021 ◽  
Vol 46 (3-4) ◽  
pp. 321-345
Author(s):  
Robert Grzeszczak ◽  
Joanna Mazur

Abstract The development of automated decision-making technologies creates the threat of de-juridification: the replacement of legal acts’ provisions with automated, technological solutions. The article examines how selected provisions of the General Data Protection Regulation concerning, among other things, data protection impact assessments, the right not to be subject to automated decision-making, information obligations, and the right of access are applied in the Polish national legal order. We focus on the institutional and procedural solutions regarding the involvement of expert bodies and other stakeholders in the process of specifying the norms included in the GDPR and enforcing them. We argue that the example of Poland shows that the solutions adopted in the GDPR do not shift the balance of regulatory power over automated decision-making to other stakeholders and as such do not favor a more participative approach to regulatory processes.


2017 ◽  
Author(s):  
Michael Veale ◽  
Lilian Edwards

Cite as: Michael Veale and Lilian Edwards, 'Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling' (forthcoming) Computer Law and Security Review.

The Article 29 Data Protection Working Party’s new draft guidance on automated decision-making and profiling seeks to clarify European data protection (DP) law’s little-used right to prevent automated decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. The guidelines foray into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of “solely” automated decisions, impacts upon groups, and the inference of special categories of data — at times appearing more to be making or extending rules than interpreting them. At the same time, they provide only partial clarity — and perhaps even some extra confusion — around both the much-discussed “right to an explanation” and the apparent prohibition on significant automated decisions concerning children. The Working Party appear to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions they choose to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.


2020 ◽  
Vol 11 (1) ◽  
pp. 18-50 ◽  
Author(s):  
Maja BRKAN ◽  
Grégory BONNET

Understanding the causes of and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term “explainable AI (XAI)”. Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in case of automated decision-making has equally been the subject of a heated doctrinal debate. While arguing that the right to explanation in the GDPR should be the result of an interpretative analysis of several GDPR provisions taken jointly, the authors move this debate forward by discussing the technical and legal feasibility of the explanation of algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.
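To make the technical side of this feasibility question concrete, here is a minimal sketch in Python of the simplest kind of decision explanation discussed in the XAI literature: decomposing a linear model's score for one data subject into per-feature contributions. The model, feature names, weights, and threshold are hypothetical illustrations, not material from the article.

```python
# Minimal sketch: explaining a single automated decision from a linear model.
# All features, weights, and the decision rule here are hypothetical.
import numpy as np

feature_names = ["income", "account_age_years", "missed_payments"]
weights = np.array([0.4, 0.2, -1.1])    # hypothetical learned coefficients
bias = -0.3
threshold = 0.0                          # score > 0 means "approved"

applicant = np.array([0.9, 0.5, 2.0])   # one data subject's (scaled) inputs

score = weights @ applicant + bias
decision = "approved" if score > threshold else "rejected"

# For a linear model, w_i * x_i exactly decomposes the score, giving one
# candidate form of the "explanation" the doctrinal debate is about.
contributions = weights * applicant
print(f"Decision: {decision} (score = {score:.2f})")
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: abs(pair[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")
```

For a linear model this decomposition is exact; for the opaque models at the centre of the debate, surrogate methods such as LIME or SHAP only approximate such contributions, which is precisely where the article's legal and technical feasibility questions intersect.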


2016 ◽  
Vol 19 ◽  
pp. 252-286
Author(s):  
Orla LYNSKEY

Abstract EU data protection law has, to date, been monitored and enforced in a decentralised way by independent supervisory authorities in each Member State. While the independence of these supervisory authorities is an essential element of EU data protection law, this decentralised governance structure has led to competing claims from supervisory authorities regarding the national law applicable to a data processing operation and the national authority responsible for enforcing the data protection rules. These competing claims – evident in investigations conducted into the data protection compliance of Google and Facebook – jeopardise the objectives of the EU data protection regime. The new General Data Protection Regulation will revolutionise data protection governance by providing for a centralised decision-making body, the European Data Protection Board. While this agency will ensure the ‘Europeanisation’ of data protection law, given the nature and the extent of this Board’s powers, it marks another significant shift in the EU’s agency-creating process and must, therefore, also be considered in its broader EU context.


2021 ◽  
Vol 1 (1) ◽  
pp. 16-28
Author(s):  
Gianclaudio Malgieri

Abstract This paper argues that if we want a sustainable environment of desirable AI systems, we should aim not only at transparent, explainable, fair, lawful, and accountable algorithms, but should also seek “just” algorithms, that is, automated decision-making systems that combine all of the above-mentioned qualities (transparency, explainability, fairness, lawfulness, and accountability). This is possible through a practical “justification” statement and process (possibly derived from an algorithmic impact assessment) through which the data controller proves, in practical ways, why the AI system is not unfair, not discriminatory, not obscure, not unlawful, etc. In other words, this justification (possibly derived from a data protection impact assessment of the AI system) proves the legality of the system with respect to all the data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and accountability). All these principles are necessary components of a broader concept of just algorithmic decision-making, one already required by the GDPR, in particular considering: the data protection principles (Article 5), the need to enable (meaningful) contestation of automated decisions (Article 22), and the need to assess the AI system’s necessity, proportionality, and legality under the Data Protection Impact Assessment framework (Article 35).
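One way to picture the justification process sketched above is as a structured record that maps each Article 5 principle to a documented compliance statement before the system is cleared for use. The Python sketch below is an illustrative data structure only; the class, field names, and checks are assumptions for exposition, not a format proposed in the paper.

```python
# Illustrative sketch of a "justification" record for an automated
# decision-making system, keyed to the GDPR Article 5 principles the
# paper lists. Structure and field names are hypothetical.
from dataclasses import dataclass, field

GDPR_PRINCIPLES = [
    "lawfulness", "fairness", "transparency", "purpose_limitation",
    "data_minimisation", "accuracy", "storage_limitation",
    "integrity_and_confidentiality", "accountability",
]

@dataclass
class Justification:
    system_name: str
    # One practical compliance statement per principle, e.g. drawn from a
    # Data Protection Impact Assessment (Article 35).
    statements: dict = field(default_factory=dict)

    def missing(self):
        """Principles not yet justified; a 'just' system leaves none."""
        return [p for p in GDPR_PRINCIPLES if not self.statements.get(p)]

record = Justification(system_name="hypothetical benefit-eligibility scorer")
record.statements["transparency"] = "Model card and decision notices published."
print(record.missing())   # every principle except transparency still open
```

A real assessment would attach evidence and a review workflow to each entry; the point of the sketch is only that justification in this sense is per-principle and exhaustive, not a single blanket statement.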


2018 ◽  
Author(s):  
Margot Kaminski

Many have called for algorithmic accountability: laws governing decision-making by complex algorithms, or AI. The EU’s General Data Protection Regulation (GDPR) now establishes exactly this. The recent debate over the right to explanation (a right to information about individual decisions made by algorithms) has obscured the significant algorithmic accountability regime established by the GDPR. The GDPR’s provisions on algorithmic accountability, which include a right to explanation, have the potential to be broader, stronger, and deeper than the preceding requirements of the Data Protection Directive. This Essay clarifies, largely for a U.S. audience, what the GDPR actually requires, incorporating recently released authoritative guidelines.

