Developing a Digital Welfare State: Data Protection and the Use of Automated Decision-Making in the Public Sector across Six EU Countries

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Marta Choroszewicz ◽  
Beata Mäihäniemi

This article adopts a sociolegal perspective to address current problems surrounding data protection and the experimental use of automated decision-making systems. It outlines and discusses the hard law governing national adaptations of the European General Data Protection Regulation, as well as the use of automated decision-making in the public sector, in six European countries (Denmark, Sweden, Germany, Finland, France, and the Netherlands). Despite its limitations, the General Data Protection Regulation has reshaped the geopolitics of the global data market by empowering citizens and data protection authorities to voice complaints and to conduct investigations into data breaches. We draw on Esping-Andersen’s welfare state typology to advance our understanding of how states’ approaches to citizens’ data protection and to the use of data for automated decision-making differ between countries in the Nordic regime and those in the Conservative-Corporatist regime. Our study clearly indicates a need for additional legislation governing the use of citizens’ data for automated decision-making and for regulation of automated decision-making itself. Our results also indicate that legislation in Finland, Sweden, and Denmark draws on the mutual trust between public administrations and citizens and thus offers only general guarantees regarding the use of citizens’ data. In contrast, Germany, France, and the Netherlands have enacted a combination of general and sectoral regulations to protect and restrict citizens’ rights. We also identify some problematic national policy responses to the General Data Protection Regulation that empower governments and related institutions to hold citizens accountable under stricter obligations and tougher sanctions. The article contributes to the discussion of the current phase of the developing digital welfare state in Europe and the role of new technologies (i.e., automated decision-making) within it. We argue that states and public institutions should play a central role in strengthening the social norms associated with data privacy and protection, as well as citizens’ right to social security.

2021 ◽  
Vol 46 (3-4) ◽  
pp. 321-345
Author(s):  
Robert Grzeszczak ◽  
Joanna Mazur

Abstract: The development of automated decision-making technologies creates the threat of de-iuridification: the replacement of the provisions of legal acts with automated, technological solutions. The article examines how selected provisions of the General Data Protection Regulation concerning, among other things, data protection impact assessments, the right not to be subject to automated decision-making, information obligations, and the right of access are applied in the Polish national legal order. We focus on the institutional and procedural solutions regarding the involvement of expert bodies and other stakeholders in the process of specifying the norms included in the GDPR and in their enforcement. We argue that the example of Poland shows that the solutions adopted in the GDPR do not shift the balance of regulatory power over automated decision-making to other stakeholders and as such do not favor a more participative approach to the regulatory process.


2017 ◽  
Author(s):  
Michael Veale ◽  
Lilian Edwards

Cite as: Michael Veale and Lilian Edwards, 'Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling' (forthcoming) Computer Law and Security Review.

The Article 29 Data Protection Working Party’s new draft guidance on automated decision-making and profiling seeks to clarify European data protection (DP) law’s little-used right to prevent automated decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. The guidelines foray into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of “solely” automated decisions, impacts upon groups, and the inference of special categories of data — at times appearing more to be making or extending rules than to be interpreting them. At the same time, they provide only partial clarity — and perhaps even some extra confusion — around both the much-discussed “right to an explanation” and the apparent prohibition on significant automated decisions concerning children. The Working Party appear to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions they choose to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.


2020 ◽  
Vol 11 (1) ◽  
pp. 18-50 ◽  
Author(s):  
Maja BRKAN ◽  
Grégory BONNET

Understanding the causes of and correlations behind algorithmic decisions is currently one of the major challenges of computer science, addressed under the umbrella term “explainable AI” (XAI). Being able to explain an AI-based system may help to make algorithmic decisions more satisfying and acceptable, to better control and update AI-based systems in case of failure, to build more accurate models, and to discover new knowledge directly or indirectly. On the legal side, the question of whether the General Data Protection Regulation (GDPR) provides data subjects with a right to explanation in cases of automated decision-making has equally been the subject of a heated doctrinal debate. While arguing that the right to explanation in the GDPR should emerge from an interpretative analysis of several GDPR provisions taken together, the authors move this debate forward by discussing the technical and legal feasibility of explaining algorithmic decisions. Legal limits, in particular the secrecy of algorithms, as well as technical obstacles, could potentially obstruct the practical implementation of this right. By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine learning decision-making, but also whether those limitations can shape the way the legal right is used in practice.


Author(s):  
Aritz ROMEO RUIZ

Abstract: This work offers an analysis of the principle of proactive responsibility in the processing of personal data by the public administration, and aims to provide a legal perspective that facilitates its practical implementation. The work is structured in four sections. The first presents, in general terms, the new regulatory framework for the protection of personal data that follows from the (EU) General Data Protection Regulation. The second section is dedicated to proactive responsibility as a basic principle of the processing of personal data by public administrations. The third proposes a series of measures that public administrations can take into account to comply with the principle of proactive responsibility in practice. Finally, the fourth section reflects on the need to introduce organisational changes to ensure compliance with the principles of the General Data Protection Regulation and the exercise of rights by citizens, with special reference to the figure of the Data Protection Officer. The main conclusion is the importance for public administrations of designing a data protection policy that is applied by default and that involves not only those who exercise political responsibilities, but all those who work in the public sector.


2020 ◽  
Article 089443932098043
Author(s):  
Agneta Ranerup ◽  
Helle Zinner Henriksen

The introduction of robotic process automation (RPA) into the public sector has changed civil servants’ daily work and practices. One central practice in the public sector is discretion, and the shift to a digital mode of discretion calls for an understanding of this new situation. This article presents an empirical case in which automated decision-making driven by RPA has been implemented in the social services in Sweden, focusing on the aspirational values and effects of RPA in social services. Context, tasks, and activities are captured through a detailed analysis of humans and technology. The research finds that digitalization in social services has a positive effect on civil servants’ discretionary practices, mainly in terms of their ethical, democratic, and professional values. The long-term effects, and the influence on fair and uniform decision-making, also merit future research. In addition, the article finds that a human–technology hybrid actor redefines social assistance practices. Simplifications are needed to unpack the automated decision-making process because of its technological and theoretical complexities.


2016 ◽  
Vol 19 ◽  
pp. 252-286
Author(s):  
Orla LYNSKEY

Abstract: EU data protection law has, to date, been monitored and enforced in a decentralised way by independent supervisory authorities in each Member State. While the independence of these supervisory authorities is an essential element of EU data protection law, this decentralised governance structure has led to competing claims from supervisory authorities regarding the national law applicable to a data processing operation and the national authority responsible for enforcing the data protection rules. These competing claims – evident in investigations conducted into the data protection compliance of Google and Facebook – jeopardise the objectives of the EU data protection regime. The new General Data Protection Regulation will revolutionise data protection governance by providing for a centralised decision-making body, the European Data Protection Board. While this agency will ensure the ‘Europeanisation’ of data protection law, given the nature and the extent of this Board’s powers, it marks another significant shift in the EU’s agency-creating process and must, therefore, also be considered in its broader EU context.
