
Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision- Making

30 Fordham Intell. Prop. Media & Ent. L.J. 91 (2019).

Article by Céline Castets-Renard*

ABSTRACT

Automated decision systems appear to carry higher risks today than ever before. Digital technologies collect massive amounts of data and evaluate people in every aspect of their lives, such as housing and employment. This collected information is ranked through the use of algorithms. The use of such algorithms may be problematic. Because the results obtained through algorithms are created by machines, they are often assumed to be immune from human biases. However, algorithms are the product of human thinking and, as such, can perpetuate existing stereotypes and social segregation. This problem is exacerbated by the fact that algorithms are not held accountable. This Article explores problems of algorithmic bias, error, and discrimination, which exist due to a lack of transparency and understanding of a machine’s design or instruction.

This Article examines the European Union’s legal framework on automated decision-making under the General Data Protection Regulation (“GDPR”) and some Member State implementing laws, with specific emphasis on French law. It argues that the European framework does not adequately address the problems of opacity and discrimination that arise from machine learning and the explanation of automated decisions. The Article proceeds by evaluating the limitations of the legal remedies provided by the GDPR. In particular, the GDPR’s lack of a right to an individual explanation of such decisions poses a problem. The Article further argues that the GDPR leaves too much flexibility to individual Member States, thus failing to create a “digital single market.” Finally, this Article proposes solutions to address the opacity and bias problems of automated decision-making.


* Professor, University of Ottawa, Civil Law Faculty (Canada); Chair of Law, Accountability, and Social Trust in AI at ANITI (Artificial and Natural Intelligence Toulouse Institute) (France); and Adjunct Professor, Fordham University School of Law (New York). This Article was presented at a workshop at the Fordham Center on Law and Information Policy (CLIP) in April 2018.

The author thanks the participants for their useful comments: Marie-Apolline Barbara, Visiting Researcher, Cornell Tech; Erin Carroll, Associate Professor of Legal Research and Writing, Georgetown University; Tithi Chattopadhyay, Associate Director, Center for Information Technology Policy (CITP), Princeton University; Ignacio Cofone, J.S.D. candidate at Yale Law School and Research Fellow at the NYU Information Law Institute; Roger A. Ford, Associate Professor of Law, University of New Hampshire School of Law; Frank Pasquale, Professor of Law, University of Maryland Carey School of Law; and N. Cameron Russell, Executive Director of CLIP and Adjunct Professor of Law, Fordham University School of Law. Thanks to the Fulbright-French Commission Program for the Fulbright Grant, and to Professor Joel Reidenberg, Stanley D. and Nikki Waxberg Chair and Professor of Law and Founding Academic Director of Fordham CLIP, for welcoming me. I also thank Professor Jack Balkin, Knight Professor of Constitutional Law and the First Amendment at Yale Law School and founder and director of Yale’s Information Society Project (ISP), as well as Rebecca Crootof, Executive Director of the ISP and Research Scholar and Lecturer in Law at Yale Law School, for welcoming me as an ISP Visiting Fellow.