Should a Computer Decide Your Freedom?

America is the “Land of the Free,” but critics of our penal system argue that we don’t live up to the title.[1] Some state governments have begun answering these calls for reform, with California, New Jersey, and Arizona trying to end the use of cash bail entirely.[2] However, the artificial intelligence (AI) tools that replaced it may have made the problem worse.[3]

The United States bail system allows defendants to remain free while they wait for trial, with a financial penalty if they don’t show up.[4] Judges consider numerous factors when setting the bail amount, with higher amounts for defendants deemed more likely to skip their court date.[5] The most common type of bail is cash bail, an upfront payment.[6]

While this is often a straightforward process, those who can’t afford to post bail must remain in jail until the date of their trial.[7] This can leave an innocent but poor defendant waiting for years behind bars.[8] To combat this, and to reduce the possibility of human bias, some states replaced the cash bail system with computer algorithms (AI) that determine whether someone is a flight risk.[9] Some states require judges to use these risk assessment tools, while others make them optional.[10] Essentially, a computer is deciding who goes free and who waits in jail. Unfortunately for those defendants, machines may be more biased than man.[11]

Artificial intelligence is a computer algorithm designed to make human-like decisions.[12] An AI system is exposed to an extremely large number of data points, from which it is trained to find patterns and make predictions.[13] Therefore, an AI system is only as unbiased as its data. At the advice of the Pretrial Justice Institute, states have been implementing AI “risk assessment systems” in place of the cash bail system for almost a decade.[14] But in 2019, researchers from Harvard, MIT, Princeton, and several other reputable institutions strongly urged states to stop using them.[15]
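That warning is easiest to see with a toy model. The sketch below is a hypothetical illustration in Python with scikit-learn, using entirely made-up numbers rather than any state’s actual tool: two groups miss court at the same true rate, but one group is policed more heavily, so its recorded violations and prior arrests are inflated. A simple classifier trained on those records never sees race at all, yet it flags the over-policed group as “high risk” far more often.

```python
# Hypothetical illustration only -- not any jurisdiction's actual risk tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the SAME true rate of missing court (20%).
group = rng.integers(0, 2, size=n)
truly_missed_court = rng.random(n) < 0.20

# Group 1 is policed more heavily, so its *recorded* outcomes and prior
# arrest counts are inflated -- and those records are all the model ever sees.
recorded_violation = truly_missed_court | ((group == 1) & (rng.random(n) < 0.15))
prior_arrests = rng.poisson(lam=1 + 2 * group)
age = rng.integers(18, 60, size=n)

X = np.column_stack([prior_arrests, age])   # race/group is never a feature
y = recorded_violation.astype(int)

model = LogisticRegression().fit(X, y)
flagged_high_risk = model.predict_proba(X)[:, 1] > 0.25

for g in (0, 1):
    rate = flagged_high_risk[group == g].mean()
    print(f"group {g}: flagged high-risk {rate:.0%} of the time")
```

In this toy setup the two groups behave identically; the gap in who gets flagged comes entirely from the skewed records the model was trained on, which is the sense in which a system is only as unbiased as its data.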

While AI was implemented to decrease the racial disparity in pre-trial jails, it ended up doing the exact opposite. Kentucky, for example, did not have a large gap between the share of black and white defendants granted release before the AI was implemented in 2011. Since then, Kentucky has consistently released a higher percentage of white defendants.[16] The problem persisted even after multiple attempts to change the algorithm.[17] Despite these signs that the system is flawed, Kentucky law allows the AI to release “low-risk” defendants without judge involvement or approval.[18]

Similar trends of possible bias have appeared in states such as Colorado,[19] Ohio,[20] and Texas.[21] There are various theories explaining why the AI could have these biases, all of which focus on how the training data could cause AI to make flawed inferences. One suggestion is that judges are more likely to set little to no bail in rural, white areas, and to set higher bail in urban, racially diverse areas.[22] Another theory is that the AI was trained on flawed or incorrect databases, skewing the results.[23] A third proposes that the data is accurate, but that data filled with disparities in punishment will lead to disparate results.[24] Regardless of the reason(s), it is clear that pretrial risk assessments are problematic.
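The first of those theories, that geography ends up standing in for race, can be sketched the same way. In the hypothetical below (again with made-up numbers, not a real tool), the model is shown only a neighborhood indicator and the defendant’s age, never race. Because the historical bail decisions it learns from differ sharply by neighborhood, it reproduces that geographic split almost exactly, and anyone concentrated in the heavily policed neighborhood inherits the disparity.

```python
# Hypothetical illustration of the "proxy feature" theory -- not a real tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Neighborhood 1 stands in for an urban, heavily policed area where judges
# historically detained defendants pretrial far more often.
neighborhood = rng.integers(0, 2, size=n)
historically_detained = rng.random(n) < np.where(neighborhood == 1, 0.45, 0.15)
age = rng.integers(18, 60, size=n)

# Race is never a feature, yet neighborhood carries the historical pattern.
X = np.column_stack([neighborhood, age])
model = LogisticRegression().fit(X, historically_detained.astype(int))

recommend_detention = model.predict_proba(X)[:, 1] > 0.30
for hood in (0, 1):
    rate = recommend_detention[neighborhood == hood].mean()
    print(f"neighborhood {hood}: detention recommended {rate:.0%} of the time")
```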

Last November, California voters rejected Proposition 25, a referendum that would have upheld the state’s 2018 law replacing cash bail with AI pretrial risk assessments.[25] Other states, however, maintain that the assessments are a better alternative to cash bail.[26] In the past decade, the Supreme Courts of Iowa, Indiana, and Wisconsin have all upheld their use.[27] Matt Henry from The Appeal explained that risk assessment tools are probably here to stay, and “[w]hile better technology has the potential to make risk assessments fairer, that result is far from guaranteed, and it is up to the people who design, implement, and employ these tools to ensure they … safeguard the rights of those at society’s margins.”[28]

Footnotes

Ziva Rubinstein

Ziva Rubinstein is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She is also a member of the Dispute Resolution Society ABA Mediation Team, the Secretary of the Jewish Law Students Association, the co-treasurer of the Latin American Law Students Association, and a Board of Student Affairs Advisor. She holds a Bachelor of Engineering in Biomedical Engineering from Macaulay at CUNY City College.