
Stop the Bleeding: Discontinuing Governments’ AI Use

I. INTRODUCTION

As technology develops, it creeps further into our lives, and although these advancements generally benefit society, they can also be a source of great detriment. Specifically, governments’[1] use of Artificial Intelligence (“AI”) in broad decision-making[2] processes presents a source of significant harm to already marginalized populations burdened by systemic oppression in the United States. Governments’ AI usage must be restricted to decision-making on non-social issues only. Doing so would help mitigate further oppression of marginalized populations[3] and even prevent the government from infringing on citizens’ privacy.

This Essay first defines AI in readily understandable terms and provides background information and examples regarding its general function. Second, the Essay discusses examples of governments’ prolific and expanding AI usage in vital decision-making and how it adversely affects already marginalized populations. Third, the Essay addresses the negative privacy implications resulting from governments’ use of AI, focusing on the need to gather and retain vast amounts of private data for AI systems to function. Fourth, the Essay examines potential solutions for this growing issue. Finally, the Essay concludes by arguing for restricting governmental AI usage to non-social decisions only,[4] disallowing its application to any decisions associated with social and interpersonal issues.[5]

II. BACKGROUND

A. ARTIFICIAL INTELLIGENCE DEFINED

For brevity and clarity, this Essay will refer to the “pragmatic” definition of AI[6] and avoid delving too deeply into AI’s technical aspects. AI, defined generally, refers to a computerized system that exhibits behavior commonly thought of as requiring intelligence.[7] Additionally, AI systems use human-like thought processes that enable them to make their own decisions.[8] Programmers[9] use datasets to train and teach AI systems; through this training, an AI system learns to recognize patterns and similarities within a given dataset and to create derivative outputs.[10] While it was once difficult to develop and train one’s own AI software, it is now easy to do so using various (sometimes free) tools on the internet.[11] Eventually, and with enough data, AI systems can learn to perform many tasks commonly done by humans.[12]
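
To make the training process concrete, the following is a minimal Python sketch of how a simple model “learns” a decision rule from example data rather than following hand-written instructions; the data and numbers are invented for illustration only.

```python
# Minimal sketch: a perceptron "learns" a decision rule from example data
# instead of being explicitly programmed with one. All data here is invented.

# Toy training set: each example is (features, label).
# The features could stand in for anything a government system scores.
training_data = [
    ((1.0, 0.0), 1),
    ((0.9, 0.2), 1),
    ((0.1, 0.9), 0),
    ((0.2, 1.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training loop: nudge the weights whenever the model is wrong.
for _ in range(20):
    for features, label in training_data:
        error = label - predict(features)
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

# The learned rule now generalizes to inputs the model never saw.
print(predict((0.95, 0.1)))   # expected: 1
print(predict((0.05, 0.95)))  # expected: 0
```

The point of the sketch is that no human ever writes the rule itself; the system derives it entirely from whatever patterns the training data happens to contain.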

Presently, governments throughout the U.S. (and the world) use AI systems to make critical decisions.[13] These systems are responsible for many vital governmental choices, including selecting restaurants for inspection, predicting where to conduct city-wide rodent control, deciding where to send building and fire inspectors, and determining how the USPS sorts mail.[14] While these may seem like innocuous fields in which AI decisions reign supreme, insidious AI usage lurks in the background.[15] Among other uses, AI also selects passengers for airport searches, evaluates loan applications, and determines other crucial governmental decisions with far-reaching consequences.[16]

B. AI’S DISCRIMINATION AND BIAS PROBLEM

An AI system is only as good as the data on which it bases its decisions; often, this data is biased and discriminatory.[17] Powered by tainted data, these systems display substantial indifference to problematic information; they incorporate bad data into their decisions, obscuring biases and hiding the discriminatory tendencies embedded within their programming.[18] This issue of input bias is inherent in the source data of AI systems employed by governments and corporations alike.[19] Soon after programming, these biases are “hardwired” into the respective platforms, and to further complicate matters, those using these AI systems will often attempt to shield them from outside scrutiny.[20]
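
A toy example illustrates how this happens. In the hedged Python sketch below (all records and figures are invented), a model trained on discriminatory historical loan decisions reproduces that bias through a proxy variable, here a neighborhood code, even though no protected attribute appears anywhere in the data.

```python
# Toy sketch of "input bias": invented historical loan decisions in which
# applicants from neighborhood "B" were disproportionately denied. The
# protected attribute never appears, yet the learned rule reproduces the
# bias because neighborhood acts as a proxy for it.

historical_decisions = [
    {"neighborhood": "A", "income": 60, "approved": True},
    {"neighborhood": "A", "income": 45, "approved": True},
    {"neighborhood": "A", "income": 40, "approved": True},
    {"neighborhood": "B", "income": 60, "approved": False},
    {"neighborhood": "B", "income": 55, "approved": False},
    {"neighborhood": "B", "income": 45, "approved": True},
]

# "Training": compute the historical approval rate per neighborhood.
rates = {}
for record in historical_decisions:
    stats = rates.setdefault(record["neighborhood"], [0, 0])
    stats[0] += record["approved"]
    stats[1] += 1

def approve(neighborhood, income):
    # The model simply trusts the patterns in its training data.
    approval_rate = rates[neighborhood][0] / rates[neighborhood][1]
    return approval_rate > 0.5 and income >= 40

# Two identical applicants, differing only by neighborhood:
print(approve("A", 55))  # True  -- the old bias is now "hardwired"
print(approve("B", 55))  # False -- denied by proxy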

Many have proposed potential solutions for the issue of AI discrimination.[21] However, most recommendations will meet strong resistance and likely fail to pierce the “shroud of secrecy” surrounding the “Black Boxes” that envelop AI platforms.[22] Some proposals suggest remedying biases by imposing a burden upon entities using AI systems, requiring public disclosures and third-party audits.[23] However, the government is prone to avoiding such scrutiny, and without legislative intervention, these solutions will likely fail.[24] Ultimately, the best option may be preventing governments from employing AI systems in the decision-making process, at least to the degree that the process induces social consequences.[25]

III. DISCUSSION

A. NEGATIVE CONSEQUENCES OF AI BIASES IN EVERYDAY LIFE

AI’s inherent bias and propensity for discrimination permeate nearly every circumstance in which AI operates;[26] this section will focus on just a couple of these instances.

Every day, more employers utilize AI hiring tools that decide which candidates best suit their open positions.[27] However, because these tools rely on vast and biased datasets when making decisions, they often penalize protected classes in the process.[28] More often than not, these applications systemically disadvantage Black individuals and women, even when they are more than suitable candidates.[29] To compound matters, AI’s complexities make it difficult for victims to demonstrate discriminatory intent (and other Title VII elements[30]), practically rendering current protections against hiring discrimination obsolete.[31] While some suggest that imposing transparency requirements upon AI users or forcing companies to confront their algorithms could solve the issue,[32] it is unlikely that this will successfully mitigate discriminatory data’s undesirable consequences in the near future, at least until users purge their AI systems of the tainted data.
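
One common way to surface such disparate impact, even without proof of intent, is the EEOC’s “four-fifths” rule, under which a group’s selection rate below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The following Python sketch (with invented applicant counts) shows how a hiring tool’s outputs could be screened this way.

```python
# Sketch of the EEOC "four-fifths" screen for adverse impact: a group's
# selection rate below 80% of the highest group's rate is a red flag.
# All applicant and selection counts below are invented for illustration.

outcomes = {
    # group: (applicants screened in by the AI tool, total applicants)
    "group_1": (48, 100),
    "group_2": (24, 100),
}

selection_rates = {g: hired / total for g, (hired, total) in outcomes.items()}
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest
    flag = "adverse impact indicated" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

Even so, an outcome screen of this kind only detects skewed results after the fact; it does nothing to cleanse the tainted data driving them.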

Another example of AI’s inherent biases and resultant social ramifications pertains to racist profiling on the internet. Since 2000, “weblining” has become a scourge of the internet.[33] Companies were (and still are) using AI algorithms and bots to discriminate against minorities and people with disabilities.[34] In defense of their racially biased algorithms, companies argued that they were not targeting people based on their race but rather on their internet usage; once again, bad data is the culprit.[35] While racism plagues the entire world, countries with more profound “racial cleavages” (like the U.S.) are more likely to collect and operate with racist and biased datasets.[36] This racism, inherent among the U.S.’s majority populations, only leads to skewed AI decision-making outputs and further embeds racism, discrimination, and bias into society’s very framework.[37][38]

B. GOVERNMENTS’ AI USAGE

While AI’s discriminatory applications have met some resistance throughout the corporate world,[39] governments’ AI use remains a daunting issue.[40] For example, recent litigation revolved around the NYPD’s use of AI and algorithms that attempted to predict where crime is likely to occur.[41] In several instances, plaintiffs sued New York government agencies to compel disclosure of their AI algorithms and shed light on innate biases and sources of discrimination.[42] When governments use AI systems in their decision-making, they effectively allow innately biased and discriminatory processes to dictate their decisions.[43] Compounding the issue, AI systems must utilize vast datasets simply to function and make predictions; thus, government agencies must retain citizens’ personal data indefinitely.[44]

C. PRIVACY CONCERNS AND GOVERNMENTS’ DATA COLLECTION

Privacy concerns and AI technologies go hand in hand, particularly where the government is involved.[45] People are often unaware of the “sheer magnitude of data that others possess about them.”[46] Worsening matters, AI technologies are developing too quickly for Congress to keep pace, and the prospects for an adequate regulatory scheme remain remote.[47] While in the past it may have been laborious or cumbersome for the government to collect and retain vast amounts of its citizens’ data, this is no longer the case.[48] Before the 21st century, the government struggled to collect and retain its citizens’ information efficiently and cost-effectively (at a rate that would benefit AI systems); as technology advanced, that all changed.[49]

There is presently a massive imbalance between individual privacy and government access to personal data in the U.S.[50] Reports show that in recent years the U.S. government, through its surveillance programs, tapped into the servers of the country’s leading ISPs to access, extract, and retain data.[51] Furthermore, once some began arguing that the need for governmental data collection and retention was greater than ever before, privacy took a backseat to governments’ “insatiable appetite” for private data.[52] This infringement was born of a powerful source of pressure for government data collection: the fear of homeland terrorist activity.[53]

Capitulating to these critics, Congress and the Executive branch expanded the government’s authority to collect, retain, and synthesize citizens’ intimate data, eventually leading to its use for a variety of objectives separate from the original purpose of any such authorizations.[54] Since using this data alongside AI systems helps facilitate governmental duties, some argue it is necessary to retain it; however, the absence of any legal regime monitoring governments’ data mining intensifies the risk of misuse and rampant privacy infringement.[55][56]

D. INSTITUTIONALIZING AI FURTHER ENTRENCHES BIASES

Aside from the issues of privacy infringement, governmental usage of AI results in decisions that are not only skewed by racist and prejudiced data but that further embed systemic biases into an already broken system.[57] To this effect, this Essay posits a simple argument: (1) governments are increasing their reliance on AI systems for decision-making;[58] (2) most AI systems are replete with structures of deep-rooted racism, discrimination, and bias;[59] and (3) incorporating these AI systems into the way government functions and processes decisions only further entrenches those negative attributes.

IV. POTENTIAL SOLUTIONS

Numerous legal professionals and scholars have offered potential solutions for the issue of AI’s innate biases and tainted datasets. However, many proposed solutions either require drastic changes to current legal regimes or are simply unlikely candidates for implementation. This section presents and examines some of these solutions to show that, ultimately, the best available solution is preventing governmental AI use for social-issue decision-making altogether.

A. THE CONFRONTATION CLAUSE

The first proposed solution argues that the Confrontation Clause of the Sixth Amendment applies to governments’ dataset transfers from private corporations.[60] While this theory may mitigate the harms of governmental privacy infringement, it does not address the underlying issue of biased AI decision-making. Under this theory, any transfer of an individual’s data to the government constitutes a testimonial statement against that person.[61] Accordingly, the Confrontation Clause (if expanded, as it has been by some courts) would limit governments’ ability to obtain their citizens’ data.[62] This solution posits that these limitations are flexible enough to prevent governmental misuse of personal data while simultaneously allowing data use in emergencies.[63] However, this Essay rejects this solution because it does not consider the (potentially more severe) issue of AI’s skewed outputs resulting from inherently flawed datasets.

B. DATA MINING: A NEW FRAMEWORK

A second proposed solution suggests creating an entirely new data mining framework that would oversee the way government obtains, stores, and utilizes personal data.[64] This solution further suggests that any such program must contain audit tools to ensure compliance.[65] An essential piece of this framework would be the opportunity for data correction and changes to machine learning that may help prevent “inevitable” mistakes produced by AI systems.[66] This Essay rejects this solution as well. The proposal points to a need for “some form of judicial authorization” for data mining systems,[67] but it fails to consider that while government data mining is an issue, the data itself is inherently problematic.

C. THE AI DATA TRANSPARENCY MODEL

This third solution suggests an innovative model in which users must train AI systems to ensure compliance with relevant regulations and societal expectations.[68] The proposal recommends establishing objective third-party auditors who would evaluate any data with which AI functions, effectively confronting the issue at its source.[69] These auditors must examine and assess any data accessible to an AI system and verify that its use does not conflict with existing legal rules (mainly discrimination and bias guidelines).[70] The driving force underlying this solution is the idea that discriminatory and privacy-infringing datasets “reduce the likelihood that AI systems will produce good outcomes.”[71] Accordingly, facilitating this framework would decrease “the likelihood of adverse outcomes.”[72] Additionally, the proposal requires that AI users “play along” and submit to audits of their datasets before those datasets are exposed to any AI systems.[73] This solution, however, does not account for governments’ AI use[74] or the challenges involved in imposing any sort of limitation upon governmental conduct.[75]
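
As a rough illustration of what such a pre-exposure dataset audit might mechanically involve, consider the following Python sketch; the field names, records, and disparity threshold are assumptions for illustration, not drawn from the proposal itself.

```python
# Hypothetical auditor's pre-exposure check: before a dataset reaches an AI
# system, compare the rate of favorable labels across groups and flag large
# gaps for human review. Field names and the threshold are assumptions.

def audit_label_rates(records, group_field, label_field, max_gap=0.2):
    """Return per-group favorable-label rates and whether the gap is acceptable."""
    totals = {}
    for rec in records:
        group = rec[group_field]
        favorable, count = totals.get(group, (0, 0))
        totals[group] = (favorable + int(rec[label_field]), count + 1)

    rates = {g: fav / cnt for g, (fav, cnt) in totals.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap <= max_gap

# Invented sample dataset:
dataset = [
    {"group": "X", "favorable": 1}, {"group": "X", "favorable": 1},
    {"group": "X", "favorable": 0}, {"group": "Y", "favorable": 0},
    {"group": "Y", "favorable": 0}, {"group": "Y", "favorable": 1},
]

rates, passed = audit_label_rates(dataset, "group", "favorable")
print(rates)   # approximately {'X': 0.67, 'Y': 0.33}
print(passed)  # False -- the gap exceeds the audit threshold
```

Whatever the precise mechanics, the model depends on AI users voluntarily submitting their data to checks of this kind, which is exactly the cooperation governments have resisted.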

V. CONCLUSION

Ultimately, legislative limitations upon governments’ AI decision-making systems, relegating them to purely non-social issues, are the only viable option. While AI systems provide opportunities for innovation and efficiency, their decisions are tainted and lack reasonable consideration for fundamental social values.[76] This Essay examined AI’s functions, its inherent bias and discrimination problems, and governments’ use of these systems. Many have proposed solutions for reducing undesirable AI output, but none resolves the issue in a readily implementable manner. Therefore, a government should not utilize AI technologies for any decision-making that has social ramifications.[77] Continuing to utilize AI in such a manner would further entrench the biases that continuously plague our country into our governments’ future decisions, perpetuating and institutionalizing discrimination against already marginalized populations ad infinitum.


Steven W Schlesinger

Steven W Schlesinger is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. He holds a B.A. in Psychology from Touro College. Steven is an ordained Rabbi, founder of an IT company, and occasionally dabbles in blitz chess.