
“Promise and Peril” President Biden Signs Executive Order to Increase AI Accountability

President Biden signed an executive order on Oct. 30 aimed at addressing the “promise and peril” posed by artificial intelligence (AI).[1] The order, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” seeks to balance the need of burgeoning AI companies to grow and innovate against the technology’s risks to national security, privacy, and consumer rights.[2] While the order will shape how the federal government uses AI, further action will be needed to influence the private sector.[3]

The Executive Order

“Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.”[4]

The executive order outlines a series of actions to be taken over the next 90 to 365 days, including promoting safety and security, advancing authentication, and protecting privacy.[5]

Promoting Safety and Security

The order requires that AI developers creating products that could pose serious threats to national security share their safety test results and other critical data with the U.S. government.[6]

Those findings would not need to be made public.[7] This provision is invoked under the Defense Production Act and focuses on AI models with the potential to threaten national security, public health, or public safety.[8] If the government finds test results concerning, it could force changes to AI systems,[9] although such action could prompt a legal response from AI developers.[10] However, many of the AI developers to whom this regulation could apply are already working with the federal government.[11]

Further, the order charges the National Institute of Standards and Technology (NIST) with setting guidelines that AI systems must meet before public release, with the goal of ensuring public safety.[12] With this measure, the president hopes to anticipate the threats AI systems could create, including “chemical, biological, radiological, nuclear, and cybersecurity risks.”[13]

Advancing Authentication

A major goal of the order is to make it clear to consumers when they are viewing AI-generated content, in an effort to prevent fraud and deception.[14] Under the order, the Department of Commerce is tasked with creating guidance to authenticate AI-generated content using watermarks.[15] Watermarks help to track the origin of online content and can be used to identify ownership.[16] Their use in AI could help consumers identify who created a piece of content and recognize when they are viewing doctored videos, such as deepfakes.[17] The Biden Administration further hopes that federal agencies placing watermarks on their own AI-generated content will make it clear to Americans when they are viewing authentic communications from the government.[18] While this provision of the order will only mandate watermarks on federal agencies’ AI products, the new guidelines are meant to set an example for the private sector.[19]

Protecting Privacy

The order additionally mandates federal funds be set aside to support developing AI systems that use privacy-preserving techniques.[20] It also funds a Research Coordination Network to help advance such breakthroughs.[21] However, the order acknowledges that for further progress to be made, particularly concerning consumer privacy and the role of the private sector, Congress must pass bipartisan legislation on data privacy.[22]

While the Biden Administration calls the executive order “the most significant” action ever taken to advance AI safety,[23] there are questions about the order’s power to enforce these new guidelines, with some industry specialists noting that implementation mechanisms are missing.[24] Achieving the president’s goals will require the government to move quickly, even faster than the technology is advancing, to hire AI experts and to compete with the private sector for new talent.[25]

Intellectual Property Implications

One of the guiding principles of the order is to promote “responsible innovation, competition, and collaboration” in AI while tackling the novel intellectual property (IP) concerns raised by the technology.[26] To accomplish this, the Under Secretary of Commerce for Intellectual Property and the Director of the U.S. Patent and Trademark Office are set to issue guidance to patent examiners and applicants on how AI is impacting “the inventive process.”[27] Further, the Copyright Office is tasked with creating guidance for the president on future actions he could take concerning AI and copyright issues.[28] The recommendations from the Copyright Office can be wide-ranging and include the “protection for works produced using AI and the treatment of copyrighted works in AI training.”[29]

This comes as the Copyright Office of the Library of Congress is conducting a study addressing AI’s impact on copyright issues.[30] Major lawsuits are currently underway concerning generative AI’s potential infringement of visual artists’ and authors’ copyrighted works, both in generative AI’s use of content in its training sets and in its outputted content.[31]

Next Steps

The order builds on the voluntary commitments to meet safety standards that seven leading U.S. AI companies negotiated with the White House this summer.[32] Those companies are Amazon, Google, Meta, Microsoft, OpenAI, Anthropic, and Inflection.[33] One standard agreed upon is having independent experts conduct security testing on AI systems, with the goal of preventing risks to biosecurity and cybersecurity as well as social harm.[34]

While the order will impact AI development domestically, the Biden Administration also notes it will work to create an international framework for the future of AI with allies.[35]

Courtney Sonn

Courtney Sonn is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She holds a B.S. in Journalism from Boston University.