
The DALL-E Dilemma: Commercialization and Fair Use of AI-Generated Artwork

In July of this year, OpenAI, the research company co-founded by Elon Musk, announced the beta release of its new AI system, DALL-E 2.[1] DALL-E 2 is a machine learning model[2] that creates realistic images and artwork from user-supplied text descriptions.[3] The tool allows users to create fantastical artwork from simple text prompts like “an astronaut playing basketball with cats in space in a watercolor style” and “teddy bears shopping for groceries in ancient Egypt.”[4]

DALL-E 2 works alongside another OpenAI tool called Contrastive Language-Image Pre-training (“CLIP”).[5] CLIP is a neural network trained on a dataset of over 400 million text-image pairs sourced from the internet.[6] As part of its training, OpenAI tasked CLIP with predicting, from a pool of over 32,000 randomly sampled captions, which one actually belonged to a given image.[7] Thanks to this extensive training, CLIP not only recognizes images but also captures the semantic concepts humans associate with them, correctly relating images to their corresponding captions.[8] In doing so, CLIP creates a sort of image-text dictionary upon which DALL-E 2 can draw.[9]
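
For readers curious about the mechanics, the “contrastive” training that gives CLIP its name can be sketched in a few lines of code. The snippet below is a simplified illustration rather than OpenAI’s actual implementation; the encoder outputs, batch setup, and temperature value are hypothetical. It shows only the core idea: the model is rewarded for matching each image to the caption it was scraped alongside, out of a pool of mismatched candidates.

```python
# Simplified sketch of CLIP-style contrastive training (illustrative only,
# not OpenAI's code). Assumes hypothetical image/text encoders have already
# produced embeddings for a batch of N matched image-caption pairs.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # Normalize embeddings so dot products are cosine similarities.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # (N, N) similarity matrix: entry (i, j) scores image i against caption j.
    logits = image_embeds @ text_embeds.t() / temperature

    # The "correct" caption for image i is caption i (the diagonal), so the
    # model must pick the true caption out of a pool of mismatched ones.
    targets = torch.arange(logits.size(0))
    loss_images = F.cross_entropy(logits, targets)      # image -> caption
    loss_texts = F.cross_entropy(logits.t(), targets)   # caption -> image
    return (loss_images + loss_texts) / 2
```

In CLIP’s actual training, each image’s true caption had to be picked out of tens of thousands of candidates, which is what the 32,000-caption prediction task described above amounts to.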

The result is a powerful AI tool limited only by the imagination of its users.[10] Indeed, one million lucky beta users can now pay $15 a month for 115 DALL-E credits (one credit generates four images, for a total of 460 unique images) and full usage rights to reprint, sell, and merchandise the images they create with the tool.[11]

Too Good To Be True?

While DALL-E users retain the rights to commercialize their images, “OpenAI retains ownership of the original image primarily so that [they] can better enforce [their] content policy.”[12] This statement should put many creatives on notice of the potential legal issues that may arise as DALL-E 2 opens to the mainstream. Legal experts in the burgeoning AI field warn that creatives who choose to use DALL-E 2 images are essentially at the mercy of OpenAI’s content policy.[13]

Imagine, for example, an advertising agency that uses DALL-E 2 to create graphics for a client’s campaign. Nothing prevents OpenAI from revising its terms of use to allow it to use, license, or sell those same graphics to a competing company, exposing the advertising agency and its clients to lost business or costly litigation.[14]

Creatives themselves are torn on the impact that DALL-E 2 may have on the art industry.[15] Advocates believe that access to the tool will open up opportunities for freelancers and small businesses, while critics fear that an influx of AI-art could drive down prices for creative works.[16]

Copyright and Fair Use

In addition to the concerns surrounding ownership of the images DALL-E 2 outputs, an arguably larger issue exists as to ownership of the hundreds of millions of source images used to train DALL-E 2.[17] The process OpenAI uses to scrape the internet for massive amounts of data is known as text and data mining (“TDM”).[18] OpenAI’s GPT-3 model, which is used to train DALL-E 2, was built on a collection of datasets that use TDM to scrape the web for images, text, audio, video, code, and other sources.[19]
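
To make the TDM process concrete, the sketch below shows, in highly simplified form, how a scraper might pair images with nearby text on a single web page. The function name, placeholder URL, and parsing choices are hypothetical illustrations, not a description of OpenAI’s actual pipeline, which crawls, filters, and deduplicates content at a vastly larger scale.

```python
# Toy sketch of a text-and-data-mining (TDM) pass over one web page
# (illustrative only; the URL and pairing heuristic are placeholders).
import requests
from bs4 import BeautifulSoup

def collect_image_caption_pairs(page_url):
    """Return (image URL, caption) pairs found on a single page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        caption = img.get("alt") or ""   # use alt text as a stand-in caption
        if src and caption:
            pairs.append((src, caption))
    return pairs

# Hypothetical usage:
# pairs = collect_image_caption_pairs("https://example.com/some-article")
```

Scaled across tens of millions of domains, a collection process along these lines is what the copyright concerns discussed below are aimed at.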

DALL-E’s underlying datasets collect from over 60 million domains, including sources like the New York Times,[20] Reddit,[21] and the BBC,[22] each with its own copyright policy.[23] Given that the vast majority of content posted online is protected by U.S. copyright law, it is no wonder that legal scholars are wary of the potential for mass infringement that commercial use of DALL-E 2 might pose.[24] Typically, TDM is protected under the Fair Use Doctrine, which establishes a defense to copyright infringement based upon consideration of the following factors:

 (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

 (2) the nature of the copyrighted work;

 (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

 (4) the effect of the use upon the potential market for or value of the copyrighted work.[25]

OpenAI, in a Comment to the U.S.P.T.O., insists that the highly transformative nature of AI training justifies a finding of fair use against any potential infringement claims targeting its GPT-3 technology.[26] Notably, OpenAI’s Comment was written in 2019, when the prevailing Supreme Court precedent on the issue was Campbell v. Acuff-Rose Music, which held that a transformative purpose and character of the use is the most compelling of the four factors in a finding of fair use.[27] Since 2019, the Court has adjusted its analysis, stating that “the application of [the four-factor test] requires judicial balancing, depending upon relevant circumstances, including significant changes in technology.”[28] The novel commercial application of DALL-E 2 may be just the significant change that tips the judicial scales in favor of a finding of copyright infringement against OpenAI.[29]

The Future of Fair Use and Commercialized AI-Art

Courts have not yet considered the issue of copyright infringement for either the training or the output of AI-art generators like DALL-E 2, but that is likely to change.[30] Users of text-image generators like DALL-E 2 and Google’s Conceptual Captions tool have noticed generated images bearing watermarks from stock photo websites like Shutterstock, which plainly prohibits the use of data mining or similar image gathering and extraction methods in connection with its content.[31]

The current statutory regime surrounding copyright and Fair Use is inadequate to deal with questions of ownership and infringement of images used in the generation of AI-art.[32] Legal experts predict an influx of litigation as the industry expands, leaving companies like OpenAI and their users at the mercy of the courts to determine this issue of first impression.[33]

It is likely that the concept of transformative fair use will undergo its own transformation as the Supreme Court decides Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith later this year.[34] Legal experts in the AI field should watch this case closely for the implications it may have on TDM for AI text-image generators like DALL-E 2.


Benjamin Rodrigues

Benjamin Rodrigues is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media, and Entertainment Law Journal. He holds a B.A. in International Political Economy from Fordham University.