
Targeting Exceptions
Michal Lavi
Article


  The full text of this Article may be found here.

32 Fordham Intell. Prop. Media & Ent. L.J. 65 (2021).

Article by Michal Lavi*.

 

ABSTRACT

On May 26, 2020, the forty-fifth President of the United States, Donald Trump, tweeted: “There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.” Later that same day, Twitter appended an addendum to the President’s tweets so viewers could “get the facts” about California’s mail-in ballot plans and provided a link. In contrast, Facebook’s CEO Mark Zuckerberg refused to take action on President Trump’s posts. Only when it came to Trump’s support of the Capitol riot did both Facebook and Twitter suspend his account. Differences in attitude between platforms are reflected in their policies toward political advertisements. While Twitter bans such ads, Facebook generally neither bans nor fact-checks them.

The dissemination of fake news increases the likelihood of users believing it and passing it on, consequently causing tremendous reputational harm to public representatives, impairing the general public interest, and eroding long-term democracy. Such dissemination depends on online intermediaries that operate platforms, facilitate dissemination, and govern the flow of information by moderating, providing algorithmic recommendations, and targeting third-party advertisers. Should intermediaries bear liability for moderating or failing to moderate? And what about providing algorithmic recommendations and allowing data-driven advertisements directed toward susceptible users?

In A Declaration of the Independence of Cyberspace, John Perry Barlow introduced the concept of internet exceptionalism, differentiating it from other existing media. Internet exceptionalism is at the heart of Section 230 of the Communications Decency Act, which provides intermediaries immunity from civil liability for content created by other content providers. Intermediaries like Facebook and Twitter are thereby immune from liability for content created by users and advertisers. However, Section 230 is currently under attack. In 2020, Trump issued an “Executive Order on Preventing Online Censorship” that aimed to limit platforms’ protections against liability for intermediary-moderated content. Legislative bills seeking to narrow Section 230’s scope soon followed. From another direction, attacks on the overall immunity provided by Section 230 emerged alongside the transition from an internet society to a data-driven algorithmic society—one that changed intermediaries’ scope and role in information dissemination. These changes in the utility of intermediaries require a reevaluation of their duties; that is where this Article steps in.

This Article focuses on dissemination of fake news stories as a test case. It maps the roles intermediaries play in the dissemination of fake news by hosting and moderating content, deploying algorithmically personalized recommendations, and using data-driven targeted advertising. The first step toward developing a legal policy for intermediary liability is identifying the different roles intermediaries play in the dissemination of fake news stories. After mapping these roles, this Article examines intermediary liability case law and reflects on internet exceptionalism’s current approach and recent developments. It further examines normative free speech considerations regarding intermediary liability within the context of the different roles they play in fake news dissemination and argues that the liability regime must correspond with the intermediary’s role in dissemination. By targeting exceptions to internet exceptionalism, this Article outlines a nuanced framework for intermediary liability. Finally, it proposes subjecting intermediaries to transparency obligations regarding moderation practices and imposing duties to conduct algorithmic impact assessments as part of consumer protection regulation.


* Michal Lavi, Ph.D. (Law); Research Fellow at the Hadar Jabotinsky Center for Interdisciplinary Research of Financial Markets, Crises and Technology. I thank Emily Cooper. Special thanks are due to Daniel Levin, Laura Rann, Caroline Vermillion, and their colleagues on the Fordham Intellectual Property, Media & Entertainment Law Journal staff for their helpful comments, suggestions, and outstanding editorial work that profoundly improved the quality of this Article. I dedicate this Article to the memory of my mother, Aviva Lavi, who died suddenly and unexpectedly. My mother taught me to love knowledge and gave me the strength to pursue it. She will always be loved, remembered, and dearly missed.