
Is Artificial Intelligence on Social Platforms Actually Halting Movement Towards Diverse Representation?

Like most New Yorkers, I ride the subway almost every day. And like most New Yorkers, I pass the time on my phone, whether it be on social media, on a streaming platform, reading the news, or listening to music. Last week, I was stuck on a busy subway car that stopped to let another train pass, and, being the curious person I am, I took that moment to look around to see what others were doing. When I looked around, I noticed that the man next to me was on Bumble, the girl across from me was on Instagram, several people were on Twitter, and I was on TikTok. The subway is incredible because it does not discriminate; that is to say, A-list celebrities, students, people seeking shelter, bankers, and everyone in between share in its convenience. In many ways, the subway is the great equalizer in New York. Everyone has the same chance to squeeze their way on and, if they’re lucky, find a seat. In theory, the same applies to social platforms. Almost anyone with a smartphone can create an account, and most social platforms have some sort of feed that is curated for you based on where you fall in their algorithm. But how would you feel if you found out that these social platforms, marketed as beacons of free speech and equal opportunity to build a following or be seen, actually operate off of an algorithm that, in truth, reinforces oppressive social constructs and creates a new mode of racism and discrimination?

One would hope that artificial intelligence (“AI”) systems, serving as something of a futuristic frontier, would be indiscriminate and able to carry out tasks without the implicit bias that we, as humans shaped by our environment, are predisposed to. However, even though these systems can operate without human intervention, they had to have been created by humans, who, like all others, hold their own biases. Therefore, a lack of diversity and representation within a company that programs AI systems would surely lead to the creation of algorithms that are not adequately programmed to promote minority groups and marginalized communities.[1] One of the most publicized examples of this issue is the concern regarding the lack of racial diversity at Facebook, the owner of Instagram, and the consequent discriminatory algorithm it employs.[2] According to the company’s diversity report, less than four percent of the roles at Facebook are held by Black employees and about six percent are held by Hispanic employees.[3]

In 2019, two computational linguistic studies were published in which researchers discovered that AI intended to identify hate speech may in fact oppress BIPOC (Black, Indigenous, and People of Color) users.[4] In the first study, researchers found that tweets written in Ebonics, or other forms of speech commonly spoken by Black Americans, were twice as likely to be flagged as offensive when compared to other “plain English” speech.[5] A second study, conducted at Cornell University, found similar evidence of racial bias against Black voices using a sample of 155,800 tweets.[6] A proffered explanation given by both studies is that, unlike a human moderator, AI systems lack the ability to decipher nuanced messages or cultural particularities.[7] Not to mention, it is only natural that the algorithms, written and trained by people influenced by their own biases, carry on their creators’ discriminatory views. The result is that what is often viewed as an objective tool, an AI system, ends up perpetuating biases and silencing Black voices.

A widely subscribed-to principle in the field of social psychology is the idea that people are attracted to others who look like them, both romantically and socially, leading to an implicit in-group bias.[8] People with traditionally Anglo-American features are likely to gravitate towards others with similar features – this can be extrapolated to any race, gender, or ethnic group.[9] The algorithms that curate your “explore page” or “news feed” appeal to this inherent trend in hopes of increasing your time spent on the platform.[10] Let’s use TikTok as an example: if I, a white woman who loves dogs, politics, and coffee, sign up for TikTok, follow a few family members, and upload a picture of myself as the avatar, TikTok will more than likely fill my ‘for you page’ with its top creators (most of whom are white) and people who look similar to me. Now, when I start using TikTok, I linger on a couple of videos of dogs and I ‘like’ a video by @rebmasel, a young female corporate attorney who makes comedy videos that I find relatable.[11] My ‘for you page’ will quickly populate with dog videos and young white women in corporate jobs, both because they share my interests and follow @rebmasel and because all of them, in the half-inch circle on their profile, have a picture with some variation of a white woman in a suit in an office.[12] You might be thinking, “So what? The algorithm finds videos I’ll like and shows them to me. How could that be bad?” According to Marc Faddoul, an AI researcher at UC Berkeley’s School of Information, “TikTok’s algorithm will think it is creating a personalised experience for you, but actually it is just building a filter bubble – an echo chamber where you only see the same kind of people with little awareness of others.”[13] This is not a new accusation, either. In January of 2019, Whitney Phillips, a professor of communication and online media rhetoric at Syracuse University, argued that TikTok’s algorithm leads users to replicate the community of which they are a part.[14] This became especially problematic when the app reached new heights in late spring of 2020, as the COVID-19 pandemic intersected with the national uproar surrounding police brutality against Black men and the Black Lives Matter movement.[15] How is information that primarily affects an already oppressed community supposed to reach a broader audience if users are only seeing people similar to themselves? Further, according to Faddoul, “[p]eople from underrepresented minorities who don’t necessarily have a lot of famous people who look like them, it’s going to be harder for them to get recommendations.”[16]
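To make the feedback loop concrete, here is a minimal, purely illustrative sketch of an engagement-driven recommender. It is not TikTok’s actual system; the creators, feature weights, and similarity scoring below are hypothetical stand-ins. The point it shows is structural: each engagement pulls the user’s profile toward the creators she already resembles, so subsequent recommendations narrow into the same neighborhood.

```python
# Illustrative sketch only: a toy content-based recommender showing how
# engagement feedback can narrow recommendations into a "filter bubble."
# All creators, feature weights, and scores are hypothetical.
import numpy as np

# Each creator is described by hypothetical feature weights
# (e.g., [dogs, corporate humor, politics, dance]).
creators = {
    "dog_account":      np.array([0.9, 0.0, 0.1, 0.0]),
    "corporate_comedy": np.array([0.1, 0.9, 0.2, 0.0]),
    "news_explainer":   np.array([0.0, 0.1, 0.9, 0.1]),
    "dance_creator":    np.array([0.0, 0.0, 0.1, 0.9]),
}

# Profile seeded by initial signals (follows, avatar, early lingering).
user_profile = np.array([0.4, 0.4, 0.1, 0.1])

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(profile, k=2):
    """Rank creators by similarity to the user's profile and return the top k."""
    ranked = sorted(creators.items(), key=lambda kv: cosine(profile, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Simulate a few sessions: the user engages with whatever is recommended,
# and each engagement pulls the profile toward that creator's features.
for session in range(3):
    recs = recommend(user_profile)
    print(f"session {session}: recommended {recs}")
    for name in recs:
        user_profile = 0.8 * user_profile + 0.2 * creators[name]

# The profile converges on the creators it started closest to; creators
# outside that neighborhood are rarely surfaced, regardless of quality.
```

In this toy loop, the “news_explainer” and “dance_creator” accounts are never recommended after the first sessions, which is the echo-chamber dynamic Faddoul and Phillips describe: the system optimizes for similarity to what the user already engages with, not for exposure to anything outside it.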

While this may be a particularly pressing issue in light of the push in recent years towards increasing BIPOC representation and calling for greater transparency and accountability surrounding systemic racial disparities in our country, at the end of the day it boils down to a persistent and pervasive legal issue: censorship.[17] With more discourse, sharing of ideas, and exchange of knowledge taking place online, social platforms have the power to mediate these conversations, both on public feeds and in private messaging.[18]

Because social platforms allow us to communicate seamlessly with people around the world, they became many people’s main mode of communication, especially during the worldwide COVID-19 lockdown.[19] Many feel that increased content moderation is necessary, citing that censorship on these platforms occurs “to protect one user from another, or one group from its antagonists, and to remove the offensive, vile, or illegal – as well as to present [platforms’] best face to new users, to their advertisers and partners, and to the public at large.”[20] It seems relatively straightforward: social platforms employ teams of moderators who apply the platform’s guidelines to content and determine whether a post remains public based on its adherence to those guidelines.[21] However, this becomes more complicated as we consider things like context, tone, sarcasm, satire, and cultural differences.[22] If a platform’s guidelines ban hate speech, for example the use of derogatory terms, the massive scale at which moderators must review content could lead them to ban a user who is a member of a marginalized group reclaiming the term, while allowing an extremist to stay on the platform because the moderator did not “catch” them using the same word.[23] Due to the massive number of users on social media, it would be nearly impossible for their parent companies to employ enough moderators to meet the demand for filtering.[24] Rather, these still-emerging social platforms rely on AI algorithms to carry out the majority of moderation, frequently relying on a long list of prohibited terms to remove content in a rather “crude and unsophisticated” manner.[25] These automated techniques intensify existing concerns about free speech and user privacy while also providing rich new sources of data for monitoring, posing new threats to freedom of expression, associational rights, religious liberty, and equality.[26] Rather than allowing users to share their content with a larger audience, the automation of information spreading has created a new form of surveillance and speech regulation veiled in an accessible and privatized system.[27] The hash database for terrorist and violent extremist content, like AI copyright enforcement, is “context-blind”; as Daphne Keller put it, “an ISIS video looks the same, whether used in recruiting or in news reporting.” As a result, journalistic organizations, human rights defenders, and dissidents who attempt to uncover and comment on atrocities may be disproportionately harmed by the hash database.
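The “crude and unsophisticated” prohibited-term approach described above can be illustrated with a short, hypothetical sketch. The term list and example posts below are placeholders, not any platform’s actual rules; the point is that a context-blind filter treats reclamation, reporting, and targeted abuse identically.

```python
# Illustrative sketch only: a naive prohibited-term filter of the kind the
# article calls "crude and unsophisticated." The term list and posts are
# hypothetical placeholders; real systems are larger but share the same
# context-blindness.
PROHIBITED_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not real terms

def flag(post: str) -> bool:
    """Flag a post if any prohibited term appears, regardless of context."""
    tokens = {t.strip(".,!?\"'").lower() for t in post.split()}
    return bool(tokens & PROHIBITED_TERMS)

posts = [
    "Proud to reclaim slur_a as part of our community's history.",  # in-group reclamation
    "Reporting on extremists who chanted slur_a at the rally.",     # journalism / commentary
    "You are all slur_a and should leave this platform.",           # actual targeted abuse
]

for p in posts:
    print(flag(p), "->", p)

# All three posts are flagged identically: the filter cannot distinguish
# reclamation or reporting from abuse, which is the moderation failure
# the article describes.
```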
Social platforms are employing their own content moderation policies in ways that, if not outright discriminatory in how they suppress BIPOC voices, are at the very least underinclusive and underrepresentative of minority groups.[28] The obscurity of content moderation policies, algorithms, and judgments affects how online users perceive and experience participation.[29] The apparent arbitrariness of moderation judgments, particularly algorithmic moderation, may increase the need for disclosure to promote credibility.[30] Users have developed what Sarah Myers West has tactfully likened to “folk theories” surrounding moderation in response to ambiguous judgments about user-generated content.[31] More transparency may increase user trust in the system, but this is contingent on how useful the disclosures are. Conversely, providing substantive transparency for ex ante AI moderation techniques would jeopardize their effectiveness, leading to greater ex post screening.



Madison Rosenthal

Madison Rosenthal graduated in 2019 from the University of Southern California with a degree in Psychology. Her decision to attend law school stemmed from her work abroad for REPEAL, a nonprofit at the forefront of the coalition to establish the right to women's bodily autonomy in Ireland. She spent over two years at the ACLU of Southern California before moving to New York where she now attends Fordham Law.