Social Media Manipulation: Catfish and Social Bots
In October, excited social media influencers walked the red carpet of a posh, up-and-coming store called “Palessi.”[1] They sampled the wares and, in the first few hours, bought a few thousand dollars’ worth of shoes, with one influencer paying $640 for a single pair.[2] In fact, unbeknownst to the partygoers, they were being manipulated by Payless, an affordably priced shoe company, as part of a social experiment.[3] Payless used the influencers to prove that its low-priced shoes were comparable to higher-priced brands.[4]
Although it was a case of no harm, no foul, since the influencers received their money back and got to keep the shoes, Payless’s experiment highlights the malleability of public perception. It also exposes the dangerous practice of “catfishing” – creating a fictitious online identity to deceive a third party.[5]
Although Payless pretended to be a luxury brand for a benign purpose, sinister characters can harness social media in more detrimental ways. For example, a study conducted on behalf of the military found that, with just $60 and open-source data – like Facebook profiles – soldiers could be catfished into acting contrary to military orders.[6]
The detrimental effects of catfishing can be amplified by software-controlled profiles known as social bots.[7] Instead of creating just one fictitious persona, an actor can use software to generate thousands of them. These fake accounts mimic humans and interact with legitimate users to shape public opinion.[8] Social bots have been found to spread misinformation on many important topics, like vaccinations and politics.[9]
States have been struggling to keep pace with technological developments and to respond adequately to fake social media activity.[10] For the first time, the New York Attorney General’s office has found that creating fake social media posts and comments to generate revenue constitutes illegal deception and impersonation.[11] California recently passed a bill that will require companies to disclose whether they are using a bot to communicate with the public.[12] The bill goes into effect on July 1, 2019.[13] Critics, however, point out that the bill raises First Amendment problems, since it broadly regulates speech on the internet.[14]
Companies have been taking matters into their own hands,[15] and Americans tend to agree with this approach. A poll by the Knight Foundation found that 46% of Americans believe it is tech companies’ responsibility to regulate misinformation, while only 16% believe it is the government’s responsibility.[16] Pinterest has taken an extreme approach to curbing misinformation: it blocks all searches related to vaccinations to prevent fearmongering.[17] YouTube announced it would remove videos with “borderline content” that is detrimentally misleading.[18] Although allowing companies to tackle misinformation directly would circumvent the First Amendment issues, it does not completely solve the misinformation problem.
First, who makes the ultimate determination of the factual veracity of information? We don’t live in a binary world; there is a world of grey between information and misinformation. Second, how can we ensure that tech companies can be trusted to regulate information properly?
Consumers want transparency, but tech companies are profit-oriented, and these two goals directly conflict. The Cambridge Analytica scandal in 2018[19] and Facebook’s settlement agreement with the Federal Trade Commission in 2011[20] both illustrate that tech companies will misuse information about their consumers to turn a profit. It’s not hard to imagine tech companies using misinformation as a pretext for regulating content in a manner that further increases their profits.
Another approach to combating misinformation that wouldn’t implicate the First Amendment, and that wouldn’t rely on tech companies, is boosting media literacy. After a Stanford University study found that 82% of middle school students were unable to differentiate news content from ads, California passed a bill to encourage media literacy.[21] Media literacy refers to the process of critically evaluating messages and information produced by the media.[22] Critics question whether placing that responsibility on individuals is reasonable when personal information is increasingly easy to access and the source of information isn’t always readily apparent.[23]
Social media has increased the amount of information on the web, which in turn has allowed devious characters to use misinformation to manipulate the public. Sifting through the wild west of truth and falsehood can be difficult even for critical consumers. Lawmakers eager to keep the law relevant as technology rapidly develops must be wary of crossing the line into censorship or creating new hazards.
Footnotes