
How Public is Too Public? What Your Social Media is Being Used for Without Your Knowledge

We’ve heard it time and time again: be careful what you post on social media. At this point, most people are aware that what they share is viewable by the general public, depending on their privacy settings, and that many strangers could be looking at their pictures, videos, and musings. In fact, many people posting on social media probably want their posts to be seen by as many people as possible. However, we often forget just how much information about us there is online and disregard the ways in which people use, not just view, what we put online.

In fact, the amount of information out there has become a gold mine for nefarious actors who want to use it to target people or companies; even those of us who aren’t influencers with thousands of followers are not safe. Human hackers, bad actors who use people’s personal information against them for some end goal, start by identifying targets, and when their target is a company, they generally look for its weak links.[1] The easiest place for threat actors to start is the company’s website, where the “About” section may give them a sense of potential targets.[2] Threat actors will also review job postings, particularly for IT roles, because those often describe the systems the company uses.[3] They will review Glassdoor (to understand employee morale: are people content, or disgruntled and less likely to be loyal?), social media (pictures taken in the office often reveal its layout), and Google Earth (which shows the surrounding area, so they can pinpoint places employees may go after work or during lunch in case they want to intercept them and meet in person).[4] They can use LinkedIn and other social media to build a profile of a person and understand their motivations and vulnerabilities.[5]

This allows human hackers to create specific, targeted campaigns, more akin to spear phishing than to what we think of as a traditional phishing scam.[6] The threat actor might send a spear-phishing email that appears to come from an organization the individual belongs to, with specific details about their account there, encouraging the person to click on any links provided.[7] It has been estimated that at least 10% of all LinkedIn profiles are fake, created to run scams on unsuspecting, trusting people.[8] Other ploys threat actors use are smishing (texting malicious links) and vishing (call spoofing, where human hackers pretend to be someone you know in order to extort money).[9] Using deepfakes and AI voice-cloning techniques, threat actors need only a small sample of an individual’s voice to imitate that person saying whatever they want.[10] All human hackers need to do is survey the abundant public information, build a profile of their target, and come up with a plan designed just for them.[11]

To some, the above is obvious: there is always a risk that bad actors will try to use your information against you in a scam. That is why employers provide training on cybersecurity attacks like phishing and vishing, so that employees recognize these attacks and don’t fall prey to them, even as they grow more sophisticated. But there are other actors out there scraping your data for other, legal purposes, and they are benefiting from you without your direct knowledge or authorization.

For example, data brokers often make money by scraping data from the internet.[12] This includes personal information such as your name, address, and phone number, as well as your location data, financial data, and more.[13] ClearviewAI is (thus far) the most successful company in the United States engaging in this type of activity, and it has been at it for years; the company was founded in 2016 and has been working toward this goal ever since.[14] Right now, ClearviewAI sells its platform to police departments to help find criminals: officers can upload a picture of a suspect and use the results ClearviewAI pulls to identify them.[15] Police departments are proud of the arrests they’ve made; however, most of their success has come from finding suspects in low-level crimes, misdemeanors, and traffic violations.[16] And while the system is scarily accurate, it’s not perfect: relying on faulty matches from ClearviewAI, police have mistakenly arrested individuals who were guilty of no crime.[17]

If the implications of that were not scary enough (keep in mind, there is the chance that bad actors within police departments will abuse this technology for their own ill intent), ClearviewAI’s goals go further.[18] Ultimately, the creators want this technology to be available in an app that anybody can use to take a picture of a random person on the street, upload it, and find out who they are, where they live, and any other information about them that is public (but was never meant to be that public).[19] The only reason ClearviewAI hasn’t done this yet is that they’re being cautious, but there isn’t anything that would legally prevent them from doing so.[20]

Scraping public data from the internet is, in fact, legal.[21] Therefore, individuals have very little recourse against data brokers or organizations like ClearviewAI that have scraped and are using this data, because once information is out on a public forum, it is no longer considered private. Even so, that doesn’t mean everyone who posts something publicly consents to its being used in any which way, or to companies profiting from it. What can be done? There may be hope on the horizon: California recently passed and signed into law the California Delete Act, which directly targets data brokers, imposing stricter requirements on what they can do with the data they scrape and on how they must report it.[22] And the more exciting piece: this legislation might even have implications for organizations like ClearviewAI. The act defines data brokers very broadly: a “data broker” under the law “means a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship.”[23] Could ClearviewAI be construed as selling this data to third parties? Some may view this as a stretch and say that ClearviewAI is selling the platform, not the data itself. However, there very well may be creative legal arguments that place ClearviewAI within the purview of the Delete Act and increase restrictions and reporting requirements on the app, at least for residents of the state of California.

Although posting pictures and information publicly implies that we are comfortable giving up our right to privacy, there is a difference between willingly sharing information with people who will engage with it on the platform and being comfortable with unrelated third parties taking that information and using it for their own profit. Very slowly, it seems, legislation in the United States is trying to catch up to the industry and take back some control. In the meantime, however, we simply have to remember: our information is being used by many more people than those who are viewing it on our platforms. So, it probably is a good idea to be careful what you post.


Taylor Veracka

Taylor Veracka is a second-year J.D. candidate at the Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. She is the Co-President of the Fordham Information Law Society. Taylor holds a dual B.A. in International Studies and Film & Media Studies from the Johns Hopkins University.