
Can Machines Have Rights?


I propose to consider the question, ‘Can machines have rights?’[1] Firstly, there exists a spectrum of artificial intelligence, with the weak end termed ‘narrow AI’ and the powerful end ‘strong AI.’[2] Narrow AI is what we encounter today: machines with limited cognitive functionality.[3] These are machines created to perform a specific task and that alone.[4] Strong AI refers to machines that have intelligence or consciousness matching that of a human.[5] This sort of AI does not exist today, but perhaps it will one day in the future.[6] This initial theoretical question is intended to address strong AI.

Secondly, the term ‘rights’ can hold a plethora of meanings. For the purposes of the opening question, I use the term to refer to legal rights, such as the constitutional right to be free from unreasonable searches and seizures,[7] or the right to be the holder of a copyright.[8]

Two approaches to whether machines can have rights immediately come to mind. First, if legal rights are inextricably linked to beings that feel emotions, then because AI is not an emotional being, it will never be deserving of rights.[9] Let’s call this the “emotion-based” theory of rights. However, the possibility that AI will one day have emotional responses like those of a human must be considered. Second, if legal rights are linked to beings with high levels of intellect, then because AI may reach the required level of intellect, it might one day be deserving of rights, no different than a human. Let’s call this the “intelligence-based” theory of rights.[10]

Of course, there are humans that don’t have high levels of intelligence, and there are humans that are not capable of emotional response. Nonetheless, they have rights. Why is this so? Perhaps it is because they belong to a species that has rights, so they too hop on the bandwagon. This feels wrong intuitively, though. They have rights on their own merit, I believe. Why? The religious among us might say this is because of their inherent godliness. The secular might base it on other moral or societal considerations.

But what about AI? Imagine a robot capable of conversing with you, of holding a job of its own, of essentially going about day-to-day life in much the same way as you or I. Now imagine this robot applies for a job at a clothing retailer, and the employer says, “We don’t hire your kind. As a matter of fact, we won’t even sell our clothes to you. Now get out.” If this happened to a human, the human’s rights would certainly have been violated. Were the robot’s rights violated as well?

If this happened to a robot, it likely would not feel degraded, or feel it was violated in some way. After all, it is not an emotional being. Nonetheless, it is capable of thinking intelligently and of problem solving much like an ordinary human.

What may make this question even more confusing is the possibility of AI actually feeling emotions. If this were the case, should we then all agree that it is deserving of rights? Now, regardless of whether we focus on the “intelligence-based” or the “emotion-based” theory of rights, the AI seems to be deserving.

At a more practical level, if a human created (high-level, emotional, and intelligent) AI which then went ahead and created a work of art, should the human creator own the art, or should the AI own it? Today, intellectual property law is unclear about whether the human or the AI owns such art.[11] This ambiguity is partially caused by the statutory language in the America Invents Act, which uses the word “individual” instead of a more inclusive term.[12] If, in the future, humans can create AI with the requisite intelligence, should this debate weigh more favorably on the side of the AI?

There are no easy solutions to these questions. However, perhaps we should be open to the idea of extending legal rights to strong AI, despite any initial reluctance one might feel toward such a proposal.


Akiva Mase

Akiva Mase is a second-year J.D. candidate at Fordham University School of Law and a staff member of the Intellectual Property, Media & Entertainment Law Journal. He holds a B.S. in Individualized Studies from Fairleigh Dickinson University.