

Difficult to Discern
The first proposal makes an important assumption: that at least some theoretical future artificial intelligences will possess definable consciousness. This means that the first, and possibly most important, step is to develop a comprehensive way of defining such consciousness. Schwitzgebel and Garza agree that “developing a good theory of consciousness is a moral imperative” (115), defining consciousness as “a genuine stream of subjective experience” (114). It is very difficult, however, to draw a line that adequately establishes the parameters of such a definition. Santosuosso offers “Giulio Tononi's Integrated Information Theory (IIT) of consciousness,” which claims that consciousness “varies in quantity and comes in many degrees” (216). While still not a hard line, IIT lays the groundwork for developing the all-important criteria for determining when an entity might be defined as conscious.
Once we have a comprehensive way of defining sentient entities, we must be willing to extend basic rights to them. The UN Declaration of Human Rights states that “those characteristics of human beings which distinguish them from animals, [are] reason and conscience” (Santosuosso 214). It is important to note the difference between “conscience” and “consciousness.” While the two words are similar in spelling, a conscience is a moral center, whereas consciousness is awareness or sentience. However, possession of a conscience implies consciousness, “whose meaning [Santosuosso] here assume[s] to be coincidental” (215). If we can empirically point to an AI and claim that it can reason and has a conscience, and by extension consciousness, then there are fairly widely accepted grounds for giving it more rights than animals receive. Furthermore, when considering what might possess rights, “purely biological otherness is irrelevant unless some important psychological or social difference flows from it” (Schwitzgebel 107).
To protect the rights of such entities, a Universal Declaration of Consciousness should be developed, one that would extend basic rights to all entities falling within a certain definable degree of consciousness. This would be similar to “the 1998 Declaration of Human Duties and Responsibilities,” which extends “rights to personal and physical integrity,” rights to “meaningful participation in public affairs,” “rights to a remedy,” rights to “quality of life and standard of living,” rights to “education, arts and culture,” the right to “equality,” and the “freedom of opinion, expression, assembly, association and religion” (322).
Drum roll please! Remember that the criteria for the proposals are feasibility, acceptability, ethics, and fairness. For this first proposal, I have judged the following in each criterion:
Feasibility – Moderate
Justification: In the near future, when AIs are more prevalent, rules and regulations will need to be written. Such regulatory measures may eventually move on to address the potential for sentient artificial intelligences. This process will likely take a reasonable amount of effort, time, and resources to implement. However, given the size and scope of the cause, I don’t think it would be considered an inordinate amount, hence the moderate rating.
Acceptability – Moderate
Justification: As AIs become more human-like, I am confident that they will connect with the general populace emotionally, effectively winning the hearts and minds of humanity. Certainly, there will be strong objections to a plan that grants robots such extensive liberties, but as with so many civil rights movements, I believe a successful outcome is inevitable.
Ethics – Excellent
Justification: Ethically, this passes my moral test; if animals have limited rights, then a sentient, if artificial, creature must have some as well. As per our criteria, there is no murder or stealing, but charity and goodwill reign supreme.
Fairness – Excellent
Justification: This proposal suggests a universal declaration very similar to existing ones, simply changing the wording to be more inclusive. Such a document would protect humans and robots alike, making it exceedingly fair for both parties.
