I wonder how many people have thought about the ethical issues that may be raised by systems that embody artificial intelligence and that are possibly conscious (though I would guess that anyone who has seen Spielberg & Kubrick’s AI might have). There seems little doubt that humans will eventually, perhaps even soon (on a time scale of tens of years), create artificial systems that exhibit intelligence. The question, of course, will be whether they are conscious.
Now, as everyone who has taken a college philosophy class knows, we only infer the consciousness of other humans (and, to some more questionable extent, that of other animals). Each of us directly knows only our own consciousness (thank you, René Descartes). Consciousness is subjective, and I do not see any way it can be objectively proven to exist in another entity. While we may all regard it as virtually certain that other biological systems have it, it is by no means clear how we could determine whether a human-made system is conscious. That is, whether it has a sense of self. Or, to put it in the terms that present-day students of consciousness use, whether a system has that peculiar feeling of what it is like to be someone.

But no doubt it will eventually come to seem that some artificially produced systems do have it. How then will we deal with them? Will it be ethical to deny them the rights that humans have? Will it be right to use them as servants or as slaves? Will it be right to terminate their existence, scrap them, or swap out their central processors (or whatever would serve as a brain)? Can they be punished for “crimes”, or will they be considered to lack free will and hence be immune from punishment? These questions are only a partial list of the issues we could come up with. Spielberg & Kubrick’s entertaining film aside, how many philosophers have begun to address these issues?