I wonder how many people have thought about the ethical issues that may be raised by systems that embody artificial intelligence and that are possibly conscious (though I would guess that anyone who has seen Spielberg & Kubrick’s AI has). There seems little doubt that humans will eventually, perhaps even soon (on a time scale of tens of years), create artificial systems that exhibit intelligence. The issue, of course, will be: are they conscious?
Now, as everyone who has taken a college philosophy class knows, we only infer the consciousness of other humans (and, to some more questionable extent, that of other animals). Each of us directly knows only our own consciousness (thank you, René Descartes). Consciousness is subjective, and I do not see any way it can be objectively proven to exist in another entity. While we may all regard it as virtually certain that biological systems have it, it is by no means clear how we could determine whether a human-made system is conscious; that is, whether it has a sense of self, or, to put it in the terms that present-day students of consciousness use, whether it has that peculiar feeling of what it is like to be someone.

But no doubt it will eventually come to seem that some artificially produced systems do have it. How then will we deal with them? Will it be ethical to deny them the rights that humans have? Will it be right to use them as servants or as slaves? Will it be right to terminate their existence, scrap them, and swap out their central processors (or whatever serves them as a brain)? Can they be punished for “crimes”, or will they be considered to lack free will and hence be immune from punishment? These questions are only a partial list of the issues we could all come up with. Spielberg & Kubrick’s entertaining film aside, how many philosophers have begun to address them?
3 comments:
Tom, since humans will likely be the creators of any AI capability, the creator is the only one who could even conceivably be responsible for actions that occur, physically or mentally, within the AI's sphere of influence. Even when the AI "being" draws intelligence, experience, etc. from beyond the creator and puts this to use, the creator gave it that ability.
Now some may say this is no different than a God creating a being. But unless the AI somehow morphs into a biological being, taking its technological (electronic) memory and experiences with it, the AI will most likely not be capable of doing more than cloning in the strict sense and operating in the pre-programmed sense, even if it acquires new skills and knowledge on its own. And while an AI clone may acquire information apart from and different from the original, and may even be superior to the original AI parent, its acquiring ability will still have been enabled by the original scientific creator.
The creator will be as responsible for an AI's misdeeds as for their own. Should the AI get beyond control, the State will be forced to step in.
Consciousness and self-awareness are present in people and likely in animals, though when animals or people kill for sport, some may think something has gone off the scale. If an AI goes off the rails, that behavior has some original cause other than the enzymes, hormones, experiences, mothering, etc. that a biological creature receives. Almost like an original sin of the creator.
Fred, NM
Thanks for a very interesting comment, Fred. You raise a bunch of juicy points.
The Abrahamic religions hold, of course, that God made man, and your interpretation that moral culpability for a created system lies with its creator would seem to imply that those religions should consider God the guilty party for the evils done by humans. Theologians have of course denied that, on the grounds that man has free will and chose, of his own volition, to go wrong. But couldn't we imagine arguing the same thing here with respect to AI systems? That is, in some distant future where we succeed in building a robotic creature with consciousness and free will, wouldn't the creature, not its creator, be responsible for any misdeeds the creature does?
At present this question is an idle one, since such systems are not possible. Maybe they never will be, although I am guessing that robots will eventually be created that seem to be conscious. And may we assume that consciousness is somehow intimately related to having free will? I am inclined to think so, even though I have to admit I cannot make any sense of the concept of free will. Well, in any case, we all seem to assume that we have it; otherwise, why would it make any sense to punish criminals or to assign blame to anyone?
Consider the case of a machine that is obviously non-conscious, as virtually all of them are at the present state of the art: lawn mowers, power saws, guns, and so on. You are certainly right in saying that the blame for any injury or evil such a machine causes lies with its creator (or, in many cases, its owner). Indeed, this is even the case with animals owned by humans. If a vicious dog breaks loose from its owner and bites or mauls someone, the owner is, quite rightly, held responsible and punished (unfortunately, the animal usually pays the price as well and is destroyed). If a computer-based missile system runs amok and launches lethal attacks on innocent people, we would of course blame the designer of the computer-missile system. At least, this would be the case with any computer system we might imagine existing today or in the immediate future. But consider a “HAL”-like system (from the film 2001): in such a case it might not be so clear where to assign the blame.
Imagine some distant day in the future when a robot that seems conscious and possessed of free will is turned loose from its bondage as a slave or servant, wanders off, and years later commits a crime. Is it so terribly clear that its owner or creator should then be tracked down and punished?
Numerous works of fiction have explored issues like these. Mary Shelley’s brilliant novel Frankenstein comes to mind. Of course, the films AI, Blade Runner, and 2001 have also illuminated these future ethical problems. Let me add that none of them portrays humankind’s response in a very good light. Let us hope that we can do better than our fictional representatives.
There's a lot of philosophical "artificial intelligence" info on the www. I don't find nearly as many Google hits when I search for the phrase "artificial consciousness". (This blog thread is CLEVERLY concealed by misspelling consciousness--"artificial consciuosness" yields 4 hits.) Other fruitful search terms are "machine intelligence" and "strong AI".
You'd think that there would be a lot more about ethics, morals and machine consciousness lately, since researchers seem to agree that humans are making headway in this field. (I still think it's handy and satisfying to blame God for giving us the power to invent things that can invent things.)