Post by Haukur Jónasson on Jan 23, 2009 19:31:50 GMT -5
Time travelling robots aside, we all know the scenario. It's a recurring theme in science fiction.
Humans create a sentient robot/AI and give it too much power, trusting in its programming. All goes well until the AI's system of learning and deduction leads it to conclude that humans are inferior, and it starts a genocidal war against its creators.
On the other hand, there are Asimov's Laws:
- 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- 2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'm not sure where I'm going with this, but it feels like a relevant topic for this group of people, and not one likely to be discussed in class, so I thought I'd go ahead and start a discussion here. Some points to consider:
- Is this scenario ever likely to come up? Do programmers of advanced AI have reason to be careful, perhaps even hard-coding Asimov's Laws into their creations? (A rough sketch of what that might look like follows this list.)
- Can military funding of AI affect this? Will robots be used for war as soldiers (they are already used for tasks like disarming mines in war zones), thereby breaking the First Law? Is that preferable to risking human lives on the front lines?
- Is it ever acceptable for an AI to go against its initial programming, whether or not doing so results in harming humans?
- Is making its own decision like that perhaps the ultimate test of whether an AI is truly thinking? Why or why not?
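On the hard-coding question, here's a rough sketch (Python, entirely hypothetical; every name and predicate in it is my own invention) of what the Laws look like if you read them literally as an ordered veto over candidate actions. It mostly illustrates the catch: the code assumes something else has already answered "would this harm a human?", and that prediction is the genuinely hard part.

```python
# Toy, purely hypothetical sketch of "hard-coding" the Three Laws as an
# ordered veto. All the predicates (would this harm a human?) are assumed
# as given -- predicting them is the real problem this sketch glosses over.

def permitted(harms_human, allows_harm_by_inaction,
              ordered_by_human, self_destructive):
    # First Law: overrides everything else.
    if harms_human or allows_harm_by_inaction:
        return False
    # Second Law: a (First-Law-safe) order must be obeyed, even if
    # carrying it out endangers the robot itself.
    if ordered_by_human:
        return True
    # Third Law: otherwise, the robot protects its own existence.
    return not self_destructive

# A human orders the robot to disarm a mine: dangerous to the robot,
# but the Second Law outranks the Third, so the action is permitted.
print(permitted(harms_human=False, allows_harm_by_inaction=False,
                ordered_by_human=True, self_destructive=True))  # True
```

Notice what it implies for the war question above: an ordered, self-destructive mine-disarming job sails straight through, while "fire at that soldier" is vetoed outright no matter who gave the order.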