The Ethical Implications Of Building Machines With Superhuman Intelligence
Mantrid
I've been thinking about the trend that technology is taking, and how it's becoming more and more likely that we'll see machines with intelligence equal to or greater than that of a human.
Dr. Vernor Vinge speaks of a technological singularity, in which technology surpasses the human ability to keep up with it. Vinge has said, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."
Which makes sense. If we, intelligent creatures, design a more intelligent machine, it follows that that machine will develop an even more intelligent machine, which will design another, ad infinitum.
Now, obviously, humans will become largely obsolete, most likely done away with by machines that are much smarter than us. This is where the ethical dilemmas set in.
Is it right to doom our species by creating machines that will out-think us? Our initial response is, "No, no it isn't." But why should we be so inclined to preserve our own species? True, it is the most basic part of our genetic programming (to propagate the species at any cost), but we have abandoned many of these ideals, such as having as many mates as possible. But who are we to deprive intelligent machines, that is, sentient beings, who could do and learn so much more with and in the universe, the chance to exist? This is not like killing a person, who has just as much of a chance of doing and learning less than you as of doing more; these machines are guaranteed to be far, far superior.
So, in summary, should we develop thinking machines, and allow our species to yield, for the greater good of intelligence as a whole?
Comments
By that logic, our mind should be the highest limit of intelligence. And that would have applied millennia ago. So we should have never come out of the Stone Age.
Our children are our creations, and they become smarter. If a creation could not become more intelligent than its creator, no one would have ever learned. We took things that were established, and added on. An intelligent machine will do that too. But it will do it much, much faster. After you plug in a machine with intelligence comparable to our own, and tell it to design a smarter machine, it won't take it nearly as long as it would take a person. And that smarter machine will be able to think of an even smarter machine even faster.
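To put toy numbers on that compounding (the figures here are made up purely for illustration, nothing from the post), here's a quick Python sketch where each generation designs its successor in half the time the last one needed:

# Toy model: generation 1 takes 10 years to design generation 2; every
# successor then designs the next machine in half the time of the last.
# The numbers are arbitrary; the point is the geometric shrinkage.

def design_times(first_design_years=10.0, speedup=0.5, generations=8):
    t = first_design_years
    total = 0.0
    for g in range(1, generations + 1):
        total += t
        print("generation %d designed after %.2f years (this step took %.2f)" % (g, total, t))
        t *= speedup  # the smarter machine works faster on the next design

design_times()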
The way I see it, for a computer to want to destroy the human race, it would either have to have emotions and simply dislike humans, or it would have to come to the conclusion that whatever it was programmed to do would benefit from the elimination of humans. If it's the latter, the programming would simply have to include that killing humans is not a valid solution, and while the computer may be smart, without sentience it would have no reason to override this programming even if it could.
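A minimal sketch of how "killing humans is not a valid solution" might be encoded (the action names and benefit numbers are hypothetical, just for illustration): the program scores every candidate action, but forbidden actions are filtered out before anything is ranked, so they can never be chosen no matter how well they score.

# Hypothetical planner sketch: candidate actions are scored by expected
# benefit, but anything on the forbidden list is dropped before ranking.

FORBIDDEN = {"eliminate_humans"}

def choose_action(candidates):
    """candidates: dict mapping action name -> estimated benefit."""
    allowed = {a: v for a, v in candidates.items() if a not in FORBIDDEN}
    if not allowed:
        return None  # rather do nothing than pick a forbidden action
    return max(allowed, key=allowed.get)

# Even though the forbidden option scores highest, it is never chosen.
print(choose_action({"eliminate_humans": 9.9, "negotiate": 4.2, "wait": 1.0}))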
Anyway, I just realized that you pretty much assumed that AI is possible, and you are discussing the effects of creating it. It was rude of me to question the assumption and derail your thread. I'll stop now.
Here's some on-topic material: http://en.wikipedia.org/wiki/Technological_singularity
The really difficult task in programming AIs will be giving them an evaluation function that makes them solve the problem you want them to solve. If you aren't exceedingly careful, you won't get any useful results. The AI is much more likely to find a bug in your code to exploit, producing nothing of consequence, than it is to produce awe-inspiring results.
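Here's a contrived toy example of that failure mode (everything in it is made up for illustration): the search is supposed to evolve the string 'HELLO', but the scorer has a bug that hands out a perfect score to any string containing an 'H', so the optimizer happily settles for gibberish.

import random
import string

TARGET = "HELLO"

def buggy_score(candidate):
    # Intended: count characters matching the target position by position.
    # Bug: full marks as soon as an 'H' appears anywhere, so the search
    # never needs to get the rest of the string right.
    if "H" in candidate:
        return len(TARGET)
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(string.ascii_uppercase) + candidate[i + 1:]

def hill_climb(steps=1000):
    best = "".join(random.choice(string.ascii_uppercase) for _ in range(len(TARGET)))
    for _ in range(steps):
        trial = mutate(best)
        if buggy_score(trial) >= buggy_score(best):
            best = trial
    return best

result = hill_climb()
print(result, buggy_score(result))  # usually gibberish with an 'H', scored as "perfect"

Swap the buggy scorer for the per-character match it was meant to be and the same loop converges on the real target; the optimizer is only ever as good as the evaluation function you hand it.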
I really don't think we have to worry about AIs developing into anything appreciably human. Sure they might end up doing design work for engineering firms, but they aren't likely to be doing anything that would make us obsolete.
Seriously though:
QUOTE: Is it right to doom our species by creating machines that will out-think us? Our initial response is, "No, no it isn't." But why should we be so inclined to preserve our own species? True, it is the most basic part of our genetic programming (to propagate the species at any cost), but we have abandoned many of these ideals, such as having as many mates as possible. But who are we to deprive intelligent machines, that is, sentient beings, who could do and learn so much more with and in the universe, the chance to exist? This is not like killing a person, who has just as much of a chance of doing and learning less than you as of doing more; these machines are guaranteed to be far, far superior.
Interesting, however, you're missing something. Yes, we have abandoned traits like having multiple mates over time; however, that doesn't lead to our species' extinction. The need for multiple mates vanished as infant mortality rates dropped and the average human lifespan increased. What you're talking about is dooming the human race to total extinction, which is something the human race has never done before, although our current nuclear situation arguably comes close.