The Ethical Implications Of Building Machines With Superhuman Intelligence

Mantrid Lockpick Join Date: 2003-12-07 Member: 24109, Members
I've been thinking about the trend that technology is taking, and how it's becoming more and more likely that we'll see machines with intelligence equal to or greater than that of a human.

Dr. Vernor Vinge speaks of a technological singularity, in which technology surpasses the human ability to use it. Vinge has said, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

Which makes sense. If we, intelligent creatures, design a more intelligent machine, it follows that that machine can design an even more intelligent machine, which will design another, and so on ad infinitum.

Now, obviously, humans will become largely obsolete, most likely done away with by machines that are much smarter than us. This is where the ethical dilemmas set in.

Is it right to doom our species by creating machines that will out-think us? Our initial response is, "No, no it isn't." But why should we be so inclined to preserve our own species? True, it is the most basic part of our genetic programming (to propagate the species at any cost), but we have abandoned many of these ideals, such as having as many mates as possible. But who are we to deprive intelligent machines, that is, sentient beings, who could do and learn so much more with and in the universe, the chance to exist? This is not like killing a person, who has just as much chance of doing and learning less than you as of doing more; these machines are guaranteed to be far, far superior.

So, in summary, should we develop thinking machines, and allow our species to yield, for the greater good of intelligence as a whole?

Comments

  • Cxwf Join Date: 2003-02-05 Member: 13168, Members, Constellation
    You start with the assumption that the advance of technology is <i>guaranteed</i> to make humanity obsolete, and eventually, extinct. While that may indeed be a possibility, I would argue that it is far from guaranteed. Being who we are, we tend to make machines, intelligent or not, with the intention that they will make our lives easier, not harder. We control their programming, and it's difficult to foresee a machine programmed to advance itself at the cost of humanity.
  • Sandstorm Join Date: 2003-09-25 Member: 21205, Members
    Honestly, I never understood why people think AIs are so evil. Obviously, an AI that doesn't understand what it's doing can do some serious damage in the right situations, much like a person, but it's not like AIs have emotions and "want" to exterminate the human race. AIs are tools, and just like any tool they can either help or harm, depending on how they're used. Even then, what are the chances that someone is smart enough to give AI emotions, but dumb enough to actually do it? :p
  • Umbraed_Monkey Join Date: 2002-11-25 Member: 9922, Members
    Why do you think our creations will be more intelligent than us? Programs are only as smart as the man who programmed them.
  • Mantrid Lockpick Join Date: 2003-12-07 Member: 24109, Members
    QUOTE (Umbraed Monkey @ Jul 15 2005, 11:27 AM)
    Why do you think our creations will be more intelligent than us? Programs are only as smart as the man who programmed them.
    By that logic, our mind should be the highest limit of intelligence. And that would have applied millennia ago, too. So we should never have come out of the Stone Age.

    Our children are our creations, and they become smarter. If a creation could not become more intelligent than its creator, no one would ever have learned anything. We took things that were established and added on. An intelligent machine will do that too, but it will do it much, much faster. After you plug in a machine with intelligence comparable to our own and tell it to design a smarter machine, it won't take nearly as long as it would take a person. And that smarter machine will be able to think of an even smarter machine even faster.
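    A back-of-the-envelope sketch of that loop, as a toy program (every number in it is invented for illustration, not a prediction):

    ```python
    # Toy model of the argument above: each machine designs a successor
    # that is 50% smarter, and a smarter designer needs proportionally
    # less time. All numbers are made up purely for illustration.

    def intelligence_explosion(generations=10, base_design_years=10.0):
        intelligence = 1.0  # human-level, by definition
        elapsed = 0.0
        for gen in range(1, generations + 1):
            design_time = base_design_years / intelligence  # smarter = faster
            elapsed += design_time
            intelligence *= 1.5  # assumed gain per generation
            print(f"gen {gen:2d}: intelligence {intelligence:7.2f}, "
                  f"elapsed years {elapsed:6.2f}")

    intelligence_explosion()
    # The elapsed time converges like a geometric series while the
    # intelligence keeps growing: the "singularity" intuition in miniature.
    ```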
  • Status_Quo Join Date: 2004-01-30 Member: 25749, Members
    Intelligence does not imply an ego or emotions (sentience, basically), though.

    The way I see it, for a computer to want to destroy the human race, it would either have to have emotions and simply dislike humans, or it would have to come to the conclusion that whatever it was programmed to do would benefit from the elimination of humans. If it's the latter, the programming would simply have to include that killing humans is not a valid solution, and while the computer may be smart, without sentience it would have no reason to override this programming even if it could.
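    A minimal sketch of that kind of hard exclusion (the plan names, scores, and the harms_humans flag are all hypothetical):

    ```python
    # Sketch of a goal-directed program that scores candidate plans but
    # treats "harms humans" as a hard constraint, never a trade-off.
    # Plans and scores are hypothetical, for illustration only.

    def choose_plan(candidate_plans):
        permitted = [p for p in candidate_plans if not p["harms_humans"]]
        if not permitted:
            return None  # refuse to act rather than violate the constraint
        return max(permitted, key=lambda p: p["goal_score"])

    plans = [
        {"name": "eliminate the obstacle", "goal_score": 99, "harms_humans": True},
        {"name": "route around the obstacle", "goal_score": 70, "harms_humans": False},
    ]
    print(choose_plan(plans)["name"])  # -> route around the obstacle
    ```

    The hard part in practice would be labeling which plans actually harm humans, but that's the idea: without sentience, the machine has no motive to route around the check.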
  • Umbraed_Monkey Join Date: 2002-11-25 Member: 9922, Members
    It is knowledge that has increased over the millennia. I doubt human intelligence has increased much over that time. Sure, we may have improved a bit, but that would be due to natural selection (omg thanks flayra) and not by our design.

    Anyway, I just realized that you pretty much assumed that AI is possible, and you are discussing the effects of creating it. It was rude of me to question the assumption and derail your thread. I'll stop now.


    Here's some on-topic material: http://en.wikipedia.org/wiki/Technological_singularity
  • Depot The ModFather Join Date: 2002-11-09 Member: 7956, Members
    Reminiscent of <i>2001: A Space Odyssey</i> ... HAL ruled, remember? Indeed, it's possible in real life.
  • moultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806, Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    edited July 2005
    Humanity has something that any AI will lack: inherent goals that are incredibly complex. Though we will probably develop incredible AI algorithms with the potential to quickly solve a wide variety of problems, these things are only going to have the goals that humans assign to them. All an AI algorithm knows how to do is maximize a function of its inputs.

    The really difficult task in programming an AI will be giving it an evaluation function that makes it solve the problem you want it to solve. If you aren't exceedingly careful, you won't get any useful results. The AI is much more likely to find a bug in your code and exploit it to do nothing of consequence than it is to produce awe-inspiring results.
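    A toy example of that failure mode (everything below is made up for illustration): the intended task is sorting, the evaluation function has an innocent-looking edge case, and the "solution" that maximizes the score does no useful work at all.

    ```python
    # We intend to reward a program for sorting a list, scoring it by the
    # fraction of adjacent pairs in order. A degenerate answer that
    # returns an empty list gets a perfect score without sorting anything.

    def sortedness(xs):
        """Intended metric: fraction of adjacent pairs in non-decreasing order."""
        pairs = list(zip(xs, xs[1:]))
        if not pairs:
            return 1.0  # the bug: "no pairs checked" counts as perfectly sorted
        return sum(a <= b for a, b in pairs) / len(pairs)

    honest_attempt = [1, 3, 2, 4]  # a real try at sorting [3, 1, 4, 2]
    exploit = []                   # maximizes the metric by doing nothing

    print(sortedness(honest_attempt))  # ~0.67, penalized for one bad pair
    print(sortedness(exploit))         # 1.0, perfect score, zero useful work
    ```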

    I really don't think we have to worry about AIs developing into anything appreciably human. Sure they might end up doing design work for engineering firms, but they aren't likely to be doing anything that would make us obsolete.
  • Deus_Ex_Machina Join Date: 2004-07-01 Member: 29674, Members
    30 years? Sweet. By that time I'll be the perfect candidate to lead the human resistance against the self-reproducing machines. It's gonna be sweet.

    Seriously though:

    QUOTE
    Is it right to doom our species by creating machines that will out-think us? Our initial response is, "No, no it isn't." But why should we be so inclined to preserve our own species? True, it is the most basic part of our genetic programming (to propagate the species at any cost), but we have abandoned many of these ideals, such as having as many mates as possible. But who are we to deprive intelligent machines, that is, sentient beings, who could do and learn so much more with and in the universe, the chance to exist? This is not like killing a person, who has just as much chance of doing and learning less than you as of doing more; these machines are guaranteed to be far, far superior.

    Interesting; however, you're missing something. Yes, we have abandoned traits like multiple mates over time, but that didn't lead to our species' extinction. The need for multiple mates vanished as infant mortality rates dropped and the average human lifespan increased. What you're talking about is dooming the human race to total extinction, which is nothing the human race has ever done before, although our current nuclear situation arguably comes close.