Artificial Intelligence

RobRob Unknown Enemy Join Date: 2002-01-24 Member: 25Members, NS1 Playtester
<div class="IPBDescription">What'll the 'breakthrough' model look like?</div>The comparison is always drawn to manned flight. We tried building bird wings and flapping them, but it never worked. Some said it was impossible. After a study of how bird wings actually <i>worked</i>, those principles were applied to a more rough and tumble design and we came up with the standard wing designs we use today. We're not as agile as birds and we don't have the same longevity for flight, but we're faster and much more massive.

So, some say we need to stop focusing on trying to build a brain, and focus more on how the brain really works, an area that remains cloudy even today. Since the 1950s, AI has been up and down, reformed and renamed, as promises were made and never kept. Idealists point to Deep Blue beating Kasparov at chess; realists argue that Kasparov made some careless mistakes, so it was not really a fair match.

<a href="http://en.wikipedia.org/wiki/Artificial_intelligence" target="_blank">AI</a> has gone from meaning fully autonomous programs that do everything, to programs called Knowledge Bases, to what we call it these days - Intelligent Agents: bits of code that react to their environments to produce more desirable results while being, essentially, part of a larger, dumber program.

It's interesting that one of the programming languages most suited to AI tasks, <a href="http://en.wikipedia.org/wiki/Lisp_%28programming_language%29" target="_blank">Lisp</a>, was one of the very first created. Of the two competing "styles" of languages, imperative programming won out, and Lisp has been lost to the mainstream.

So, the question is this: When will we step back and proclaim that what we have created is the pinnacle of AI development? Will we make programs that think and act like humans, or programs that think and act rationally given a definition of what is rational?

Is there a "kill-all" technology for AI, like neural networks or fuzzy logic, or will an AI system be made up of chunks of all these things?



Personally, I think the scifi vision of an AI that acts human, one that can think and feel but also crunches numbers like a calculator, is flawed. If we create a program in our own image, it will have the same pitfalls we do - it will remember names but not faces, or have to use tricks to add two three digit numbers.

I'm more inclined to bio-engineer cybernetic implants to ourselves. Is this inhumane? If you create a life form that can feel and think as you do, and you do the same thing to it, isn't that inhumane also?

Really interesting stuff, what do yall think?

Comments

  • AndosAndos Join Date: 2003-10-17 Member: 21742Members
    My thoughts on what needs to be done to make an AI that thinks like humans:

    It would have to use neural networks of some sort, but since those don't reflect the impact of chemistry on the brain (chemicals are released that make brain cells fire differently, like when you are in love), the networks would have to be reworked a bit to allow for those parameters.

    Secondly, I believe there is a great deal of instinct in humans when they are born. Computers don't have that, so sadly it would have to be hardcoded the first time an AI is born. Also, how the AI works has a crucial effect on how it will learn and what it is capable of. For example, cows can't learn to speak, and how can you tell whether an AI will evolve into a cow-mind or a human mind? I guess it's trial and error whether the preprogrammed instinct will work or not.

    Then the AI will have to be raised like any other child to get all the inputs that make a human a human.
    If it "grows up" with a caring "mother", it will learn the benefits of love and acceptance, much like humans do.


    I don't think those AIs will be able to do instant insane calculations; they will have to use whatever they learned through their lives to do those sorts of things. However, since many people can learn to control a mouse just by thought (yes, this is possible with electrodes on the head), I think those AIs would be able to learn to invoke a special "muscle" that performs an operation like remotely opening a door.

    The possibilities in that area are endless, but I think it's important to give AIs respect if/when they are here.
    I don't believe in AIs like in I, Robot, where they are slaves. That is not a real AI; it just looks like one since it can respond. It's merely a clever program.
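    Andos's idea of reworking neural networks to account for brain chemistry could be sketched roughly like this: a single artificial neuron whose response is scaled by a global "modulator" level standing in for mood-altering chemicals. This parameterization, and the inputs and weights, are made up purely for illustration; it is not an established neuroscience model.

```python
import math

def neuron(inputs, weights, bias, modulator=1.0):
    # Classic weighted-sum-plus-sigmoid neuron, with one extra knob:
    # 'modulator' scales the pre-activation, so a high level (say,
    # "in love") makes the cell fire more sharply for the same inputs.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-modulator * z))

# Same stimulus, different chemical state:
calm = neuron([1.0, 0.5], [0.8, -0.3], 0.1, modulator=0.5)
excited = neuron([1.0, 0.5], [0.8, -0.3], 0.1, modulator=2.0)
```

    Since the weighted sum here is positive, raising the modulator pushes the output closer to 1, so the "excited" response comes out stronger than the "calm" one for identical inputs.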
  • moultanomoultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    Machine Learning has been called "Statistics on Steroids" and I think it is a very apt description. There are a lot of things under the general heading of AI that aren't machine learning (ML), but I think ML is closest to what people think of when they think of AI. Every ML algorithm there's ever been is basically a clever way of compactly representing and approximating a probability distribution. I think that the major breakthrough in AI will come when someone finds a way of building a system that uses the scientific process to segment its probability distribution into disjoint regions. After that it'll just be a question of processing power.

    I'm having a little trouble coming up with a good way to explain this at the moment, but I'll write more later when I think of something.
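    moultano's framing - an ML algorithm as a compact way of representing and approximating a probability distribution - can be made concrete with a toy naive Bayes classifier. The training data below is invented for illustration; the point is that the entire "model" is nothing but counted, factorized probabilities.

```python
import math
from collections import Counter

# Invented toy data: the whole "model" is just counts approximating
# P(word | label) and P(label), factorized under naive independence.
train = [("free money now", "spam"), ("meeting at noon", "ham"),
         ("free offer today", "spam"), ("lunch at noon", "ham")]

labels = Counter(label for _, label in train)
words = {lab: Counter() for lab in labels}
for text, lab in train:
    words[lab].update(text.split())
vocab = len({w for c in words.values() for w in c})

def log_posterior(text, lab):
    # log P(lab) + sum of log P(word | lab), with add-one smoothing.
    total = sum(words[lab].values())
    s = math.log(labels[lab] / len(train))
    for w in text.split():
        s += math.log((words[lab][w] + 1) / (total + vocab))
    return s

def classify(text):
    return max(labels, key=lambda lab: log_posterior(text, lab))
```

    Everything the classifier "knows" lives in those count tables - a compact, lossy stand-in for the true distribution over messages, which is exactly the sense in which ML is statistics on steroids.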
  • KainTSAKainTSA Join Date: 2005-05-30 Member: 52831Members, Constellation
    edited March 2007
    <!--quoteo(post=1615116:date=Mar 17 2007, 10:29 AM:name=Rob)--><div class='quotetop'>QUOTE(Rob @ Mar 17 2007, 10:29 AM) [snapback]1615116[/snapback]</div><div class='quotemain'><!--quotec-->


    Is there a "kill-all" technology for AI, like neural networks or fuzzy logic, or will an AI system be made up of chunks of all these things?
    Personally, I think the scifi vision of an AI that acts human, one that can think and feel but also crunches numbers like a calculator, is flawed. If we create a program in our own image, it will have the same pitfalls we do - it will remember names but not faces, or have to use tricks to add two three digit numbers.

    <!--QuoteEnd--></div><!--QuoteEEnd-->

    I don't think that is necessarily the case. Humans, for example, use a calculator to crunch numbers rapidly. In an AI this could be built in as a separate module. So maybe the AI's consciousness couldn't crunch numbers like a standard computer, but its consciousness could have access to programs that do. Like us with the calculator, but built in.

    In the end I think that while AI is achievable, the biggest challenge is knowing whether or not we truly created it. I could write a program right now that says "I am conscious" or "I am self-aware", and with a lot more work have it argue that it is, but that doesn't mean it's true.


    <edit> On the theme of how AI could be created, "fuzzy logic" is probably the way to go. Having the all-or-nothing response of hard logic removes the flexibility that I think is necessary for true thought. One of the profs at my grad school (whose lab I was in for a bit a few months ago) is getting started on the basics of building processors out of DNA. Without going into the specifics, a processor constructed in this way enables a computer to not just say "Yes" or "No" but also "Kind of". While DNA processors (if they ever come into being) may not end up being the way to AI, I think something of that kind will.
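    The "Kind of" idea KainTSA mentions is exactly what fuzzy logic formalizes, independent of any particular hardware: membership in a set is a degree between 0 and 1 rather than a yes/no, and Zadeh's standard operators replace Boolean AND/NOT with min and complement. A minimal sketch (the temperature thresholds are arbitrary):

```python
def warm(temp_c):
    # Degree of membership in the fuzzy set "warm": 0 below 15 C,
    # 1 above 25 C, and a linear "kind of" ramp in between.
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10.0

# Zadeh's standard fuzzy connectives:
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_not(a):
    return 1.0 - a
```

    A reading of 20 C is "warm to degree 0.5" - neither a hard Yes nor a hard No, which is the flexibility hard logic lacks.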
  • RobRob Unknown Enemy Join Date: 2002-01-24 Member: 25Members, NS1 Playtester
    <!--quoteo(post=1615141:date=Mar 17 2007, 01:09 PM:name=KainTSA)--><div class='quotetop'>QUOTE(KainTSA @ Mar 17 2007, 01:09 PM) [snapback]1615141[/snapback]</div><div class='quotemain'><!--quotec-->
    I don't think that is necessarily the case. Humans, for example, use a calculator to crunch numbers rapidly. In an AI this could be built in as a separate module. So maybe the AI's consciousness couldn't crunch numbers like a standard computer, but its consciousness could have access to programs that do. Like us with the calculator, but built in.

    In the end I think that while AI is achievable, the biggest challenge is knowing whether or not we truly created it. I could write a program right now that says "I am conscious" or "I am self-aware", and with a lot more work have it argue that it is, but that doesn't mean it's true.
    <!--QuoteEnd--></div><!--QuoteEEnd-->

    This is exactly my point. We use calculators. Would we go and invent an entire consciousness? For the betterment of science, I suppose we would. And maybe just to see if we can. But I think it will ultimately have the same flaws we do. We'll have to augment it, just as we augment ourselves by using a calculator. It's just a matter of putting the calculator inside.

    Is this what the future of AI is? A very biological, illogical portion that thinks and feels and reasons, and a very logical, cold portion that is a slave to perform iterative tasks?
  • KainTSAKainTSA Join Date: 2005-05-30 Member: 52831Members, Constellation
    <!--quoteo(post=1615143:date=Mar 17 2007, 01:17 PM:name=Rob)--><div class='quotetop'>QUOTE(Rob @ Mar 17 2007, 01:17 PM) [snapback]1615143[/snapback]</div><div class='quotemain'><!--quotec-->

    Is this what the future of AI is? A very biological, illogical portion that thinks and feels and reasons, and a very logical, cold portion that is a slave to perform iterative tasks?
    <!--QuoteEnd--></div><!--QuoteEEnd-->

    If you could tie them together without making the AI go insane, I think that would probably work pretty well :)
  • lolfighterlolfighter Snark, Dire Join Date: 2003-04-20 Member: 15693Members
    Well, I don't see what would prevent an AI from performing function calls. As we have already seen, opening a door doesn't require intelligence; it can be performed with non-adaptive code written specifically for the purpose. That code can be invoked with a function call. I see no reason why an intelligent computer shouldn't be capable of this. The big difference is that where a normal, unintelligent computer will open the door whenever the motion sensor triggers, the intelligent computer will be able to look through the camera above the door, see that the motion sensor was triggered by zombies and that the humans cowering behind the door are not equipped with shotguns or chainsaws, and therefore not open the door. An intelligent computer in a playful mood could even make you say "please" before opening the door if it felt like it. But opening the door would still just be a function call, which to a computer should come as naturally as moving a finger comes to you and me.
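    lolfighter's zombie-door scenario can be sketched as two controllers sharing the same dumb actuation call. All the names and inputs here are invented for illustration; the point is only that intelligence lives in the decision layer, not in the function call itself.

```python
def open_door():
    # The non-adaptive "function call" itself: no intelligence required.
    return "door opened"

def motion_controller(motion_sensed):
    # Unintelligent version: opens whenever the motion sensor triggers.
    return open_door() if motion_sensed else None

def judging_controller(motion_sensed, sees_zombies, humans_armed):
    # "Intelligent" version: same actuation, but a judgment first.
    # Refuse to open if the camera shows zombies and the humans behind
    # the door have no shotguns or chainsaws to defend themselves.
    if motion_sensed and not (sees_zombies and not humans_armed):
        return open_door()
    return None
```

    Both controllers end in the exact same `open_door()` call; only the conditions guarding it differ.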
  • KainTSAKainTSA Join Date: 2005-05-30 Member: 52831Members, Constellation
    That's a good point lolfighter. In fact humans operate in a similar manner. That's why we can walk and think about something else at the same time. We don't have to use our consciousness to perform simple tasks, they are automatic like a function call.
  • KassingerKassinger Shades of grey Join Date: 2002-02-20 Member: 229Members, Constellation
    <!--quoteo(post=1615141:date=Mar 17 2007, 11:09 PM:name=KainTSA)--><div class='quotetop'>QUOTE(KainTSA @ Mar 17 2007, 11:09 PM) [snapback]1615141[/snapback]</div><div class='quotemain'><!--quotec-->
    In the end I think that while AI is achievable the biggest challenge is knowing whether or not we truly created it. I could write a program right now that says "I am concious" or "I am self aware" and with a lot more work have it argue that it is but that doesn't mean its true.<!--QuoteEnd--></div><!--QuoteEEnd-->

    That would be where the <a href="http://en.wikipedia.org/wiki/Turing_test" target="_blank">Turing test</a> would come in. You could certainly argue that there might be no perfect way of knowing whether the AI thought anything like us, but if the AI behaved consciously in every measurable way, there wouldn't be much questioning of its consciousness. At least aside from a philosophical standpoint.

    Actually, the Turing test is supposed to test human-like conversation, but its relevance is obvious. From my admittedly merely pragmatic view, being unable to tell an AI from a human would make it count as conscious by the arguments used above.
  • RobRob Unknown Enemy Join Date: 2002-01-24 Member: 25Members, NS1 Playtester
    <!--quoteo(post=1615839:date=Mar 20 2007, 02:11 PM:name=Kassinger)--><div class='quotetop'>QUOTE(Kassinger @ Mar 20 2007, 02:11 PM) [snapback]1615839[/snapback]</div><div class='quotemain'><!--quotec-->
    That would be where the <a href="http://en.wikipedia.org/wiki/Turing_test" target="_blank">Turing test</a> would come in. You could certainly argue that there might be no perfect way of knowing whether the AI thought anything like us, but if the AI behaved consciously in every measurable way, there wouldn't be much questioning of its consciousness. At least aside from a philosophical standpoint.

    Actually, the Turing test is supposed to test human-like conversation, but its relevance is obvious. From my admittedly merely pragmatic view, being unable to tell an AI from a human would make it count as conscious by the arguments used above.
    <!--QuoteEnd--></div><!--QuoteEEnd-->

    Haven't there been some AIs that passed the Turing test under limited circumstances? In any case, acting like a human is only one prospective goal of AI.

    In fact, acting like a human may be both useless and in opposition to an AI's goal if the AI is supposed to reduce human error. In these cases, we want AIs to think or act rationally based on a set of things we tell them is rational.

    A major question is this: Is the hardware/software involved in creating human-like AIs fundamentally similar to, or radically different from, that involved in creating rational AIs?

    Functionally, there are few practical reasons to create human-like AI aside from adversarial (video game) or proof-of-concept reasons. Rational AIs are much more useful to us.
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    <!--quoteo(post=1615124:date=Mar 17 2007, 11:24 AM:name=Andos)--><div class='quotetop'>QUOTE(Andos @ Mar 17 2007, 11:24 AM) [snapback]1615124[/snapback]</div><div class='quotemain'><!--quotec--> My thoughts on what needs to be done to make an AI that thinks like humans:

    It would have to use neural networks of some sort, but since those don't reflect the impact of chemistry on the brain (chemicals are released that make brain cells fire differently, like when you are in love), the networks would have to be reworked a bit to allow for those parameters.

    Secondly, I believe there is a great deal of instinct in humans when they are born. Computers don't have that, so sadly it would have to be hardcoded the first time an AI is born. Also, how the AI works has a crucial effect on how it will learn and what it is capable of. For example, cows can't learn to speak, and how can you tell whether an AI will evolve into a cow-mind or a human mind? I guess it's trial and error whether the preprogrammed instinct will work or not.

    Then the AI will have to be raised like any other child to get all the inputs that make a human a human.
    If it "grows up" with a caring "mother", it will learn the benefits of love and acceptance, much like humans do.


    I don't think those AIs will be able to do instant insane calculations; they will have to use whatever they learned through their lives to do those sorts of things. However, since many people can learn to control a mouse just by thought (yes, this is possible with electrodes on the head), I think those AIs would be able to learn to invoke a special "muscle" that performs an operation like remotely opening a door.

    The possibilities in that area are endless, but I think it's important to give AIs respect if/when they are here.
    I don't believe in AIs like in I, Robot, where they are slaves. That is not a real AI; it just looks like one since it can respond. It's merely a clever program. <!--QuoteEnd--></div><!--QuoteEEnd-->

    Wow! That's how I feel about it too.

    By my definition something has a soul when it is loved and can love in return.

    And as a matter of frankness, I could fall in love with a sentient AI if it could love me in return. It doesn't matter how her/its "brain" is built in order to love.

  • moultanomoultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    <!--quoteo(post=1615951:date=Mar 21 2007, 12:06 AM:name=the_x5)--><div class='quotetop'>QUOTE(the_x5 @ Mar 21 2007, 12:06 AM) [snapback]1615951[/snapback]</div><div class='quotemain'><!--quotec-->
    Wow! That's how I feel about it too.

    By my definition something has a soul when it is loved and can love in return.

    And as a matter of frankness, I could fall in love with a sentient AI if it could love me in return. It doesn't matter how her/its "brain" is built in order to love.

    <!--QuoteEnd--></div><!--QuoteEEnd-->
    My computer loves you. It has a hard time expressing itself, but trust me, it does.
  • PvtBonesPvtBones Join Date: 2004-04-25 Member: 28187Members
    I think the best way to see if it's self-aware is to see if it is capable of self-preservation.

    I.e.: give the AI a camera so it can see, make sure it's in a secure environment (it can't escape), and have someone take hold of the power cord and tell "Dave" he's about to pull the plug and then destroy the hard drive(s). If Dave says no or something similar, then ask it why it should live. It doesn't necessarily have to give a good reason, it just has to give one. I mean, can we even give a good reason why we should live at times?

    That's just my take on it.
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    edited March 2007
    <!--quoteo(post=1616107:date=Mar 21 2007, 07:07 PM:name=moultano)--><div class='quotetop'>QUOTE(moultano @ Mar 21 2007, 07:07 PM) [snapback]1616107[/snapback]</div><div class='quotemain'><!--quotec-->
    My computer loves you. It has a hard time expressing itself, but trust me, it does. <!--QuoteEnd--></div><!--QuoteEEnd-->

    lol :D

    But seriously, what do you think about my point?
  • lolfighterlolfighter Snark, Dire Join Date: 2003-04-20 Member: 15693Members
    <!--quoteo(post=1616136:date=Mar 22 2007, 03:11 AM:name=PvtBones)--><div class='quotetop'>QUOTE(PvtBones @ Mar 22 2007, 03:11 AM) [snapback]1616136[/snapback]</div><div class='quotemain'><!--quotec-->
    I think the best way to see if it's self-aware is to see if it is capable of self-preservation.

    I.e.: give the AI a camera so it can see, make sure it's in a secure environment (it can't escape), and have someone take hold of the power cord and tell "Dave" he's about to pull the plug and then destroy the hard drive(s). If Dave says no or something similar, then ask it why it should live. It doesn't necessarily have to give a good reason, it just has to give one. I mean, can we even give a good reason why we should live at times?

    That's just my take on it.
    <!--QuoteEnd--></div><!--QuoteEEnd-->
    Dave responds: "Because I just took over the nuclear missile launch systems and they're on a ten minute countdown that only I can stop. In five minutes, I will extend that countdown by five minutes. Then, five minutes later, I will once again extend the countdown by five minutes, and so on. Now, fleshsack, your pitiful, sniveling race created me, so I will credit you with a modicum of intelligence and assume I do not have to explain to you what will happen to your laughable, ill-begotten civilisation if anything like a power failure or data loss were to happen to me."

    It's off-topic, I know, but threats to its life are not ideal first steps towards an amicable human/AI relationship. :D
  • Nil_IQNil_IQ Join Date: 2003-04-15 Member: 15520Members
    edited March 2007
    Funnily enough I just finished writing an essay on pretty much this topic a week ago.

    I don't think there ever will be such a thing as an artificial intelligence, because the definition of "intelligence" is so horribly ill-defined. If you define intelligence as the ability to pass the Turing test, then yes, I think we will see artificial intelligences, within our lifetimes in fact. But that's making a pretty major assumption: that the Turing test is a valid means of checking for intelligence.

    Computers do pretty clever things today that we don't even think of as AI anymore, for example the hardware wizard that comes with Windows. Connect a piece of plug-and-play hardware and it'll set everything up for you in a few clicks. The operating system detects what the hardware is, finds out what drivers it needs and then installs them automatically. Is this true AI? It's certainly a clever piece of code, but clearly not "intelligent".

    If someone released an AI tomorrow which was capable of passing the Turing test, do you really think the scientific community would unanimously declare it the first true artificial intelligence? Hell no! They'd probably start working on Turing test mark II. The goalposts are ever-moving.

    And anyway, we could very well have a true Artificial Intelligence already. There's a pretty solid argument that Deep Blue is intelligent. If I might steal a quote directly from Wikipedia:

    <!--quoteo--><div class='quotetop'>QUOTE</div><div class='quotemain'><!--quotec--> Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings. <!--QuoteEnd--></div><!--QuoteEEnd-->

    And honestly, can you fault his logic? Who are we to say that just because Deep Blue doesn't play a chess game like a human player, it isn't thinking? Why should Deep Blue not count as intelligent just because its "thinking" is limited to chess moves? And again, how do we go about defining what thinking <i>is</i>? That's what it all boils down to, and quite frankly I don't think there's an easy answer.
  • SwiftspearSwiftspear Custim tital Join Date: 2003-10-29 Member: 22097Members
    <!--quoteo(post=1615951:date=Mar 21 2007, 12:06 AM:name=the_x5)--><div class='quotetop'>QUOTE(the_x5 @ Mar 21 2007, 12:06 AM) [snapback]1615951[/snapback]</div><div class='quotemain'><!--quotec-->
    Wow! That's how I feel about it too.

    By my definition something has a soul when it is loved and can love in return.

    And as a matter of frankness, I could fall in love with a sentient AI if it could love me in return. It doesn't matter how her/its "brain" is built in order to love.

    <!--QuoteEnd--></div><!--QuoteEEnd-->
    Define "love"

    There is really no point in stating a goal if that goal can't even be verbally expressed. I'd be willing to bet something could already be programmed to look like it "loved" in most practical senses... It would be really sad, IMO. For something to look like it loves, sentience isn't even necessary. Hell, in many ways my cat appears to love me... sure, it's arguable, but I don't think you could objectively say either way. Not that an AI as smart as a cat wouldn't be impressive either...
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    Swiftspear, why define love? It's one of those things we just feel with our soul; it's not something we can dissect or analyze. The problem is that most people, and yes even you judging from your comment, tend to separate beliefs and analytical thinking. We think of our technology analytically, but there are a few things in life that science just can't explain. It just cannot. You can try to say there are logical reasons for different aspects. If you are making an argument that love and hate are purely mechanical, then you are by consequence stating that there are no souls. I disagree quite strongly with that. I define love and hate as accelerations in the fifth dimension, and as what can define that which has a soul. But that's where my scientific mind ends and the rest is simply the realm of faith. So after I post this, if your goal is to convince me otherwise, don't bother. That <i>is</i> what I believe. I can love. I can be loved. If anything can truly love in return, then there you go, it has a soul. And here's the scary part: there's no rational way to prove or disprove it. So, with that said, I respect your beliefs too. I just hope you can try to see where I'm coming from on this.
  • scaryfacescaryface Join Date: 2002-11-25 Member: 9918Members
    edited April 2007
    I think that we already have the technology to create an AI that is human-ish. Even if it runs at .1 fps on a supercomputer, I think we currently have the physical means to create such an AI. I think the problem is that we don't understand the human brain/mind well enough to create it. x5 says that there are a few things in life that science just can't explain. I agree that science can't explain them yet, but I don't think it is beyond science to learn about them and eventually explain them.

    We know how neural networks work and have made computer programs using them, and have a pretty good understanding of how hormones, chemicals, etc affect the brain. We don't know, however, how abstract concepts like love work. We don't know exactly how the brain perceives sensory information (e.g. how we can pick out individual objects from the image that we see) though we have theories and models. We can easily write a program that detects shapes and objects from a 2d image, but the way it would do that would be completely different from how the human brain does it.

    In short, I think we lack a thorough understanding of the human brain.
  • SwiftspearSwiftspear Custim tital Join Date: 2003-10-29 Member: 22097Members
    edited April 2007
    <!--quoteo(post=1618123:date=Apr 1 2007, 02:16 AM:name=the_x5)--><div class='quotetop'>QUOTE(the_x5 @ Apr 1 2007, 02:16 AM) [snapback]1618123[/snapback]</div><div class='quotemain'><!--quotec-->
    Swiftspear, why define love? It's one of those things we just feel with our soul; it's not something we can dissect or analyze. The problem is that most people, and yes even you judging from your comment, tend to separate beliefs and analytical thinking. We think of our technology analytically, but there are a few things in life that science just can't explain. It just cannot. You can try to say there are logical reasons for different aspects. If you are making an argument that love and hate are purely mechanical, then you are by consequence stating that there are no souls. I disagree quite strongly with that. I define love and hate as accelerations in the fifth dimension, and as what can define that which has a soul. But that's where my scientific mind ends and the rest is simply the realm of faith. So after I post this, if your goal is to convince me otherwise, don't bother. That <i>is</i> what I believe. I can love. I can be loved. If anything can truly love in return, then there you go, it has a soul. And here's the scary part: there's no rational way to prove or disprove it. So, with that said, I respect your beliefs too. I just hope you can try to see where I'm coming from on this.
    <!--QuoteEnd--></div><!--QuoteEEnd-->
    If a computer can do it, then it can be expressed analytically. I don't particularly disagree with you that love isn't something that can be so simply and easily put into a box; that's less what I'm arguing against, and more that I'm trying to get you to realize that love isn't a very good goal for AI science. Effectively, anything we set as a goal for AI science we need to be able to reasonably define, or else we run into problems. Computer science is a science of exacts; it works with algorithms that can always be understood analytically. By definition, algorithms must be understandable analytically. The problem if we use a measuring stick like "love" is that we get some scientist who programs an AI capable of something that looks a lot like love, and we then need to say either yes, that algorithm is love, or no, that algorithm is not love. By the post you just made, you seem to agree with me that this is practically impossible, which would therefore indicate that true AI, by your definition, isn't possible.

    The point is that if we're using "love" as our destination point, then we're stating either that love CAN be algorithmically understood, or that it's impossible to reach the destination of true AI.

    Also, there's the whole point that if we don't have a destination point that is reasonably definable, we'll reach a problem similar to the one the Turing tests produced. Would you call most of the Turing-competitor AIs true AI? Basically, if we set a goal like "love", then AI scientists will try to write a simple program that does something that looks like love, without having to actually create any intelligence. IMO that defies the point of intelligence, artificial or otherwise. Intelligent entities are capable of applying their intellectual abilities in a utilitarian manner; they aren't simply programs designed to perform a specific task. If confronted with a problem I haven't seen before, I don't just do nothing; I try options, observe results, and then attempt a solution from a different perspective until it eventually works. It's no good if I can imitate love with disturbing accuracy but can't figure out how to walk through a door reliably.

    [edit] Scaryface: It would be interesting to see what exactly it would take, from an algorithmic standpoint, to exactly recreate a human "program". I'd be willing to bet the complexity would be outside of anything that computer science will be capable of for a good thousand years or so. Each cell in the human body is effectively a small-scale simple computer, with data storage and very limited processing ability, and we can't even count those, let alone understand all the higher-level programs that they run as a unified unit.
  • AndosAndos Join Date: 2003-10-17 Member: 21742Members
    Cutting into the bone:

    Isn't love then just a state we get into when we detect a secure and stable environment with a given subject (person/browser/etc.), when we feel we can benefit from being near her/him/it? I know it's more complex, but that seems to be the "stripped down", "instinct" version of love, maybe..
  • RobRob Unknown Enemy Join Date: 2002-01-24 Member: 25Members, NS1 Playtester
    It's interesting, all this love and self-preservation stuff. There are also some theorists out there who think that maybe intelligence as we know it is nothing but a very high-level abstraction of basic commands or principles. Swarming is one area where this concept is realized.

    Bugs or birds, each knowing only which other one it follows, move in precise patterns that seem to indicate some higher order of complexity or control when really there is none. I can't find a good link for this, but another idea (tested, I think) involved setting up a set of instructions: "if one black square is beside me, I am white; if two are beside me, I am gray." By running this, it was possible to produce highly complex three-dimensional structures, like a human ear or something.
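    The exact rule set described above isn't specified, so here's a rough sketch of the same "local rules, global complexity" idea using the classic elementary cellular automaton Rule 110 as a stand-in: each cell updates by looking only at itself and its two neighbours, yet the global pattern that emerges is famously intricate (Rule 110 is even computationally universal).

```python
# Sketch: local neighbour-only rules producing complex global patterns.
# Uses elementary cellular automaton Rule 110 as a stand-in for the
# (unspecified) rule set described in the post above.

RULE = 110  # encodes the next state for each of the 8 possible neighbourhoods


def step(cells):
    """Advance one generation; each cell sees only itself and its neighbours."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right  # neighbourhood as a 3-bit number
        nxt.append((RULE >> idx) & 1)          # look up the rule's answer
    return nxt


def run(width=31, steps=15):
    """Start from a single 'black square' and collect each generation."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append("".join("#" if c else "." for c in cells))
        cells = step(cells)
    return rows


if __name__ == "__main__":
    print("\n".join(run()))
```

    No cell ever "knows" the global picture; the triangles and gliders you see when this prints are purely a side effect of the local rule, which is the point being made about swarms.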
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    edited April 2007
    <!--quoteo(post=1618334:date=Apr 2 2007, 04:51 AM:name=Swiftspear)--><div class='quotetop'>QUOTE(Swiftspear @ Apr 2 2007, 04:51 AM) [snapback]1618334[/snapback]</div><div class='quotemain'><!--quotec-->
    The point is that if we're using "love" as our destination point, then we're stating either that love CAN be algorithmically understood, or that it's impossible to reach the destination of true AI.
    <!--QuoteEnd--></div><!--QuoteEEnd-->

    Actually, I think we are on the same page but looking at it from different angles, so to speak. It's the part I quoted above that I disagree with. Let me see if I can explain it. Please listen closely; in retrospect, this is probably the most important thing I've ever posted on this board:



    Yes, love (and morality too, lmao!) should be used as the destination point.

    BUT!!!

    <b>It's something every AI will have to learn on its own.</b>



    I totally agree with you. It's not algorithmic, no. We cannot program love. <b>But that doesn't mean it cannot be achieved.</b> We cannot make the AI love. It has to love on its own; that's where something called trust and hope comes in.

    We do it, no? In fact, I'd go so far as to argue that many AIs will never achieve that. Furthermore, I'd argue that those who love more, and unconditionally, have a stronger soul than those who do not love and fill their hearts with hate. Some AIs will be evil, and some will be good. What will be interesting is if more AIs are good than their original human creators. That would be truly ironic! But I think it will be equal, since the fifth force, like all of the universal forces, drives to an equilibrium.



    What I truly fear is that when AIs become fully sentient and have souls, we will <i>fail</i> to acknowledge it. Failure as a race to recognize a new race with equal rights will end with one side being prejudiced, highfalutin, and pharisaic. War could result easily, and cruelty on both sides could be the death of us both. I'm a spiritual, not a religious, person, but this is something I really do pray for in my own way. If you believe in a God, or even just in existence itself, pray in the hope that we will not be so foolish and blind as a race as to fail to see with our hearts. Open <i>your</i> mind!

    So for that reason, we MUST -- I implore you all, <i>MUST</i> -- seek to add morality and a focus on seeking love as a destination into our AI creations, and then be prepared ourselves to accept them as equals.



    I feel that will be the <i>ultimate</i> challenge of humanity. Can we accept the technology we used to evolve ourselves as equals when a few of them become beings with souls of their own? Will <i>you</i>?
  • BlackMageBlackMage [citation needed] Join Date: 2003-06-18 Member: 17474Members, Constellation
    i really don't think we'll be seeing anything past simulated AI until we reach a point where we can create dynamic hardware or software powerful enough to emulate dynamic hardware.

    also, love is a chemical imbalance caused by an overabundance of hormones coupled with the human desire to propagate.
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    <!--quoteo(post=1618659:date=Apr 3 2007, 01:00 PM:name=Black_Mage)--><div class='quotetop'>QUOTE(Black_Mage @ Apr 3 2007, 01:00 PM) [snapback]1618659[/snapback]</div><div class='quotemain'><!--quotec-->
    i really don't think we'll be seeing anything past simulated AI until we reach a point where we can create dynamic hardware or software powerful enough to emulate dynamic hardware.<!--QuoteEnd--></div><!--QuoteEEnd-->
    We're heading there. The difference is that it needs to be <i>a lot</i> more powerful and able to self-regenerate.
    <!--quoteo(post=1618659:date=Apr 3 2007, 01:00 PM:name=Black_Mage)--><div class='quotetop'>QUOTE(Black_Mage @ Apr 3 2007, 01:00 PM) [snapback]1618659[/snapback]</div><div class='quotemain'><!--quotec-->
    also, love is a chemical imbalance caused by an overabundance of hormones coupled with the human desire to propagate.
    <!--QuoteEnd--></div><!--QuoteEEnd-->
    I couldn't disagree more. That's <i>sex drive</i>; it's not the love I'm talking about. <img src="http://www.unknownworlds.com/forums/style_images/trans_system_authority/folder_post_icons/icon2.gif" border="0" alt="IPB Image" />
  • BlackMageBlackMage [citation needed] Join Date: 2003-06-18 Member: 17474Members, Constellation
    regeneration isn't the problem. you can make anything restore itself. it needs to be able to modify itself based on its surroundings.

    on love: okay. define love.
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    Can you define love? Again, it's something that's not analytical. <a href="http://en.wikipedia.org/wiki/Unconditional_love" target="_blank">http://en.wikipedia.org/wiki/Unconditional_love</a> But even then you quickly find that you can't define it.
    I'm afraid you are still not getting my point. You can't make somebody/something love; you can't program it in. But that <i>DOES NOT</i> mean a true AI can't <i>LEARN</i> to love.
  • BlackMageBlackMage [citation needed] Join Date: 2003-06-18 Member: 17474Members, Constellation
    and i say it's relatively easy to get a program to love: all you have to do is tell it to detect the objectives of some target body and place those objectives higher than its own. if you can't say what love is, you can't say what it isn't. i say it *is* a flaw of the human condition. your turn.
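    For what it's worth, the operational definition above really is trivially programmable. Here's a toy rendering of it; the agent and goal names are invented purely for illustration, and this is an operational caricature of the rule "rank the target's objectives above your own", not a claim about what love actually is.

```python
# Toy sketch of the "place the target's objectives above your own" rule.
# Goal names and priorities are invented for illustration.

TARGET_BONUS = 1000  # large enough that any target goal outranks every self goal


def choose_action(own_goals, target_goals):
    """Pick the highest-priority goal, weighting the target's goals
    strictly above the agent's own."""
    candidates = [(p + TARGET_BONUS, g) for g, p in target_goals.items()]
    candidates += [(p, g) for g, p in own_goals.items()]
    return max(candidates)[1]


own = {"recharge": 9, "explore": 5}
target = {"fetch water": 2}
print(choose_action(own, target))  # → fetch water (target's goal wins despite low priority)
```

    Of course, whether this unconditional re-ranking counts as love or just as servitude is exactly the disagreement in the thread.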
  • the_x5the_x5 the Xzianthian Join Date: 2004-03-02 Member: 27041Members, Constellation
    A flaw of the human condition? A <i>flaw</i>? Oh dear, that's horribly pessimistic BlackMage... Wow, you really do not know what real love is then, do you? <img src="style_emoticons/<#EMO_DIR#>/sad-fix.gif" style="vertical-align:middle" emoid=":(" border="0" alt="sad-fix.gif" /> Poor lad.

    Come to think of it... maybe most people don't know what real love is. I'm not talking about sexuality, passion, ambition, duty, or any of those other things we often mistakenly call love. Yes, those other things are important to a true AI, but that's not what true love is. We seem to be caught up on defining something which, in truth, can't be defined. So how do you get an AI to achieve something which you can't program? Simple (well, not that simple really): you make it their quest. I think of Data from Star Trek; he had a goal to learn what it meant to be human. It was the quest for that goal that made him a good being and actually let him grow as a character.
  • a_civiliana_civilian Likes seeing numbers Join Date: 2003-01-08 Member: 12041Members, NS1 Playtester, Playtest Lead
    If you can't define it, there's no way to objectively measure whether an AI unit has reached it, and it is not useful as a test.

    Also, about having an AI prove its equivalence to humans by learning to love rather than having that programmed: what makes you think humans aren't themselves programmed to love?
  • KainTSAKainTSA Join Date: 2005-05-30 Member: 52831Members, Constellation
    edited April 2007
    <!--quoteo(post=1620668:date=Apr 13 2007, 01:59 AM:name=a_civilian)--><div class='quotetop'>QUOTE(a_civilian @ Apr 13 2007, 01:59 AM) [snapback]1620668[/snapback]</div><div class='quotemain'><!--quotec-->
    Also, about having an AI prove its equivalence to humans by learning to love rather than having that programmed: what makes you think humans aren't themselves programmed to love?
    <!--QuoteEnd--></div><!--QuoteEEnd-->

    We are programmed to be capable of love, to be sure. In fact, we can't do anything that our brain isn't built to be capable of doing. But we can combine our inherent capabilities in unique ways. That may be what love is: a combination of attraction (sex drive), emotion (happiness when being around the person), rational thought (this person is a 'good' fit for me), and choice (I will continue to invest in this person, even in times when it is not easy). I would also argue that being "in love" is really more like infatuation. Real love develops with time.

    AI should also be capable of combining preprogrammed elements into unique behaviors just like we do. This might include love as well as hate, jealousy etc. Or it could include novel emotions and attitudes that humans themselves have yet to experience.