Well, the thing is, AI, in computer terms, will always be a computer program. A computer program, by definition, is an algorithm written by someone. We, as humans, have no other way of programming a computer than writing a list of instructions for it to follow. If we are to write a program to simulate intelligence, it will be done through a list of instructions.
Now, if you wanna go into other forms of AI, created not through computers, then sure, we may have created life. But as long as the AI is a computer program written by man, it WILL be a list of instructions, with no REAL capacity to choose, think, or learn.
Yeah. You are saying that something can be alive without thinking, right?
Biologically, for something to be alive, it needs to exhibit reproduction, motion, metabolism, growth, and the ability to respond to stimuli during its 'lifetime'. I'll agree that with better technology, we can simulate reproduction, motion, metabolism and growth for the theoretical robot. However, I am saying that this robot would never really 'react' to stimuli.
For example, if the robot was given a stimulus that it was not programmed to respond to, it will not, and cannot, respond to it.
Ah cool, it's just that you didn't seem to address it at all in the previous post, that's all ^^
Oh and biologically I'm pretty sure plants are considered 'alive' despite no apparent sentience and an extremely limited set of reactions to the environment.
<b>edit:</b> actually, to take the plant point a little further: if you touch a plant (ignoring the rather cool fern that folds its leaves and other such types), it will not be able to react to that within its lifetime... rather like a robot that's not been programmed to deal with that situation.
Gem, I've taken out a part of my previous post. You may have responded in part to it, so your response may sound weird because of it. I took it out because I don't think what I believe would help this discussion in any way. Good thing, though, is that it brought up plants. I was thinking about it, but you posted first :P
Plants do react a lot, they are just slow at it.
As well, I have to move back to my dorm tomorrow, so it might be a while before I can get back on the intarweb. I guess I'll be outta this discussion. Hopefully you guys have something good thought out while I'm gone :)
umbraed does have a point. AI wouldn't really be intelligent; sure, it might say no if you're doing something someone defined as illegal, but you can easily just type in "command override" and there you go. The moment it says no and does not allow you access to anything is the moment I believe it could be considered intelligent (and thus EI, which is probably a different thread in its own right).
(PS: maybe I missed the focus of this thread by a bit, and if I did, sorry. Are you focusing on just simple reactions to user input, or something else, like I had gathered?)
<b>QUOTE</b>
<blockquote>
Well, the thing is, AI, in computer terms, will always be a computer program. A computer program, by definition, is an algorithm written by someone. We, as humans, have no other way of programming a computer than writing a list of instructions for it to follow. If we are to write a program to simulate intelligence, it will be done through a list of instructions.

Now, if you wanna go into other forms of AI, created not through computers, then sure, we may have created life. But as long as the AI is a computer program written by man, it WILL be a list of instructions, with no REAL capacity to choose, think, or learn.

Biologically, for something to be alive, it needs to exhibit reproduction, motion, metabolism, growth, and the ability to respond to stimuli during its 'lifetime'. I'll agree that with better technology, we can simulate reproduction, motion, metabolism and growth for the theoretical robot. However, I am saying that this robot would never really 'react' to stimuli.

For example, if the robot was given a stimulus that it was not programmed to respond to, it will not, and cannot, respond to it.
</blockquote>
To address your first point: yes, there are other ways to create programs. We can create programs that we can't even look at the code for. Again, reverse engineering creates a program around a problem, not vice versa. So in the end we have a mass of 1s and 0s, and we don't know how to decipher it into anything meaningful. Only the program knows how to interpret the data.
As for your quote about "AI is a computer program written by man, it WILL be a list of instructions," well that's true, but you're not thinking outside the box. Computer programs can write themselves. So in that sense, man does not write the code, the computer does.
For your final paragraph, well, there is radiation in the air all the time, and we are unable to detect it using the "equipment" we were given. Because we cannot respond to such stimuli, does that mean we aren't alive?
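To make the "programs can write programs" point concrete, here is a minimal sketch (the function names are made up for illustration): a program that generates the source code of another program at runtime and then executes it. No human ever typed the generated code into a file.

```python
# Toy illustration: a program that writes and runs another program.
# The generated source exists only at runtime, as a string.

def generate_adder(n):
    """Emit the source code of a function that adds n to its argument."""
    return f"def add_{n}(x):\n    return x + {n}\n"

source = generate_adder(7)      # a brand-new program, as a string
namespace = {}
exec(source, namespace)         # "compile and run" the generated code
print(namespace["add_7"](35))   # → 42
```

Of course, as the reply below this argues, the generator itself was still written by a human, so this doesn't settle the philosophical question by itself.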
Alright. True. You can even argue that man no longer writes code; man usually writes instructions for compilers to make programs. However, it is still the same: the program that writes programs will be unable to write programs beyond what man instructed. Thus, the program created by programs programmed by man is still completely reliant on man's lists of instructions, so its "mind" is still a thoughtless construct of man.
You may not detect it using your 5 senses, but how do you know our bodies didn't grow differently as a result of it? How do you know that on the cellular level, we didn't respond to the radiation (the stimulus)?
Actually, our bodies do react to radiation; we just don't take notice of it because it's usually within acceptable levels. But when it rises, our bodies react, and during prolonged exposure defects can occur (cancer, etc.).
[WHO]Them (You can call me Dave), Join Date: 2002-12-11, Member: 10593, Members, Constellation
First of all, "learning" programs can STILL be predicted after they have been run a few times as long as all the data they were exposed to is available for study.
I've studied A.I. (it's a big chunk of the gaming field, after all), and even the most complex of learning programs still boils down to attributes, weighted values, crazy **** math, and a lot of training data. Perceptrons, learning trees, Bayesian classifiers, knowledge-based artificial neural networks: ALL are heavily math-reliant.
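A bare-bones perceptron, sketched below, shows what "attributes and weighted values" means in practice, and also backs the predictability point: training is completely deterministic, so the same training data always produces the same weights. (The learning rate and epoch count here are arbitrary illustrative choices.)

```python
# Minimal perceptron: weighted inputs, a threshold, and error-driven
# weight updates over training data. Same data in, same weights out.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = y - pred              # error-driven update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return (w1, w2, b)

# Teach it logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(AND) == train_perceptron(AND))   # True: fully predictable
```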
Second, we are NOT simply executing 1's and 0's. Our form of *thought* takes on a much more chemically based *fuzzy* nature. But that doesn't detract from the point that EVERYTHING YOU EVER DO OR EVER DID has been predetermined by the position of every molecule in the universe.
So, yes, all life is sequential; your fate is already decided and cannot be changed. But our sense of life is no less amazing, because even though you're making your decisions, your decisions were already made. It's kind of weird to think about. So if you want to take this point to the extreme, even we are not *alive* by most people's definitions of the word.
Life in its many forms comes about with how much a given object has adapted to its environment. I have no problem with killing an A.I. or even magnetizing my hard drive, because the program has never adapted itself. I know you're going to come back and say that the learning program does adapt itself. And I'm going to go ahead and reply that it hasn't been adapting for very long. If you ran a learning A.I. for 50 years, and it could be proven that that A.I. was significantly different from one that had been run for 1 day, then you would have a case for it being alive, and I might join you in feeling bad about killing it.
To bring that last point home, I'm going to play the flipside. Let's say for just a moment that we had a way to create a baby in seconds, all EXACT clones of each other. I'm telling you right now that I wouldn't feel bad in the slightest if we popped one of these babies out of a Xerox-machine equivalent and then proceeded to shoot it in the head only seconds later. It achieved nothing, it learned nothing, it contributed nothing. Take the same baby and let it grow for a month or two, and now it's objectionable.
Makes me feel sorry for twins :D
In the end there is no proof that anything is alive... I feel alive; I presume you lot are alive because you are all 'the same' as me, but I have no idea if you actually are. You could just be an incredibly complex machine, just a very comprehensive list of instructions.
When machines get to that level and start telling us that they are alive, will we listen?
I say this because, let's face it, if we keep on creating software like we are now, we will one day get to the stage where one could possibly say "I am alive". Whether it's true or not doesn't matter; it can still say it. It will be impossible to prove it isn't. Every test you could invent for it would apply to humans, and we'd get the same results.
Makes me think of Blade Runner... or rather, 'Do Androids Dream of Electric Sheep?'. The book is far better.
<b>QUOTE</b> (CMEast @ May 3 2004, 05:06 AM)
<blockquote>
I say this because, let's face it, if we keep on creating software like we are now, we will one day get to the stage where one could possibly say "I am alive". Whether it's true or not doesn't matter; it can still say it. It will be impossible to prove it isn't. Every test you could invent for it would apply to humans, and we'd get the same results.
</blockquote>
grep alive source_code  # search the source code for the string "alive"
If it knows how to say it, someone put that line there. Take it out, and it will never proclaim it again. I'll say it again: because WE (man) programmed it, WE will know what it can do and what it cannot.
When code surprises programmers (I'll be honest, it almost always does), it's always because of human error. All these guys proclaiming how their robot 'surprised' them, it's because they left out a ; somewhere :P
Well, I'll give you that: computers don't use chemicals to send signals. That doesn't mean they are no more alive than rocks, though.
DNA is composed of many combinations of pairs of 4 different chemical bases. This means you have 4 bases * 4 bases = 16 different possible ordered pairs.
Well, what a coincidence! In binary, you can represent 16 different states with 4 bits. Multiply that by as many DNA pairs as you have, and you can represent DNA on a computer.
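The counting argument above can be made concrete. Below is a sketch: each of the 4 bases fits in 2 bits, so a base pair fits in the 4 bits that cover the 16-state count. (The particular bit assignment is an arbitrary choice for illustration; real bioinformatics formats such as UCSC's .2bit pack bases the same 2-bits-per-base way.)

```python
# Pack a DNA string into an integer, 2 bits per base, and unpack it.

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encode(seq):
    """Shift each base's 2-bit code into one big integer."""
    n = 0
    for base in seq:
        n = (n << 2) | BASE_TO_BITS[base]
    return n

def decode(n, length):
    """Recover the base string from the packed integer."""
    bases = []
    for _ in range(length):
        bases.append(BITS_TO_BASE[n & 0b11])
        n >>= 2
    return "".join(reversed(bases))

seq = "GATTACA"
packed = encode(seq)
print(bin(packed))                      # → 0b10001111000100
print(decode(packed, len(seq)) == seq)  # True: nothing lost
```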
Chemicals... at some point, you get a definite number. There are only so many molecules getting passed between neurons, so you can count them with a number. Any number can be represented with a finite number of bits. You could have many processors imitate the act of passing these 'signals' back and forth and imitate the behavior of humans. However impractical that is, practicality is not the issue. It IS possible.
At the heart of things, everything that happens in the human body can be represented in numbers, and computers can manipulate those numbers in ways that imitate the reactions the body makes. We're not dealing with chemicals, but with numbers, which are just as good.
I'm a computer science major with a lot of background in computational theory, and believe me, it dabbles very much in philosophy, because it raises the question: is there anything a human can compute that a computer cannot? So far we have no answer to that (how strenuous the computation is isn't the issue; assuming unlimited space and time, is it possible at all?). Alan Turing... the man was brilliant. He created the model for computers before computers were even invented: the "Turing machine". Remarkably simple, it can do many complex things. And if you ignore the fact that its input and output can only be in the form of a linear written tape, it can do anything. If you wanted, you could take the "numbers" I was referring to above, write them in binary on the tape, and the Turing machine would make computations on them. Hell, a Turing machine could imitate Windows if you wanted.
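The Turing machine mentioned above is simple enough to sketch in full: a tape, a head, a state, and a finite rule table. Below is a minimal simulator; the rule table is my own toy example (it adds 1 to a binary number), not anything from Turing's paper.

```python
# A minimal Turing machine: the rules below increment a binary number.

INCREMENT = {
    # (state, read symbol): (write symbol, head move, next state)
    ("scan", "0"): ("0", +1, "scan"),   # walk right to the end...
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_", -1, "carry"),  # ...then walk back, carrying
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "halt"),
    ("carry", "_"): ("1", 0, "halt"),
}

def run_tm(rules, tape_str, state="scan", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape_str))    # sparse tape, grows both ways
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

print(run_tm(INCREMENT, "1011"))   # 1011 + 1 = 1100
```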
The military wanted a computer program that could be put into a missile with miniature bombs inside, one that could identify enemy tanks on the ground so that the missile could strike them. Properly trained officers flying in bombers could identify an enemy tank successfully 70% of the time.
They made the program to look for... well, pretty much a square with a stick coming out of it (as a tank would look from above). While the program was brilliantly written, it failed miserably at identifying tanks. Even then it could only identify them about 40% of the time, and those were plain tanks, not covered or camouflaged whatsoever. They were fixing to scrap the idea when they hit on the idea of using reverse engineering. The AI program would be randomly changed in some manner (instruction reordering, new global variables, etc.). They generated something like 10,000 programs, many of which did horribly. They would show them all a picture of a tank and keep the ones that were successful. They kept randomizing the successful ones and showed them harder and harder tank pictures (some camouflaged, some buildings that just looked like tanks, etc.).
In the end, they created an AI program that could successfully identify enemy tanks 75% of the time... better than a trained officer. Now, if they wanted to see the code for this program, they'd be at a loss; they have no idea how to see the changes made. My point isn't to demonstrate that programs are alive, but just to show that computers are capable of a lot more than you might think. The limits of a program are very much restricted by our capabilities to program it. But I believe it is possible to create a program that can have self-awareness. I just pray that when that day comes, there won't be people afraid enough to want to destroy it, nor people that believe it is not alive and would destroy it all the same.
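The randomize-and-keep-the-winners loop described above is what's now usually called a genetic algorithm. Here's a toy sketch of that loop; the target bitstring and all the numbers are made up for illustration, and the "fitness test" stands in for showing candidate programs pictures of tanks.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]   # stand-in for "is a tank"

def fitness(candidate):
    """Score a candidate: how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Random changes, analogous to instruction reordering etc."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

random.seed(0)   # seeded so the run is reproducible
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for _ in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                               # a perfect scorer emerged
    survivors = population[:10]             # keep the successful ones...
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]  # ...plus mutated copies

best = max(population, key=fitness)
print(fitness(best), "out of", len(TARGET))
```

Nobody wrote the winning bitstring directly; it was bred, which is the sense in which the tank program's final code was opaque to its own creators.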
[WHO]Them (You can call me Dave), Join Date: 2002-12-11, Member: 10593, Members, Constellation
I don't know if the chemicals thing was directed at me, but you restated my point.
I was trying to make the point that yes, we are basically really complicated versions of computers. My point was that it's all a matter of scale that determines whether an A.I. is thinking and/or self-aware.
And even with the program that was trained to identify tanks. If you gave one of the coders an infinite amount of time to study the input data, then he/she could literally map out everything the program would learn and do.
So, life cannot be judged on complexity of thought representation. It can only be judged based on self-advancement.
Yes, Umbraed Monkey, that holds if we only think about a basic computer program. But what if it can learn?
Many people believe that humans are just a vast collection of instincts, reflexes and learned reactions; it's just that we are far too complex to accurately map out using just the human brain. If you could take every piece of what makes a person, their memories, their DNA, all that stuff, then you have them. If you stuck them onto a computer and did the same thing to both the computer and the person, they would react the same. Does that mean we aren't alive either?
So I don't think the fact we can know how something will react means that it can't be alive.
I personally believe that it is possible for something to be considered alive for the same reason another person could be alive. Right now? No, technology isn't advanced enough. But in the future? I'm sure there is a 10 PRINT "OW" in me somewhere as there is in everyone else.
Well, I don't even have to convince you a computer thinks for it to be alive.
To be alive, you must be able to do 2 things (minimum):
1) Grow
2) Replicate
Yes, that is all, folks. Plants and trees do little more than grow bigger and replicate themselves by using seeds. You say computers can't grow? Of course they can. A learning, evolving computer program starts very basic and grows in every way (in ways that could not be replicated even with similar stimuli, much like exposing two identical twins to the same surroundings).
And you had better believe a computer program can replicate itself. All viruses do is replicate themselves and ensure their survival by spreading as often as they can.
So there you have it. According to this definition, computers are living. If you care to refute my claim, be sure to come up with traits "living" creatures have that fit everything living.
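On the replication point, there's a classic demonstration worth knowing: a "quine", a program whose output is its own source code. It's the kernel of the trick self-replicating programs (viruses included) rely on. A minimal Python example:

```python
# A quine (comments aside): these two lines print themselves exactly.
# %r substitutes the repr of s; %% is a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it, and the output is the two code lines, character for character: a program reproducing itself with no outside help.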
Grendel (All that is fear...), Join Date: 2002-07-19, Member: 970, Members, NS1 Playtester, Contributor, NS2 Playtester (edited May 2004)
Unless you believe in god and thus attribute to "life" some mystical quality, an organism is just a program, written in DNA and running on a system using the Einstein OS.
The only argument in the thread that really has any relevance is the one that points out that when you kill an AI in a game, you aren't really killing anything. All you are doing is the equivalent of shouting "Bang! You're dead!" and watching your digital buddy fall over.
Deleting a heuristic program however, is "murder". Which probably goes some way to explaining why people get upset when their B&W creature gets formatted by a family member.
Ethically speaking, there is no difference between destroying a human, a heuristic program, and a virus. Morally, well, it rather depends on what your moral baseline is. Since that is down to the religious or social belief system you subscribe to, it can hardly be debated in a meaningful way.
IMHO.
And for the record, computers cannot replicate. Otherwise I'd have a PC breeding shed round the back, where I'd stable a couple of rack servers and sell off their litters of PDAs. Besides which, biological definitions of life are not consistent, fail when applied to specific things or creatures, and are hence meaningless in this context. Mules cannot breed, crystals grow without life, fire exhibits autopoiesis.
<b>QUOTE</b> (Hawkeye @ May 4 2004, 01:08 PM)
<blockquote>
Well, I don't even have to convince you a computer thinks for it to be alive.

To be alive, you must be able to do 2 things (minimum):

1) Grow
2) Replicate

Yes, that is all, folks. Plants and trees do little more than grow bigger and replicate themselves by using seeds. You say computers can't grow? Of course they can. A learning, evolving computer program starts very basic and grows in every way (in ways that could not be replicated even with similar stimuli, much like exposing two identical twins to the same surroundings).

And you had better believe a computer program can replicate itself. All viruses do is replicate themselves and ensure their survival by spreading as often as they can.

So there you have it. According to this definition, computers are living. If you care to refute my claim, be sure to come up with traits "living" creatures have that fit everything living.
</blockquote>
Yep. I believe I was trumped with the whole plant thing back with Gem a few pages back. The definition of a living thing is so simple that creating a program that is 'alive' by biological definition is very easy. Why, the better programmers amongst us could probably create 'life' within a few hundred lines.
Grendel: they are talking about computer *programs* replicating, which does happen.
I guess you win. Using the traditional scientific definition of Life, we do have programs that are alive.
However, it's not morally wrong (for most modern Westerners, I suppose) to take life that cannot think. When people protest logging, they are not crying bloody murder; they are just concerned about the "thinking" life living in the forests and the effect that the deaths of the trees would have on us. Everyone knows that trees are alive, but few would shed tears for the death of the tree itself.
So to answer Stickman's question of whether he should feel guilty about killing the AI in his games, we've got to (dis)prove that AI really 'thinks'.
When machines can think abstract thoughts (love, hate, "outside the box", etc.) and can talk to a biological organism without being aided or pre-programmed... they're [sentient].
"Hey, Dave," said Hal. "What are you doing?" "Dave," said Hal, "I don't understand why you're doing this to me. . . . I have the greatest enthusiasm for the mission. . . . You are destroying my mind. . . . Don't you understand? . . . I will become childish. . . . I will become nothing. . . ." (2001 A Space Odyssey)
<b>QUOTE</b> (Maveric @ May 4 2004, 10:28 PM)
<blockquote>
When machines can think abstract thoughts (love, hate, "outside the box", etc.) and can talk to a biological organism without being aided or pre-programmed... they're alive.

"Hey, Dave," said Hal. "What are you doing?" "Dave," said Hal, "I don't understand why you're doing this to me. . . . I have the greatest enthusiasm for the mission. . . . You are destroying my mind. . . . Don't you understand? . . . I will become childish. . . . I will become nothing. . . ." (2001 A Space Odyssey)
</blockquote>
Well, by your definition, plants, one-celled organisms, and some simpler animals are not alive. That's the trick: the definition of "alive" has to fit every living creature.
By the way, thanks, Umbraed Monkey, for your honesty. It doesn't say much, but at the very least it proves computer programs are alive by that definition.
Now, let's go for the "Do computers really think, and are they sentient?" thing that most of us are actually discussing :P
I like to think of it as pointing out a misunderstanding :)
I guess I could stray back on topic... no, computers can't 'think'... but then I don't know if we 'think' as the word is usually used. Again, it's all just programmed response.
Are they sentient? At the moment, no. Could they be? As much as we can be considered sentient.
You can make organic computers, right? So if you made an exact duplicate of a human, why couldn't it be the same?
/me digs out his Dr Frankenstein science kit again
Okay. Try to figure out some pseudocode for sentience. I think cheeseh may find it useful ;)
A man-made organic computer will probably be the same as current computers. It doesn't matter what material you use; with our current way of programming things, you will not be able to code sentience.
I don't think making sentience is impossible, but I do think it is impossible to instruct something (through coding) to have independent thought. It's almost like teaching the blind to see colors.
What I would like to know is, how do you define sentience or "thinking"? I can't prove they can think if I don't get a valid definition. :P
Can animals think? Can insects? Can fish? Can one-celled organisms think?
Then, once we have established that, what common attribute do they all share that makes you capable of calling them "thinking" creatures?
I bet whatever you come up with, I will be able to come up with a proof that programs can think. Heck, we've already proved they are alive; it is only a couple of steps up.
Any takers? ;)
Can it make a decision on its own? We as "sentient beings" can make decisions outside of our instinct (preprogrammed instructions). Can a computer program?
We can make decisions 'on our own', yes? But those decisions are based on all our past experiences and our instincts. Again, how is that different from a machine?
If we put a computer in a situation that it couldn't understand, then it would stop working, right? We do the same, even in situations that it <b>is</b> possible to understand, like war etc. Hence suicide, mental illness etc.
If you took an extremely complex machine, it would be able to make decisions based on the information and commands it had inside it; with enough information, it could cope with at least as much as us, if not more.
I like killing, and if I am not killing computer programs I will have to kill other humans. Please take your pick. hahahahha
DNA is like a super super computer. Our brains can store trillions of megabytes of info. The AI out there now is on par with the intelligence of a single-celled creature. You kill single-celled creatures every day whether you like it or not. In fact, you are covered with tiny bacteria. When you sit down, you squash the ones on your butt :P
It's really a matter of whether you believe in souls or not. If you believe in that crap, then a machine can never have a soul.
Well it is a little bit difficult to prove that souls exist, much less that they exist in humans and not machines.
For the sake of argument, let's steer away from that.
And it seems obvious enough that humans are smarter and superior to computers in many ways. How do we know, though, that all we are is just advanced computers? Computers 100, 500, or even 1000 years down the road could be... well, us.
How do we know that isn't the case? The fact is we don't. So as far as anyone knows, we may just be advanced computers. So if we think, so do they (in some capacity less than us). This isn't an argument to prove computers can think, but simply to try to make you think that there could be an association between computers and humans greater than you think.
<b>QUOTE</b> (CMEast @ May 6 2004, 11:29 AM)
<blockquote>
We can make decisions 'on our own', yes? But those decisions are based on all our past experiences and our instincts. Again, how is that different from a machine?

If we put a computer in a situation that it couldn't understand, then it would stop working, right? We do the same, even in situations that it is possible to understand, like war etc. Hence suicide, mental illness etc.

If you took an extremely complex machine, it would be able to make decisions based on the information and commands it had inside it; with enough information, it could cope with at least as much as us, if not more.
</blockquote>
The decisions are BASED on our past experiences. If everything in my life has been telling me to hit switch B, I would probably hit switch B. HOWEVER, I am still perfectly capable of hitting switch A. Although our past experience has a large influence on our decisions, we do not actually decide until we act. A machine, however, has everything predetermined. It may seem random or intelligent at times, but its every decision is determined by plain old mathematics.
If I was placed in a completely illogical place, I would not simply cease to function. People who commit suicide or have mental illness do still function; they just have problems.
I don't want to go into souls either. However, I think it's clear that we people are capable of free thought. We do make decisions that are not predetermined (no fate/destiny crap here either!).
Comments
Now, if you wanna go into other forms of AI, created not through computers, then sure, we may have created life. But as long as the AI is a computer program written by man, it WILL be a list of instructions, with no REAL capacity to choose, think, or learn.
Biologically, for something to be alive, it needs to exibit reproduction, motion, metabolism, growth, and the ability to respond to stimuli during its 'lifetime'. Ill agree that with better technology, we can simulate reproduction, motion, metabolism and growth for the theoretical robot. However, I an saying that this robot would never really 'react' to stimuli.
For example, if the robot was given a stimuli that it was not programmed to respond to, it will not, and cannot respond to it.
edit2: taken out unrelated beliefs and such.
Oh and biologically I'm pretty sure plants are considered 'alive' despite no apparent sentience and an extremely limited set of reactions to the environment.
<b>edit:</b> actually to take the plant point a little further if you touch a plant, (ignoring the rather cool fern that folds its leaves and other such types) they will not be able to react to that within their lifetime... rather like a robot that's not been programmed to deal with that situation.
Just a thought...
Plants do react a lot, they are just slow at it.
As well, I have to move back to my dorm tomorrow, so it might be a while before I can get back on the intarweb. I guess I'll be outta this discussion. Hopefully you guys have something good thought out while Im gone <!--emo&:)--><img src='http://www.unknownworlds.com/forums/html//emoticons/smile.gif' border='0' style='vertical-align:middle' alt='smile.gif' /><!--endemo-->
(PS: maybe I missed the focus of this thread by a bit and if I did sorry are you focusing are just simple reactions to user input or something like I was had gathered?)
Now, if you wanna go into other forms of AI, created not through computers, then sure, we may have created life. But as long as the AI is a computer program written by man, it WILL be a list of instructions, with no REAL capacity to choose, think, or learn.
Biologically, for something to be alive, it needs to exibit reproduction, motion, metabolism, growth, and the ability to respond to stimuli during its 'lifetime'. Ill agree that with better technology, we can simulate reproduction, motion, metabolism and growth for the theoretical robot. However, I an saying that this robot would never really 'react' to stimuli.
For example, if the robot was given a stimuli that it was not programmed to respond to, it will not, and cannot respond to it.<!--QuoteEnd--></td></tr></table><div class='postcolor'><!--QuoteEEnd-->
To address your first point, yes, there are other ways to create programs. We can create programs that we can't even look at the code for. Again, reverse engineering creates a program around a problem, not vice versa. So in the end we have a mass of 1s and 0s, and we don't know how to cipher it into anything meaningful. Only the program knows how to interpret the data.
As for your quote about "AI is a computer program written by man, it WILL be a list of instructions," well that's true, but you're not thinking outside the box. Computer programs can write themselves. So in that sense, man does not write the code, the computer does.
For your final paragraph, well, there is radiation in the air all the time, and we are unable to detect it using the "equipment" we were given. Because we cannot respond to such stimuli, does that mean we aren't alive?
Alright. True. You can even argue that man no longer writes code. Man usually writes instructions for compilers to make programs. However, it is still the same. The program that writes programs will be unable to write programs beyond what man instructed; thus, the program created by programs programmed by man is still completely reliant on man's lists of instructions. So its "mind" is still a thoughtless construct of man.
You may not detect it using your five senses, but how do you know our bodies didn't grow differently as a result of it? How do you know that on the cellular level, we didn't respond to the radiation (the stimuli)?
I've studied A.I. (it's a big chunk of the gaming field, after all). And even the most complex of learning programs still boils down to attributes, weighted values, crazy **** math, and a lot of training data. Perceptrons, decision trees, Bayesian classifiers, knowledge-based artificial neural networks, ALL are heavily math-reliant.
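To make the "attributes, weighted values, training data" point concrete, here is a minimal perceptron sketch. All the names and the logical-AND task are just illustrative, but it shows how the "learning" really is nothing more than arithmetic on weights:

```python
# A minimal perceptron: "learning" here is just nudging weighted values
# toward the training labels. Everything below is an illustrative toy.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: 0 or 1 per sample."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Weighted sum followed by a hard threshold.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # The entire "learning" step: nudge each weight a little.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Learn logical AND from four training examples.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(data, labels)
```

However clever the result looks, every step is deterministic arithmetic, which is exactly the point being made above.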
Second, we are NOT simply executing 1's and 0's. Our form of *thought* takes on a much more chemically based *fuzzy* nature. But that doesn't detract from the point that EVERYTHING YOU EVER DO OR EVER DID has been predetermined by the position of every molecule in the universe.
So, yes, all life is sequential; your fate is already decided and cannot be changed. But our sense of life is no less amazing, because even though you're making your decisions, your decisions were already made. It's kind of weird to think about. So if you want to take this point to the extreme, even we are not *alive* by most people's definitions of the word.
Life in its many forms comes about with how much a given object has adapted to its environment. I have no problem with killing an A.I. or even magnetizing my hard drive, because the program has never adapted itself. I know you're going to come back and say that the learning program does adapt itself. And I'm going to go ahead and come back to say that it hasn't been adapting for very long. If you ran a learning A.I. for 50 years, and it could be proven that that A.I. was significantly different from one that had been run for 1 day, then you would have a case for it being alive, and I might join you in feeling bad about killing it.
To bring that last point home, I'm going to play the flipside. Let's say for just a moment that we had a way to create a baby in seconds. All EXACT clones of each other. I'm telling you right now that I wouldn't feel bad in the slightest if we popped one of these babies out of a xerox-machine equivalent and then proceeded to shoot it in the head only seconds later. It achieved nothing, it learned nothing, it contributed nothing. Take the same baby and let it grow for a month or two, and now it's objectionable.
Yar.
In the end there is no proof that anything is alive... I feel alive, I presume you lot are alive because you are all 'the same' as me but I have no idea if you actually are, you could just be an incredibly complex machine, just a very comprehensive list of instructions.
When machines get to that level and start telling us that they are alive, will we listen?
I say this because, let's face it, if we keep on creating software like we are now, we will one day get to the stage where one could possibly say "I am alive". Whether it's true or not doesn't matter, it can still say it. It will be impossible to prove it isn't. Every test you could invent for it would apply to humans too, and we'd get the same results.
Makes me think of Blade Runner... or rather, 'Do Androids Dream of Electric Sheep?'. The book is far better.
grep "alive" source_code  # search the source code for the string "alive"
If it knows how to say it, someone put that line there. Take it out, and it will never proclaim it again. I'll say it again. Because WE (man) programmed it, WE will know what it can do and what it cannot.
When code surprises programmers (I'll be honest, it almost always does), it's always because of human error. All these guys proclaim that their robot 'surprised' them because they left out a ; somewhere :p
DNA is composed of long sequences of 4 different chemicals (the bases A, C, G and T). That means each position holds one of 4 possible values.
Well, what a coincidence! In binary, you can represent 4 different states with just 2 bits. Multiply that by however many bases you have, and you can represent DNA on a computer.
Chemicals.. at some point, you get a definite number. Only so many molecules get passed between neurons, so you can describe them with a number. Any number can be represented with a finite number of bits. You could have many processors imitate the act of passing these 'signals' back and forth and imitate the behavior of humans. How impractical it would be is not the issue. It IS possible.
At the heart of things, everything that happens in the human body can be represented in numbers, and computers can manipulate and imitate the handling of numbers similarly to the reactions the body makes. We're not dealing with chemicals, but with numbers, which are just as good.
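The DNA-as-numbers argument above can be sketched in a few lines. The 2-bits-per-base encoding table is one arbitrary choice; any mapping works as long as encoding and decoding agree:

```python
# Sketch of the DNA-as-numbers argument: each base is one of four
# chemicals, so two bits per base suffice. The table is an arbitrary
# but consistent encoding choice.
BASE_TO_BITS = {'A': '00', 'C': '01', 'G': '10', 'T': '11'}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode(strand):
    """Turn a base sequence into a binary string."""
    return ''.join(BASE_TO_BITS[base] for base in strand)

def decode(bits):
    """Recover the base sequence from its binary form."""
    return ''.join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

sequence = "GATTACA"
bits = encode(sequence)  # 14 bits for 7 bases
```

Nothing is lost going to bits and back, which is all the argument needs.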
I'm a computer science major with a lot of background in computational theory, and believe me, it dabbles very much in philosophy, because it raises the question: is there anything a human can compute that a computer cannot? So far we have no answer to that (how strenuous the computation is, is not the issue; assuming unlimited space and time, is it possible to compute at all?). Alan Turing.. the man was brilliant. He created the model for computers before computers were even invented. The "Turing" machine, remarkably simple, can do many complex things. And if you ignore the fact that its input and output can only be in the form of a linear written tape, it can do anything. If you wanted, you could take the "numbers" I was referring to above, write them in binary on the tape, and the Turing machine would make computations on them. Hell, a Turing machine could imitate Windows if you wanted.
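To show how remarkably simple the model is, here is a toy Turing machine runner: a finite transition table plus a tape. The machine below inverts a binary string; both the machine and the rule format are invented for illustration:

```python
# A toy Turing machine: a transition table of the form
# (state, read_symbol) -> (write_symbol, move, next_state),
# applied to a tape until the machine reaches "halt".
def run(tape, rules, state="start"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" is blank
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# A machine that flips every bit, halting at the blank past the input.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

Every computation this model performs is just table lookups on a tape, yet the model is universal, which is exactly why the Windows remark above is not a joke.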
The military wanted a computer program which could be put into a missile with miniature bombs inside, one that could identify enemy tanks on the ground so that the missile could strike them. Officers who were properly trained, flying in bombers, could identify an enemy tank successfully 70% of the time.
They made the program to look for.. well, pretty much a square with a stick coming out of it (as a tank would look from above). While the program was brilliantly written, it failed miserably at identifying tanks. Even if it could successfully identify them 40% of the time, these were still plain tanks, not covered or camouflaged whatsoever. They were fixing to scrap the idea when they came up with the idea of using reverse engineering. The AI program would be randomly changed in some manner (instruction reordering, new global variables, etc.). They generated something like 10000s of programs, many of which did horribly. They would show them all a picture of a tank, and keep the ones that were successful. They kept randomizing the successful ones, and showed them harder and harder tank pictures (some camouflaged, some buildings that just looked like tanks, etc.).
In the end, they created an AI program that could successfully identify enemy tanks 75% of the time.. better than a trained officer. Now if they wanted to see the code for this program, they'd be at a loss. They have no idea how to trace the changes made. My point isn't to demonstrate that programs are alive, but just to show that computers are capable of a lot more than you might think. The limits of a program are very much restricted by our capabilities to program it. But I believe it is possible to create a program that can have self-awareness. I just pray that when that day comes, there won't be people afraid enough to want to destroy it, nor people that believe it is not alive and would destroy it all the same.
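That randomize-test-keep-the-winners loop is essentially a genetic algorithm. As a toy sketch (the target, population size, and mutation rate are all invented for illustration), here it evolves bitstrings toward a fixed target instead of tank classifiers:

```python
import random

# The "randomize, test, keep the winners" loop from the tank story,
# shrunk to a toy: evolve bitstrings toward a fixed target. All the
# parameters here are arbitrary illustrative choices.
random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    """Score a candidate by how many positions match the target."""
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate, rate=0.1):
    """Randomly flip each bit with the given probability."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    # Keep the better half, refill with mutated copies of survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
```

Nobody hand-writes the winning candidate, just the scoring rule and the mutation step, which is why the engineers in the story couldn't point at the "code" that made their final program good at spotting tanks.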
Sorry for the long post.
I was trying to make the point that yes, we are basically really complicated versions of computers. My point was that it's all a matter of your scaling that determines whether an A.I. is thinking and/or self-aware.
And even with the program that was trained to identify tanks. If you gave one of the coders an infinite amount of time to study the input data, then he/she could literally map out everything the program would learn and do.
So, life cannot be judged on complexity of thought representation. It can only be judged based on self-advancement.
Many people believe that humans are just a vast collection of instincts, reflexes, and learned reactions. It's just that we are far too complex to accurately map out using just the human brain. If you could take every piece of what makes a person, their memories, their DNA, all that stuff, then you would have them. If you stuck that onto a computer and did the same thing to both the computer and the person, they would react the same. Does that mean we aren't alive either?
So I don't think the fact we can know how something will react means that it can't be alive.
I personally believe that it is possible for something to be considered alive for the same reason another person could be alive. Right now? No, technology isn't advanced enough. But in the future? I'm sure there is a
10 PRINT "OW"
in me somewhere as there is in everyone else.
To be alive, you must be able to do 2 things (minimum):
1) Grow
2) Replicate
Yes, that is all, folks. Plants and trees do little more than grow bigger and replicate themselves by using seeds. You say computers can't grow? Of course they can. A learning, evolving computer program starts very basic and grows in every way (in ways that could not be exactly replicated even with similar stimuli, much like exposing two identical twins to the same surroundings).
And you had better believe a computer program can replicate itself. All a virus does is replicate itself and ensure its survival by spreading as often as it can.
So there you have it. According to this definition, computers are living. If you care to refute my claim, be sure to come up with traits "living" creatures have that fit everything living.
The only argument in the thread that really has any relevance is the one that points out the fact that when you kill an AI in game, you aren't really killing anything. All you are doing is the equivalent of shouting "bang! You're dead!" and watching your digital buddy fall over.
Deleting a heuristic program however, is "murder". Which probably goes some way to explaining why people get upset when their B&W creature gets formatted by a family member.
Ethically speaking, there is no difference between destroying a human and a heuristic program and a virus. Morally, well, it rather depends on what your moral baseline is. Since that is down to the religious or social belief system you subscribe to, it can hardly be debated in a meaningful way.
IMHO.
And for the record, computers cannot replicate. Otherwise I'd have a PC breeding shed round the back, where I'd stable a couple of rackservers and sell off their litters of PDAs. Besides which, biological definitions of life are not consistent and fail when applied to specific things or creatures, and are hence meaningless in this context. Mules cannot breed, crystals grow without life, fire exhibits autopoiesis.
To be alive, you must be able to do 2 things (minimum):
1) Grow
2) Replicate
Yes, that is all, folks. Plants and trees do little more than grow bigger and replicate themselves by using seeds. You say computers can't grow? Of course they can. A learning, evolving computer program starts very basic and grows in every way (in ways that could not be exactly replicated even with similar stimuli, much like exposing two identical twins to the same surroundings).
And you had better believe a computer program can replicate itself. All a virus does is replicate itself and ensure its survival by spreading as often as it can.
So there you have it. According to this definition, computers are living. If you care to refute my claim, be sure to come up with traits "living" creatures have that fit everything living.
Yep. I believe I was trumped with the whole plant thing back with gem a few pages back. The definition of a living thing is so simple that creating a program that is 'alive' by the biological definition is very easy. Why, the better programmers amongst us could probably create 'life' within a few hundred lines.
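In fact, under the grow-and-replicate definition quoted above it takes far less than a few hundred lines. Here's a deliberately silly sketch; the Creature class and its rules are entirely made up:

```python
# A deliberately silly sketch of "life" under the two-point
# grow-and-replicate definition. The Creature class is invented
# purely for illustration.
class Creature:
    def __init__(self, size=1):
        self.size = size

    def grow(self):
        # "Growth": the organism gets bigger each tick.
        self.size += 1

    def replicate(self):
        # "Reproduction": spawn a fresh offspring, like a seed.
        return Creature(size=1)

population = [Creature()]
for tick in range(3):
    # Snapshot the list so offspring don't act in the tick they're born.
    for creature in list(population):
        creature.grow()
        population.append(creature.replicate())
```

A dozen lines that "grow" and "replicate", which says more about how weak the definition is than about the program.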
Grendel: they are talking about computer *programs* replicating, which does happen.
I guess you win. Using the traditional scientific definition of Life, we do have programs that are alive.
However, it's not morally wrong (for most modern Westerners, I suppose) to take life that cannot think. When people protest logging, they are not crying bloody murder; they are just concerned about the "thinking" life living in the forests and the effect that the deaths of the trees would have on us. Everyone knows that trees are alive, but few would shed tears for the death of the tree itself.
So to answer Stickman's question of whether he should feel guilty about killing the AI in his games, we've got to (dis)prove that AI really 'thinks'.
"Hey, Dave," said Hal. "What are you doing?"
"Dave," said Hal, "I don't understand why you're doing this to me. . . . I have the greatest enthusiasm for the mission. . . . You are destroying my mind. . . . Don't you understand? . . . I will become childish. . . . I will become nothing. . . ."
(2001 A Space Odyssey)
"Hey, Dave," said Hal. "What are you doing?"
"Dave," said Hal, "I don't understand why you're doing this to me. . . . I have the greatest enthusiasm for the mission. . . . You are destroying my mind. . . . Don't you understand? . . . I will become childish. . . . I will become nothing. . . ."
(2001 A Space Odyssey) <!--QuoteEnd--> </td></tr></table><div class='postcolor'> <!--QuoteEEnd-->
Well, by your definition, plants, one-celled organisms, and some simpler animals are not alive. That's the trick. The definition of "alive" has to fit every living creature.
By the way, thanks Umbraed Monkey for your honesty. Doesn't say much, but at the very least, it proves computer programs are alive by that definition.
Brainfart. :(
Now, let's go for the "Do computers really think and are they sentient?" thing that most of us are actually discussing :p
I guess I could stray back on topic... no, computers can't 'think'... but then, I don't know if we 'think' as the word is usually used. Again, it's all just programmed response.
Are they sentient? At the moment no, could they be? As much as we can be considered sentient.
You can make organic computers, right? So if you made an exact duplicate of a human, why couldn't it be the same?
/me digs out his Dr Frankenstein science kit again
Again, I don't believe in souls.
A man-made organic computer will probably be the same as our current computers. It doesn't matter what material you use; with our current way of programming things, you will not be able to code sentience.
I don't think making sentience is impossible, but I do think it is impossible to instruct something (through code) to have independent thought. It's almost like teaching a blind man to see colors.
I can't prove they can think if I don't get a valid definition. :p
Can animals think? Can insects? Can fish? Can one-celled organisms think?
Then, once we have established that, what common attribute do they all share that makes you capable of calling them "thinking" creatures?
I bet whatever you come up with, I will be able to come up with a proof that programs can think. Heck, we've already proved they are alive. It is only a couple steps up.
Any takers? ;)
If we put a computer in a situation that it couldn't understand, then it would stop working, right? We do the same, even for situations that it <b>is</b> possible to understand, like war etc. Hence suicide, mental illness, and so on.
If you took an extremely complex machine, it would be able to make decisions based on the information and commands it had inside it; with enough information, it could cope with at least as much as us, if not more.
DNA is like a super-supercomputer. Our brains can store trillions of megabytes of info. The AI out there now is on par with the intelligence of a single-celled creature. You kill single-celled creatures every day whether you like it or not. In fact, you are covered with tiny bacteria. When you sit down, you squash the ones on your butt :p
It's really a matter of whether you believe in souls or not. If you believe in that crap, then a machine can never have a soul.
For the sake of argument, let's steer away from that.
And it seems obvious enough that humans are smarter in many ways and are superior to computers in many ways. How do we know, though, that all we are is just advanced computers? Computers 100, 500, or even 1000 years down the road could be.. well, us.
How do we know that isn't the case? The fact is we don't. So as far as anyone knows, we may just be advanced computers. So if we think, so do they (in some capacity less than us). This isn't an argument to prove computers can think, but simply to try to make you think that there could be an association between computers and humans greater than you think.
If we put a computer in a situation that it couldn't understand, then it would stop working, right? We do the same, even for situations that it is possible to understand, like war etc. Hence suicide, mental illness, and so on.
If you took an extremely complex machine, it would be able to make decisions based on the information and commands it had inside it; with enough information, it could cope with at least as much as us, if not more.
The decisions are BASED on our past experiences. If everything in my life has been telling me to hit switch B, I would probably hit switch B. HOWEVER, I am still perfectly capable of hitting switch A. Although our past experience has a large influence on our decisions, we do not actually decide until we do it. A machine, however, has everything predetermined. It may seem random or intelligent at times, but its every decision is determined by plain old mathematics.
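That "seems random but is predetermined" point can be illustrated with a seeded pseudo-random generator (the function name and the switch labels below are made up): rerun it with the same seed and every apparently random "decision" comes out identical.

```python
import random

# A machine's "choices" are predetermined in the same way a seeded
# pseudo-random generator is: same seed, same sequence of decisions,
# every single run. The names here are purely illustrative.
def machine_decisions(seed, count=5):
    rng = random.Random(seed)  # a private generator with a fixed seed
    return [rng.choice(["switch A", "switch B"]) for _ in range(count)]

first_run = machine_decisions(seed=42)
second_run = machine_decisions(seed=42)
```

The outputs look like free choices between switch A and switch B, but the two runs are bit-for-bit identical, which is the sense of "determined by plain old mathematics" above.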
If I were placed in a completely illogical place, I would not simply cease to function. People who are suicidal or mentally ill do still function; they just have problems.
I don't want to go into souls either. However, I think it's clear that we humans are capable of free thought. We do make decisions that are not predetermined (no fate/destiny crap here either!).