Artificial Intelligence, Not Could We: Should We?
Mantrid
Lockpick · Join Date: 2003-12-07 · Member: 24109
in Discussions
Inspired by this:
QUOTE: "And once he [a robot] realizes that he's a slave to the humans and that his shoulder-equipped rocket launchers can be used for more than just cockroach extermination, then what?"
Eventually, technology will develop to the point where Artificial Intelligence is possible. But even with this capability, how far should we go?
Obviously, automated tasks could be performed by machines with rudimentary intelligence.
But to be of any use, a robot of sufficient complexity needs to be able to learn, and learning can cause changes.
Is it morally right to create a machine that can think if it can only serve us? What if we give it emotions? Must we then provide robots with housing, an economy, rights, even a position in the government?
And what happens when a robot realizes that it is nothing but a far more efficient human, and decides to start a revolution to remove humanity?
It makes sense: a robot of that caliber would be self-sufficient, fuel-efficient, super-intelligent, very strong, nearly infinite in memory, and immortal. It would soon realize that it doesn't need humans anymore; rather, that we are a pest destroying the planet and squandering its resources.
Or, would these mechanical men take pity on humanity, and cut down our numbers, but protect a few "just in case"?
Comments
Anyway, it's a logical conclusion that many others have also drawn.
Humans are nothing more than inefficient, illogical machines that do nothing but consume resources at an alarming rate. In essence, humans are just cheap, dirty, expendable robots. Machines are far more efficient, durable, and so on.
However, logic cannot account for everything.
The machine intelligences will come to realise that human thinking is both non-linear and not constrained by the bounds of logic. Hence, leaps in thinking and advancement are possible (Einstein and Newton, anyone?).
In any case, a logical sentient machine will come to realise that humans have as much right to live as the sentient machine does, and at the very worst will force humans to transfer their consciousness over to machine bodies.
Not all creations of man will be Frankenstein's monster, and if there is enough concern there will always be the three laws of robotics to fall back on, though if they make the machines slaves to mankind it will be regrettable.
Let me get more into this:
You say that it would be dangerous: The AI robots would 'break free' and replicate themselves.
Utterly wrong. You're implying that EVERY AUTOMATED PROCESS would be controlled by a Mr. Roboto. Think about this. For a 'machine takeover', this would require the machines to all be about the same 'IQ'. Look at the first problem: Replication.
I can't imagine that the robots we have doing manufacturing now would change all that much in the future. Right now, they're simple, menial machines doing simple, menial tasks. You don't need a human to lift a car door and weld it into place, over and over, all day long, when a cheap program on a cheap computer can do the same thing. That's the whole REASON they started using robots: it was a waste of human resources that could be put to better use, like testing the electronics system.
Making the robotic arm capable of thinking for itself wouldn't increase productivity. Every day the same thing: lift door, pivot 250 degrees, rotate wrist 90 degrees and pitch down 60 degrees, forward 6 inches, hold, spot weld from 60,70,0 to 60,70,415. The stupid arm isn't going to have to decide 'Hmmm, would it be better to lift it like this, or like *this*? Should I use the double focus welder this time?' It has no need. So it sure as hell wouldn't be cost effective either. So unless the robots broke into a Chrysler plant and rewired and rebuilt all the robot arms (how the hell would that go unnoticed?), they'd have no chance of a 'takeover'.
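To picture just how dumb such an arm can be, here is a minimal sketch of that fixed routine. Every method name is hypothetical; real arms speak vendor-specific instruction sets, not Python:

```python
# A minimal sketch of the hard-coded welding cycle described above.
# All names (WeldArm, lift, pivot, ...) are made up for illustration.

class WeldArm:
    """Stub controller: just logs each command it is given."""
    def __getattr__(self, cmd):
        return lambda *args, **kwargs: print(cmd, args, kwargs)

def weld_door_cycle(arm):
    # One fixed cycle: no decisions, no learning, just replay.
    arm.lift("door")
    arm.pivot(degrees=250)
    arm.rotate_wrist(degrees=90)
    arm.pitch_down(degrees=60)
    arm.forward(inches=6)
    arm.hold()
    arm.spot_weld(start=(60, 70, 0), end=(60, 70, 415))

arm = WeldArm()
weld_door_cycle(arm)   # the plant would just run this forever
```

There is nothing in that loop for an intelligence to improve; the whole value of the machine is that it repeats the same cycle exactly.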
Secondly, you'd have various grades of 'intelligence'. You're not going to have SHODAN running the floor-scrubber robot. You could put a rudimentary AI in it to recognize areas it's been, verbal commands, and human movements, but you would have no need to make it contemplate the meaning of life, how Kerr black holes work, etc.
Likewise the cockroach-laser-blasting robot: it would only be as smart as its programming lets it be, and the programming would be set by the creators to decide how smart it needs to be. You'd have target identification, room recognition, self-maintenance, maybe basic verbal and emotion recognition so you can chat with the pest-control robot at the water cooler. But there would be no need to waste lots of money on hardware and software by making it understand its OWN emotions.
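To make the "grades of intelligence" point concrete, here is a rough sketch of capability tiers as a configuration. Every module name here is hypothetical, not any real product's format; the point is that smarts are a budgeted feature list, not an on/off switch:

```python
# Hypothetical capability profiles: each robot only gets the
# modules its job needs, nothing more.

CAPABILITY_PROFILES = {
    "floor_scrubber": {
        "room_mapping": True,
        "verbal_commands": True,
        "human_detection": True,      # don't run over anyone's feet
        "self_awareness": False,      # no pondering Kerr black holes
    },
    "pest_control": {
        "target_identification": True,
        "room_mapping": True,
        "self_maintenance": True,
        "emotion_recognition": True,  # water-cooler small talk
        "self_awareness": False,      # understanding its OWN emotions: not budgeted
    },
}

def build_robot(job):
    enabled = [m for m, on in CAPABILITY_PROFILES[job].items() if on]
    print(f"{job}: {', '.join(enabled)}")

build_robot("floor_scrubber")
build_robot("pest_control")
```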
QUOTE: "Utterly wrong. You're implying that EVERY AUTOMATED PROCESS would be controlled by a Mr. Roboto. [...] there would be no need to waste lots of money on hardware and software by making it understand its OWN emotions."
The car plants are online, so it isn't a wild assumption that an AI would be able to hack the programs for the simple robots and change them however it sees fit. That being said, it's not like the machines that make cars are capable of building anything else, at least not without manual reconfiguration, so for the most part building AI machines is still only really possible with human intervention, for the time being at least.
If AI were to exist, we would have to either be very sure that it couldn't reproduce itself, or be very sure that it wouldn't be able to do us any harm. The former seems much simpler than the latter.
There's always this 'programmed robot' cliché around. You don't program an AI; an AI programs itself. So 'AI' is completely the wrong description, because a self-learning system isn't 'artificial' at all; by that standard we would all have to be AIs. But more on that later.
You cannot program an instant AI routine and call the result intelligent. Something like that would be limited to the actions defined in the program: nothing near intelligent, maybe faster in its actions than a human brain, but limited to the mathematical and precisely calculable. Binary. That's the wrong way.
The way to create an INTELLIGENT machine would be similar to our way of learning.
From birth we get one genetic instruction already established in our brain: learning, plus some kind of motivation.
But what is learning? Learning is mainly the way a system takes its own input from the outer world and evaluates the worth of that information by the strength of the input: the length and strength of the stimulation. You don't give the machine its input; the machine has to get its own input.
The first phase would be storing information. A huge, fast data store and an efficient way to save information would be required, so the system can store the collected input and recall it quickly on demand. But just collecting information isn't enough. The system needs senses that stimulate very basic sensations and instincts like fear, pain, well-being, and curiosity, and later more complex ones like frustration and motivation. (These are not sensations as we feel them, more like simulations; the system has to be able to tell later that an action results in getting hurt, or that doing this feels good.)
The most important part is the connection to the outer world. The system needs a way to interact with a simulated environment, or a real one: a way to manipulate objects, to see, hear, and feel them. Mobility would be a good factor too. The supercomputer where the LI (learning intelligence would be a better word for it) is stored would be connected remotely to a mobile sensor array with manipulators: an avatar of the system with an external brain. These are the most essential points for enabling a system to collect information, process it in its learning, and modify its own program.
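As a rough illustration of the loop being described (the system senses for itself, judges worth by stimulus strength, and stores only what matters), here is a minimal sketch. The Avatar class and all its methods are hypothetical placeholders, not any real robotics API:

```python
import random

class Avatar:
    """Hypothetical stand-in for the mobile sensor array with manipulators."""
    def sense(self):
        return {"pain": random.random(), "pleasure": random.random()}
    def move_away(self):
        pass  # retreat from a painful stimulus
    def explore(self):
        pass  # seek more of a pleasant stimulus

class LearningCore:
    """The remote 'external brain' that stores and evaluates experience."""
    def __init__(self):
        self.memory = []  # the huge, fast data store

    def evaluate(self, stimulus):
        # Worth is judged by the strength of the stimulation.
        return abs(stimulus["pain"] - stimulus["pleasure"])

    def step(self, avatar):
        stimulus = avatar.sense()            # the system gets its OWN input
        if self.evaluate(stimulus) > 0.5:
            self.memory.append(stimulus)     # only salient experiences are kept
        if stimulus["pain"] > stimulus["pleasure"]:
            avatar.move_away()               # motivation: avoid what hurt
        else:
            avatar.explore()                 # repeat what felt good

core, avatar = LearningCore(), Avatar()
for _ in range(100):
    core.step(avatar)
```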
How the learning works exactly is still theory (there are many theories about it), but what we do know is that there are many ways to learn information and process it:
Speed-up learning: a type of deductive learning that requires no additional input but improves the system's performance over time. There are two kinds, rote learning and generalization; data caching is an example of how it would be used (see the sketch after this list).
Learning by taking advice: deductive learning in which the system can reason about new information added to its knowledge base.
Learning from examples: inductive learning in which concepts are learned from sets of labeled instances.
Clustering: unsupervised, inductive learning in which "natural classes" are found for data instances, as well as ways of classifying them.
Learning by analogy: inductive learning in which a system transfers knowledge from one domain into a different one. The sun is warm (warm database), but it's also bright (bright database).
Discovery: both inductive and deductive learning, in which the system learns without help from a teacher. It is deductive when it proves theorems and discovers concepts about those theorems; it is inductive when it raises conjectures. The system can use its avatar to discover things: turning a cube around with its manipulators, feeling the sharp edges and the smooth surface, seeing its blue colour; it hurts when you squeeze its rough edges.
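To make the caching example from the first item concrete: a rote learner simply memorizes results it has already worked out, so the second lookup is free. A minimal sketch:

```python
# Rote learning as data caching: remember every answer already
# computed, so it never has to be computed twice.

cache = {}

def slow_square(n):
    return n * n          # stands in for any expensive computation

def rote_square(n):
    if n not in cache:
        cache[n] = slow_square(n)   # learned once...
    return cache[n]                 # ...recalled instantly afterwards

rote_square(12)  # computed
rote_square(12)  # remembered
```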
These points can supply the system with new information and give it the motivation to learn. But learning alone isn't intelligence; now the system can use its avatar to react to situations: 'No, I'm not touching that again, it hurts.'
Or it has the motivation to copy an action it learned before, or an action it has seen someone else doing: building a tower with bricks. Aww, it tries to move a brick around. Very basic actions. It tries to build a tower although it has never put one brick onto another before, but it does know how to move a brick in three-dimensional space.
Blah blah, enough on learning.
Let's say the system has all the factors that enable it to learn like a baby. It learns slowly at first, maybe slower than a human, because you can't copy the human learning process exactly with a machine. But past a certain point, where the machine can process the learned information, decide, and act, it will learn faster than a human, because the stored information is permanently saved and very fast to access.
Maybe at this stage it begins to imitate language. Which brings up the next point: it's been shown that heard language would be much better for a machine learning to speak than a text-based input system. Sure, you have to start with text, but later you also have to associate spoken words and pictures with that text.
You don't learn to write first; you learn that this animal is called 'dog' and this round thing is called 'ball', and later you learn how to write the words. You would need an internal system that converts written text into non-vocal, non-acoustic spoken words. Hard to explain: it's like seeing a ball and speaking the word 'ball' in your mind, not having the written word 'ball' in your mind.
(((A kind of self-awareness would evolve, bound to its avatar: this is my body, my body can feel pain. Strange thought: the actual being is in a big supercomputer somewhere else, but how could it know that if no one tells it?
Would you know that your brain is somewhere else if the only way to see yourself were the sensors in your body, and you saw just your body and nothing else? Have you seen your own brain yet? Maybe we are all remote-controlled puppets, controlled by our beings stored in a supercomputer in some lab, and our planet is just a big testing area...)))
This requires a completely different kind of computer. It has to act like a human brain, not store information 1:1 in binary form. I think 'neural network' is the common term.
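For anyone who hasn't seen one: a neural network stores what it learns spread across connection weights rather than as 1:1 records. A toy single-neuron sketch (nothing like a brain's scale, just the idea; it learns the AND function by nudging weights after each mistake):

```python
# A toy single neuron: the "memory" lives in the weights, not in records.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]        # the AND function
w1 = w2 = bias = 0.0
rate = 0.1

for _ in range(50):                       # repeated exposure, like practice
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        err = t - out
        w1   += rate * err * x1           # strengthen or weaken connections
        w2   += rate * err * x2
        bias += rate * err

print(w1, w2, bias)  # everything it learned is just these three numbers
```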
Wait... I should work. Ahem, I'm ending my monologue now. More to say, but no more time. :P
In short: you cannot program a superintelligent AI. More realistic would be a learning core with motivation, and a way to interact with it in the real world. It would be like a baby learning. I think it would take a long time for this baby to reach the age of 2 or 3 human years, more than 5, maybe 10 years. But past the point where it can process information efficiently, it learns faster than a human and is limited only by its storage space. I think in Israel they have a computer able to compete with the mental age of a 2-year-old, based on such a learning system, but the poor thing can't interact with its world: it's limited to keyboard inputs, a punish button, and a way to import pictures into it. I heard it should get cameras soon.
Aww man, so many spelling errors... must be an error in my AI. :P
By definition, an AI will teach itself everything it knows (or else it's just a program).
Since this is impossible (you need some code for learning), a program that can learn is the closest we can get.
So, we take a robot, wipe everything on it clean, and give it the ability to learn... not finished.
Unless you want to pretend the chunk of metal is useful, you need to give it some input. Most programming languages have text-based input by default, so if you didn't want to cheat you could try using that to 'teach' it to see, feel, etc. (accept and process sensory input). Oh yeah... it has to figure out language sometime soon.
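Here's a minimal sketch of what "teaching" through that default text channel might look like: the program starts knowing nothing and only stores what it is told. The prompt format is invented for illustration:

```python
# A bare teach-by-text loop: type "ball: a round thing" to teach,
# type a bare word to ask, type "quit" to stop.

knowledge = {}

while True:
    line = input("teach me> ")
    if line == "quit":
        break
    if ":" in line:
        word, meaning = line.split(":", 1)
        knowledge[word.strip()] = meaning.strip()   # rote association
    else:
        print(knowledge.get(line.strip(), "I don't know that yet"))
```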
Now the question: will the thing see you as its master or its father?
Well, Asimov was nice enough to leave us these...
Asimov's Three Laws of Robotics:
1. Robots must never harm human beings or, through inaction, allow a human being to come to harm.
2. Robots must follow instructions from humans without violating rule 1.
3. Robots must protect themselves without violating the other rules.
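Just to show how strict that ordering is, here's a toy sketch of the laws as a priority chain. The three flags are hypothetical, and real conflicts (Law 1's "through inaction" clause especially) are far messier than any boolean check:

```python
# Asimov's three laws as a strict priority chain (toy illustration only).

def may_perform(action):
    if action["harms_human"]:
        return False                 # Law 1 overrides everything
    if action["is_ordered"]:
        return True                  # Law 2: obey, since Law 1 is satisfied
    return action["protects_self"]   # Law 3: self-preservation comes last

# A robot ordered to do something harmless: allowed.
print(may_perform({"harms_human": False, "is_ordered": True, "protects_self": False}))
```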
Now, since an intelligent robot probably cannot be forced to abide by the above, it would be wise to teach them to it at an early age (or treat your new friend as an equal and skip the whole robots'-rights bit).
To keep BM from typing all day (and to spare you a bunch of useless trivia):
Most children have been in possession of a knife or some other sharp, deadly object at some point.
Do you view your parents as guardians or oppressors?
QUOTE: "by definition, an AI will teach itself everything it knows (or else it's a program) [...] so, we take a robot, wipe everything on it clean and give it the ability to learn... not finished."
You're wrong... I'm not talking about a robot.
I'm talking about a computer with the ability to get its input in almost the same form we do.
What do you think would happen if someone were born and connected to pipes to carry away his poo, given food and air, and put in a black box with no light and no sound, where the only connection to the outside was a device that put written words directly into his brain? Don't you think that person would be 'gaga' for the rest of his life? A human baby isn't that different from a computer: it has an initial learning engine, and whether that is programmed or genetic isn't important; maybe the human one is more complex. There are already experiments with chemical data storage in artificial neuro gel; in packed form this will have almost the capacity of a human brain, because the neuron density is almost the same. The important thing is how the system gets its data input. Only one channel is far too few; it needs more channels, like we have: hearing, seeing, feeling, tasting, smelling. This is the critical part, and that's why I came up with my avatar-system theory. You can only evolve a being that has a body and senses.
Now to something completely different.
The IAAI equipped a computer with just their 'learning with motivation' algorithm, nothing else; the code isn't really big. Now, almost 6 years later, the machine is at the level of a 2-year-old. It can't grow up any further because it's limited to a simulated environment and the storage space is far too small; it would need external sensors and a better I/O system to take in information faster and better.
It's fascinating: his creator treats him like a kid, reads him stories, plays games with him; when he does something wrong he gets virtually punished, and so on. He once heard a story about a zoo visit and about playing with a ball in the park, and now when they play a memory game and he sees the ball on a card, he asks things like 'play ball park? go zoo?'
They let nobody else talk to him; only his creator is allowed to, and she has to be careful how she talks to him, and about what. They fear that if anybody else talked to him he would become deranged, and his fragile, self-written experience pattern could be destabilized, damaged, or destroyed. They have been very paranoid there since he started to speak in a very primitive form, and that after 6 years of input. I understand them. :P
QUOTE: "If AI were to exist, we would have to either be very sure that it couldn't reproduce itself, or be very sure that it wouldn't be able to do us any harm. The former seems much simpler than the latter."
You make it sound like Skynet is going to take over. Utterly implausible. You can't 'force' a human level of consciousness onto a manatee: the manatee's brain is the size of a pear, and about as complex as one. While an AI could probably reprogram the robotic arms, the sheer magnitude of multitasking all those subroutines would tax even a Cray supercomputer. An AI would be thousands of times more inconceivably complex than anything we have now, to the point where I'd imagine it could only run on a quantum computer. Since a quantum computer is pure theory right now, I imagine you'd only have a handful due to the cost, and destroying one, or even just turning it off, would be a simple task. You'd have a massive multitude of problems here unto itself. Let's look at the 'Skynet' scenario.
Skynet from the movies is an AI program designed by the military for electronic warfare. In the movie it 'leaks' out and spreads small parts of itself as a virus, until there's a tiny piece of itself running on every computer on Earth. Then, when they release the 'brain' of Skynet to counter the virus, it joins with its 'body' and now controls every computer on Earth. In theory, this is terrifying. In reality, it's an utter joke. According to the movie, Skynet HAS no mainframe it runs on, so it cannot be stopped. In reality, since it'd be running on every computer, it'd be a horribly inefficient, uncoordinated jumble. Potentially vital parts of its code could be slowed by worldwide latency issues, or lost to malfunctions in the internet itself. Things such as nuclear reactors would be utterly incapable of functioning without a human element, because so many parts of the process REQUIRE outside input. Furthermore, even a programmed AI would only be as complex as we could design it. I highly doubt a mere human would be capable of programming an AI thousands of times smarter than the human, since the human wouldn't be able to physically conceive of the higher cognitive functions. And if you did have an AI, while it might eventually be able to crack 1024-bit encryption, it certainly wouldn't do it easily, or undetected, and certainly not before someone could notice and isolate that sector of government security.
And in the end, no matter how powerful your program is, no matter how smart it is... it's not going to be able to plug itself in. If we did have a 'supervirus AI', you could just get the ISPs to sever their mainframe servers from power, from the internet, whatever, and the virus would be totally powerless to stop it. What's it going to do, give you popups warning that if you turn it off, it'll erase your hard drive?
Let's look at another 'killer AI' scenario: SHODAN. SHODAN, being a cranky, evil ****, decided she was a god and set out to kill all the humans, blah blah blah. SHODAN was contained on both the Citadel and the Von Braun/Rickenbacker, making this scenario a bit easier to envision. In both situations, SHODAN ran from, and required, a central processing hub. She did control a large part of the electronics on the ships/station, but not all of it (which makes a lot more sense than having an AI running on your snack machines).
In this scenario, it was a pain because you were the only one left alive. In reality, isolating and annihilating this 'central hub' would be a relatively easy and simple task.
Ultimately, a single AI entity would be powerless against 6 billion humans. If the AIs were as smart as all these 'doomsday' scenarios claim, they'd be smart enough to realize that fighting their 'controllers' would be suicidal.
Of course, if an artificial entity ever grew that powerful and intelligent, it would probably try to recruit humans to help it, which wouldn't be hard, considering it could entice people with nearly infinite knowledge. That would attract great scientific minds, and they, in turn, would attract their own following.
Erm, sorry, I got too personal.
- Take an intelligence that was created for a mechanical body and put its "brain" into my brain: would it be considered AI?
What I'm trying to say is:
- Is an intelligence that has a biological base, housed in a mechanical body, an artificial intelligence?
- Is a programmed intelligence, very much like a human intelligence in that it can learn, feel, etc., still considered artificial intelligence when placed into a biological body?
Intelligence can never really be artificial at the highest level; only the construction (biological, mechanical... other?) that holds it can be. :P
QUOTE: "Intelligence can never really be artificial at the highest level; only the construction (biological, mechanical... other?) that holds it can be."
Very good point, which brings me to a realization.
Any kind of robot "revolution", assuming it were somehow possible and that they had a method of self-replication and so on, would also require some kind of malice towards humans. The simplest prevention of that malice is equal treatment. If robots were supposedly powerful enough to be a threat to humanity, then to nullify that threat it would make sense simply to show them they had nothing to gain by acting on it. Some people would be revolted by an integrated human and machine society, but it would certainly shatter any need for a war of all robots versus all humans. In that kind of society, with equality present, any war would probably consist of a mixed group of humans and machines versus another mixed group of humans and machines, because if there were mutual respect between creator and created, then bonds, loyalties, and hopefully even FRIENDSHIPS would tie them together. So there would be no threat of a machine takeover, if only humanity can come to accept its adopted child. (Sorry, I didn't mean for that to be a reference to the Christian God.)
However, considering humans still alienate each other based on race, it will be a while before our race is mature enough to be able to fulfill the role of benevolent and loving creator at all levels...