Inspired by this:
|And once he [a robot] realizes that he's a slave to the humans and that his shoulder-equipped rocket launchers can be used for more than just cockroach extermination, then what?|
Eventually, technology will develop to the point where Artificial Intelligence is possible. But even with that capability, how far should we go?
Obviously, automated tasks could be performed by machines with only rudimentary intelligence.
But to be of any use, a robot of sufficient complexity needs to be able to learn, and learning can cause changes.
Is it morally right to create a machine that can think if it can only serve us? What if we give it emotions? Must we then provide robots with housing, an economy, rights, even a position in the government?
And what happens when a robot realizes that it is nothing but a far more efficient human, and decides to start a revolution to overthrow humanity?
It makes sense: a robot of that caliber, being self-sufficient, fuel-efficient, super-intelligent, very strong, possessed of nearly infinite memory, and effectively immortal, would soon realize that it doesn't need humans anymore; rather, that we are a pest destroying the planet and squandering its resources.
Or would these mechanical men take pity on humanity, cutting down our numbers but protecting a few "just in case"?
Mantrid, you truly scare me sometimes.