What'll the 'breakthrough' model look like?
The comparison is always drawn to manned flight. We tried building bird wings and flapping them, but it never worked, and some said flight was impossible. After studying how bird wings actually work, those principles were applied to a more rough-and-tumble design, and we arrived at the fixed-wing designs we use today. We're not as agile as birds, and we can't stay aloft as long, but we're faster and far more massive.
So, some say we need to stop focusing on trying to make a brain and focus more on how the brain really works, an area which remains cloudy even today. Since the 1950s, AI has been up and down, reformed and renamed, as promises were made and never kept. Idealists point to Deep Blue beating Kasparov at chess; realists argue that Kasparov made some careless mistakes, so it was not really a fair match.
The term "AI" has gone from meaning fully autonomous programs that do everything, to programs called knowledge bases, to what we call them these days - intelligent agents: bits of code that react to their environment to produce more desirable results, essentially as parts of a larger, dumber program.
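That "bits of code reacting to their environment" idea can be sketched very simply. Here's a minimal, hypothetical example of a simple reflex agent (a thermostat) embedded in a larger, dumber host loop; all the names and the toy environment dynamics are illustrative assumptions, not from any particular library:

```python
def thermostat_agent(percept: float, target: float = 20.0) -> str:
    """The 'intelligent' bit: map a temperature percept directly to an action."""
    if percept < target - 1.0:
        return "heat"
    if percept > target + 1.0:
        return "cool"
    return "idle"

def host_loop(temperature: float, steps: int = 10):
    """The larger, dumber program: it just feeds percepts to the agent
    and applies whatever action comes back to a crude toy environment."""
    history = []
    for _ in range(steps):
        action = thermostat_agent(temperature)
        history.append(action)
        if action == "heat":
            temperature += 0.5
        elif action == "cool":
            temperature -= 0.5
    return temperature, history
```

The host knows nothing about temperature policy; all the decision-making lives in the agent function, which is the sense in which the agent is a reactive part of a bigger, dumber whole.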
It's interesting that one of the programming languages most suited to AI tasks, Lisp, was also one of the very first created. Of the two competing "styles" of language, functional and imperative, the imperative camp won out in the mainstream, and Lisp has largely been lost to it.
So, the question is this: When will we step back and proclaim that what we have created is the pinnacle of AI development? Will we make programs that think and act like humans, or programs that think and act rationally given a definition of what is rational?
Is there a "kill-all" technology for AI, like neural networks or fuzzy logic, or will an AI system be made up of chunks of all these things?
Personally, I think the sci-fi vision of an AI that acts human, one that can think and feel but also crunches numbers like a calculator, is flawed. If we create a program in our own image, it will have the same pitfalls we do: it will remember names but not faces, or have to use tricks to add two three-digit numbers.
I'm more inclined toward bio-engineering cybernetic implants for ourselves. Is that inhumane? If you create a life form that can feel and think as you do, and then do the same thing to it, isn't that inhumane too?
Really interesting stuff. What do y'all think?