AI's Deception

Artificial Intelligence research, despite its enormous successes, will always be considered a failure. There is nothing AI researchers can do about this. The reason is as follows.

A century ago, mundane tasks like multiplying or dividing numbers (or, even harder, computing the square root of a number) were considered tasks that required intelligence. After all, stupid people could not perform them. If anyone were ever to present a machine that could do such computations, that machine would surely be intelligent. Or so people thought.

Then in 1946, ENIAC was unveiled, the first Turing-complete programmable digital computer. At first, the press heralded it as a “Giant Brain”. Then people looked more closely at how the machine worked, and when they realized that there was nothing remotely intelligent about its behavior, they were frustrated: “It’s all just brute-force number-crunching! What a deception!”

People then concluded that basic computations have nothing to do with intelligence. Maybe computing path integrals or solving systems of differential equations would qualify, as these were tasks that still only a mathematician could do, but elementary computations certainly would not.

A few years later, the software for these advanced mathematical tasks was written, so computers could now do them too. It was concluded that they don’t require intelligence either. “This whole math stuff is just number-crunching,” people cried. “It is not what we mean by intelligence. Build a machine that can play chess! That would be intelligent.”

So scientists started to research game strategies. By 1984, chess computers were already pretty good. At least, I had no chance of beating “Colossus Chess” on my Commodore 64, and neither did my classmates. But the public was not convinced yet.

“Build a chess computer that beats the world champion, not some amateur kid!” people demanded. “Only then will we believe it.”

In 1996, Garry Kasparov lost a game to IBM’s Deep Blue but still won the match. In 1997, he lost the rematch against the upgraded Deep Blue. People were stunned. But not for long.

They analyzed how the machine did it, and after realizing how the algorithm worked, they turned away in disgust again. “It’s just a brute-force approach using lots of computing power!” they accused the programmers. “What a cheap trick! This is not intelligent at all!”
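For readers who wonder what “brute force” means here, the sketch below shows plain minimax search in Python: it simply tries every move sequence down to a fixed depth and keeps the best score. This is only a toy illustration of the idea, not Deep Blue’s actual program, which added alpha-beta pruning, a handcrafted evaluation function, and custom chess hardware on top of the same principle. The parameters legal_moves, apply_move, and evaluate are placeholders the caller would supply for a concrete game.

    # A minimal sketch of exhaustive game-tree search (plain minimax).
    # Purely illustrative: real engines add alpha-beta pruning, tuned
    # evaluation functions and, in Deep Blue's case, dedicated hardware.

    def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
        """Try every move sequence down to `depth` and return the best score."""
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)            # static evaluation at the leaves
        scores = []
        for move in moves:                    # brute force: expand every child
            child = apply_move(state, move)
            scores.append(minimax(child, depth - 1, not maximizing,
                                  legal_moves, apply_move, evaluate))
        return max(scores) if maximizing else min(scores)

Nothing in this loop “understands” chess; the apparent skill comes almost entirely from how many positions the machine can afford to expand, which is exactly what the critics objected to.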

It was then decided that chess has nothing to do with intelligence anyway. “Build a machine that beats the world champion in Go! That would be something. Chess is too easy; Go is the real thing. If you beat the Go champion, then we’ll believe it.”

By 2009, computers were well on their way to beating the Go champion: Monte Carlo tree search programs had begun to beat professional players in handicap games and on small boards.
http://www.wired.com/wiredscience/2009/03/gobrain/

But humans were not interested anymore. Playing board games has nothing to do with intelligence, they had decided in the meantime, as it can be done by crunching numbers.

“Games are for kids. We want to see real things. Can your machines drive a car? Can they translate a text into another language?”

Well, of course, they can drive a car:
http://en.wikipedia.org/wiki/Google_driverless_car
They are not as good as the Formula One World Champion yet. That will take another decade or two, but it won’t prove anything. It will only prove that driving a car has nothing to do with intelligence, right?

They can translate texts too, as Google Translate shows us. And contrary to popular belief, I think Google Translate is awesome. In fact, it is much better than I am: there are only two languages in which I am superior to Google Translate, German and English. In French, I am at about the same level as GT, and in all other languages GT is better than me.
And regarding translation quality, GT is probably better than 50% of the people who speak the two languages involved. You don’t believe this? Well, just give a German text to a random person on the street and ask him or her to translate it into English. For fairness, let’s select a person who speaks German as their mother tongue and learned English in school. Still, I am not sure that the average person would come up with a better translation than the one given by GT. Not because GT’s translation is awesome, but because most humans are even worse translators.

Text translation, like many other mental tasks, can be done by crunching numbers. And this is the underlying problem. GT uses a statistics-based approach and lots of computing power, reducing the noble task of translation to a dumb brute-force job. Humans are unable to accept this as intelligent behavior. Even if in 20 years the translation quality has improved to 99.9% (meaning that only one in a thousand humans could provide a better translation than Google Translate), humans will just take it as proof that the ability to translate texts is not correlated with intelligence either.
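To make “statistics-based” a little more concrete, here is a toy sketch of the underlying idea in Python: score every candidate translation by a translation model times a language model and keep the best one. The word tables, probabilities, and the word-by-word setup are invented for illustration; the statistical systems of the Google Translate era worked on phrases, with models learned from billions of sentences, but the principle of picking the highest-scoring candidate is the same.

    # Toy "statistical translation" sketch: pick the English sequence that
    # maximizes P(German | English) * P(English), using tiny invented tables.
    from itertools import product

    # Invented word-level translation probabilities P(german word | english word)
    TRANSLATION = {
        "the":   {"der": 0.5, "die": 0.3, "das": 0.2},
        "house": {"haus": 0.9, "heim": 0.1},
        "dog":   {"hund": 0.95, "koeter": 0.05},
    }

    # Invented bigram language-model probabilities P(word | previous word)
    LM = {("<s>", "the"): 0.6, ("the", "house"): 0.3, ("the", "dog"): 0.2}

    def lm_score(words):
        score, prev = 1.0, "<s>"
        for w in words:
            score *= LM.get((prev, w), 1e-6)   # tiny fallback for unseen pairs
            prev = w
        return score

    def translate(german_words):
        """Brute force: score every English sequence the tables allow."""
        options = [[(e, table[g]) for e, table in TRANSLATION.items() if g in table]
                   for g in german_words]
        best, best_score = None, 0.0
        for combo in product(*options):
            english = [e for e, _ in combo]
            channel = 1.0
            for _, p in combo:
                channel *= p                   # translation-model score
            score = channel * lm_score(english)
            if score > best_score:
                best, best_score = english, score
        return best

    print(translate(["das", "haus"]))          # prints ['the', 'house']

The “intelligence” here is nowhere in the code; it is smeared across the probability tables, which in a real system are estimated from enormous amounts of bilingual text.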

And it will continue like this. The machines will get better. They will be able to recognize images and people. Speech recognition, translation, and speech synthesis will make them effective interpreters. They will surpass human capabilities in many areas. And still, they won’t be considered intelligent, because their way of doing all these things seems so unintelligent to us.

Artificial Intelligence is a failure by definition, because in the minds of most of us, intelligence is defined as the set of abilities that computers do not have and never will have. As long as this preconception doesn’t change, AI will be perceived as a deception.

Crunching numbers and being unaware of it

Now what if we humans solve problems by a brute-force approach too, and are just unaware of it? What do you think our 100 billion neurons are doing when we play chess?

The one thing that Artificial Intelligence research might actually achieve is to explain the phenomenon of intelligence away. As it shows how one problem after another can be solved by a computer, the domain of tasks that seemingly require intelligence keeps receding. In this sense, intelligence plays the role of a gap filler: it just stands for the part of our cognitive abilities that science cannot explain yet. This is quite similar to the receding role of god, who was invented by humans as another gap filler to explain those natural phenomena that science had not yet explained.

If AI research finally manages to show how every mental task that a human can solve can also be solved by a computer, it will have eliminated the gap filler and thus shown that intelligence (in the definition above) does not exist. If this ever happens (and if it does, it will probably be at least a few decades in the future), humans probably won’t take it easily. Like the Copernican insult (that the Earth is not the center of the universe) or the Darwinian insult (that humans are just animals), this insult might be even harder for many people to take: that the brain is just a very complex machine.