Heaven for Humanity

It was interesting to read about and watch the studio discussion with Ray Kurzweil, Google’s Director of Engineering, at the SXSW Conference. He is a well-known futurist, and of “his 147 predictions since the 1990s” he claims an “…86 percent accuracy rate.” An undeniably smart guy with (probably merited) high self-confidence. Let’s look at his latest forecasts!

“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

The related article confirms that “Kurzweil’s timetable for the singularity is consistent with other predictions – notably those of Softbank CEO Masayoshi Son, who predicts that the dawn of super-intelligent machines will happen by 2047.”

Ray Kurzweil may even be right. The future is unpredictable, and computers are still developing rapidly; new technologies appear daily. However, there are also reasonable grounds for doubt.

Even if we assume Moore’s law remains valid for the next 18 years (not fully realistic), and generously read it as a doubling every year, computers’ speed would increase “only” about 260-thousand-fold. That is nowhere near a million-fold gain, let alone the billion-fold one Kurzweil promises. Besides, our intelligence is difficult to measure. We can hardly estimate our memory capacity, let alone the number and nature of calculations our brain performs automatically during, say, image or pattern recognition. Moreover, how exactly would we “merge” our brain/intelligence with that of the machines? It sounds great, but any programmer can tell you that even building interfaces between computer programs is sometimes difficult. What about building functional connections between two entirely different kinds of “hardware”, “software” and “operations” – between human brains and silicon chips?
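A quick back-of-the-envelope check makes the gap concrete. The snippet below is only a sketch of our own assumption – one compute doubling per year, which is already an optimistic reading of Moore’s law – and is not taken from Kurzweil’s argument:

```python
# Rough sanity check of the speed-up Moore's law could deliver over 18 years.
# Assumption (ours, not Kurzweil's): compute doubles once per year.
years = 18
speedup = 2 ** years
print(f"Speed-up after {years} years at one doubling/year: {speedup:,}x")  # 262,144x, i.e. ~260 thousand

# With the historically more typical 18-month doubling period the factor is far smaller:
speedup_18m = 2 ** (years / 1.5)
print(f"With 18-month doublings: {speedup_18m:,.0f}x")  # ~4,096x

# Compare with the "billion fold" increase Kurzweil mentions:
target = 1_000_000_000
print(f"Short of a billion-fold by a factor of about {target // speedup:,}")
```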

These problems resonate somewhat with objections raised against other “singularity” theories. Singularity theories usually rely on assumptions of exponential growth – growth of knowledge, growth of performance. However, it is known that in several areas of science new discoveries require investments that grow more than linearly; the USD 5 bn price tag of the Large Hadron Collider is a good example. Moreover, there are physical limits to certain developments, such as the limit on speed (the speed of light) and on the accuracy of certain physical measurements (Heisenberg’s uncertainty principle). It is simply too bold to claim that exponential growth can continue indefinitely in a limited environment – on Earth. And yes, we did not use the word “environment” by accident.

But we don’t have to rely on his word alone; we can test his predictions relatively soon. In his 2005 book “The Singularity Is Near” he predicted that by 2020 we would be able to buy a computer with the computational capacity of the human brain for 1,000 dollars. So we can just sit back and wait for the first test results.

What is also very interesting in his speech is his positive outlook on what these developments mean for us. “What’s actually happening is [machines] are powering all of us,” Kurzweil said. “They’re making us smarter.” Yes, computers have many positive effects. We can hardly wait to become cleverer – we all know that we need it, right? But they can also make us weaker and more stupid. It is well established that parts of the brain and body that are not regularly used and exercised become weaker. Brain and body functions taken over by machines will not get better on their own – they will be artificially augmented, resulting in dependencies. Remember cars/elevators and obesity, glasses and weaker eyesight, orthodontics and tooth degradation. Such effects can appear in the short term (lack of exercise results in weaker muscles) and in the long term (lack of evolutionary pressure can allow the inheritance of unfavorable gene variants).

So while we sincerely hope that Ray Kurzweil is right in every possible aspect, we recommend not laying down our mental weaponry and not giving up thinking. Chance favors the prepared mind – not the lazy one.