Singularity as a Heaven for Humanity?

It was interesting to read about, and watch, the studio discussion with Ray Kurzweil, Google’s Director of Engineering, at the SXSW Conference. He is a well-known futurist, and he claims “Of his 147 predictions since the 1990s, …86 percent accuracy rate.” An undeniably smart guy with (probably merited) high self-confidence. Let’s see his latest forecasts!

“2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.”

The related article confirms that “Kurzweil’s timetable for the singularity is consistent with other predictions – notably those of Softbank CEO Masayoshi Son, who predicts that the dawn of super-intelligent machines will happen by 2047.”

Ray Kurzweil may even be right. The future is unpredictable and computers are still developing rapidly. New technologies are developed daily. However, there are also reasonable doubts here.

Even if we assume that Moore’s law will remain valid for the next 18 years (not fully realistic), computer speed may increase “only” about 260 thousand-fold. That is nowhere near the promised billion. Besides, our intelligence is difficult to measure. We can hardly estimate our memory capacity, let alone the number and nature of calculations our brain performs automatically during, for example, image and pattern recognition. Moreover, how could we “merge” our brain/intelligence with that of the machines? It sounds great, but any programmer can tell you that even building interfaces between computer programs is sometimes difficult. What about building functional connections between two entirely different kinds of “hardware”, “software” and “operations” – between human brains and silicon chips?
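The gap is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming the most generous common reading of Moore’s law (one doubling of computing power per year) over the 18-year window discussed above – our own illustration, not Kurzweil’s calculation:

```python
# Sanity check of the growth gap between Moore's law and the
# billion-fold claim. Assumption (ours): one doubling per year.
years = 18

speedup = 2 ** years      # total growth after 18 annual doublings
print(speedup)            # 262144 -> the "about 260 thousand" figure

target = 10 ** 9          # Kurzweil's billion-fold effective increase
print(target // speedup)  # 3814 -> the factor still missing
```

With the slower textbook doubling period of 18–24 months the shortfall is larger still, which is the point of the paragraph above.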

These problems somewhat resonate with those raised against any other “singularity” theory. Singularity theories usually rely on assumptions of exponential growth – growth of knowledge, growth of performance. However, it is known that in several areas of science new discoveries require investments that increase more than linearly; the USD 5 bn price tag of the Large Hadron Collider is a good example. Moreover, there are physical limits to certain developments, such as the limit on speed (the speed of light) and on the accuracy of certain physical measurements (Heisenberg’s uncertainty principle). It is simply too bold to claim that exponential growth is feasible indefinitely in a limited environment, on Earth. And yes, we did not use the word “environment” accidentally.

But we don’t have to rely on words alone: we can test his predictions relatively soon. In his 2005 book “The Singularity Is Near” he predicted that by 2020 we would be able to buy a computer with the computational capacity of the human brain for 1,000 dollars. So we can just sit back and wait for the first test results.

What is also very interesting in his speech is his positive outlook on these developments for us. “What’s actually happening is [machines] are powering all of us,” Kurzweil said. “They’re making us smarter.” Yes, computers have many positive effects. We can hardly wait to become cleverer – we all know we need it, right? But they can also make us weaker and more stupid. It is proven that parts of the brain and body that are not regularly used and exercised become weaker. Brain and body functions taken over by machines will not get better – they will merely be artificially augmented, resulting in dependencies. Remember cars/elevators and obesity, glasses and weakening eyesight, orthodontics and tooth degradation. Such effects can appear in the short term (lack of exercise results in weaker muscles) and in the long term (lack of evolutionary pressure can allow unfavourable gene variants to be inherited).

So while we sincerely hope that Ray Kurzweil is right in every possible respect, we recommend not laying down our mental weaponry or giving up thinking. Chance favours the prepared mind – not the lazy one.

The Meaning of Life Team

Mortal Dangers Ahead?

In his latest paper in the International Journal of Astrobiology, Daniel P. Whitmire, PhD, a teacher at the Department of Mathematical Sciences of the University of Arkansas, found (taking our current state and the Principle of Mediocrity into consideration) that we have a high chance of going extinct relatively soon. Needless to say, extinction would be a telling argument against any meaning of our existence. Is the situation really so grave? We cannot afford to look away from such a risk, so we shall come back to this topic as soon as possible.

The original article: Implication of our technological species being first and early

Meaning of Life Team

The Steve Jobs myth – obvious lessons

Just as Steve Jobs did not invent the iPhone alone ( Steve Jobs, the sole innovator? ), it is also impossible to solve alone the mysteries that require complex analysis of whole systems of natural phenomena, scientific results and theories.

Some may argue that sole innovators are rare nowadays, but they are still an existing species. Polymaths, however, undeniably died out centuries ago, and only their fossils haunt some old dusty books. The joint knowledge of humanity, or even of just one branch of science, is too large; no one can hope to hold it entirely. There are physicists, but all of them are more or less specialized in particle or molecular physics, optics or astrophysics – you name it. Keeping up with the recent publications of just one subfield is more than enough for anyone.

Hence, if you would like to attack complex topics, especially with weapons borrowed from science, we have just one piece of advice for you: don’t do it alone. Learn from others, climb onto the shoulders of giants, build a team of like-minded clever people, and hope that it will be enough.

We hope that too. Would you like to join us in the quest for finding the scientific answer to the meaning of life? Let us know!


Meaning Of Life Team