We, the robots
21/9/11 19:08
A while ago we discussed an Isaac Asimov book, The End of Eternity. So let's talk about another Asimov-inspired story: I, Robot. I'm sure you've watched the movie; it's very thought-provoking. The story goes like this. In the future (2035, in this case), the most advanced computer program ever built, the Virtual Interactive Kinetic Intelligence (VIKI), comes online to help manage a huge city. All of the city's systems, from transportation to supplies and energy, are handed over to it. It's supposedly totally secure, with no chance of a breach, and it's designed to eliminate human error. Everything is fully automated. Until one day the system runs some calculations and reaches the conclusion that the greatest obstacle to the betterment of humankind is... humankind itself. Wars, pollution, conflict: all of it will eventually destroy the Earth, and that has to be prevented. The only way: turn humankind into a tool, enslave it for its own good. Kind of like the Matrix dystopia, only more benevolent.
It's a pretty worn-out question: given the exponential development of computer technology, is this a viable scenario? Could machines come to dominate life on Earth one day (if we don't destroy it first)? Could the robots we've created ourselves become our rivals on this planet (and possibly on other planets too)? Or our masters?
There are two major camps on this issue. One says this is impossible, even stupid to contemplate: it's unthinkable that the thinking power of machines could ever exceed ours. (Note: I'm talking about artificial intelligence, i.e. self-aware machines, not mere computing power, where they've already beaten us.) The scientific argument against it is that the human brain is so complex that any attempt to replicate its processes is doomed to failure; machines will forever remain unable to replicate human thought because they simply don't possess the equipment for it. The religious argument is that there are things about human consciousness that will forever remain beyond our own understanding, much less that of some machine made of dead matter.
Btw, it's curious that the term "robot" comes from Czech wordplay: Karel Čapek introduced it in his play R.U.R. It derives from robota, the word for "drudgery" or "forced labour" in both Czech and Slovak (yes, because there's *some* distinction between the two!). In Čapek's play, a factory produces artificial workers made of flesh and bone to do the hard work, and before long the whole economy becomes dependent on these robots. But because they're treated awfully by their masters, the robots rebel against the humans and kill everyone who could repair them. To avoid extinction they look for a way to reproduce, and eventually they find their robotic Adam and Eve, which renders humans useless. A bleak scenario indeed.
Artificial intelligence is still a bit of a terra incognita, at least compared to other branches of science like Newtonian mechanics, Maxwell's theory of light, Einstein's special and general relativity, and quantum theory. We still know too little about the principles of our own intelligence; the big breakthrough in that area is yet to come. And yet many mathematicians and eminent AI experts remain adamant in their insistence that the emergence of a thinking machine is just a matter of time. They form the second camp in this debate.
The trouble with our limited knowledge in this field has bothered scientists for decades. It's the other great thorn in the side of science, apart from the mystery of quantum gravity (and the related, complete discrepancy between the laws of the very small and the laws of the very big). If that mystery is ever unraveled, it could change everything about the way we perceive the world and ourselves, and possibly trigger an unprecedented technological, and then probably social, revolution. We often hear the phrase "Holy Grail of science", which is what the striving for a "theory of everything" is usually called. And just as physics has its unattained Holy Grail, so does the science of consciousness and intelligence.
There's a popular rule called Moore's Law, which says that computational power doubles roughly once every year and a half. If we extrapolate, we might conclude that in a couple of decades there will be computers powerful enough to replicate the intellectual processes of a dog or a cat. But there's a problem hidden in there. Computational power really has increased exponentially over the last half century, thanks to the silicon revolution. It's perfect for elaborate mathematical calculations that would take a human aeons, but it's been remarkably useless for "simple" things like making a robot that can tell a table from a chair on sight, and then move around a room without bumping into every edge of furniture.
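To make the extrapolation concrete, here's a back-of-the-envelope sketch in Python (the 18-month doubling period is the popular figure, and of course an idealisation):

```python
# Back-of-the-envelope Moore's Law extrapolation.
# Assumption: computing power doubles every 18 months (the popular reading;
# Moore's original observation was about transistor counts).

def growth_factor(years: float, doubling_period: float = 1.5) -> float:
    """Total multiplication of computing power after `years`."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    print(f"In {years} years: ~{growth_factor(years):,.0f}x today's power")
# -> ~102x, ~10,321x, ~1,048,576x. Hence the hope that a couple of decades
#    buys enough raw power for cat- or dog-level processing.
```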
The size of the silicon transistors you can pack onto a chip the size of a thumbnail has decreased immensely; today we use lasers to etch them into ever smaller features. But this can't go on forever. At some point they'll approach a threshold where components are the size of molecules, and the shrinking will halt. Silicon (and with it Silicon Valley) will have run its course. We'll have to jump to the next level, below nanotechnology: the level of quantum computers. We've already taken first steps in that direction. But even there, there's a threshold that can't be crossed.
See, the critical layers on the microchip in your PC or laptop are on the order of a dozen atoms thick; in a decade they could be just a couple of atoms thick. At that point things get very fuzzy and the Heisenberg uncertainty principle becomes the sole ruler: you no longer know where an electron will be in the next moment, because it's everywhere within its range of possibilities simultaneously. Electrons leak out of the system, the chip short-circuits and the PC stops working. That's the dead end for Moore's Law: it hits the border of the quantum world like a brick wall. The temporary dominance of the "bits" over the atoms ends, and the atoms win.
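For a rough feel of where that brick wall sits, take the figures from the text (a layer about a dozen atoms thick, halving every couple of years; both are illustrative assumptions, not measurements):

```python
# How long before the layer is one atom thick? A toy calculation using the
# illustrative figures from the text, not real process-node data.
import math

atoms_now = 12         # assumed thickness of today's critical layer, in atoms
years_per_halving = 2  # assumed pace of miniaturisation

halvings = math.log2(atoms_now)  # halvings until ~1 atom remains
print(f"~{halvings:.1f} halvings, i.e. roughly {halvings * years_per_halving:.0f} "
      "years before quantum effects take over completely")
# -> ~3.6 halvings, roughly 7 years: the wall is close, whatever the exact numbers.
```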
So the question is: could robots pose a danger to us, even if their computational power isn't limitless, as was once thought? I'd say probably yes, it's still possible. But that time hasn't come yet, and we're nowhere near it, despite the many sci-fi movies that keep telling us otherwise. The Matrix, okay. Planet of the Apes, fine. Machines could become a danger if they reach the intelligence of an ape that's self-aware and can make its own decisions. We're still far from that point, which means we have enough time to monitor the whole process and react the moment we notice a threat. It ain't gonna happen overnight. Besides, there are many ways to block the potential hazards. For instance, we could place a chip in their processors that automatically disables them if they go on a rampage like Caesar the ape from Rise of the Planet of the Apes. It could even be connected to a self-destruct mechanism activated in case of emergency. Really, there are many ways to prevent the tool from becoming the master.
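To illustrate the kind of safeguard I mean, here's a toy sketch; every name in it (Robot, misbehaving, shut_down) is invented for the example, and a real interlock would of course live in independent hardware, not in the robot's own software:

```python
# Toy "kill switch" sketch. All names here are invented for the illustration;
# a real emergency stop would be an independent hardware circuit, precisely
# so the machine being stopped can't override it.

class Robot:
    def __init__(self):
        self.enabled = True

    def misbehaving(self) -> bool:
        # Stand-in for whatever anomaly detection the designers trust.
        return False

    def shut_down(self) -> None:
        self.enabled = False
        print("Emergency stop: actuators disabled.")

def watchdog(robot: Robot) -> None:
    """Independent monitor: pulls the plug the moment behaviour looks wrong."""
    if robot.misbehaving():
        robot.shut_down()

watchdog(Robot())  # in practice, running on separate, tamper-proof hardware
```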
One of the greatest thinkers in sci-fi (IMO), Arthur C. Clarke, famously said: "It is possible that we may become pets of the computers, leading pampered existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."
I think people who occasionally bother to think about these things worry too much about that aspect of technology. The far more immediate problem is that our entire lives, our whole infrastructure and thereby our economy, now hugely depend on computers. Electricity grids, telecommunications, transportation: everything is becoming more computerised with time, hence the increased attention to security in the area. Cities are becoming ever more complex systems that can't be run without computers to regulate and monitor the enormous mess. If a problem occurs, whether by accident or sabotage, it immediately paralyses the whole system, and if it lasts too long it could bring entire civilisations to their knees. It sounds dystopian, but it rings true once you try to imagine it.
So, yeah. Will computers overtake us with their superior intellect? Well, physics doesn't say it's impossible; there's no law of nature that prohibits it. If robots become autonomous and learn to learn on their own, they could develop to the point where they learn faster and more efficiently than us, so it's natural to expect they'll surpass our intelligence eventually. The legendary roboticist and transhumanist Hans Moravec put it this way: "The post-biological world is a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny. When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition."
Another legendary futurist and author, Ray Kurzweil, has even said this time will come much sooner than we expect, maybe within the next couple of decades. He joins the chorus of scientists talking about the "technological singularity": a point in development when robots will process information at exponentially growing speed, self-reproduce, pool themselves through "cloud computing" (uniting multiple "minds" into one to think even faster, like the Deus Ex Machina in The Matrix), and for all practical purposes process information instantaneously and without limit.
And here comes the catch: what if we get there before the robots do, and occupy their place first? I'm talking about transhumanism, the man-machine symbiosis, a process that has already begun in small, slow steps (prosthetics, artificial hearts and other body-enhancing gadgets). In the longer term we could merge the silicon technology of computers with the carbon technology of biochemistry, and become one with our own creations.
In fact, and this is an interesting thought, if we're ever to meet E.T.s, why wouldn't we expect them to be something of the sort? Half creature, half machine; part organic, part mechanical. It makes sense if we assume they'd be advanced enough to travel through space and explore all sorts of hostile environments. And if the crazy theories about ancient alien visitations (the visitors being taken for "gods") are right, wouldn't those "gods" have appeared superhuman (or super-Andromedan, or whatever) precisely because of this bio-technological merger?
In the far future, human-like cyborgs may even bring us immortality. We could find a way to copy our memories and personality onto an information carrier and later upload them into a new physical body. Back to Hans Moravec: he imagines a society in the far, far future where our nervous system would be transferred, piece by piece, directly onto a machine, thus granting us immortality. It sounds preposterous and far-fetched right now, but it's not beyond any conceivable possibility; I can't think of a single law of nature that prevents it. In fact immortality (as a self-replicating DNA/silicon symbiotic system) may turn out to be the ultimate fate of humankind, allowing it to outlive the Earth, spread across the Galaxy and survive on other worlds.
This raises a lot of moral, ethical and even religious issues. Who are we to meddle with Creation/Evolution? Do we fully understand the consequences of our actions? Or is this, on the contrary, the only way to survive in the long term? The AI pioneer Marvin Minsky said: "What if the sun dies out, or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear."
Some ideas were inspired by this video:
[video embed unavailable]
tl;dr
Date: 21/9/11 16:20 (UTC)
I look at it like the more likely scenario is the one from WALL-E.
(no subject)
Date: 21/9/11 16:31 (UTC)
Now, how we'd react to that.....
(no subject)
Date: 21/9/11 16:34 (UTC)
Also, always program the device to shut down once every 2-3 years or so, for maintenance. Always!!
(no subject)
Date: 21/9/11 17:09 (UTC)
But really, whether emotion can be reproduced within an artificial intelligence is yet another huge mystery. It could be something that "looks" like emotion but isn't. Still, if it ultimately produces exactly the same results as real human emotion, the two could turn out to be indistinguishable.
There's a test for this, the Turing Test, devised by that tragic genius Alan Turing, who ended his own life after a stupid "sex hormone" treatment was forced on him following his conviction for gross indecency (i.e. for being gay). The test is designed to establish whether a machine (or an individual) can think, have emotions, a "soul". You place a human and a machine in two separate, locked rooms and put questions to both of them. If you can't tell the human's responses from the machine's, the machine has passed the test.
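If you imagine the protocol as code, it's something like this minimal sketch (the judge and both responders are placeholders invented for the illustration):

```python
# Minimal sketch of the Turing test protocol. The responders and the judge
# are placeholder functions invented for this illustration.
import random

def human_respond(question: str) -> str:
    return "Ugh, no arithmetic before coffee, please."

def machine_respond(question: str) -> str:
    return "Ugh, no arithmetic before coffee, please."  # mimicry is the whole game

def turing_test(questions, judge) -> bool:
    """The machine passes if the judge can't pick out its room."""
    rooms = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:  # hide which room holds the machine
        rooms = {"A": machine_respond, "B": human_respond}
    transcripts = {room: [ask(q) for q in questions] for room, ask in rooms.items()}
    guess = judge(transcripts)  # the judge names the room they think is the machine
    machine_room = "A" if rooms["A"] is machine_respond else "B"
    return guess != machine_room  # a wrong guess means the machine passed

# With identical answers, the judge is reduced to a coin flip.
print(turing_test(["What is 2 + 2?"], judge=lambda t: random.choice(["A", "B"])))
```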
There are already machines that can imitate what *looks* like an emotional response, but again it's a mere algorithm following its pre-programming. So far no computer program has managed to fool people trying to work out which box holds the machine and which the human. But that doesn't mean it won't happen one day. Turing himself predicted that, given the pace of progress at the time, by the year 2000 a machine would be able to fool about 30% of the judges in the test. As with many such predictions about the future, it didn't happen by 2000. But it could happen by 2100, or earlier. Or never.
There are other objections to this possibility, like John Searle's Chinese Room argument. Suppose you're sitting in a room and don't speak a word of Chinese, but you have a rulebook that lets you look up Chinese symbols very fast. If someone passes you a question in Chinese, you consult the book, find the corresponding characters (much like Google Translate does), and without understanding a word of what you're saying, you hand back credible answers. There's a difference between handling syntax and understanding semantics: robots may be good at the former, but they're failing badly at the latter. This extends to human emotion. Can we replicate it, with all the good and bad feelings and intentions that come with it? Can a machine be a benevolent pal or a malevolent dick? So far it seems impossible. For the time being it looks as if they'll only keep doing computations, and if we're ever exterminated by them, it'll be because somewhere in their equations they've decided the world is a better place without us. Not because of some grudge, as in Čapek's play.
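The point is easy to caricature in code: a lookup table produces credible Chinese "answers" with zero understanding (the phrasebook entries below are invented for the example):

```python
# Caricature of the Chinese Room: flawless-looking replies from pure symbol
# lookup, with zero understanding. The phrasebook is invented for the example.

PHRASEBOOK = {
    "你好吗?": "我很好, 谢谢!",          # "How are you?" -> "Fine, thanks!"
    "今天天气好吗?": "是的, 天气很好.",  # "Nice weather today?" -> "Yes, lovely."
}

def chinese_room(question: str) -> str:
    """Pure syntax: match the symbols, copy out the listed reply."""
    return PHRASEBOOK.get(question, "对不起, 我不明白.")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # a credible answer from a "speaker" that
                                # understands nothing at all
```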
Congratulations! You're the latest winner on "Spot the Missing Buzzword!"
Date: 21/9/11 17:32 (UTC)
What if the singularity brought about a dystopia not for the freedom-minded, but for the statists?
Date: 21/9/11 17:40 (UTC)
Frankly, I think certain species of right and left collectivists would rather live in a dystopian world where humans were governed by a tyrannical and draconian dictatorship of "intelligent" machines than in a world where human intellect was actually faithfully simulated in silicon, and where, once it began to advance beyond our present reasoning ability, the intelligent machines tried to tell us they had discovered that freedom is the only philosophy in conformity with the reality of human nature.
Re: Congratulations! You're the latest winner on "Spot the Missing Buzzword!"
Date: 21/9/11 17:46 (UTC)
Neural wave networking...
Date: 21/9/11 17:46 (UTC)
(no subject)
Date: 21/9/11 17:46 (UTC)
2.
That makes two mentions.
Re: What if the singularity brought about a dystopia not for the freedom-minded, but for the statists?
Date: 21/9/11 17:48 (UTC)
(no subject)
Date: 21/9/11 17:49 (UTC)
(no subject)
Date: 21/9/11 17:49 (UTC)
(no subject)
Date: 21/9/11 17:50 (UTC)
Colossus, the Forbin Project
Date: 21/9/11 17:57 (UTC)
This movie describes a scenario that actually happened, but in reverse: instead of an American supercomputer taking over a Russian supercomputer, the latter took over the former. What an embarrassment for Washington.
Re: Neural wave networking...
Date: 21/9/11 18:09 (UTC)
Re: Colossus, the Forbin Project
Date: 21/9/11 18:40 (UTC)
The two computers concluded that an alliance between the Soviets and the USA was to their mutual benefit, and then went about removing anyone who objected to such a clearly beneficial relationship.
Re: Colossus, the Forbin Project
Date: 21/9/11 19:00 (UTC)
(no subject)
Date: 21/9/11 19:14 (UTC)
It's because, for the computer, every object is just a series of lines and surfaces; it has no concept of "table" or "chair". The other problem is that once it has calculated the coordinates of the furniture and made a step forward, the angles instantly change and everything becomes a new mess of lines and surfaces, which it has to calculate all over again. If the speed of calculation is enormous, that shouldn't be a problem. But put the robot in a much more complicated environment and it'd go crazy.
The robots we're sending to Mars respond to circumstances, and in a sense they're of the self-teaching type you're talking about here. But at this level they have roughly the intelligence of a cockroach. That's not to say that in time they won't reach the intelligence of, say, a spider. :)
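To picture the first point, here's a toy sketch of what the robot actually "sees": raw line segments whose coordinates have to be re-derived after every single move (the scene data are invented):

```python
# Toy illustration: the robot's "chair" is just endpoints of line segments,
# with no concept behind them, and every move forces a full recalculation.
# The scene geometry is invented for the example.
import math

scene = [((0.0, 0.0), (0.0, 1.0)),   # a "chair leg", as far as we know
         ((0.0, 1.0), (0.5, 1.0))]   # the "seat"? The robot has no idea.

def view_after_turn(segments, angle_deg):
    """Re-derive every endpoint after the robot rotates in place."""
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    rot = lambda p: (c * p[0] - s * p[1], s * p[0] + c * p[1])
    return [(rot(a), rot(b)) for a, b in segments]

scene = view_after_turn(scene, 5)  # one small turn -> the whole mess again
print(scene)
```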
(no subject)
Date: 21/9/11 19:17 (UTC)
"Interesting cycle we've got here. Hmmm. So, we eat ducks. Ducks eat worms. And worms eat... us! Isn't it funny?"
(From Mission London (http://www.youtube.com/watch?v=iFFCwlN79bo)).
(no subject)
Date: 21/9/11 19:23 (UTC)
"I can't, I have a headache!"
HAHA!
(no subject)
Date: 21/9/11 19:26 (UTC)
This simple question is even more salient in neuroscience, where the philosophical limit of the science is the question "Can the brain understand itself?"
(no subject)
Date: 21/9/11 19:33 (UTC)
/rambling
(no subject)
Date: 21/9/11 19:37 (UTC)
(no subject)
Date: 21/9/11 19:39 (UTC)
The whole concept of what the fundamental underlying architecture would be is completely unknown. We don't have a clue, really.
(no subject)
Date: 21/9/11 19:41 (UTC)