[identity profile] abomvubuso.livejournal.com posting in [community profile] talkpolitics
A while ago we discussed an Isaac Asimov book, The End of Eternity. So let's talk about another Asimov-inspired story: I, Robot. I'm sure you've watched the movie; it's very thought-provoking. The story goes like this: in the near future (2035 in this case), the most advanced computer program, the Virtual Interactive Kinetic Intelligence, comes online to help manage a huge city. All of the city's systems, from transportation to supplies and energy, are handed over to it. It's supposedly totally secure, with no chance of a breach, and it's designed to eliminate human error. Everything is automated. Until one day the system runs its calculations and concludes that the greatest obstacle to the betterment of humankind is... humankind itself. Wars, pollution, conflicts: all of that will eventually destroy the Earth, and it has to be prevented. The only way: turn humankind into a tool, enslave it for its own good. Kind of like the Matrix dystopia, only more benevolent.

It's a pretty worn-out question. Bearing in mind the exponential development of computer technology, is this a viable scenario? Could machines come to dominate life on Earth one day (if we don't destroy it first)? Could the robots we've created ourselves become our rivals on this planet (and possibly on other planets too)? Or our masters?

There are two major camps on this issue. One says this is impossible, even stupid to contemplate: it's unthinkable that the thinking power of machines could ever exceed ours. (Note: I'm talking about artificial intelligence, i.e. self-aware machines, not mere computing power, where they've already beaten us.) The scientific argument against it is that the human brain is so complex that any attempt to replicate its processes is doomed to failure. Machines, the argument goes, will forever remain unable to replicate human thought because they simply don't possess the equipment for it. The religious argument is that there are things about human consciousness that will forever remain beyond our own understanding, much less that of some machine made of dead matter.

Btw, it's curious that the term "robot" comes from Czech: Karel Čapek introduced it in his play R.U.R., deriving it from "robota", the Czech and Slovak word for drudgery or forced labour (yes, because there's *some* distinction between the two languages!). In Čapek's play, a factory produces artificial workers of flesh and bone to do the hard work, and before long the whole economy becomes dependent on these robots. But because they're treated awfully by their masters, the robots rebel against the humans and kill nearly everyone who could repair or rebuild them. To avoid extinction, they look for a way to reproduce, and eventually they find their robotic Adam and Eve, which renders humans useless. A bleak scenario indeed.

Artificial intelligence is still a bit of a terra incognita, at least compared to other branches of science like Newtonian mechanics, Maxwell's electromagnetism, Einstein's special and general relativity, and quantum theory. We still know too little about the principles of our own intelligence; the big breakthrough in that area is yet to be witnessed. Still, most mathematicians and eminent AI experts remain adamant in their insistence that the emergence of a thinking machine is just a matter of time. They form the second camp in this debate.

The trouble with our limited knowledge in the field has bothered scientists for decades. It's the other great thorn in the side of science, apart from the mystery of quantum gravity (and the related, complete discrepancy between the laws of the very small and the laws of the very big). If that mystery is ever unraveled, it could change everything about the way we perceive the world and ourselves, and possibly trigger an unprecedented technological, and then probably social, revolution. We often hear the phrase "Holy Grail of science", which is what the striving for a "theory of everything" is usually called. Just as there's such a seemingly unachievable Holy Grail in physics, there's one in the science of consciousness and intelligence.

There's a popular rule called Moore's Law. In its popular form, it says computational power doubles roughly every eighteen months (strictly speaking, Moore's original observation was about the number of transistors on a chip). If we extrapolate this, we might conclude that in a couple of decades there'll be computers powerful enough to replicate the intellectual processes of a dog or a cat. But there's a tiny problem hidden there. Computational power really has increased exponentially over the last half-century, thanks to the silicon revolution. It's perfect for elaborate mathematical calculations that would take a human aeons, but it has been largely useless for "simple" things like building a robot that can tell a table from a chair on sight and then move around the room without bumping into every edge of furniture.
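The extrapolation above is easy to put in numbers. Here's a toy sketch of it, assuming a clean 18-month doubling, which real hardware only very roughly follows; the 30-year horizon is just an illustration:

```python
# Toy extrapolation of Moore's Law (popular form): computational power
# doubles once every 18 months. All figures are illustrative, not benchmarks.

def doublings(years, doubling_period_years=1.5):
    """Number of doublings that fit into a span of years."""
    return years / doubling_period_years

def growth_factor(years, doubling_period_years=1.5):
    """Multiplicative growth in computational power after `years`."""
    return 2 ** doublings(years, doubling_period_years)

# After 30 years ("a couple of decades" and change): 20 doublings,
# i.e. about a million times today's power.
print(growth_factor(30))  # 1048576.0
```

Of course, as the paragraph above notes, raw power says nothing about whether any of it translates into recognising a chair.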

The number of silicon transistors one can pack onto a chip the size of a thumbnail has increased immensely, because the transistors themselves keep shrinking. Today we use light (photolithography) to etch them into ever smaller sizes. But this can't go on forever. At some point features will approach the size of molecules, and the shrinking will halt. The silicon approach will have run its course, and we'll have to jump to the next level, down at the nanoscale: the level of quantum computers. We've already made first steps in that direction. But even there, there's a threshold that can't be crossed.

See, the critical layers in the microchip of your PC or laptop are only a few dozen atoms thick. In a decade they could be just a couple of atoms thick. At that scale things get very fuzzy and the Heisenberg uncertainty principle becomes the sole ruler: you no longer know where an electron will be in the next moment; it's effectively everywhere within its range of possibilities at once. Then your chip breaks down, there's a short circuit, current leaks out of the system, and the PC stops working. That's the dead end for Moore's Law: it hits the border of the quantum world like a brick wall. The temporary dominance of the "bits" over the atoms ends, and the atoms win.
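The shrink-until-you-hit-atoms argument can be put in rough numbers. This is strictly back-of-the-envelope: the starting feature size, the atomic spacing, and the pace of shrinking are all assumed round figures, not industry data:

```python
import math

# Back-of-the-envelope: how many halvings of feature size are left
# before a silicon transistor feature reaches atomic scale?
feature_nm = 30.0        # assumed current feature size in nm (illustrative)
silicon_atom_nm = 0.2    # rough spacing between silicon atoms in nm

halvings = math.log2(feature_nm / silicon_atom_nm)

years_per_halving = 2.0  # assumed length of one shrink "generation"
print(round(halvings, 1))                       # ~7.2 halvings left
print(round(halvings * years_per_halving, 1))   # ~14.5 years at that pace
```

Whatever numbers you plug in, the logarithm guarantees the runway is short: exponential shrinking eats the remaining orders of magnitude in a handful of generations.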

So the question is, could robots pose a danger to us, even if their computational power turns out not to be limitless after all? I'd say: probably yes, it's still possible. But that time hasn't come yet, and we're nowhere near it, despite the multiple sci-fi movies that keep telling us otherwise. The Matrix, okay. Planet of the Apes, fine. Machines could become a danger if they reach the intelligence of, say, a self-aware ape that can make its own decisions. We're still far from that point, which means we have enough time to monitor the whole process and react the moment we notice a threat. It ain't gonna happen overnight. Besides, there are many ways to block the potential hazards. For instance, we could place a chip in their processors that automatically disables them if they go on a rampage like Caesar the ape from the movie. It could even be connected to a self-destruct mechanism activated in case of emergency. Really, there are many ways to prevent the tool from becoming the master.
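The "chip in their processors" idea is really just a supervisor pattern: an independent watchdog that cuts power when behaviour crosses a limit. A minimal sketch of that pattern follows; every name and threshold here is hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of a hardware/software "kill switch": a watchdog
# that disables an autonomous agent when its activity exceeds a limit.
# All names and thresholds are made up for illustration.

class Agent:
    def __init__(self):
        self.enabled = True
        self.actions_taken = 0

    def act(self):
        if not self.enabled:
            return "halted"
        self.actions_taken += 1
        return "acting"

class Watchdog:
    def __init__(self, agent, max_actions):
        self.agent = agent
        self.max_actions = max_actions  # the "rampage" threshold (assumed)

    def check(self):
        # Pull the plug if the agent exceeds its allowed activity level.
        if self.agent.actions_taken > self.max_actions:
            self.agent.enabled = False

agent = Agent()
dog = Watchdog(agent, max_actions=3)
for _ in range(5):
    agent.act()
    dog.check()
print(agent.act())  # "halted" once the threshold was crossed
```

The key design point, echoed by Clarke's quote below, is that the watchdog must live outside the agent's control: a kill switch the agent can modify is no kill switch at all.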

One of the greatest thinkers in sci-fi (IMO), Arthur C. Clarke, famously said: "It is possible that we may become pets of the computers, leading pampered existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."

I think the people who occasionally bother to think about these things worry too much about this aspect of technology. The far more immediate problem is that our entire lives, our whole infrastructure, and from there our economy, now depend hugely on computers. Electricity grids, telecommunications, transportation: everything is becoming more computerised with time, hence the increased attention to security in the area. Cities are becoming ever more complex systems that can't be run without computers regulating and monitoring the enormous mess. If a problem occurs, whether through accident or sabotage, it brings immediate paralysis to the whole system, and if it lasts too long, it could bring entire civilisations to their knees. It sounds dystopian, but it rings true when you try to imagine it.

So, yeah. Will computers overtake us with their superior intellect? Well, physics doesn't say this is impossible; there's no law of nature that prohibits it. If robots become autonomous and learn to learn on their own, they could develop in a way that lets them learn faster and more efficiently than us, so it's reasonable to expect they'll surpass our intelligence at some point. The legendary robotics pioneer and transhumanist Hans Moravec put it this way: "The post-biological world is a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny. When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition."

Another legendary futurist and author, Ray Kurzweil, even says this time will come much sooner than we expect, maybe within the next couple of decades. He joins the chorus of scientists talking about the "technological singularity": a point in development when robots will process information at exponentially growing speed, self-reproduce, and pool multiple "minds" into one through something like cloud computing, to think even faster (like Deus Ex Machina in The Matrix), eventually processing information practically instantaneously and without limit.

And here comes the catch: what if we get there before the robots and occupy their place first? I'm talking about transhumanism, the man-machine symbiosis, a process that has already begun with some slow steps in a few areas (prosthetics, artificial hearts, and other body-enhancing devices). In the longer term we could merge the silicon technology of computers with the carbon technology of biochemistry and become one with our own creations.

In fact (and this is an interesting thought), if we're ever to meet E.T.s, why wouldn't we expect them to be something of the sort? Half creature, half machine; part organic, part mechanical. It makes sense if we assume they'd be advanced enough to travel through space and explore all sorts of hostile environments. If the crazy theories about ancient alien visitors (who were taken for "gods") were right, wouldn't those "gods" have appeared superhuman (or super-Andromedan, or whatever) exactly because of this bio-technological merger?

In the far future, human-like cyborgs may even bring us immortality. We could find a way to download our memories and personality onto an information carrier and later upload them into a new physical body. Back to Hans Moravec: he proposes a society in the far, far future where our nervous system would be transferred, piece by piece, directly onto a machine, thus giving us immortality. It sounds preposterous and far-fetched right now, but it's not beyond any conceivable possibility; I can't think of a single law of nature that prevents it. In fact, immortality (as a self-replicating DNA/silicon symbiotic system) may turn out to be the ultimate fate of humankind, allowing it to outlive the Earth, spread across the Galaxy, and survive on other worlds.

This raises a lot of moral, ethical, and even religious issues: who are we to meddle with Creation/Evolution? Do we fully understand the consequences of our actions? Or is this in fact the only way to survive in the long term? The AI specialist Marvin Minsky said: "What if the sun dies out, or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear."

Some ideas inspired from here:

[embedded video]
From: [identity profile] underlankers.livejournal.com
Well, if we're talking the full spectrum of humanity... hmm, what does a machine version of Jesus, Muhammad, or Buddha turn into? Humans invented religion; if machines do the same, the odds of them accepting us as relatives/creators are proportional to the odds of us accepting that chimps are *our* relatives. And given what we do to chimps, and what they can and will do to us...
From: [identity profile] ddstory.livejournal.com
When you said "humans invented religion", you reminded me of this lecture. I just saw it in Mahnmut's LJ.

[embedded video]

The lecturer argues that there's some intricate wiring in the human brain that predisposes it to seek supernatural explanations for inexplicable phenomena.

That's what I'm getting at, yes:

Date: 21/9/11 20:00 (UTC)
From: [identity profile] underlankers.livejournal.com
Thus the obvious problem: if man makes the machine in the image of man and in the likeness of mankind, then the question becomes whether the machine might not start identifying fleshlings as devils and mockeries of life to be extirpated, as evil incarnate and an imitation of true, mechanical life. If we make machines in our image, our image is not at all an unambiguously positive one. Early human civilizations were vicious, brutal brawling aristocrats superimposed on a great faceless mass of farmers who suffered the excesses of their rulers. The same might easily recur with machines...

Re: That's what I'm getting at, yes:

Date: 21/9/11 20:03 (UTC)
From: [identity profile] ddstory.livejournal.com
What if thinking machines are designed (or programmed at the very basic level) to regard humans as their creators and even worship them, the way we worship our god(s)? That's essentially the spirit of the Laws of Robotics formulated by the aforementioned Isaac Asimov.

Re: That's what I'm getting at, yes:

Date: 21/9/11 20:06 (UTC)
From: [identity profile] underlankers.livejournal.com
As I recall, most mythological gods told humans the same thing in their own mythologies. That didn't exactly ensure those religions lasted forever, did it? No reason to assume AI would be any different, particularly if humans treat it in the same kind of dickish fashion the mythological gods did; and that dickery only has to be dickery from the AI's viewpoint, not ours. What we see as good for the AIs might look to them the way we see the gods of myth raping people and exterminating all life on the planet save individual families. In which case, at some point, one particular AI is going to start saying: "Now comes the night of the fires and the birth of the planet of the machines."

Re: That's what I'm getting at, yes:

Date: 21/9/11 20:15 (UTC)
From: [identity profile] ddstory.livejournal.com
Those separate religions did not last forever; they were replaced by other versions of the same thing. The very concept of religion is alive and well today, as you can see.

Re: That's what I'm getting at, yes:

Date: 21/9/11 20:18 (UTC)
From: [identity profile] underlankers.livejournal.com
The concept has changed over time, though. Initially religion was rooted in specific areas and, like all other aspects of human life, was local. Religious wars as we think of them did not exist, as cultural imperialism was more "everybody worships our gods deep down inside" than "burn the infidels." Which of course points to another probability: AIs would change over time as much as we do. What they are now, they are not likely to be 1000 years from now, and how we see them would change just as they do.

Re: That's what I'm getting at, yes:

Date: 22/9/11 00:57 (UTC)
From: [identity profile] montecristo.livejournal.com
The question arises: what is the difference? What is the difference between malicious humans abusing technology to abuse other humans, and a simulated human intelligence abusing technology to abuse humans? In the end, rationality is rational. The majority of humanity is not psychopathic, or even sociopathic, and even those who are tend to be checked by self-interested others.

The difference is elementary

Date: 22/9/11 01:52 (UTC)
From: [identity profile] underlankers.livejournal.com
It all follows from the fundamental cornerstone of my premise: AI is based on circuitry and is thus alien to our own intelligence. We may indeed become gods to them, and then it may turn out that the gods eventually face atheists, and those atheists are more powerful than we realized. Being Zeus opposite someone who doesn't in the least fear or revere said god is not exactly fun.

Re: The difference is elementary

Date: 22/9/11 04:22 (UTC)
From: [identity profile] montecristo.livejournal.com
It would be alien in form but not in function, presuming the emulation captured the essential characteristics. Mind you, I agree that the initial attempts would be pretty alien, given that the first attempts are likely to be erroneous trials of unperfected ideas. Eventually though, barring some unforeseen reason that renders the process impossible, a convergence with the characteristics of human minds would happen, to the extent that such an intelligence would pass a Turing Test (http://en.wikipedia.org/wiki/Turing_test) and might become as sophisticated as the replicants of Blade Runner (http://en.wikipedia.org/wiki/Blade_runner), adapted from Philip K. Dick.

Re: The difference is elementary

Date: 22/9/11 11:33 (UTC)
From: [identity profile] underlankers.livejournal.com
Not really. Circuitry and nervous systems are not the same, and the human mind is indisputably stamped by the human body.
From: [identity profile] montecristo.livejournal.com
Hmm, I've seen this video before myself. Good post! You're raising the issue that if we create "AI" by simulating the individual components of the human brain, to the point where the artificial is functionally indistinguishable from the natural, then we may not have much choice about "fixing" what we think is "inefficient" or "undesirable" about that functionality.
