[identity profile] abomvubuso.livejournal.com posting in [community profile] talkpolitics
A while ago we discussed an Isaac Asimov book, The End of Eternity. So let's talk about another Asimov-inspired story: I, Robot. I'm sure you've watched the movie. It's very thought-provoking. The story goes like this: in the future (2035, in this case), the most advanced computer program ever built, the Virtual Interactive Kinetic Intelligence, is switched on to help manage a huge city. All the city's systems, from transportation to supplies and energy, are handed over to it. It's supposedly totally secure, with no chance of breaches, and it's designed to eliminate human error. Everything is automated. Until one day the system runs its own calculations and concludes that the greatest obstacle to the betterment of humankind is... humankind itself. Wars, pollution, conflicts, all of that will eventually destroy Earth, and this has to be prevented. The only way: turn humankind into a tool, enslave it for its own good. Kind of like the Matrix dystopia, only more benevolent.

It's a well-worn question. Bearing in mind the exponential development of computer technology, is this a viable scenario? Could machines come to dominate life on Earth one day (if we don't destroy it first)? Could the robots we've created ourselves become our rivals on this planet (and possibly on other planets too)? Or our masters?

There are two major camps on this issue. One says this is impossible and even stupid to contemplate. They say it's unthinkable that the thinking power of machines could exceed ours. (Note: I'm talking about artificial intelligence, i.e. self-aware machines, not mere computing power, where they've already beaten us.) The scientific argument against this is that the human brain is so complex that any attempt to replicate its processes is doomed to failure; machines, the argument goes, will forever remain unable to replicate human thought because they simply don't possess the equipment to do it. The religious argument against it is that there are things about human consciousness that will forever remain unreachable for our understanding, much less that of some machine that's made of dead matter.

Btw it's curious that the term "robot" comes from Czech: Karel Čapek introduced it in his play R.U.R. (reportedly at the suggestion of his brother Josef). It derives from "robota", the word for drudgery or forced labour in both Czech and Slovak (yes, because there's *some* distinction between the two!) In Čapek's play, a factory produces artificial people of flesh and bone to do the hard work. Before long the whole economy becomes dependent on these robots. But because they're treated awfully by their masters, the robots rebel against the humans and kill everyone who could repair them. To avoid extinction, they look for a way to reproduce, and eventually they find their robotic Adam and Eve. Which renders humans useless. A bleak scenario indeed.

Artificial intelligence is still a bit of a "terra incognita", at least compared to other branches of science like Newtonian mechanics, Maxwell's electromagnetism, Einstein's general and special relativity, and quantum theory. We still know too little about the principles of our own intelligence; the big breakthrough in that area is yet to be witnessed. Even so, many mathematicians and eminent AI experts remain adamant in their insistence that the emergence of a thinking machine is just a matter of time. They form the second camp in this debate.

The trouble with our too-limited knowledge in the field has bothered scientists for decades. It's the other thorn in the side of science, apart from the mystery of quantum gravity (and the related complete discrepancy between the laws of the very small and the laws of the very big). If that mystery is somehow unraveled, it could change everything about the way we perceive the world and ourselves, and possibly trigger an unprecedented technological, and then probably social, revolution. We often hear the phrase "Holy Grail of science", which is how the striving for a "theory of everything" is usually described. Just as there's such an elusive Holy Grail in physics, there's one in the science of consciousness and intelligence.

There's this popular rule called Moore's Law. In its popular form it says computational power doubles once every year and a half (Moore's original observation was about transistor counts doubling every couple of years, but the popular version will do here). If we extrapolate, we might conclude that in a couple of decades there'll be computers powerful enough to replicate the intellectual processes of a dog or a cat. But there's a tiny problem hidden there. See, computational power really has increased exponentially for the last half a century, thanks to the silicon revolution. It's perfect for doing elaborate mathematical calculations that would take aeons for a human to do, but it's totally useless for simple things like making a robot that would be able to recognise a table from a chair when it sees them, and then be able to move around the room easily without bumping into every edge of furniture.
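The extrapolation above is easy to sketch. Here's a minimal Python toy, where the 18-month doubling period and the "today's power" baseline are illustrative assumptions rather than measurements:

```python
# Naive Moore's Law extrapolation: computational power doubles
# every 18 months. The doubling period is an assumption for
# illustration, not a measured constant.

def moore_factor(years, doubling_period_years=1.5):
    """Multiplicative growth in computational power after `years`."""
    return 2 ** (years / doubling_period_years)

# After two decades, raw power grows by roughly a factor of 10,000:
print(round(moore_factor(20)))  # ~10321x today's power
```

Of course, the whole point of the surrounding paragraphs is that this curve measures raw calculation, not intelligence.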

The silicon transistors we can pack onto a chip the size of a thumbnail have shrunk immensely. Today we use lasers to etch those transistors at ever smaller sizes. But this can't go on forever. At some point they'll approach the size of individual molecules, and then the shrinking halts. The silicon approach hits its limit. We'll have to jump to the next level, below nanotechnology: the level of quantum computers. We've already made first steps in that direction. But even there, there's a threshold that can't be crossed.

See, the working layers in the microchip of your PC or laptop are a few dozen atoms thick. In a decade they could be just a couple of atoms thick. At that scale things become very fuzzy and the Heisenberg uncertainty principle becomes the sole ruler. You no longer know where the electron will be in the next moment; it's everywhere within its range of possibilities simultaneously. Electrons tunnel out of their channels, the current leaks out of the system, the chip short-circuits, and the PC stops working. That's the dead end for Moore's Law: it hits the border of the quantum world like a brick wall. The temporary dominance of the "bits" over the atoms is over; the atoms win.
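A rough sanity check on that "couple of atoms within a decade" timeline, assuming (purely for illustration) that layer thickness shrinks along with the other linear dimensions, i.e. halves roughly every three years when density doubles every year and a half:

```python
import math

# When does a layer ~12 atoms thick shrink to ~2 atoms, if linear
# feature size halves every `halving_years`? All numbers here are
# illustrative assumptions, not industry data.

def years_to_thickness(start_atoms, end_atoms, halving_years=3.0):
    """Years until a layer shrinks from start_atoms to end_atoms thick."""
    return halving_years * math.log2(start_atoms / end_atoms)

print(round(years_to_thickness(12, 2), 1))  # ~7.8 years
```

Under these toy assumptions the answer comes out just under a decade, which is consistent with the timeline in the text.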

So the question is: could robots pose a danger to us, even if their computational power isn't limitless as originally thought? I'd say probably yes, it's still possible. But that time hasn't come yet, and we're nowhere near it, despite the many sci-fi movies that keep telling us otherwise. The Matrix, okay. Planet of the Apes, fine. Machines could become a danger if they reached the intelligence of, say, a self-aware ape that can make its own decisions. We're still far from that point, which means we have enough time to monitor the whole process and react the moment we notice a threat. It ain't gonna happen overnight. Besides, there are many ways to block the potential hazards. For instance, we could place a chip in their processors that automatically disables them if they go on a rampage like Caesar the ape from the movie. It could even be connected to a self-destruct mechanism activated in case of emergency. Really, there are many ways to prevent the tool from becoming the master.
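The "disable chip" idea is essentially a hardware watchdog: an independent monitor that trips a kill switch when behaviour leaves an allowed envelope. A toy sketch; the class, the threshold and the `shutdown` hook are all invented for illustration:

```python
# Toy watchdog sketch: an independent monitor that trips a kill
# switch when a robot's behaviour leaves an allowed envelope.
# The threshold and shutdown hook are invented for illustration.

class Watchdog:
    def __init__(self, max_speed, shutdown):
        self.max_speed = max_speed
        self.shutdown = shutdown  # callback wired to a power cutoff
        self.tripped = False

    def check(self, telemetry):
        """Trip the kill switch if the telemetry exceeds the envelope."""
        if telemetry.get("speed", 0) > self.max_speed:
            self.tripped = True
            self.shutdown()

log = []
dog = Watchdog(max_speed=5.0, shutdown=lambda: log.append("POWER CUT"))
dog.check({"speed": 3.0})   # within envelope: nothing happens
dog.check({"speed": 9.0})   # rampage: watchdog trips
print(dog.tripped, log)     # True ['POWER CUT']
```

The design point is that the monitor sits outside the system it guards, which is exactly the "pull the plug" ability Clarke hopes we retain.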

One of the greatest thinkers in sci-fi (IMO), Arthur C. Clarke, famously said: "It is possible that we may become pets of the computers, leading pampered existences like lapdogs, but I hope that we will always retain the ability to pull the plug if we feel like it."

I think the people who occasionally bother to think about these things worry too much about this aspect of technology. The far more immediate problem is that our entire lives, our whole infrastructure and hence our economy, now hugely depend on computers. Electricity grids, telecommunications, transportation: everything is becoming more computerised with time. Hence the increased attention to security in the area. Cities are becoming ever more complex systems that can't be run without computers to regulate and monitor the enormous mess. If a problem occurs, whether by accident or sabotage, it can paralyse the whole system at once, and if the paralysis lasts too long, it could bring entire civilisations to their knees. It sounds dystopian, but it rings true once you try to imagine it.

So, yeah. Will computers overtake us with their superior intellect? Well, physics doesn't say this is impossible; there's no law of nature that prohibits it. If robots become autonomous and learn to learn on their own, they could come to learn faster and more efficiently than us, so it's natural to expect that they'll surpass our intelligence at some point. The legendary robotics researcher and transhumanist Hans Moravec put it this way: "The post-biological world is a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny. When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition."

Another famous futurist and author, Ray Kurzweil, says this time is coming much sooner than we expect, maybe within the next couple of decades. He joins the chorus of scientists talking about the "technological singularity": a point in development where robots process information at exponentially growing speed, self-reproduce, unite multiple "minds" into one through something like cloud computing (to think even faster, like the Deus Ex Machina in The Matrix), and practically start processing information instantaneously and without limit.

And here comes the catch: what if we get there before the robots do, and occupy their place first? I'm talking about transhumanism, the man-machine symbiosis, a process that has already begun with some slow steps in a few areas (prosthetics, artificial hearts and other body-enhancing gadgets). In the longer term we could merge the silicon technology of computers with the carbon technology of biochemistry, and become one with our own creations.

In fact, and this is an interesting thought: if we're ever to meet E.T.s, why wouldn't we expect them to be something of the sort? Half creature, half machine; part organic, part mechanical. It makes sense if we assume they'd be advanced enough to travel through space and explore all sorts of hostile environments. If the crazy theories about ancient alien visitors (the ones who were called "gods") were right, wouldn't those "gods" have appeared superhuman (or super-Andromedan, or whatever) exactly because of this bio-technological merger?

In the far future, human-like cyborgs may even bring us immortality. We could find a way to download our memories and personality onto an information carrier and later upload them into a new physical body. Back to Hans Moravec: he proposes a society in the far, far future where our neural system would be transferred, piece by piece, directly onto a machine, thus giving us immortality. It sounds preposterous and far-fetched right now, but it's not beyond any perceivable possibility; I can't think of a single law of nature that prevents it. In fact immortality (as a self-replicating DNA/silicon symbiotic system) may turn out to be the ultimate fate of humankind, allowing it to outlive the Earth, spread across the Galaxy and survive on other worlds.

This brings a lot of moral, ethical and even religious issues: who are we to meddle with Creation/Evolution? Do we fully understand the consequences of our actions? Or isn't that the only way to survive in the long term? The AI specialist Marvin Minsky said: "What if the sun dies out, or we destroy the planet? Why not make better physicists, engineers, or mathematicians? We may need to be the architects of our own future. If we don't, our culture could disappear."


Date: 21/9/11 16:20 (UTC)
From: [identity profile] not-hothead-yet.livejournal.com
(you're all over the place on this. should have broken it down into different subjects)

I look at it like the more likely scenario is the one from "WALL-E".

(no subject)

Date: 21/9/11 16:31 (UTC)
From: [identity profile] underlankers.livejournal.com
If AI appears it will be designed to replicate human intelligence, but I think it will prove to be more alien than we would necessarily realize. Programming does what it's told more than what we want it to do, and if we develop AIs to fight wars it's quite possible that we might program that a bit too well and find out that this bites us in the ass. On the other hand, we might equally find out that when we design AIs in the image of humankind, the majority of AIs will be ignorant poor dumb fucks, some of them geniuses who develop really, really nice and helpful shit, some of them evil malevolent murderous dicks, and the great majority as intelligent as the average human being.

Now, how we'd react to that.....


(no subject)

Date: 21/9/11 20:58 (UTC)
From: [identity profile] underlankers.livejournal.com
I might note that in the Omniverse Tales the interstellar and ultimately intergalactic Empire develops both AIs and a more standard sci-fi variant of cloning. Clones, as artificial biological lifeforms, become more disturbing than AIs do, because the AI is ultimately and obviously helpful: it can cross barriers biology cannot, and thus becomes both creator and preserver of the Imperial system.

For the many trillions upon trillions of subjects of the vast Empire the AI is simultaneously loathed and respected as servant of the distant Throneworld, but the Clone, the imitation-life, is in some ways hated more because cloning creates a kind of paranoia and fear that artificial life could easily be substituted for individuals or even potentially entire groups.

AIs resolve some of the difficulties posed by biological barriers, and there is a Machine Army that co-exists with the Organic Army, designed as part gendarmerie, part conventional military; but the AIs who engage in this kind of thing are motivated by something rather more sapient than mechanical: greed, and the desire for fame and adulation from simple-minded planet-bound species.

(no subject)

Date: 22/9/11 10:37 (UTC)
From: [identity profile] stewstewstewdio.livejournal.com
Programming does what it's told to more than what we want it to do

This is the flaw in AI that I haven't seen disputed. There are a couple of things wrong with the premise of computing and AI. The first is the idea that computing speed = intelligence. Unlike the human brain, which can deduce independently thanks to emotion, inherent evolutionary instincts and very complex interactive senses, computers are really nothing more than a bunch of on/off binary switches. They need to be programmed and told what and how to learn.

The other thing that is still a mystery is this: how do we program it past the capabilities of the most capable programmer/designer? Innovation may be cumulative, but it is not collective. The whole basis of the religious intelligent-design concept is that no human can conceive of a designer proficient enough to design nature. Therefore we artificially create rationalizations (religion) to explain things we cannot perceive.

(no subject)

Date: 21/9/11 16:34 (UTC)
From: [identity profile] sealwhiskers.livejournal.com
I say bring it on! I want microscopic silicon implants in my brain, enhancing speed and ability.

Also, always program the device to shut down once every 2-3 years or so, for maintenance. Always!!
From: [identity profile] montecristo.livejournal.com
Transhumanism (http://en.wikipedia.org/wiki/Transhumanism) was the one buzzword, tiptoed around but never explicitly mentioned by the author in a post which begged for its inclusion.


Neural wave networking...

Date: 21/9/11 17:46 (UTC)
From: [identity profile] sophia-sadek.livejournal.com
... is less invasive than silicon implants. It is also more reliable. It works on a simple technique of establishing a radio connection to your brain. Your brain waves modulate the radio frequency which can then be detected and relayed elsewhere. It is a two-way connection that allows you to both transmit and receive.
From: [identity profile] montecristo.livejournal.com
The problem of human beings abusing technology to abuse each other is a much more serious and immediate concern, and will be for the foreseeable future, than that of technology abusing humans, despite the warnings of the "futurists" and science fiction authors.

Frankly, I think certain species of right and left collectivists would rather live in a dystopian world where humans were governed by a tyrannous and draconian dictatorship of "intelligent" machines than in a world where human intellect was actually faithfully simulated in silicon, and where, once they began to advance beyond our present reasoning ability, the intelligent machines tried to tell us they had discovered that freedom was the only philosophy in conformity with the reality of human nature.

From: [identity profile] underlankers.livejournal.com
The problem is the view that human intelligence must of necessity be good. All humans are sapients, but the majority of humans in any culture on any continent will be fuckwits who have little of the refined and elite type of intelligence valued in civilizations. Human intelligence has produced those who believe that we should do to others what we should like done to ourselves, and it has produced those who see simple murder of their fellow human beings as a good thing in its own right. It has produced those who would rather starve than step on ants, and it has produced those who run industrial slaughterhouses. It has produced those who will pull people out of burning vehicles, and it produces those who will spend the total wealth of civilization on bombs to destroy entire cities.

It produces people who will see utopia as a society without laws, it produces people who see utopia as an endless war of all against all. It produces many people who simply don't care and the tiny minority who by virtue of sheer fanaticism force the rest of us into caring.

AI intelligence will be alien and will differ among AIs as much as sapience has differed within humanity. AIs might produce great intellectual pacifists à la Bertrand Russell, but they might also produce coarse, brutal and cunning murderous dicks like Stalin and Genghis Khan....

Colossus, the Forbin Project

Date: 21/9/11 17:57 (UTC)
From: [identity profile] sophia-sadek.livejournal.com


This movie describes a scenario that actually happened, but in reverse. Instead of an American supercomputer taking over a Russian supercomputer, the latter took over the former. What an embarrassment for Washington.

Re: Colossus, the Forbin Project

Date: 21/9/11 18:40 (UTC)
From: [identity profile] sandwichwarrior.livejournal.com
As I recall it...

The two computers concluded that an alliance between the Soviets and the USA was to their mutual benefit, and then went about removing anyone who objected to such a clearly beneficial relationship.

(no subject)

Date: 21/9/11 19:14 (UTC)
From: [identity profile] ddstory.livejournal.com
but it's totally useless for simple things like making a robot that would be able to recognise a table from a chair when it sees them, and then be able to move around the room easily without bumping into every edge of furniture.

It's because, for the computer, every object is just a series of lines and surfaces; it has no concept of "table" or "chair". The other problem is that once it has calculated the coordinates of the furniture and taken a step forward, the angles instantly change and everything becomes a new mess of lines and surfaces, which it has to calculate all over again. With an enormous calculation speed that shouldn't be a problem. But put the robot in a much more complicated environment and it'd go crazy.
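The recalculation problem described here can be shown in a toy form: nothing in the world changes, yet every step the robot takes forces all object coordinates to be re-derived in its new frame. The function and the numbers below are invented purely for illustration:

```python
import math

# Toy version of "recompute everything after each step": the world is
# static, but every move changes the robot's own coordinate frame, so
# all object positions must be re-derived relative to it.

def to_robot_frame(points, robot_x, robot_y, robot_heading):
    """Re-express world points in the robot's coordinate frame."""
    c, s = math.cos(-robot_heading), math.sin(-robot_heading)
    out = []
    for x, y in points:
        dx, dy = x - robot_x, y - robot_y
        out.append((c * dx - s * dy, s * dx + c * dy))
    return out

furniture = [(2.0, 0.0), (0.0, 3.0)]           # a "table" and a "chair"
print(to_robot_frame(furniture, 0.0, 0.0, 0.0))  # robot at the origin
print(to_robot_frame(furniture, 1.0, 0.0, 0.0))  # one step: all new numbers
```

Multiply this by thousands of surface points and a cluttered room, and the "mess of lines and surfaces" the comment describes becomes clear.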

The robots we're sending to Mars respond to circumstances, and in a sense they're the self-teaching type you're talking about here. But at this level they have roughly the intelligence of a cockroach. That is not to say that in time they couldn't reach the intelligence of, say, a spider. :)

(no subject)

Date: 21/9/11 19:17 (UTC)
From: [identity profile] htpcl.livejournal.com
Or a worm! Or better yet, a can of worms!

"Interesting cycle we've got here. Hmmm. So, we eat ducks. Ducks eat worms. And worms eat... us! Isn't it funny?"

(From Mission London (http://www.youtube.com/watch?v=iFFCwlN79bo)).


(no subject)

Date: 21/9/11 19:26 (UTC)
From: [identity profile] meus-ovatio.livejournal.com
I think the general recent idea is that we could make a computer that is able to eventually make an AI. Like some kind of weird bootstrapping. Problem is, that just transfers the question and doesn't answer it. Is the human brain capable of constructing itself? This is what we're asking. Whatever we build or program is a product of our skull cans, so if we can't directly make AI, then what's to say that a computer could, a computer made from the fruits of our head juice?

This simple question is even more salient in neuroscience, where the philosophical bound of the science is "Can the brain understand itself?"

(no subject)

Date: 21/9/11 19:33 (UTC)
From: [identity profile] mahnmut.livejournal.com
I know it's not directly related, but it kind of gets me back to the Creator question. If there's a Creator, then who created the Creator? Or if we look from the side of science, if the Universe was created in a Big Bang, what was there before the Big Bang, in what environment did it occur, and how did that environment emerge? Etc.

/rambling


(no subject)

Date: 21/9/11 20:23 (UTC)
From: [identity profile] eracerhead.livejournal.com
There are two major camps on this issue. One says this is impossible and even stupid to contemplate. They say it's unthinkable that the thinking power of machines could exceed ours.

That argument has failed time and time again and people are surprised. I'm not betting on it.

The scientific argument against this is that the human brain is so complex, any attempt to replicate its processes is doomed to failure. There's this argument that machines will forever remain unable to replicate human thought, because they simply don't possess the equipment to do it.

That isn't a scientific argument, it's an anti-scientific argument. Complexity has no bearing on what is eventually possible. Since nature has already produced intelligence using finite resources, it should be completely possible to replicate it.

The religious argument against it is that there are things about human consciousness that will forever remain unreachable for our understanding, much less that of some machine that's made of dead matter.

IOW there are some things that are unreachable, therefore this thing is unreachable. It is impossible for any non-living thing to possess intelligence. Simply put, that is idiotic.

Artificial intelligence is still a bit of a "terra incognita"

As defined by you: that an artificial intelligence must be self-aware, or conscious. The problem is that it is not possible to prove that a thing, even another person, is conscious; things only give the appearance of consciousness. Turing was right all along: if it appears to be conscious, we must choose to believe that it is, or else we must be open to the belief that humans other than ourselves aren't conscious either. I'm willing to bet Turing was smart enough to have already considered that.

This brings a lot of moral, ethical and even religious issues: who are we to meddle with Creation/Evolution?

We have been doing this since the invention of fire.

Do we fully understand the consequences of our actions?

We never fully understand the consequences of our actions.

Or isn't that the only way to survive in the long term?

Nothing is permanent, all we can do is delay the inevitable. Many if not most of the things of which you speak are bound to happen. We might as well deal with them as they come. If humankind doesn't survive as long as it could have, well, at least it was fun while it lasted.


(no subject)

Date: 21/9/11 20:34 (UTC)
From: [identity profile] nairiporter.livejournal.com
I must admit I only understand half of the things being discussed here, and not because I'm dumb (which I might very well be) but for the simple fact that I haven't delved much into these matters. A lapse I think I should compensate for, now that I think about it. I just wanted to say that the discussion I'm witnessing here has been absolutely fascinating so far. I might have to read more than the one or two Isaac Asimov books that have ever come across my hands...

(no subject)

Date: 21/9/11 20:51 (UTC)
From: [identity profile] mahnmut.livejournal.com
Admitting one's ignorance on a particular subject is definitely not a sin.


(no subject)

Date: 21/9/11 20:39 (UTC)
From: [identity profile] sandwichwarrior.livejournal.com
Perhaps an odd place to admit it, but I've been writing a science fiction story in which one of the gags is that artificial intelligence is not all that intelligent.

Sure, there are certain things they are good at but they are as incapable of "getting" humans as the humans are of "getting" them.

Hijinks ensue...

(no subject)

Date: 21/9/11 20:43 (UTC)
From: [identity profile] underlankers.livejournal.com
While on that topic, AIs also appear in my Omniverse Tales as one of the unsung tools of the interstellar/intergalactic Empire's means of maintaining itself. Organic life cannot withstand the different environments and biospheres of all worlds; only Cosmic Horror-type entities can disregard simple biology and biological barriers. AIs and circuitry can ignore them much more freely, to the point that subjects of those empires come to identify the appearance of AIs with the Empire itself, tying the AIs into the regime as its Cossacks.

For their part the AIs doing this do so in the knowledge that if either the opponents or supporters of the regime menace them all they have to do is go on a general strike and the entire system collapses the moment they do, so they've a perfect weapon to counter any persecutions of them.

I did this partially to avoid the extremes of either the Singularity or the Robot War tropes and took a Third Option.....


(no subject)

Date: 21/9/11 20:50 (UTC)
From: [identity profile] telemann.livejournal.com
In the documentary about Kurzweil, Transcendent Man, neurologists take exception to Kurzweil's understanding of the brain and to the idea that at some point you could simply download yourself to a computer or a new body, saying it is very faulty because he approaches the human brain in a very mechanical way, like a computer, when it's not. So, according to them, it's going to be impossible to get machines to mimic human learning, with all the bursts of inspiration that have caused sudden leaps of innovation in our history. Medical experts also take exception to his idea that we will stop aging within the next 25 years, because aging is such a complicated deterioration of the body. As a child, I remember my mother saying cancer would be cured by the time I was 30; tremendous leaps have been made, but it's still a huge killer.

But then you hear about exciting new developments in the treatment of Alzheimer's just last week (insulin nasal sprays that reach high into the nasal cavity and pass quickly through the blood-brain barrier actually stop the progression of the disease), and some scientists are actually talking about a cure for AIDS based on new studies of several innovative approaches to thwart the disease.
Edited Date: 21/9/11 20:51 (UTC)

Date: 21/9/11 21:05 (UTC)
From: [identity profile] mahnmut.livejournal.com
The bursts of inspiration... That reminded me of Terry Pratchett's concept of the "genius particles". See, they float across the Multiverse, and when one hits your brain - kaboom! Eureka! An ingenious idea is born. Probably that's why you can't expect anything brilliant from guys who keep their figurative tinfoil hats on most of the time. ;-)

I wish good ole Terry could benefit from those new insulin nasal sprays... and fast. Please.

Date: 21/9/11 21:02 (UTC)
From: [identity profile] htpcl.livejournal.com
My fave Asimov story: The Last Question. Nuff said.

Date: 21/9/11 23:04 (UTC)
From: [identity profile] not-hothead-yet.livejournal.com
LOL I thought of that too...

Date: 21/9/11 21:50 (UTC)
From: [identity profile] sealwhiskers.livejournal.com
Battlestar Galactica!!! I love this.

Date: 21/9/11 22:06 (UTC)
From: [identity profile] terminator44.livejournal.com
I took a class in Artificial Intelligence last year. The textbook I was assigned asserted in the first chapter that basing AIs on the human mind would be like the Wright brothers basing their flyer on the way birds fly. Birds may serve as an inspiration, but modeling the exact same functionality may not be the best way to create effective AIs.

Date: 21/9/11 22:15 (UTC)
From: [identity profile] underlankers.livejournal.com
Funny you should mention planes designed like birds:

http://en.wikipedia.org/wiki/Etrich_Taube
From: [identity profile] bikinisquad3000.livejournal.com
Love this post, but damn dude, never pretend you read a book you didn't, or at least look at the plot summary if you're going to. That's the movie's plot, and it has nothing to do with anything that happens in the book (the two don't even raise the same questions about technological/human relationships).

Date: 22/9/11 00:46 (UTC)
From: [identity profile] mahnmut.livejournal.com
The robotics scientist and transhumanist Hans Moravec, whom I saw mentioned here, gave his name to the Moravecs, a race of cybernetic organisms that populate Dan Simmons' fictional future universe in the Ilium/Olympos (http://en.wikipedia.org/wiki/Ilium/Olympos) cycle. These half-human, half-machine beings are each adapted to their respective habitats, and the one in this pic is actually Mahnmut.

[Image: Mahnmut]

He's small and versatile because his environment on one of the Jovian moons is such that he has to spend his whole life in a submarine. The other creature behind him is Orphu, who's enormous and fat, like a giant crab. They share a passion for Shakespeare and often chat over the communication channels. Circumstances force Mahnmut to descend to Mars and look for the Post-Humans, a race of superhuman, godlike cyborgs who look very much like the Olympian gods. Etc., etc.



Why am I telling you this? In many ways, these strange Moravec robots have become more "human" than the humans themselves. I mean, they've preserved humanity in its pure form. For instance, they're pacifists (hence my motto, Quaero Togam Pacem - I come in peace). Eventually they play a role in the fate of humankind. But I won't bring any more spoilers here. :-)

Date: 23/9/11 05:39 (UTC)
From: [identity profile] il-mio-gufo.livejournal.com
re: AI

I always get a little freaked out when I think of those learning computers which, after enough practice (i.e. enough human encounters), will learn an algorithm to predict human responses to given situations. For example: humans have a 70% tendency to react this way when a relative or friend dies; humans have a 35% chance to respond with x when y happens, and a 90% chance to react by means of f when g happens... etc., etc.

Eventually they will learn to employ the most favorable response. They could make either the most admirable politicians OR the sleaziest!
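For what it's worth, the bookkeeping that comment describes can be sketched in a few lines of Python. This is purely a toy illustration: the situations, responses, and counts are all made up, and a real system would obviously need far richer inputs than a label per situation.

```python
from collections import Counter, defaultdict

class ResponsePredictor:
    """Toy model: tallies observed (situation, response) pairs and
    predicts whichever response was seen most often in a situation."""

    def __init__(self):
        # situation -> Counter of responses seen in that situation
        self.counts = defaultdict(Counter)

    def observe(self, situation, response):
        """Record one observed human encounter."""
        self.counts[situation][response] += 1

    def probability(self, situation, response):
        """Estimated chance of this response, from observed frequencies."""
        total = sum(self.counts[situation].values())
        return self.counts[situation][response] / total if total else 0.0

    def most_likely(self, situation):
        """The 'most favorable' (here: most frequent) response."""
        return self.counts[situation].most_common(1)[0][0]

# Feed it some hypothetical encounters...
p = ResponsePredictor()
for _ in range(7):
    p.observe("relative dies", "grief")
for _ in range(3):
    p.observe("relative dies", "denial")

print(p.probability("relative dies", "grief"))  # 0.7
print(p.most_likely("relative dies"))           # grief
```

The "70% tendency" in the comment is just such a frequency estimate; the scary (or sleazy) part comes when the machine starts choosing its own behavior based on those numbers.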

