[identity profile] kardashev.livejournal.com wrote in [community profile] talkpolitics, 2012-01-06 10:59 am

Cyberarchy: Yea or Nay?

A lot of people fear the idea of an artificial intelligence taking over the world and making decisions for the human race, or wiping us out. You know, like the AI super-intelligences in The Matrix or Terminator. In fact, that's probably where the fear comes from. They even seem to fear a friendly AI running things. And yes, I know that AI might never become reality, but for the sake of argument assume that it can happen and that, by virtue of always knowing the answer to a question, it will come to rule us. At least in the de facto sense, and probably de jure as well.

Seriously, why fear a friendly AI? It will be hardwired to act in our own best interests. Human politicians aren't hardwired in this way. You won't be able to bribe such an AI. Or tempt it with sex. Or play on non-existent prejudices, petty grudges, or deeply rooted hatreds. It will have only cold hard logic to guide it after we turn it on. If the issue is "We need to provide food, shelter, and medical care for everybody," it will give us a completely unbiased answer as to whether that is possible and how best to accomplish it. It may not give us a utopia (hell, it may very well tell us that utopia is a pipe dream), but I bet it will be a lot more effective than letting human politicians and bureaucrats run things.

I think the real reason people fear the idea of an AI takeover is that they hate being told a dream is impossible to achieve, or that their ideas are demonstrably wrong. It's sort of like how the Maoists liked to put people in jail for being educated.

But me? Hell yeah, point me to the Machine God that I may hear something accurate for a change.

[identity profile] telemann.livejournal.com 2012-01-06 06:09 pm (UTC)
See, Lenny's criticism above surprised me for the direction it took. I have seen the documentary on the coming Singularity, and neurologists are critical of Kurzweil's predictions about A.I. and humans with super duper brains because, at a fundamental level, he doesn't understand neurology. I could see that as a legitimate criticism.

[identity profile] pastorlenny.livejournal.com 2012-01-08 05:18 am (UTC)
My issue isn't with the inadequate understanding folks like Kurzweil may have regarding neurology. It's with a more fundamental failure to understand how technology functions within the context of organizations, communities and societies.

The reference elsewhere to Asimov's Three Laws is telling. The First Law is about not harming human beings, but it ignores the notion of harm to relationships. Human beings, however, are fundamentally relational. We do all kinds of things that might appear harmful to ourselves because we love, care for, and need other people.

Asimov's Foundation trilogy is a similar case in point. In it, social principles are perfectly understood and quantified by science; the "fly in the ointment" is an exceptional individual.

But this is precisely what is wrong with Asimov's fantasy -- and with Kurzweil's. It has always been the epiphenomena of human relations that exceed the imaginative and predictive capacity of the technologist. That's why they consistently miss the boat when it comes to texting, social media, open source, and the like. They know what machines do. They even understand what people do. But they consistently forget to factor in the complex and nuanced relationality between groups of people, layers of technology, and money.

I mean, does anybody realize how many lines of COBOL we still have running?

This is not to say I don't admire guys like Kurzweil and Asimov. They're awesome. But they are also fundamentally myopic -- because the same thing that makes them great at what they do seems to give them a big blind spot when it comes to how human society as a whole -- including people who are not as smart as they are and/or who have quite different dispositions -- assimilates tech.