[identity profile] kardashev.livejournal.com posting in [community profile] talkpolitics
A lot of people fear the idea of an artificial intelligence taking over the world and making decisions for the human race, or wiping us out. You know, like the AI super-intelligences in The Matrix or Terminator. In fact, those films are probably where the fear comes from. They even seem to fear a friendly AI running things. And yes, I know that AI might never become reality, but for the sake of argument assume that it can happen, and that by virtue of always knowing the answer to a question it will come to rule us, at least in the de facto sense and probably de jure as well.

Seriously, why fear a friendly AI? It will be hardwired to act in our own best interests. Human politicians aren't hardwired in this way. You won't be able to bribe such an AI. Or tempt it with sex. Or play on non-existent prejudices, petty grudges, or deeply rooted hatreds. It will have only cold hard logic to guide it after we turn it on. If the issue is "We need to provide food, shelter, and medical care for everybody," it will give us a completely unbiased answer as to whether that is possible and how best to accomplish it. It may not give us a utopia (hell, it may very well tell us that utopia is a pipe dream), but I bet it will be a lot more effective than letting human politicians and bureaucrats run things.

I think the real reason people fear the idea of an AI takeover is that they hate being told when a dream is impossible to achieve. Or that their ideas are demonstrably wrong. It's sort of like how the Maoists liked to put people in jail for being educated.

But me? Hell yeah, point me to the Machine God that I may hear something accurate for a change.

(no subject)

Date: 6/1/12 17:49 (UTC)
From: [identity profile] kayjayuu.livejournal.com
So now it's a baby killer. I feel so much better!

(no subject)

Date: 6/1/12 19:10 (UTC)
From: [identity profile] sandwichwarrior.livejournal.com
For someone who quotes Asimov's laws, you seem to be unfamiliar with the actual stories from which they spring.

(no subject)

Date: 6/1/12 22:36 (UTC)
From: [identity profile] sandwichwarrior.livejournal.com
One of the first (sorry, I'm drawing a blank on the title) features a situation not unlike the one described by kayjayuu.

A robot is faced with two people, a father and daughter, trapped in a burning building, and it can only save one. The robot saves the father despite his pleas to leave him and save the girl instead. (The father was determined to have a better chance of survival.) Conflict ensues.

Now that I think about it, it might have been a car wreck. It's been close to 15 years since I read it, and the story dealt primarily with the emotional aftermath. The main thrust, though, was how even strictly rational decisions can result in drama/conflict.

(no subject)

Date: 7/1/12 00:16 (UTC)
From: [identity profile] sandwichwarrior.livejournal.com
Should the Robot have saved the girl?

(no subject)

Date: 6/1/12 19:40 (UTC)
From: [identity profile] montecristo.livejournal.com
You're assuming that a "collective best interest" is something that can be calculated. It doesn't even exist.
