kardashev (http://kardashev.livejournal.com/) wrote in talkpolitics, 2012-01-06, 10:59 am
Cyberarchy: Yea or Nay?
A lot of people fear the idea of an artificial intelligence taking over the world and making decisions for the human race, or wiping us out. You know, like the AI super-intelligences in The Matrix or Terminator. In fact, that is probably where the fear comes from. They even seem to fear a friendly AI running things. And yes, I know that AI might never become reality, but for the sake of argument assume that it can happen, and that by virtue of always knowing the answer to a question it will come to rule us. At least in the de facto sense, and probably de jure as well.
Seriously, why fear a friendly AI? It will be hardwired to act in our own best interests. Human politicians aren't hardwired that way. You won't be able to bribe such an AI. Or tempt it with sex. Or play on non-existent prejudices, petty grudges, or deeply rooted hatreds. It will have only cold hard logic to guide it after we turn it on. If the issue is "We need to provide food, shelter, and medical care for everybody," it will give us a completely unbiased answer as to whether that is possible and how best to accomplish it. It may not give us a utopia (hell, it may very well tell us that utopia is a pipe dream), but I bet it will be a lot more effective than letting human politicians and bureaucrats run things.
I think the real reason people fear the idea of an AI takeover is that they hate being told that a dream is impossible to achieve, or that their ideas are demonstrably wrong. It's sort of like how the Maoists liked to put people in jail for being educated.
But me? Hell yeah, point me to the Machine God that I may hear something accurate for a change.
no subject
Thanks. This made my morning.
no subject
I doubt it will favor either one.
no subject
http://interaxon.ca/
no subject
I'm not sure "fear" is quite the right word, but I'd say the answer lies in the same reason that (I assume) you do not want to live with your parents for the rest of your life.
Children grow up and leave the nest, and many people chafe at the idea of being ruled or told what to do, even if those decisions may be in their best interest.
To quote C.S. Lewis:
Of all tyrannies, a tyranny exercised for the good of its victims may be the most oppressive. It may be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. Their very kindness stings with intolerable insult. To be 'cured' against one's will and cured of states which we may not regard as disease is to be put on a level with those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.
no subject
You wouldn't have to be "told" though. Such an AI could say, "Are you sure you want to continue with your present course of action?"
Humans say: "Yes!"
Failure results. AI says, "See? Told ya so."
Enough of that eventually wears down even the most stubborn folks. Having an AI isn't like having parents. It's closer to having an investment manager, a doctor, an economist, or any other expert on your side. The difference is that the AI is an expert on nearly everything and is rarely ever wrong. It's the difference between "Go to your room! You're grounded!" and "See? I told ya so!"
What Is Wrong With That Picture (pt 1 of 2)
Your "cyberarchy" artificial intelligence paradigm suffers from what Friedrich Hayek called "the pretense of knowledge." The entire idea is premised upon having access to information that is just not obtainable by a centralized system. Human centralized command and control systems do not work not only because human beings are not perfectable, but also because human society is a complex, non-linear, chaotic system demonstrating emergent behavior and sensitivity to initial conditions. Such systems are inherently unpredictable and uncontrollable.
Look at what your AI would need to do. This has come up before, and one of you on this forum replied to this objection of mine with an intelligent, well-written article on calculability. Coming from a computer science background, I found the article interesting and very similar to other writings on the topic, but the problem was that it failed to address the underlying philosophical and economic reality. I'm not denying that such an approach is an attempt to solve A problem; rather, the approach itself is prima facie evidence that THE problem is not being understood.
The very root of the problem presupposes an objective, universal definition for something conceived of as the "collective good," or that there even exists an "individual good" that is objective, universal, and definable for all human beings in all contexts. The problem is that this is analogous to claiming a "privileged frame of reference" in the physical world where one does not exist.
Why do people postulate, or even desire, "control" over others? It is because they wish to substitute the judgements, values, and actions of some individuals for other judgements, values, and actions. The question then becomes one of defining the source of judgements, actions, and values. The answer is: the individual. There is no "Social Mind." Hobbes's Leviathan and Hegel's God-Walking-the-Earth are metaphors, abstractions which cannot be concretized. They exist only in the imaginations of individual human minds. This is not to claim that they do not exist at all; "society" exists, in the same sense that "love" exists. It is just not an objective, concrete thing, in and of itself, which can exist independent of the entities that comprise it.
You cannot substitute, however benevolently you may desire to do so, valuations, judgements, and actions that are objective, universal, and context-independent for the actions of individual human beings, because such things do not exist at all. The only thing that can be accomplished is to substitute some individual human beings' valuations, judgements, and actions for the freely formulated valuations, judgements, and actions of other individual human beings. This raises the question: WHOSE valuations, judgements, and actions are to be considered "more equal" than whose? It does you no good to talk of democracy at that point; democracy implicitly presumes individual sovereignty as the basis for political equality. The question of just exactly who gets to decide reduces ultimately to what Vladimir Lenin explained is the essence of politics: "Who, whom?"
What Is Wrong With That Picture (pt 2 of 2)
The Church-Turing Thesis (http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) has been raised here before as an objection to the idea that only a free market can "calculate" an optimal economic outcome. Basically, the objector's point is that centralized command and control is not theoretically impossible, because any problem which is calculable by a set of distributed calculators can in theory be calculated by a machine capable of simulating the operation of those distributed calculators. Of course, this objection misses the fundamental consideration underlying the problem: what are we "calculating," and what is doing the calculating?
The problem facing the collectivists is that individuals come together into society to solve problems cooperatively and to improve their individual circumstances. The "good" of society is the good of the individuals comprising it. The individual "goods" are determined and calculated by those individuals. If a computer were to attempt to calculate my ultimate good, it would first have to "know" me. Essentially, it would have to BE me. In other words, it would have to render the decisions that I would render myself in order to effect my ultimate happiness. Who determines what my ultimate happiness is? I do; I am the only one who can, since valuation is imputed: it is subjective and context-dependent.
Of course, it may be true that IF such a machine could be invented, which could determine, calculate, and effect each individual's "ultimate happiness," then with a sufficiently large and powerful Turing Machine one could replace all of the individual "happiness calculators" with one centralized, happiness-calculating, economy-simulating machine. Here is the clincher, though: at that point, "humanity" is rendered entirely superfluous. If you start out with a means that consists of overriding the individual's free will from the outset, you've essentially demonstrated that he is superfluous. Why is his preservation or optimization worth anything at all? It isn't, and this is logically necessarily the case, given the thought experiment's premises about the acceptable means.
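For what it's worth, the narrow computability claim conceded above is easy to sketch; this is my own toy code, and the preference "weights" standing in for individual valuations are invented placeholders, which is exactly the point. A single machine can interleave any finite set of distributed calculators; what it cannot do is obtain the subjective valuation functions it would need to feed them.

```python
# Sketch of the Church-Turing point: one machine can simulate any finite
# set of independent "happiness calculators." The preference weights below
# are made-up placeholders; the argument above is that the real ones are
# subjective, context-dependent, and unavailable to any central machine.

def make_agent(weights):
    """A stand-in calculator: scores an allocation of goods against
    this agent's private, subjective preference weights."""
    def score(allocation):
        return sum(w * x for w, x in zip(weights, allocation))
    return score

# Hypothetical private preferences -- the data the planner cannot actually know.
agents = [make_agent(w) for w in ([1.0, 0.2], [0.1, 2.0], [0.7, 0.7])]

def central_simulation(agents, allocation):
    """One centralized machine running every distributed calculator in turn."""
    return [agent(allocation) for agent in agents]

print(central_simulation(agents, [3.0, 1.0]))  # works fine -- IF the weights are given
```

The simulation is trivial once the valuations are in hand; the whole dispute is that they never are.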
Consider, as Locke did, the converse situation: when the individual's good is no longer served by participating in a particular society, it behooves him to remove himself from that society and return to "a state of nature" in order to improve his circumstances. There is no "duty" to die for the collective. The collective has no existence apart from the individuals that comprise it. There is no justification for human sacrifice, no matter how "democratically" the victims are selected. As John Stuart Mill put it:
no subject
it'll be worse than gay marriage
no subject
That seems an arbitrary limitation to set on something we already defined as AI.
Anyway, wake me when they can consistently tell a cat from a dog in 2d.
no subject
If nothing can logically replace the individual as the sovereign acting unit within the economy, then the category "nothing" embraces both politicians and artificial intelligences. You have to remember, even if the job were given to a hypothetical AI, its value criteria would have to be chosen by... particular human beings. The problem with "electing" an AI is the same one we face in electing politicians. It just introduces a level of indirection.
Consider the case where the police break into someone's home on the suspicion that illegal drug activity is going on inside. The acting officers claim that "our drug-sniffing dog smelled something outside the front door" as a basis for their "reasonable suspicion." To turn the case into a question of how accurate the drug-sniffing dog is at detecting drugs is entirely to miss the point; it is still the cops' discretion as to what the dog's behavior means, or even whether that behavior was observed at all! Even if you were able to create an AI capable of "manning" the bureaucracies of government, its criteria for action, decision, and prioritization would have to be programmed or otherwise determined by some human being(s)! Value is imputed by individual human minds. It does not exist as an objective, context-free entity in and of itself which may be "imported" into the algorithms of some hypothetical AI. Your hypothetical AI cannot escape being the instrumentality of some human agency.
no subject
Your scenario, if it were even possible to implement it, would turn Karl Marx on his head and be a case of history repeating itself, first as farce, second as tragedy.
no subject
Who would program them just might be a problem :D
no subject
Now extrapolate the various "fat-fingered" (read: we're not sure why they happened) financial trades that have nearly brought down whole markets a few times to stuff that really matters.
no subject
It is interesting that you brought up this topic. I have worked on human/machine interface projects that enhance the capacity of the human brain. Naturally, there are some weak-kneed people who fear the power of the highly connected class.