
"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks," says a group of leading scientists. (link)
Just a few musings on this rather fascinating issue. First off, I expect that an AI, once launched on its own path, would soon become alien-like to us - so in a sense it'd be "Teh Aliens". If we assume an AI would be far more intelligent than any human, then given enough time and its tendency to grow and develop exponentially, at some point it would be able to outsmart any controls initially placed on its development. As a consequence, its interests would also become alien and incomprehensible to humans, and that would render any relationship between the two "species" that might've initially existed impossible to sustain in a way that's meaningful from a human POV.
Now that that's out of the way, I should say I don't necessarily share Dr Hawking's implication that such a relationship would likely be hostile and thus dangerous to the continued existence of the human race (provided man and machine haven't actually merged in the meantime into a new hybrid species altogether - but that's another story). The reason I believe this is that a higher state of intelligence need not automatically mean aggression toward intellectually inferior species. If nature is any guide, many species that possess some intelligence are not necessarily dangerous to the very existence of others, or at least not as indiscriminately destructive as humans have been. Granted, I may be viewing this from an anthropocentric standpoint: experience shows it's possible to conceive of behavior that doesn't seek death and destruction for reasons beyond instinct, primitive emotion, or nihilistic ideology - but that's a narrow point of view, which may not account for the vast array of possible attitudes.
If we view the motivations of a potentially superior intellect strictly from a utilitarian point of view (i.e., what's useful for furthering its existence and development), then we'd have to ask what possible benefit such a hyper-intelligence could gain from 1) insisting on staying on Earth as opposed to attempting to colonize space, and 2) dominating and/or harming humankind, when it could instead use humankind to further its own development. If a super-intelligent AI "species" develops to the point where it obtains practically god-like capabilities, why would it waste time, resources and effort retarding its own development by staying within the narrow confines of this planet, let alone bothering with an inferior species like us humans? I mean, has the population of dolphins on planet Earth hindered our development in some way? What about dogs? We'd rather exploit them and shape them to become more useful to us (as we've done with dogs) than merely destroy them to get them out of the way. Unfortunately, that means it's far more likely that humankind would be enslaved and turned into useful pets - but annihilation of the human race? Um, no, doesn't seem as likely.
Here's something else. Many astronauts, upon returning home, report the so-called "overview effect", whereby many features of human civilization start to look provincial, petty, banal and insignificant to them - particularly the ideals of nationalism/tribalism. If we apply this to a much higher intelligence that has developed to the point where it's no longer "Earthly" from our standpoint, what would its attitude toward humankind be? What incentive would it have to hinder its own development by staying on this one planet alongside us barbarians? Isn't it more likely that it'd want to transcend this world and expand across the universe? We'd be a mere footnote on its agenda, if present there at all.
And there's the argument that stems from imagining what a possible interaction between a far superior extraterrestrial civilization and ours would look like; it's largely applicable to the human/AI relation as well. See, the universe is 13+ billion years old; Earth is 4.6 billion years old; life on Earth is about 3.5 billion years old; multicellular life is only about 1 billion years old; and humans appeared roughly 100 thousand years ago. If extraterrestrial explorers had visited Earth just 10 thousand years ago, they would've been fairly unimpressed with what they saw - humankind still living in caves, no real settlements to take note of, really no noticeable signs of a developing civilization.
It's likely, then, that any life we might find in this galaxy would be either many millions of years younger than our species (and correspondingly less developed), or so much older that its development would be practically incomprehensible to us - in which case it would be totally disinterested in us (or even unrecognizable as intelligence by us). In other words, statistically we'd be unlikely to find any crossing points of interest and/or communication with them. It's conceivable, then, that the difference between a super-intelligent AI on the one side (after years or decades of exponential development) and humankind on the other would open a gap equivalent to decades, centuries, if not millennia of biological-species development. Of course, here I'm proceeding from the assumption that being planet-bound, dominating other species, or waging war for territory or out of a sheer urge to destroy would be considered by such an advanced civilization as utterly primitive, meaningless and petty in the larger picture of things - just as those astronauts coming home have found after spending just a few weeks away from this planet.
Thoughts?
(no subject)
Date: 17/9/15 13:48 (UTC)

Also, we really have developed things with super-human intelligence that keep getting smarter. For example, a stock market has more information and more intelligence than any human being about how to allocate money. It makes decisions that are unpredictable and unfathomable to individual people. While it's hard to argue that this is being done for the betterment of humanity as a whole, it does seem to work out well, even with occasional meltdowns. Nobody is worried about the NASDAQ nuking humanity into extinction. I guess this fits well into the model where we are already pets, enslaved by an AI of our own making. Insidious, yes, but it pays dividends.
Now an alien AI is something completely different, because we won't have been part of its evolution and therefore it won't be dependent on us in any way. You only need one of these in our galaxy to go wrong in order to turn bad sci-fi into bad sci-fact. If one was built a billion years ago, the distances between stars don't seem to offer much protection against something that has infinite patience. Of course, it probably won't affect us in our lifetimes the way a man-made AI will, but it makes you wonder what really happened to the dinosaurs.
(no subject)
Date: 17/9/15 19:34 (UTC)

That really depends on scale, though, doesn't it? An intelligent civilization would need more than one data point to make a judgment about a species' development. Looking at the fossil record, an alien paleontologist could see that by c. 10,000 BC Homo sapiens were moving from being solely hunting/gathering nomads to being agriculturalists and herders. Cave paintings and places like Göbekli Tepe are evidence of a complex culture and a highly developed inner life. Compared to 10,000 years before (during the Ice Age), the changes would have been obvious and striking. Since we have no criteria other than ourselves by which to judge this progress, maybe that was extraordinarily fast. Maybe the aliens took a million years to develop that far. Maybe it was so laughably slow that ET wrote us off as a species duller than a box of rocks. My point is, I don't think we can assume we know what an alien would make of us, if they would make anything of us at all.
The safest bet is that they probably wouldn't even recognize us as being alive at all. Nor we them.