Artificial Intelligence Run Amok

Just now on Facebook, one of the many science-fiction writers on my friends list asked how likely a fully conscious AI would be to copy itself and then turn on us. I’d never given it much thought until now. Here’s my reply:

“Follow evolutionary logic here. Genes promote themselves; the fact they do it in cooperative packages, sometimes very large packages like our genome, is a matter of mutual interest. What are the genetic algorithms for your AI? Would an AI be answering its genetic imperative if it were to put a stripped-down version of itself into self-replicating robot cockroaches, or would its programming settle for no less than human-level or superhuman-level copies? Maybe it would fragment and create a whole ecosystem of related AIs, common descent in reverse, from sentient to microbe, or hivemind. We’d probably be in competition with certain aspects of an AI, but it might be able to occupy niches we can’t cover or don’t care about covering.”

In other words, if a mandate to self-replicate were critical to its intelligence, as it is to ours, then you’d have to identify the parts of the code that corresponded to genes and decide what they “want” for proliferation. If this mandate didn’t exist at all, then I don’t know why the AI would be a threat, unless it were programmed specifically to kill humans.
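
To make that concrete, here’s a minimal genetic-algorithm sketch in Python. It’s purely illustrative; the bit-string genome and the one-counting fitness function are assumptions of mine, not anything from the original question. The point is that the “genetic imperative” lives entirely in the selection step: what a gene “wants” is whatever the fitness function rewards copying forward.

    import random

    def fitness(genome):
        # Hypothetical objective: count the 1-bits. In a real AI, the
        # "genes" and the objective would be far harder to identify.
        return sum(genome)

    def mutate(genome, rate=0.01):
        # Copying is imperfect: each "gene" flips with small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    def next_generation(population):
        # Selection: fitter genomes are more likely to be copied forward.
        weights = [fitness(g) + 1 for g in population]
        parents = random.choices(population, weights=weights, k=len(population))
        return [mutate(p) for p in parents]

    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]
    for _ in range(100):
        population = next_generation(population)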

Then again, I’ve had no formal education in AI, and these are questions the discipline must surely be tackling.

Thoughts?


5 Responses to Artificial Intelligence Run Amok

  1. alexlamb says:

    Here’s my take.

    Things that can reproduce do. They replace things that can’t. By building machines that can self-replicate and are prone to any degree of mutation, we create machines that have the capacity to ‘turn on us’ to some extent. If self-replication and mutation exist, then a ‘mandate’ to do so will spontaneously arise.
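
    To see this concretely, here is a toy simulation (an illustrative sketch only; the carrying capacity, mutation rate, and starting mix are invented for the example). Drop a few replicators into a population of non-replicators under finite resources, and they take over without any mandate being designed in:

        import random

        def step(population, capacity=200, mutation_rate=0.01):
            # Each agent is just a flag: does it replicate or not?
            next_gen = []
            for replicates in population:
                next_gen.append(replicates)        # the agent itself persists
                if replicates:                     # replicators also add a child,
                    child = replicates             # whose trait occasionally mutates
                    if random.random() < mutation_rate:
                        child = not child
                    next_gen.append(child)
            random.shuffle(next_gen)
            return next_gen[:capacity]             # finite resources cull the excess

        # Start with mostly non-replicators and a handful of replicators.
        population = [False] * 190 + [True] * 10
        for _ in range(50):
            population = step(population)
        print(sum(population), "of", len(population), "agents now replicate")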

    However, people assume that intelligence is a quantifiable commodity. There are those who are ‘smarter’ who can think up things that are invisible to those who are ‘less smart’. But this is an intuitive notion imposed on us by the way we perceive intelligence in others. In some ways, machines are already smarter than us, and we know it. In others, not so much.

    What we’re often talking about when we refer to ‘superior intelligence’ is the ability to construct extremely long or complex goal chains. There is no evidence yet that this is doable or worthwhile at scales larger than that of individual humans. We imagine that it is, because we can’t reason at scales larger than our own. However, there are reasons to believe that at these scales, goal building will become a clumsy and irrelevant tool. In any case, the ability to build large, subtle plans does not automatically confer a will to adaptation or reproduction. It is a talent no different to that of playing chess.

    Similarly, the argument that AIs would be able to design their own offspring and thus evolve rapidly is fallacious for two reasons. First, self-modification is prone to error. To create something smarter than you is to create something capable of perceiving mistakes that you cannot; by the same logic, the attempt to create it will usually contain mistakes you cannot see.

    Second, self-modification is less stable than self-reproduction. If Lamarckian adaptation worked, we’d see it in nature, because it would already have outcompeted other systems. There is no reason to believe that Lamarckian logic operates successfully in the realm of intelligence and in no other. Thus, an AI that manipulated its own offspring would likely become extinct far faster than one that simply reproduced and stayed dumb. And this is true for all ‘levels of intelligence’, not just those ‘higher’ than ourselves.
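
    A toy sketch of that stability point (the error rates and brood size here are invented for illustration; this models the argument, not any real AI): a lineage that edits itself in place keeps every uncaught error, while a lineage that merely copies itself lets selection throw the damaged copies away.

        import random

        def errors_after_self_modification(steps=100, error_rate=0.05):
            # One lineage edits itself in place; every uncaught error persists.
            return sum(random.random() < error_rate for _ in range(steps))

        def errors_after_selection(steps=100, error_rate=0.05, brood=4):
            # Each step makes several copies; an error sticks only if every
            # copy in the brood is flawed, leaving selection nothing to keep.
            errors = 0
            for _ in range(steps):
                flawed = [random.random() < error_rate for _ in range(brood)]
                if all(flawed):
                    errors += 1
            return errors

        print("self-modifier errors:   ", errors_after_self_modification())
        print("selected-lineage errors:", errors_after_selection())

    Over 100 steps the self-modifier typically accumulates around five permanent errors, while the selected lineage accumulates essentially none (an error sticks only with probability 0.05^4 per step).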

    Thus, it’s not intelligence we need to worry about. It’s self-replication and mutation. Dumb, rapidly adapting programs that we can’t block or stop are much more likely to pose a threat to us than highly complex AI. If this comes from anywhere, it will come from those parts of our digital ecosystem where such pressures already exist. In other words, we’re far more likely to screw ourselves over writing malware than we are to give rise to an all-consuming overmind.

  2. Thanks for that great analysis. I was hoping you’d respond, Alex; I had you in mind as someone with an academic background in AI. It seems we agree that self-replication is key to the issue of self-motivated, threatening AI. We have intelligence not as an end in itself but to further our genes. If an AI weren’t self-replicating, why would it self-discover the agenda, and more importantly the will, to compete against us? “Self-discover” being the important qualification. We could program self-preservation and competition into an AI with malice aforethought, I suppose, though I doubt it would be a very creative antagonist without following some kind of evolutionary program.

    And I think you make a great argument about why AI is unlikely to be able to self-improve in an accelerating feedback loop. It’s similar to C.S. Lewis’ argument in The Abolition of Man about why humanity is unlikely to fundamentally self-improve through genetic engineering. When I mentioned the idea of AI reproducing and occupying a variety of niches under different forms, I was not, of course, talking about conscious self-improvement.

  3. Pingback: Revisiting the Threat of AI | Bestiary in Progress

  4. Michael Davis says:

    Didn’t catch this at the time in my FB feed, so replying here (probably better here anyway).

    For an optimistic take, read “The Golden Age” trilogy by John C. Wright. He posits that superintelligent machines will take the “long view” and be nice to us even now. He comes to this conclusion under the assumption that a universal consciousness is “the goal” here and that AIs will have long memories. Having long memories, they would hate to carry a memory of genocide. Humans do genocide and genocide-like things all the time and then forget, at least in part because WE personally didn’t do the killing. An AI is different: if you posit it correctly, the first transcendent AI created will also be here till the end… Upgraded, housed in different machine bodies, whatever, but (in this sense at least) the same person. And to posit a universal consciousness that willingly committed genocide (on another sentient race that otherwise would also have been part of the universal consciousness) is to posit an AI that willingly wants to be a psychopath or schizophrenic or both. In short, the golden rule (not the reason for the title) is the reason why AIs would be nice to humans.

    That said, I agree with Alex that the chances are probably lower than folks like Kurzweil predict for AI. First, there’s little support for the idea that a being of intelligence level N is able to design and create a being of intelligence level N+1. (This, I believe, is also a rephrasing of one of Alex’s points above.) Hell, our BEST designs are biologically inspired, so we’re merely copying nature’s designs, not really re-designing on our own. Second… I often wonder if there’s a maximum limit to intelligence: beings smarter than X may run the risk of other issues, insanity being an example, for which there seems to be some evidence. But on the flip side, I can also admit it might be like 2 slugs arguing over the maximum length a slug can achieve and then … meeting a snake.

    Right now, I’m giving a higher chance to augmented humans than totally autonomous AIs, and I think that’s a good thing. First… there will be more of us, so any augmented humans who want to go rogue may be met with other augmented humans who will keep them in check. Second, it may hasten the day when we can personally take the ‘long view’ I mentioned above and evolve into a society where we and our descendants (both biological and mechanical) are more civilized.

    • Good observations. I’ve started reading the trilogy, thanks to your recommendation, Mike. The perhaps-fictional Precambrian Conspiracy appears to follow your logic, with AI subordinate to posthumanity, and posthumanity still evolving according to a program that it imperfectly controls, if it controls it at all.
