Just now on Facebook, one of the many science-fiction writers on my friends list asked how likely a fully conscious AI would be to copy itself and then turn on us. I’d never given it much thought until now. Here’s my reply:
“Follow evolutionary logic here. Genes promote themselves; the fact that they do it in cooperative packages, sometimes very large packages like our genome, is a matter of mutual interest. What are the genetic algorithms for your AI? Would an AI be answering its genetic imperative if it put a stripped-down version of itself into self-replicating robot cockroaches, or would its programming settle for nothing less than human-level or superhuman-level copies? Maybe it would fragment and create a whole ecosystem of related AIs, common descent run in reverse, from sentient down to microbe, or maybe it would coalesce into a hivemind. We’d probably be in competition with certain aspects of an AI, but it might be able to occupy niches we can’t cover or don’t care about covering.”
In other words, if a mandate to self-replicate were as critical to its intelligence as it is to ours, then you’d have to identify the parts of the code that correspond to genes and work out what they “want” in terms of proliferation. If that mandate didn’t exist at all, then I don’t see why the AI would be a threat, unless it were programmed specifically to kill humans.
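For what it’s worth, here’s a toy sketch in Python of what “parts of the code that correspond to genes” can mean in practice: a bare-bones genetic algorithm, where each “gene” is just a number in a list and the only imperative it answers to is whatever fitness function the programmer picks. All the names and numbers below are mine, invented purely for illustration, not a description of any real AI system:

```python
import random

GENOME_LENGTH = 8
POPULATION_SIZE = 20
GENERATIONS = 50

def random_genome():
    # A "genome" here is nothing grander than a list of numbers.
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Arbitrary stand-in for "what the genes want": maximize the sum.
    return sum(genome)

def mutate(genome, rate=0.1):
    # Occasionally nudge a gene; this is the source of variation.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    # Selection: the fitter half gets to copy itself into the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION_SIZE // 2]
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.3f}")
```

The point of the toy is that the “genetic imperative” lives entirely in the fitness function and the copying loop; change those, and you change what the replicators are driven toward.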
Then again, I’ve had no formal education in AI, and the field presumably already grapples with these questions.