A really interesting article, “The Third Law” by George Dyson, explains three laws of artificial intelligence. It is adapted from Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman.
It is entirely possible to build something without understanding it. We shouldn’t only be asking “can we?”; we should first be asking “should we?”
The article’s premise is that we spend too much time focusing on machine intelligence and not enough on “self-reproduction, communication, and control.” Dyson argues that the next revolution in computing will be signaled by the rise of analog systems over which digital programming no longer has control (reminds me of the ending of Battlestar Galactica). Nature’s response to those who believe they can build machines to control everything will be to allow them to build a machine that controls them instead.
The three laws of artificial intelligence listed in the article are:
The first is Ashby’s law, named after cybernetician W. Ross Ashby, author of Design for a Brain, which states that any effective control system must be as complex as the system it controls. (A quantitative sketch of this law follows the list below.)
The second law, articulated by John von Neumann, states that the defining characteristic of a complex system is that it constitutes its own simplest behavioral description. The simplest complete model of an organism is the organism itself. Trying to reduce the system’s behavior to any formal description makes things more complicated, not less. (A loose analogy from algorithmic information theory is also sketched below.)
The third law states that any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.
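An aside that isn’t in the article: Ashby’s law is usually stated quantitatively as the Law of Requisite Variety, where “variety” counts the distinct states a system can take. A minimal sketch of that formulation:

```latex
% Law of Requisite Variety (Ashby). Let V(D) be the variety of
% disturbances hitting a system, V(R) the variety of responses the
% regulator can make, and V(O) the variety of resulting outcomes.
% The best any regulator can do is:
V(O) \;\ge\; \frac{V(D)}{V(R)}

% Equivalently, in logarithmic (entropy-style) terms:
\log V(O) \;\ge\; \log V(D) - \log V(R)

% To pin the outcome to a single acceptable state, V(O) = 1, the
% regulator needs V(R) >= V(D): at least one distinct response for
% every distinct disturbance. "Only variety can absorb variety."
```

That is the formal content behind “as complex as the system it controls”: a controller with fewer states than the thing it regulates necessarily leaves some outcomes uncontrolled.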
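And a loose analogy for the second law, again mine rather than the article’s: in algorithmic information theory, most strings have no description shorter than themselves, which is the same flavor of claim von Neumann is making about organisms.

```latex
% Kolmogorov complexity: the length of the shortest program p that
% makes a fixed universal machine U output x.
K(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}

% Counting argument: there are 2^n binary strings of length n, but
% fewer than 2^{n-c} programs shorter than n - c bits. So all but a
% 2^{-c} fraction of length-n strings satisfy
K(x) \;\ge\; n - c
% Such strings are incompressible: their simplest complete
% description is, essentially, the string itself.
```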
Back to the article. On the loophole in the third law, the author continues:
The third law offers comfort to those who believe that until we understand intelligence, we need not worry about superhuman intelligence arising among machines. But there is a loophole in the third law. It is entirely possible to build something without understanding it. You don’t need to fully understand how a brain works in order to build one that works. This is a loophole that no amount of supervision over algorithms by programmers and their ethical advisers can ever close. Provably “good” A.I. is a myth. Our relationship with true A.I. will always be a matter of faith, not proof.