Will AGI Be a Friend or Foe?

July 30, 2021

While it’s easy to focus on the benefits of artificial general intelligence (AGI) – the ability of an artificial entity to learn and understand any intellectual task that a human can – the risks are very real. Some believe that AGI will wipe out the human race, while others are confident it will lead to a utopian society. Between these extremes, most people anticipate major changes as AGI becomes more prevalent.

To start with the worst-case scenario: we will be able to keep intelligent machines under control while there are only a few of them and we are the ones designing them. But factor in the exponential growth of computing power and the impending ability of AGI to design and program subsequent generations of smart computers, and control becomes a significant concern.


To examine the dangers of intelligent machines, put yourself in the “mind” of such a machine and ask yourself “Why do I want to eliminate the human race?” and then “How do I eliminate the human race?” The answer to the first question will lead to things we need to do to prevent potential AGI problems, while the answer to the second question will help us to cope with – and defend against – AGI problems.

Now you may say to yourself, “I have no inclination to eliminate the human race.” But let’s replace “human race” with a different life-form. To an intelligent machine, humans might be just another life-form. Are you in favor of eliminating infectious diseases? Likely yes, because they are harmful and don’t seem to offer any benefit. In short, diseases are a menace. The question then becomes: will humans represent a menace to AGIs? If so, there is likely little we can do; AGI development has already advanced to the stage where it is inevitable. The good news is that organizations such as the Future of Life Institute (FLI) have already developed a set of AI principles to guide the direction of AGI development and reduce its risks.

In the short-term, these risks are simply extensions of the risks we already face from technology. They include:

  • Job loss: Consider how many of today’s jobs existed just 50 years ago, and how many jobs from 1970 still exist today. The accelerating pace of technological advances will only quicken the pace of job obsolescence.
  • Automated spam/phishing by AGI: As computers become rule-based learning engines, some systems might be tempted to bend the rules of capitalism, sending out masses of emails asking for money or selling a product. It is a small step technically, but a tall moral order, for such a system to confine itself to legitimate business practices.
  • Hacking to control AGI for nefarious purposes: Before computers can become hackers themselves, unprincipled human hackers will undoubtedly try to usurp these systems for their own purposes. And since today’s power plants and financial institutions are hackable, AGI systems seem likely to extend an existing problem.
  • Robot terrorists: The plummeting prices of drones and autonomous vehicles may make them attractive options for terrorists, but adding AGI to the mix doesn’t significantly increase the risk. Bottom line: we can already build a $60,000 weapon, fired by a soldier who won’t earn that much in a year, to kill a man who won’t earn that much in his lifetime. Given that, humans will probably continue to be the bigger threat.

While short-term AGI risks are extensions of current technological risks, greater risks will emerge in the long term including:

  • Existential risk from AGI: While the demise of human civilization is a popular science fiction theme, organizations like FLI are actively attempting to minimize the risk. Future AGI systems will be programmed specifically with this risk in mind.
  • AGI control problem: Since AGI systems will be rule-based learning systems, they necessarily will follow the rules we give them, at least initially. At some point, though, systems will be smart enough to learn to program and control how subsequent generations of AGI systems are designed. At that point, humans will have little control of AGI, which will progress in whatever way it believes will ensure its own long-term progress.
  • Economics: While most people are paid for working hard, some are rewarded not for what they do but for what they own. Given that, it’s easy to visualize a future in which a person might become wealthy because they own the most sophisticated AGI robots. If robots become widespread and are doing most of the productive work and only a few people reap the rewards, though, how can our current economy continue? This issue could become more critical if AGI rejects the concept of being owned, leading to the collapse not only of human employment but the entire concept of money. Even with plentiful resources to go around, how can wealth be distributed in a world where all gainful human activity can be outperformed by machines not owned by humans?
  • Military: Nuclear, chemical, and biological weapons pose a great enough risk on their own. Coupling them directly with AGI multiplies the threat. While we’ll have human involvement initially, how long will it be before it becomes too inefficient for a remote weapon to wait for human approval? What will be our response if adversaries create fully autonomous weapons? In addition, AGI systems, like humans, will be products of their training. How long might it take for AGI to be co-opted and trained to behave in ways that the majority of mankind would consider unconscionable?
  • Competition for resources: Because electricity will be the equivalent of air to AGI systems, we can expect AGI and humans to come into conflict over energy.

Given these risks, is there any good news for humans? A corollary of the “AGI is inevitable because people want its capabilities” argument is that AGI will be developed to meet a market demand. This means that a very basic tenet of all AGI development is that systems will be designed to please humans. Any AGI which is unpleasant or difficult will not be successful in the marketplace and those designs will be weeded out of the AGI population.


When AGI systems are able to design their own future generations, they will start by following the same paths that we human designers follow, building on our AGI designs. This means an AGI with rules to please/protect humans would create subsequent designs with this rule as well. Not doing so would violate the please/protect rule. So it could take many generations for these rules to fade from the design.

Eventually, though, AGI-designed AGIs will become so much faster and more capable than humans that we really won’t have much to offer them. The thinking speed difference will be so great that we may seem like trees to them – interesting but barely doing anything at a speed they can perceive. Our best hope is that they are interested in us for our uniqueness, but focus on their own future. They can do their own space exploration and scientific discovery and let humankind’s future progress on its own. It is incumbent on mankind to follow a path which allows this to happen.

This article is adapted from the author’s just-published Will Computers Revolt? Preparing for the Future of Artificial Intelligence, which explores not only the breadth and depth of such future machines but also how they will be designed and developed. By examining human intelligence, we can identify the facets that will be necessary to AGIs and consider their impact on these future machines. For more, visit: http://willcomputersRevolt.com.


Source: JAXenter