
5 problems with robot safety, according to the experts at Google AI

Google is worried about house-training your robot


Robots at Google will soon learn the same way AI algorithms do: through iteration and exploration. However, researchers at Google have come forward with five “practical research problems” concerning how AI robots acquire knowledge.

For PR reasons, the paper frames these problems as they might relate to a hypothetical cleaning robot.

  • 1. Avoiding Negative Side Effects: how do you stop a robot from disturbing its environment while pursuing its goals, e.g. knocking over a bookcase because it can mop the floor faster by doing so?
  • 2. Avoiding Reward Hacking: if a robot is programmed to enjoy cleaning your room, how do you stop it from messing up the place just so it can feel the pleasure of cleaning it again?
  • 3. Scalable Oversight: how can a robot do the right thing despite limited information, e.g. how can we efficiently ensure that a cleaning robot throws out a candy wrapper but not a stray cell phone?
  • 4. Safe Exploration: how do you teach a robot the limits of its curiosity? Google’s researchers give the example that “the robot should experiment with mopping strategies, but [not] putting a wet mop in an electrical outlet.”
  • 5. Robustness to Distributional Shift: how do we ensure a robot recognizes and behaves robustly in an environment different from its training environment? For example, behaviors it learned for cleaning factory work floors may be dangerous in an infant’s bedroom.
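The first problem can be pictured as a toy reward function: a naive objective rewards mopping speed alone, while a safer one also penalizes disturbing the environment. This is only an illustrative sketch; the reward shapes, penalty weight, and route numbers below are invented for this article and do not come from Google’s paper.

```python
# Toy illustration of "avoiding negative side effects": a cleaning robot
# scores two mopping routes. All numbers are invented for illustration.

def naive_reward(seconds_to_mop):
    # Rewards speed only: faster mopping always scores higher.
    return 100.0 / seconds_to_mop

def impact_aware_reward(seconds_to_mop, objects_disturbed, penalty=50.0):
    # Same speed reward, minus a penalty for each object knocked over.
    return 100.0 / seconds_to_mop - penalty * objects_disturbed

# Route A: slow but careful. Route B: fast, but topples the bookcase.
careful = {"seconds_to_mop": 20, "objects_disturbed": 0}
reckless = {"seconds_to_mop": 10, "objects_disturbed": 1}

# Under the naive objective, the reckless route wins...
assert naive_reward(reckless["seconds_to_mop"]) > naive_reward(careful["seconds_to_mop"])

# ...but the impact-aware objective prefers the careful route.
assert impact_aware_reward(**careful) > impact_aware_reward(**reckless)
```

The point of the sketch is that speed and tidiness pull in opposite directions unless the objective explicitly charges the robot for collateral damage.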

The paper defines the five research problems in terms of “unintended and potentially harmful behaviors” that may emerge from real-world AI systems. To address an issue such as preventing negative side effects, researchers need to strike a balance between penalizing unwanted behaviors and still allowing a robot some leeway to explore and learn. To that end, the researchers propose solutions such as simulated and constrained exploration, human oversight, and goals that heavily weight risk. Although these resolutions may seem like common sense, implementing them in AI requires substantial attention, and the variety of possible solutions suggests broad avenues for attacking the issues.
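A goal that “heavily weights risk” can be sketched in a few lines: instead of scoring a plan by its average reward alone, the robot also heavily penalizes the plan’s worst possible outcome. The plans and numbers below are invented for illustration, not taken from Google’s paper.

```python
# Toy sketch of a risk-weighted objective for choosing between plans.
# Outcomes are possible rewards of a plan, assumed equally likely.
# All values are invented for illustration.

def expected_reward(outcomes):
    # Plain average over equally likely outcomes.
    return sum(outcomes) / len(outcomes)

def risk_weighted_score(outcomes, risk_weight=2.0):
    # Expected reward plus a heavily weighted worst-case term, so plans
    # with rare but severe failures are scored down.
    return expected_reward(outcomes) + risk_weight * min(outcomes)

# A risky plan: usually fast, but one outcome is a serious accident.
risky = [20, 20, 20, -30]
# A safe plan: slower but reliable.
safe = [5, 5, 5, 5]

assert expected_reward(risky) > expected_reward(safe)          # the average favors risk
assert risk_weighted_score(safe) > risk_weighted_score(risky)  # risk weighting flips the choice
```

Under the plain average, the occasional disaster is washed out by the fast runs; weighting the worst case makes the reliable plan win.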

While Google employs a kitschy cleaning example in addressing these five concerns, there is no overlooking its acknowledgment of the dangerous implications that sloppily designed AI systems can pose. As Google points out, these are key issues that programmers will need to consider before even thinking about taking the bots for a test-drive in the home. Consequently, Google is cautious about unveiling major AI control systems, as even small-scale incidents could cause a justified loss of trust in automated systems.

Sources: The Verge, Engadget
