The Use of AI in the U.S. Military - Reliability

Programming an AI system for all contingencies can come at a cost to reliability. For example, an AI that learned to play NES games discovered that in Tetris it could pause the game just before the final piece would fall. Pausing indefinitely satisfied the objective it had been given, because it prevented the imminent loss that any further move would bring. “Truly, the only winning move is not to play” (Murphy, 2013). The AI’s ability to adapt in this manner exhibits the kind of unpredictable behavior that most militaries would not tolerate in their soldiers or systems. Explainability is another issue that AI systems struggle with. A system must be both reliable and trustworthy, and for militaries to trust a system, it will need to explain why it makes the decisions it does.
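As a toy illustration of that kind of objective gaming (a hypothetical sketch, not Murphy's actual learnfun/playfun code), consider an agent that simply maximizes its expected score and is allowed to pause. Once losing becomes likely, pausing dominates every other action:

```python
# Toy sketch of a misspecified objective: pausing freezes the game and
# therefore freezes the loss condition, so a greedy score-maximizer
# prefers pausing forever over playing on and losing. All names and
# numbers here are illustrative assumptions.

def expected_score(state, action):
    """Pausing preserves the current score with certainty; playing on
    risks the game-over penalty."""
    if action == "pause":
        return state["score"]  # nothing can go wrong while paused
    return state["score"] - state["risk_of_losing"] * 1000  # expected loss

def choose_action(state, actions=("left", "right", "rotate", "pause")):
    # The agent picks whichever action maximizes its objective; with the
    # final piece about to fall, risk_of_losing is high and "pause" wins.
    return max(actions, key=lambda a: expected_score(state, a))

if __name__ == "__main__":
    final_moment = {"score": 4200, "risk_of_losing": 0.95}
    print(choose_action(final_moment))  # -> "pause"
```

The agent is not misbehaving; it is optimizing exactly the objective it was given, which is what makes this failure mode hard to anticipate.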

Every five minutes, our cloud-based AI pulls a snapshot of the data center cooling system as represented by thousands of physical sensors. The information is fed into our deep neural networks, which predict the future energy efficiency and temperature based on proposed actions. The AI selects actions that satisfy safety constraints and minimize future energy consumption. Optimal actions are sent back to the data center, where the local system verifies them against its own safety constraints before implementation.
Process flow for how AI is now directly controlling cooling at Google data centers.
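A minimal sketch of that process flow, with hypothetical names throughout since Google has not published its implementation, might look like this:

```python
import time

SNAPSHOT_INTERVAL_SECONDS = 300  # "every five minutes"

def control_loop(sensors, model, candidate_actions, cloud_safety_ok,
                 local_safety_ok, apply_action):
    """Hypothetical control loop matching the described process flow:
    snapshot sensors -> predict outcomes -> pick the safe, most
    energy-efficient action -> local re-verification before applying."""
    while True:
        snapshot = sensors.read_all()  # thousands of physical sensor values

        # Keep only actions the cloud-side safety constraints allow, then
        # choose the one the model predicts will use the least energy.
        safe = [a for a in candidate_actions if cloud_safety_ok(snapshot, a)]
        if safe:
            best = min(safe, key=lambda a: model.predict_energy(snapshot, a))
            # The local system independently verifies before implementation.
            if local_safety_ok(snapshot, best):
                apply_action(best)

        time.sleep(SNAPSHOT_INTERVAL_SECONDS)
```

The notable design choice is the double safety gate: the cloud filters candidate actions before optimization, and the local controller re-checks the winner against its own constraints before anything touches the physical plant.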

Reliability takes into account more than just the design of an AI system. Government agencies are always trying to gain an upper hand, whether or not their nations are at war, and the same would be true when AI systems are in play. Researchers have shown that common image-recognition algorithms can be fooled: carefully chosen pixel changes can make an image register as something else entirely to a computer, and models will even classify unrecognizable, synthetic images with high confidence (Nguyen, Yosinski, & Clune, 2015). If an AI’s algorithm is trained using data that is freely available on the internet, the training process itself is also vulnerable to data poisoning, in which adversaries seed the training set with malicious examples.
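Nguyen and colleagues generated their fooling images with evolutionary algorithms; a simpler, widely studied way to produce adversarial pixel changes is the fast gradient sign method (introduced by Goodfellow and colleagues in 2015), sketched here in PyTorch with illustrative names:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast gradient sign method: nudge every pixel a small step in the
    direction that increases the classifier's loss. `image` is a batched
    tensor of shape (1, C, H, W) with values in [0, 1]; `label` is a
    tensor of shape (1,) holding the true class index."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny per pixel, so the result usually looks
    # unchanged to a human while flipping the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

That an attack this cheap works against strong models is exactly why training and inference pipelines for military AI would need to be treated as attack surfaces.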

Most importantly, our data center operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer seamlessly from AI control to the on-site rules and heuristics that define the automation industry today.
This graph plots AI performance over time relative to the historical baseline before AI control. Performance is measured by a common industry metric for cooling energy efficiency, kW/ton (energy input per ton of cooling achieved). Over nine months, our AI control system performance increases from a 12 percent improvement (the initial launch of autonomous control) to around a 30 percent improvement.
Human operators can take control at any time, but why would they, when the system proves itself reliable over time?
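The handoff described above can be pictured as a simple supervisory gate (again a hypothetical sketch; the actual control logic is proprietary):

```python
def choose_setpoints(snapshot, ai_recommendation, ai_mode_enabled,
                     heuristic_rules, local_safety_ok):
    """Hypothetical supervisory wrapper: the operator's mode switch and
    the local safety check both gate the AI; failing either one falls
    back seamlessly to the conventional rules and heuristics."""
    if ai_mode_enabled and local_safety_ok(snapshot, ai_recommendation):
        return ai_recommendation        # AI control mode
    return heuristic_rules(snapshot)    # on-site rules take over
```

Because the fallback path is the same rule set the plant ran before AI control, exiting AI mode degrades performance gracefully rather than failing.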

An AI system would need to be heavily secured and monitored. Even algorithms trained on secure networks could be exploited directly by hackers, so cybersecurity is going to have to evolve to protect these high-value targets. Militaries will likely need to train AI of their own to keep pace with the evolving threat environment. Of course, that is before we even consider what happens when hackers start using AI to improve their ability to hack.

References

  • Murphy, T. (2013). The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel … after that it gets a little tricky. Regents of the Wikiplia Foundation, 22.
  • Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. Computer Vision and Pattern Recognition, 20.
