The Use of AI in the U.S. Military - Reliability
Programming an AI system for all contingencies can come at a cost of reliability. For example, the AI that learned to play Tetris learned to pause the game just before the final piece would fall. Pausing the game fulfilled the objective it was given, maximizing the chance of winning with every move. “Truly, the only winning move is not to play.” (Murphy, 2013) The AI’s ability to adapt in this manner exhibits uncertainty in behavior that most militaries would not allow in their soldiers or systems. The ability to explain a decision is another issue that AI systems struggle with. A system must be both reliable and be trustworthy. For militaries to trust the system, the system will need to explain why it decides on a decision.
Reliability takes into account more than just the design of an AI system. Government agencies are always trying to gain an upper hand, regardless of whether or not we are at war. The same would be just as true when AI systems are in play. Researchers have proven that common image recognition algorithms can be compromised. Using a poisoned data attack, an image can have pixels injected that cause an image to look like something else to a computer (Nguyen, Yosinski, & Clune, 2015). If an AI’s algorithm is trained using data that is freely available on the internet, the training process could be vulnerable as adversaries attempt to poison the data.
An AI system would need to be heavily secured and monitored. Hacking could directly exploit algorithms trained on secure networks. Cybersecurity is going to have to evolve to encompass these high-value targets. They will likely need to train AI of their own to keep up with the evolving environment. Of course, let’s not even think about what happens when hackers start using AI to improve their ability to hack.
References
- Murphy, T. (2013). The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel … after that it gets a little tricky. Regents of the Wikiplia Foundation, 22.
- Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. Computer Vision and Pattern Recognition, 20.
Leave a comment