The U.S. Department of Defense is looking to interconnect and deploy AI systems that give it a leg up over would-be adversaries. As part of the Pentagon’s Global Information Dominance Experiments (GIDE), the North American Aerospace Defense Command (NORAD) and U.S. Northern Command (NORTHCOM) have carried out a series of deployments of a trifecta of interlinked AIs – codenamed Cosmos, Gaia, and Lattice. Working together, these systems aim to process gargantuan amounts of data in real time across several scenarios (including armed deployments) and to preempt actions by hostile actors.
During a one-day conference of the National Security Commission on Artificial Intelligence (NSCAI), U.S. Defense Secretary Lloyd Austin described the country's AI strategy for military and security applications as seeking “(…) the right mix of technology, operational concepts, and capabilities—all woven together in a networked way that is so credible, flexible, and formidable that it will give any adversary pause." The idea is not only to keep up with AI developments around the globe – and the consequent acceleration in real-time decision-making capabilities – but also to serve as a deterrent against hostile action in its own right. If an adversary believes you can predict the best-possible strategic scenarios, they are naturally less likely to carry their plans out, for fear of the coordinated response.
The implementation of AI systems as accelerators for strategic decision making aims to reduce processing time during the so-called “gray” zone of a conflict. This gray zone refers to the period in which both parties assess each other’s strengths, positioning, and weaknesses, and devise a viable plan before any strategic moves are actually enacted. With ever faster – and more complex – AIs being deployed in this arena, the world is poised for yet another arms race in the information technology field. One might ask whether any player on the global geostrategic stage can actually choose not to invest in AI systems for its national defense; the risk (and thus, expected cost) of a competing player achieving supremacy in this field is, one might argue, simply too great for it not to be pursued by every party able to do so.
That is, in fact, one of the arguments Austin put forward at the NSCAI conference; as he put it, “In the AI realm, as in many others, we understand that China is our pacing challenge. We’re going to compete to win, but we’re going to do it the right way.” There are, of course, a number of ethical and practical considerations to this approach. U.S. officials in the field have been attempting to allay reservations about the program by grounding the systems’ development in the country's values and institutions; Austin explained as much at the same conference, saying that “(…) our use of AI must reinforce our democratic values, protect our rights, ensure our safety, and defend our privacy. Of course, we understand the pressures and the tensions. And we know that evaluations of the legal and ethical implications of novel tech can take time.” Besides AI, the U.S. is also looking toward quantum computing and its implications for national security – the NSA itself has published a paper under the title "Quantum Computing and Post-Quantum Cryptography FAQs", which we covered here.
Those ethical considerations are being studied all over the world – alongside the technical and technological difficulties of developing and controlling these systems. Wherever data is processed automatically, the potential implications for citizens’ safety and liberty can’t be overstated, which is why the AI field has come under increasing scrutiny. The EU, for instance, is currently establishing guidelines and harmonizing legislation for both AI developers and their end-users, whether private or state-based, and has taken steps contrary to those of some other global powers: even at this nascent stage, and likely as a tip of the hat toward China’s citizen scoring system, such an application of AI has been clearly ruled out in the proposal. Other institutions, such as the Future of Life Institute – which counts philosopher Nick Bostrom and Elon Musk among its contributors – are working in parallel to produce a body of work substantial enough for political, economic, and technical actors to consider when shaping their AI-development philosophies.
There are an incredible number of elements and implications to consider regarding AI – how it’s written, how it operates, what margin of error it’s allowed to operate within, and, even then, the final, human element as well. There are incredible gains to be found in harnessing these new technologies, but as a given technology’s impact increases, so does the danger it poses. We know for a fact that systems such as these are going to be deployed – the main question now is how well we do it. No pressure.