The Pentagon Aims to Predict Enemy’s Actions Through AI

(Image credit: Shutterstock)

The U.S. Department of Defense is looking to interconnect and deploy AI systems that give it a leg up over would-be enemies. As part of the Pentagon’s Global Information Dominance Experiments (GIDE), the North American Aerospace Defense Command (NORAD) and U.S. Northern Command (NORTHCOM) have carried out a series of deployments of a trifecta of interlinked AIs – codenamed Cosmos, Gaia, and Lattice. Working together, these systems aim to process gargantuan amounts of data in real time across several scenarios (including armed deployments) and to preempt actions from hostile actors.

During a one-day conference of the National Security Commission on Artificial Intelligence (NSCAI), U.S. Defense Secretary Lloyd Austin described the country's AI strategy for military and security applications as aiming to deliver “(…) the right mix of technology, operational concepts, and capabilities—all woven together in a networked way that is so credible, flexible, and formidable that it will give any adversary pause." The idea is not only to keep up with AI developments around the globe – and the consequent acceleration in real-time decision-making capabilities – but also to act as a deterrent in itself. If an enemy believes you can predict its most likely strategic moves, it is naturally less inclined to carry those plans out, for fear of the coordinated response.

The implementation of AI systems as accelerators for strategic decision-making aims to reduce processing time during the so-called “gray” zone of any conflict. This gray zone refers to the period in which both parties are assessing each other’s strengths, positioning, and weaknesses, and devising a viable plan before the strategic moves that put that plan into action are made. With ever faster – and more complex – AIs being deployed in this arena, the world is poised for yet another arms race in the information technology field. One might ask whether any player on the global geostrategic stage can actually choose not to invest in AI systems for national defense; the risk (and thus the expected cost) of a competing player achieving supremacy in this field is, one might argue, simply too great for any party capable of pursuing it to abstain.
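
What such an accelerator looks like internally is classified, but the general pattern – continuously baselining many live data feeds and flagging coordinated deviations early enough to act on them – is a standard anomaly-detection setup. Below is a minimal, purely hypothetical Python sketch of that pattern; every name in it is invented for illustration and implies nothing about how Cosmos, Gaia, or Lattice actually work.

```python
# Purely illustrative sketch: NOT the Pentagon's GIDE/Cosmos/Gaia/Lattice code,
# which is not public. It only shows the general shape of the idea: fuse many
# live data feeds into a single running estimate of how anomalous (and thus
# how worthy of attention) current activity is.

from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class Observation:
    source: str    # hypothetical feed name, e.g. "satellite" or "radar"
    value: float   # some scalar activity measurement from that feed


class GrayZoneMonitor:
    """Keeps a rolling baseline per feed and scores new readings against it."""

    def __init__(self, window: int = 100):
        self.window = window
        self.history: dict[str, list[float]] = {}

    def score(self, obs: Observation) -> float:
        """Return a z-score-like anomaly measure for a single observation."""
        hist = self.history.setdefault(obs.source, [])
        anomaly = 0.0
        if len(hist) >= 2:
            sigma = pstdev(hist) or 1.0   # avoid divide-by-zero on flat feeds
            anomaly = abs(obs.value - mean(hist)) / sigma
        hist.append(obs.value)
        del hist[:-self.window]           # keep only the most recent window
        return anomaly

    def fused_alert_level(self, observations: list[Observation]) -> float:
        """Average the per-feed anomaly scores into one alert level."""
        scores = [self.score(o) for o in observations]
        return mean(scores) if scores else 0.0


if __name__ == "__main__":
    monitor = GrayZoneMonitor()
    # Feed in "normal" activity, then a sudden coordinated spike.
    for t in range(50):
        monitor.fused_alert_level([
            Observation("satellite", 1.0 + 0.01 * (t % 5)),
            Observation("radar", 2.0),
        ])
    spike = monitor.fused_alert_level([
        Observation("satellite", 5.0),
        Observation("radar", 9.0),
    ])
    print(f"alert level after spike: {spike:.1f}")  # far above baseline
```

Real systems of this kind would replace the simple rolling z-score with learned models over vastly more data, but the principle the Pentagon describes is the same: spot the coordinated deviation before a human analyst could.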

That is actually one of the arguments Austin put forward at the NSCAI conference; as he put it, “In the AI realm, as in many others, we understand that China is our pacing challenge. We’re going to compete to win, but we’re going to do it the right way.” There are, of course, a number of ethical and practical considerations to this approach. U.S. officials have been attempting to ease reservations about the program by grounding the systems’ development in values and safeguards adequate to the country’s culture and political system; Austin explained as much at the same conference, saying that “(…) our use of AI must reinforce our democratic values, protect our rights, ensure our safety, and defend our privacy. Of course, we understand the pressures and the tensions. And we know that evaluations of the legal and ethical implications of novel tech can take time.” Besides AI, the U.S. is also looking towards quantum computing and its implications for national security – the NSA itself has published a paper under the title "Quantum Computing and Post-Quantum Cryptography FAQs", which we covered here.

Those ethical considerations are being studied all over the world – alongside the technical and technological difficulties of developing and controlling these systems. Wherever there is automated processing of data, the potential implications for a citizen’s safety and liberty can’t be overstated, which is why the AI field has come under increasing scrutiny. The EU, for instance, is currently in the process of establishing guidelines and harmonizing legislative efforts for both AI developers and their respective end-users, whether private or state-based, and has taken steps contrary to those of some other global powers. Even at this nascent stage, and likely as a tip of the hat towards China’s citizen-scoring system, such an application of AI has been clearly ruled out in the proposal. Other institutions, such as the Future of Life Institute – which counts philosopher Nick Bostrom and Elon Musk among its contributors – are working in parallel to produce a body of work substantial enough for political, economic, and technical parties to draw on when shaping their AI-development philosophies.

There is an incredible number of elements and implications to consider regarding AI – how it’s written, how it operates, what margin of error it’s allowed to operate within – and even then, there’s the final, human element to consider as well. There are incredible gains to be found in harnessing these new technologies, but as a given technology’s impact increases, so does the danger it poses. We know for a fact that we are going to deploy systems such as these – the main question now is how well we do it. No pressure.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.

  • gggplaya
    "Shall we play a game?"
    Reply
  • Unolocogringo
    gggplaya said:
    "Shall we play a game?"
    Another Oldie like me?
    Reply
  • JamesSneed
    What if the enemies use AI's to outsmart our AI's?
    Reply
  • USAFRet
    JamesSneed said:
    What if the enemies use AI's to outsmart our AI's?
    That's where we are currently.

    Better stealth results in better radar results in better stealth results in better radar results in better missiles, etc, etc, etc.

    Nothing is static and never changing.
    It has always been thus.
    Reply
  • gggplaya
    Unolocogringo said:
    Another Oldie like me?

    Yes, but technically in the movie Captain America: The Winter Soldier, Scarlett Johansson did reference the line when she and Captain America stumbled into an old tape-reel computer room in the basement of an army training camp. Dr. Zola had uploaded himself into the computer room to become an A.I. I'm not sure how many younger people got the reference, but it was renewed.

    View: https://www.youtube.com/watch?v=rkvHzRnR6BU
    Reply
  • Sippincider
    IIRC the Cold War would've gone thermonuclear at least twice, if then-current technology had been left to determine the enemy's actions.

    Thanks but we still need humans and gut instinct.
    Reply
  • alceryes
    AI (of a sort) has already ALMOST started WWIII.
    I remember reading a long article about the 'computer' that the USSR used in the early 1980s to predict the probability of a nuclear attack. It was fed continuous data from hundreds (thousands?) of points and would calculate an attack probability from that. In late September 1983, alarms went off in a secret bunker south of Moscow saying that the probability of attack was 100% and that the US had just launched intercontinental nuclear missiles at the Soviet Union. This happened TWICE within a minute!
    A Soviet Lt. Colonel had a gut feeling that it was a 'false alarm,' told his superiors it was a false alarm, and thus we are all still alive today.

    Here's a short synopsis of the long article I read. I can't find the long version. - https://www.forbes.com/sites/kionasmith/2018/09/25/the-computer-that-almost-started-a-nuclear-war-and-the-man-who-stopped-it
    Reply
  • USAFRet
    alceryes said:
    AI (of a sort) has already ALMOST started WWIII.
    There are a couple of books outlining the (known) 'almost' incidents.

    That Russian one was probably the closest.
    Reply
  • JamesSneed
    USAFRet said:
    That's where we are currently.

    Better stealth results in better radar results in better stealth results in better radar results in better missiles, etc, etc, etc.

    Nothing is static and never changing.
    It has always been thus.

    Exactly where I see AI used for militaries going.
    Reply