Defense official reveals how AI chatbots could be used for targeting decisions

US Military Advances AI Integration for Targeting Decisions

In a candid disclosure at a recent defense conference, a senior US Air Force official outlined plans to incorporate AI chatbots directly into the military's targeting decision-making process. Schuyler Moore, principal deputy chief of staff for intelligence, surveillance, reconnaissance, and cyber effects operations, made the remarks during a panel at the Armed Forces Communications and Electronics Association (AFCEA) International Cyber Symposium in March 2026. The announcement signals a pivotal shift toward automating elements of the kill chain, the sequence of steps from target identification to engagement.

Moore's statement underscores the Pentagon's accelerating embrace of generative AI technologies amid ongoing geopolitical tensions. The kill chain traditionally involves human analysts sifting through vast datasets from satellites, drones, and sensors to nominate targets, followed by commanders' approval before strikes. Moore described AI chatbots as tools that will assist in this pipeline, potentially querying large language models (LLMs) to refine target recommendations or prioritize threats in real time.

This initiative builds on existing programs like Project Maven, the Department's flagship AI effort launched in 2017. Maven employs machine learning to analyze full-motion video from drones, flagging objects of interest for human review. Moore highlighted how chatbots represent the next evolution, enabling natural language interactions with these systems. For instance, an operator might ask an AI, “What are the highest priority targets in this sector based on current intelligence?” and receive synthesized responses drawing from classified databases.
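In principle, a query like the one above would sit on top of a structured intelligence store, with the chatbot synthesizing an answer from ranked records. The sketch below is purely illustrative; every type, field, and data value is hypothetical and does not describe any fielded system.

```python
from dataclasses import dataclass

@dataclass
class IntelRecord:
    """A hypothetical, heavily simplified intelligence record."""
    target_id: str
    sector: str
    threat_score: float  # 0.0-1.0; higher means more urgent (illustrative)
    source: str          # e.g. "EO imagery", "SIGINT"

def top_targets(records, sector, k=3):
    """Return the k highest-priority records for a sector --
    the kind of ranking a chatbot front end might summarize in prose."""
    in_sector = [r for r in records if r.sector == sector]
    return sorted(in_sector, key=lambda r: r.threat_score, reverse=True)[:k]

# Notional data only.
records = [
    IntelRecord("T-101", "north", 0.92, "SIGINT"),
    IntelRecord("T-102", "north", 0.40, "EO imagery"),
    IntelRecord("T-103", "south", 0.75, "SIGINT"),
]

for r in top_targets(records, "north", k=2):
    print(f"{r.target_id}: score {r.threat_score} ({r.source})")
```

The point of the sketch is the division of labor: deterministic retrieval and ranking happen in ordinary code, while the language model's role is limited to translating the question and summarizing the result.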

The Air Force is not alone in this pursuit. The Defense Innovation Unit (DIU) and the Joint Artificial Intelligence Center (JAIC) have been prototyping similar capabilities. Moore referenced the Replicator program, a $1 billion initiative to deploy thousands of attritable autonomous systems by August 2025. These swarms of drones and decoys will rely on AI for coordination, with chatbots potentially serving as interfaces for human oversight.

Technical underpinnings involve fine-tuned LLMs integrated with military-specific data. Vendors like Palantir, Anduril, and Scale AI have secured contracts to adapt commercial models for defense use. These systems ingest multimodal data, including electro-optical imagery, signals intelligence, and open-source information, to generate probabilistic assessments. Moore emphasized safeguards, such as human-in-the-loop protocols, where AI outputs require validation before action. “Chatbots will not pull triggers,” she clarified, “but they will inform decisions faster than ever.”
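The human-in-the-loop protocol Moore describes can be pictured as a gate between an AI recommendation and any action: the model proposes, and nothing proceeds until a named operator explicitly approves. A minimal sketch of that gate, using hypothetical types rather than any real Pentagon implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """An AI-generated recommendation awaiting human review (notional)."""
    target_id: str
    confidence: float                # model's probabilistic assessment
    approved_by: Optional[str] = None

    def approve(self, operator: str) -> None:
        """Record the human validation required before any action."""
        self.approved_by = operator

    def actionable(self) -> bool:
        """Only human-approved recommendations may proceed."""
        return self.approved_by is not None

rec = Recommendation("T-101", confidence=0.92)
assert not rec.actionable()      # AI output alone never triggers action
rec.approve("operator_smith")
assert rec.actionable()
```

The design choice worth noting is that approval is a separate, auditable field rather than a confidence threshold: no model score, however high, can make a recommendation actionable on its own.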

Challenges abound. Adversarial robustness remains a concern; AI models can be tricked by data poisoning or electronic warfare. Ethical dilemmas also loom large. Critics, including arms control advocates, warn that overreliance on AI could lower barriers to lethal force, echoing debates around autonomous weapons. The Department's Directive 3000.09 on Autonomy in Weapon Systems mandates senior review for new capabilities, yet rapid iteration pressures timelines.

Moore addressed these head-on, noting rigorous testing under the DoD's Responsible AI guidelines. These include bias audits, explainability requirements, and red-teaming exercises. The Air Force is developing a Common AI Framework to standardize deployments across services. Integration with Joint All-Domain Command and Control (JADC2) will further embed chatbots into networked operations, fusing data from air, land, sea, space, and cyber domains.

Budgetary support reflects commitment. The fiscal 2026 defense budget requests $1.8 billion for AI and machine learning, up 10 percent from the prior year. Congress has signaled approval, with bipartisan backing for countering peer competitors like China, whose PLA is aggressively pursuing AI-enabled warfare.

Real-world testing is underway. In exercises like Project Convergence, AI prototypes have demonstrated target nomination speeds 100 times faster than manual processes. Moore cited a scenario where chatbots processed hypersonic missile threats, recommending intercepts within seconds.

This trajectory positions AI chatbots as force multipliers, compressing the observe-orient-decide-act loop. Yet, Moore cautioned that full maturity lies years ahead. Interoperability with legacy systems, data silos, and talent shortages pose hurdles. The Air Force aims to train 10,000 AI-literate personnel by 2030.

As the military hurtles toward AI ubiquity, Moore's revelation crystallizes a new era. Targeting decisions, once the sole purview of seasoned operators, now blend human judgment with silicon cognition. The implications extend beyond battlefields, reshaping deterrence, escalation dynamics, and international norms.