
Using AI to improve flagging of internal threats within the US Army


Research published in the International Journal of Applied Decision Sciences describes how artificial intelligence could be used to root out internal threats in the U.S. Army. The research centers on the Army’s Insider Threat Hub, a facility that assesses the danger posed by individuals flagged for potentially harmful behavior. It then introduces a deep learning tool capable of significantly improving how such cases are prioritized and processed.

Insider threats differ fundamentally from external ones. Individuals with legitimate access to sensitive systems or information, whether current or former staff or contractors, can wreak havoc deliberately or entirely unintentionally. In a military context, the disruption of data can be a matter of life or death.

The U.S. Army has hundreds of thousands of personnel and a constant stream of incoming threat reports. According to the researchers, the absence of a standardized system for triaging those reports complicates efforts to identify risks, so the backlog of unresolved cases continues to grow.

The research offers a novel response to these problems: a classification model, trained on historical data from previously reviewed cases, that determines whether a given individual poses a negligible threat or a high risk. The model's output then allows staff to prioritize their efforts, handling the high-risk cases first.
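
The paper itself does not include code, but the triage step it describes amounts to ranking flagged cases by predicted risk and working from the top of the list. A minimal sketch in Python, using entirely hypothetical case IDs and scores:

```python
# Illustrative sketch only: rank flagged cases by a model's predicted risk
# score so analysts handle the highest-risk cases first. Case IDs and scores
# are hypothetical placeholders, not data from the study.
cases = [
    {"case_id": "A-101", "predicted_risk": 0.12},
    {"case_id": "A-102", "predicted_risk": 0.91},
    {"case_id": "A-103", "predicted_risk": 0.47},
]

# Highest predicted risk goes to the top of the work queue.
work_queue = sorted(cases, key=lambda c: c["predicted_risk"], reverse=True)
for case in work_queue:
    print(f"{case['case_id']}: predicted risk {case['predicted_risk']:.2f}")
```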

The system weighs known personality traits, such as impulsiveness or aggression, alongside situational indicators, such as external financial pressures or personal trauma, to judge the threat an individual might pose. It is the interplay of these elements that gives the most predictive insight.
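
The study does not disclose its model architecture, but a classifier of the kind described, a small feed-forward network over encoded behavioral and situational indicators, might be sketched as follows. All feature names, dimensions, and hyperparameters here are illustrative assumptions rather than details from the paper:

```python
# Illustrative sketch: a small binary classifier over encoded behavioral and
# situational indicators. Features, dimensions and hyperparameters are
# assumptions for demonstration, not details from the published study.
import torch
import torch.nn as nn

class InsiderRiskClassifier(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: high risk vs. negligible
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical encoded case: indicators such as impulsiveness, aggression,
# financial pressure and recent personal trauma (1.0 = present, 0.0 = absent).
example_case = torch.tensor([[1.0, 0.0, 1.0, 1.0]])
model = InsiderRiskClassifier(n_features=4)
risk_probability = torch.sigmoid(model(example_case))
print(f"Estimated probability of high risk: {risk_probability.item():.2f}")
```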

Tests of the trained model on a second set of historical data gave a detection accuracy of 96%. The system assessed the severity of most threats correctly and, where it erred, tended to slightly overestimate the risk, which is preferable to overlooking dangerous individuals.
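
A preference for overestimating rather than underestimating risk can be built into training directly, for instance by weighting the high-risk class more heavily in the loss function. The paper does not describe its loss, so the following is only an assumed illustration:

```python
# Illustrative sketch: penalise missed high-risk cases (false negatives) more
# heavily than false alarms, nudging the model toward overestimating risk.
# The weight value is an assumption chosen for demonstration only.
import torch
import torch.nn as nn

# pos_weight > 1 makes mislabelling a true high-risk case costlier than
# raising a false alarm on a negligible-risk case.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))

logits = torch.tensor([[0.2], [-1.5]])   # raw model outputs for two cases
labels = torch.tensor([[1.0], [0.0]])    # 1 = high risk, 0 = negligible
print(f"Weighted loss: {loss_fn(logits, labels).item():.3f}")
```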

More information:
Saleem Ali et al., Human and machine partnership: natural language processing of army insider threat hub data, International Journal of Applied Decision Sciences (2025). DOI: 10.1504/IJADS.2025.146569

