Validation of Automatic Ground Moving Target Indicator Exploitation Algorithms

Period of Performance: 07/18/2013 - 04/17/2014

$150K

Phase 1 SBIR

Recipient Firm

Black River Systems Co., Inc.
162 Genesee Street
Utica, NY 13502
Principal Investigator

Abstract

ABSTRACT: Today's analysts have an exceptional amount of GMTI intelligence readily available for servicing Requests for Information (RFIs), owing to the growing number of GMTI collection systems and multiple forensic data archives; however, only a small fraction of this GMTI is exploited because analysts lack trust in the automated GMTI exploitation algorithms capable of deeper analysis. Analysts need confidence in automated tools before adopting them into their workflow; these GMTI exploitation algorithms must therefore be sufficiently validated. Black River will develop a GMTI Algorithm Validation System (GAVS) that ensures the exploitation tools delivered to analysts produce high-quality, high-confidence data products. The GAVS will use a Design of Experiments process to provide statistically significant validation within reasonable cost constraints and to accommodate the evaluation of a diverse set of exploitation algorithm goals, including target tracking, milling activity detection, and other Activity Based Intelligence analysis. Additionally, Black River will promote algorithm acceptance by using analysts' own RFI responses as ground truth in the validation process to overcome the shortcomings of simulated data, and will use a Relevance Vector Machine to identify the problem characteristics under which a GMTI exploitation algorithm performs well.

BENEFIT: Black River's proposed revolutionary approach to GMTI exploitation algorithm validation, which employs real-world data in statistically significant algorithm testing and evaluation, will considerably increase analysts' confidence in automated tools while cutting overall evaluation costs. Additionally, the GMTI Algorithm Validation System will indicate the operating conditions under which an algorithm performs well or poorly, promoting analyst understanding, usage, and trust of automated algorithms and directing developers to areas for algorithmic improvement.
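The Design of Experiments step mentioned above can be illustrated with a minimal sketch. The factor names, levels, and evaluation budget below are illustrative assumptions, not part of the proposal; the sketch only shows how a structured test matrix over GMTI operating conditions might be generated and subsampled to stay within a fixed number of evaluation runs.

"""Illustrative Design of Experiments sketch (assumptions only, not the proposal's design).

Builds a full-factorial test matrix over hypothetical GMTI operating-condition
factors, then subsamples it to fit a fixed evaluation budget.
"""
import itertools
import random

# Hypothetical operating-condition factors for a GMTI exploitation test.
FACTORS = {
    "target_density": ["sparse", "moderate", "dense"],
    "target_speed": ["slow", "medium", "fast"],
    "terrain": ["open", "urban", "forested"],
    "revisit_rate_s": [5, 10, 30],
}

def full_factorial(factors):
    """Enumerate every combination of factor levels (full-factorial design)."""
    names = list(factors)
    for levels in itertools.product(*(factors[n] for n in names)):
        yield dict(zip(names, levels))

def budgeted_design(factors, budget, seed=0):
    """Randomly subsample the full-factorial design to a fixed number of
    evaluation runs (a crude stand-in for a fractional factorial design)."""
    runs = list(full_factorial(factors))
    random.Random(seed).shuffle(runs)
    return runs[:budget]

if __name__ == "__main__":
    design = budgeted_design(FACTORS, budget=20)
    for i, run in enumerate(design, 1):
        print(f"run {i:02d}: {run}")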
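The Relevance Vector Machine step, which relates operating conditions to regions of good and poor algorithm performance, can likewise be sketched. The sketch below swaps in scikit-learn's ARDRegression as a stand-in for a full RVM (both rely on automatic relevance determination; the RVM additionally works in a kernel basis), and all feature names and data are synthetic assumptions for illustration only.

"""Illustrative sketch: identifying which operating conditions drive algorithm
performance with a sparse Bayesian (ARD) model, as a stand-in for an RVM."""
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)

# Hypothetical operating-condition features for each evaluation run:
# [target_density, mean_speed_mps, clutter_level, revisit_rate_s]
X = rng.uniform(0.0, 1.0, size=(200, 4))

# Synthetic performance score (e.g., track purity): here it depends mostly
# on clutter_level and revisit_rate_s, plus noise.
y = 0.9 - 0.5 * X[:, 2] - 0.3 * X[:, 3] + 0.05 * rng.standard_normal(200)

model = ARDRegression()
model.fit(X, y)

feature_names = ["target_density", "mean_speed_mps",
                 "clutter_level", "revisit_rate_s"]
for name, coef in zip(feature_names, model.coef_):
    print(f"{name:>16s}: {coef:+.3f}")

# Features whose coefficients are driven toward zero by the ARD prior are
# pruned, leaving the operating conditions most relevant to performance.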