Feature Representations for Enhanced Multi-Agent Navigation Strategies

Period of Performance: 06/20/2013 - 09/20/2015

$750K

Phase 2 SBIR

Recipient Firm

Systems & Technology Research
600 West Cummings Park
Woburn, MA 01801
Principal Investigator

Abstract

ABSTRACT: The U.S. military has achieved unsurpassed air superiority in recent conflicts, an advantage that is unlikely to persist as we face increasingly sophisticated adversaries. Operating within an Anti-Access/Area Denial (A2AD) environment requires highly capable platforms that can coordinate to achieve a mission objective. To support this vision, we will develop compact feature representations of scenes that enable multi-platform scene matching, geolocation, and mapping over bandwidth-constrained data links. Our approach is based on a statistical hierarchical framework that learns the representation directly from the observed data. We will develop both the statistical models and the corresponding algorithms for generating virtual maps from imagery, evaluate algorithm tradeoffs, and demonstrate real-time mapping and scene recognition from multiple agents. We will evaluate both the accuracy of scene matching and the computational complexity. We will also evaluate how interpretable the scene representations are to humans, in order to support human-in-the-loop control and processing.

BENEFIT: The military is increasingly relying on autonomous agents to keep personnel safe while achieving critical missions. Today, the military uses robots to detect and neutralize IEDs, and the Air Force is initiating research in autonomous weapons platforms. The technology we are developing is directly aimed at supporting these missions. In the near term, our technology is most applicable to mapping systems, for example, platforms that are mapping a region of interest. It will also have commercial applications in emergency search and rescue, remote exploration, and other robotic domains.
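To make the idea of a compact, transmittable scene representation concrete, the sketch below shows one simple stand-in: a bag-of-visual-words descriptor whose codebook is learned from observed feature data, with cross-platform scene matching by cosine similarity. This is not the hierarchical statistical framework described above; the function names, dimensions, codebook size, and synthetic data are all assumptions chosen only to give a minimal, runnable illustration of why a short learned descriptor is enough to match scenes over a bandwidth-constrained link.

```python
# Illustrative sketch only: a learned codebook + histogram ("bag of visual words")
# scene descriptor, with matching by cosine similarity between two agents.
# All names, sizes, and data here are assumptions for the example, not the
# program's actual statistical hierarchical model.

import numpy as np


def learn_codebook(local_descriptors: np.ndarray, k: int = 64, iters: int = 20,
                   seed: int = 0) -> np.ndarray:
    """Learn a k-word codebook from local feature descriptors via plain k-means."""
    rng = np.random.default_rng(seed)
    centers = local_descriptors[rng.choice(len(local_descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest codeword.
        dists = np.linalg.norm(local_descriptors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update each codeword as the mean of its assigned descriptors.
        for j in range(k):
            members = local_descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers


def scene_descriptor(local_descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Compact per-scene representation: an L2-normalized codeword histogram.
    Only this k-dimensional vector would need to cross the data link."""
    dists = np.linalg.norm(local_descriptors[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)


def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Cosine similarity between two agents' scene descriptors (1.0 = identical)."""
    return float(desc_a @ desc_b)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic stand-in for local image features (e.g., 128-D patch descriptors):
    # samples drawn around 64 latent "visual word" centers.
    true_words = 5.0 * rng.normal(size=(64, 128))

    def sample_scene(word_ids, n=300):
        ids = rng.choice(word_ids, size=n)
        return true_words[ids] + 0.5 * rng.normal(size=(n, 128))

    training = sample_scene(np.arange(64), n=1000)
    codebook = learn_codebook(training, k=64)

    scene_a = sample_scene(np.arange(0, 8))    # agent A's view of scene 1
    scene_b = sample_scene(np.arange(0, 8))    # agent B's view of the same scene
    scene_c = sample_scene(np.arange(30, 38))  # an unrelated scene
    da, db, dc = (scene_descriptor(s, codebook) for s in (scene_a, scene_b, scene_c))

    print("same scene, two agents:", round(match_score(da, db), 3))
    print("different scenes      :", round(match_score(da, dc), 3))
```

In this toy setup each agent transmits only a 64-dimensional histogram rather than raw imagery or full feature sets, which is the bandwidth argument in miniature; the program's learned hierarchical representation plays the role of the codebook and histogram here, presumably with far richer structure.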