Automatic Spatial-Temporal Registration and Fusion of Multi-Modal Sensors for Networked Scouting Vehicles

Period of Performance: 11/03/2006 - 05/03/2007


Phase 1 SBIR

Recipient Firm

Mrlets Technologies, Inc.
616 Brookmeade Ct.
Beavercreek, OH 45434
Principal Investigator


The objective of this project is to design and demonstrate the feasibility of an innovative automatic spatial-temporal registration and fusion scheme for the multi-modal sensors of networked scouting vehicles, enabling enhanced situational awareness for homeland defense and security. The system comprises two subsystems: navigation and situation awareness. The navigation subsystem uses multi-antenna GPS, a MEMS IMU, and a magnetometer; the situation awareness subsystem uses active and passive radar together with imaging sensors (CCD camera, IR, FLIR, EO, LIDAR). Each sensor has its own advantages and disadvantages, and none can meet the system requirements alone. Sensor fusion is therefore essential to exploiting the combined capabilities of these multi-modal sensors; however, the sensors carry systematic biases that produce unacceptable errors if left unregistered.

This proposal presents an automatic spatial-temporal registration and fusion approach for multi-modal sensors. Spatial-temporal system bias models of the sensors are derived, and advanced filters such as unscented Kalman filters and particle filters implement both centralized and distributed registration and fusion algorithms. In open, low-jamming environments, multi-antenna GPS receivers provide a good reference for calibrating the other sensors. When GPS is degraded by severe jamming or blockage, fusion of the MEMS IMU, magnetometer, and vision provides continuous, accurate navigation, with the severe drift of the MEMS IMU calibrated by the magnetometer and by vision navigation.

On the vision side, a real-time spatial-pyramid Kanade-Lucas-Tomasi (KLT) feature tracker tracks features and registers video sequences, mutual information registers the multi-modal imaging sensors, and real-time motion-tuned continuous wavelet transform object tracking (CWTOT) detects objects in the video sequences.
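The mutual-information criterion used here for multi-modal image registration can be illustrated with a minimal NumPy sketch (the function name and test images below are ours for illustration, not from the proposal). Registration then searches over candidate transforms (shifts, rotations) for the alignment that maximizes MI between the two images:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two same-sized images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability table
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
base = rng.random((64, 64))
same_scene = 2.0 * base + 1.0    # different "modality", same structure
other_scene = rng.random((64, 64))

# MI is high for images of the same scene even under an intensity
# remapping, and low for unrelated images -- the property that makes
# it suitable for multi-modal (e.g., IR vs. CCD) registration.
assert mutual_information(base, same_scene) > mutual_information(base, other_scene)
```

Because MI depends only on the statistical dependence of intensities, not their absolute values, it tolerates the very different intensity responses of IR, EO, and CCD sensors.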
The tracked features either form the measurement equations of the navigation filter or are mapped to a GIS model database to estimate the platform pose directly. The resulting technique provides continuous, robust navigation and situation awareness in diverse environments.
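The bias-registration idea behind the navigation filter can be sketched with a toy linear Kalman filter (a simplification of ours, not the proposal's unscented Kalman or particle filter) that fuses a drifting gyro prediction with magnetometer heading fixes; carrying the gyro bias in the state lets the filter estimate and remove it, as in the IMU-drift calibration described above. All models and numbers are illustrative:

```python
import numpy as np

# State x = [heading, gyro_bias]; the magnetometer observes heading only.
dt = 0.1
F = np.array([[1.0, -dt], [0.0, 1.0]])   # integrated bias subtracts from heading
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-6])                # process noise (heading, bias)
R = np.array([[0.01]])                   # magnetometer noise variance

rng = np.random.default_rng(1)
true_bias = 0.05                          # rad/s uncalibrated gyro bias
x, P = np.zeros(2), np.eye(2)
heading = 0.0
for _ in range(500):
    rate = 0.2                            # true turn rate
    heading += rate * dt
    gyro = rate + true_bias               # biased gyro reading
    # Predict: integrate the gyro, propagate covariance.
    x = F @ x + np.array([gyro * dt, 0.0])
    P = F @ P @ F.T + Q
    # Update with a noisy magnetometer heading fix.
    z = heading + rng.normal(0.0, 0.1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
# x[1] now holds the estimated gyro bias.
```

The same pattern generalizes to the full spatial-temporal bias models: each sensor's bias is appended to the state vector, and the fusion filter estimates the biases jointly with the navigation solution.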