Weapon System Operator Multi-Media Tactical Operation Aids

Period of Performance: 02/13/2003 - 08/13/2003

$69.7K

Phase I SBIR

Recipient Firm

Technology Engineering Research, Inc.
16 Wildhedge Lane
Holmdel, NJ 07733
Principal Investigator

Abstract

TERI will develop and demonstrate the combined use of voice commands, audio response, hands-free pointing/clicking, a Multi-Media Tactical Operation Aid (MTOA), and cueing as an effective and efficient multi-media interface. Innovative technologies have recently been developed and matured that can now support the implementation of a multi-media "smart" interface tactical aid in typically noisy, mobile, shipboard and airborne tactical environments. The combined multi-media enhancements to the E-2C ACIS-controlled weapon system have the potential to significantly reduce operator workload while increasing the Weapon System Operator's (WSO's) ability to react, and to respond proactively, to potentially hazardous situations. Projective task analysis, driven by a decision-support activity scenario, will be conducted for various combinations of interface technologies to derive an effective and feasible conceptual design for an MTOA integrated with natural language. Quantitative performance and qualitative human-interaction analyses will be documented and provided with the conceptual design. A demonstration of the integrated technologies for typical combined speech and MTOA input activities is planned. Lightweight, low-power, commercially available eye-tracking sensors can be integrated with an E-2C ACIS simulator to study the combined benefit for interaction in a vibration-and-motion environment. The Technology Engineering Research, Inc. (TERI) natural-language speech recognition and synthesis dialogue contextual software, previously developed for the E-2C high-noise environment, will be combined with eye tracking and a facial-gesture detector (optical or neuromuscular) to improve the total hands-free control interface between the WSO and the E-2C ACIS platform display. The use of a multiple-controller approach enables the interaction to be tailored to the E-2C task and environmental constraints, as well as user preferences.
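The multiple-controller approach described above can be illustrated with a minimal event-fusion sketch. This is an illustrative assumption, not the proposal's implementation: it assumes gaze fixations supply the pointing channel, spoken verbs supply commands, and facial gestures supply hands-free "clicks," with each channel independently replaceable to suit task, environment, or operator preference. All class and event names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical events from each hands-free controller channel.
@dataclass
class GazeFix:
    x: float  # screen coordinates of the current gaze fixation
    y: float

@dataclass
class SpokenCommand:
    verb: str  # e.g. "select", "hook", "zoom"

@dataclass
class Gesture:
    kind: str  # e.g. "blink" or "jaw_clench" acting as a click

class MultiModalFuser:
    """Fuse gaze (pointing), speech (command verbs), and facial
    gestures (clicks) into single display actions.  Any channel can
    be swapped out, tailoring interaction to task constraints."""

    def __init__(self) -> None:
        self.last_gaze: Optional[GazeFix] = None

    def on_gaze(self, fix: GazeFix) -> None:
        # Pointing channel: remember where the operator is looking.
        self.last_gaze = fix

    def on_speech(self, cmd: SpokenCommand) -> Optional[Tuple[str, float, float]]:
        # A spoken verb applies at the current gaze point.
        if self.last_gaze is not None:
            return (cmd.verb, self.last_gaze.x, self.last_gaze.y)
        return None  # no fixation yet: command cannot be grounded

    def on_gesture(self, g: Gesture) -> Optional[Tuple[str, float, float]]:
        # A facial gesture acts as a hands-free "click" at the gaze point.
        if g.kind in ("blink", "jaw_clench") and self.last_gaze is not None:
            return ("click", self.last_gaze.x, self.last_gaze.y)
        return None
```

For example, a gaze fixation at (100, 200) followed by the spoken verb "select" would yield the action `("select", 100.0, 200.0)`; the same fixation followed by a blink gesture yields `("click", 100.0, 200.0)`.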
A natural-language, software-based speech recognizer, in combination with eye-tracking, facial-gesture, brain-wave, and neuromuscular sensor technologies, will be configured in a variety of ways to provide the required functionality. Unique to this proposal is the development of an integrated suite of human-computer control technologies using standard Application Program Interfaces (APIs) in commercial Windows and Unix environments, compatible with platform legacy architectures, for efficient hands-free operation of computer systems. Also unique to this proposal is visual gesture-cue processing that places a normal face-contour model into a hierarchical, probability-based framework capable of operating in a vibration environment. This decomposes the complex head shape into two simple layers: a global shape, with descriptors for the position, scale, and rotation of local shapes (eyes, eyebrows, mouth, chin); and the local shapes themselves, with salient descriptors for the motion of those features. Visual perception will be cooperative and mutually supported by these shapes and movements, allowing independent interpretation of operator-desired point, click, and selection commands. Numerous applications exist in air traffic control, unmanned-vehicle command, industrial production monitoring, power-plant control and distribution, entertainment, anti-terrorist screening, law enforcement, and e-commerce vending. Hands-free control suites for computers have wide military and commercial applications in portable and mobile environments, and wherever it is prudent to replace the traditional keyboard, mouse, and trackball because of an unstable or unsuitable workplace.
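The two-layer face decomposition described above can be sketched as a simple data structure: a global shape carries position/scale/rotation descriptors that place each local shape (eyes, eyebrows, mouth, chin) on the head contour, so local feature motion can be interpreted independently of whole-head movement such as airframe vibration. This is a minimal sketch under those assumptions; the names, fields, and the similarity transform used here are illustrative, not the proposal's probabilistic framework.

```python
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LocalShape:
    # Salient landmarks of one facial feature, in the feature's
    # own local coordinate frame (so only feature motion varies).
    name: str
    points: List[Tuple[float, float]]

@dataclass
class GlobalShape:
    # Global descriptors: where the head (and its features) sit
    # in the image, at what scale, and at what in-plane rotation.
    position: Tuple[float, float]
    scale: float
    rotation: float  # radians
    locals_: Dict[str, LocalShape] = field(default_factory=dict)

    def to_image(self, name: str) -> List[Tuple[float, float]]:
        """Map a local shape's landmarks into image coordinates via
        the global position/scale/rotation.  Head motion (vibration)
        changes only the global layer; feature motion (blink, mouth
        opening) changes only the local layer."""
        shape = self.locals_[name]
        c, s = math.cos(self.rotation), math.sin(self.rotation)
        px, py = self.position
        return [
            (px + self.scale * (c * x - s * y),
             py + self.scale * (s * x + c * y))
            for x, y in shape.points
        ]
```

With zero rotation, position (10, 10), and scale 2, a mouth landmark at local (1, 0) maps to image point (12, 10); shifting only `position` models vibration without altering the mouth's local descriptors.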