Reading Signs in Context to Support Perception-Based Autonomous Navigation

Period of Performance: 05/14/2003 - 08/10/2005

$748K

Phase 2 SBIR

Recipient Firm

Perceptek
12395 North Mead Way
Littleton, CO 80125
Principal Investigator

Abstract

The success of autonomous vehicles and driver assistance systems depends on their ability to interpret and operate in unknown environments with little a priori information. To succeed, systems operating in these environments must detect, recognize, and reason about textual and graphical content on signs, labels, and plaques. Our approach builds upon previous R&D to provide a real-time architecture for detecting, tracking, and recognizing signs in dynamic video imagery. Signs are detected by combining visual cues of color, shape, and text, and are then tracked to improve detection accuracy and provide resilience to occlusion and environmental effects. The orientation of each sign is determined and the imagery rectified for recognition processing, after which text and graphic symbols are extracted using a combination of optical character recognition and syntactic/semantic parsing. Our Phase II program will address areas of the system design that were not fully developed during Phase I, including feedback between processing components; fusion of information to reason about the location, size, movement, and appearance of signs; and active detection, tracking, and recognition of signs, in which sensor parameters are actively controlled to overcome difficult viewing scenarios.
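As an illustration of the detection stage described above, the sketch below fuses a color cue with a shape cue to nominate sign-like regions in a single frame. This is a minimal sketch only: the abstract names no implementation library, so OpenCV 4 (cv2) is assumed, and the red-hue thresholds, minimum area, and vertex counts are hypothetical placeholders rather than values from the program.

    # Minimal sketch, assuming OpenCV 4; thresholds are hypothetical.
    import cv2
    import numpy as np

    def detect_sign_candidates(bgr_frame):
        """Return bounding boxes of regions that look sign-like."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

        # Color cue: saturated red, typical of regulatory signs. Hue wraps
        # around 0 on OpenCV's 0-179 scale, so red needs two bands.
        mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 100, 80), (179, 255, 255))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                                np.ones((5, 5), np.uint8))

        # Shape cue: keep blobs whose simplified outline has few vertices
        # (triangle through octagon) and a plausible size.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = []
        for c in contours:
            if cv2.contourArea(c) < 400:   # hypothetical minimum readable size
                continue
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if 3 <= len(approx) <= 8:      # sign-like polygon
                boxes.append(cv2.boundingRect(c))
        return boxes

In the full pipeline, candidates like these would be associated across frames by the tracker, so a sign occluded or poorly lit in one frame can still be carried forward.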
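The rectification and recognition steps can be sketched similarly. Given the four corners of a detected sign (for example, from the polygon approximation above), a planar homography warps the region to a frontal view so that character recognition sees glyphs at a consistent scale and orientation. pytesseract stands in here for the program's recognizer and is purely an assumption; the syntactic/semantic parsing stage is not shown.

    # Minimal sketch, assuming OpenCV 4 and pytesseract as the recognizer.
    import cv2
    import numpy as np
    import pytesseract

    def rectify_and_read(bgr_frame, corners, out_size=(200, 200)):
        """Warp a quadrilateral sign region to a frontal view and run OCR."""
        src = np.array(corners, dtype=np.float32)   # 4 corners, clockwise
        w, h = out_size
        dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)

        # The planar homography removes perspective foreshortening, which
        # is the "rectification" the abstract describes.
        H = cv2.getPerspectiveTransform(src, dst)
        frontal = cv2.warpPerspective(bgr_frame, H, out_size)

        gray = cv2.cvtColor(frontal, cv2.COLOR_BGR2GRAY)
        return pytesseract.image_to_string(gray)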