System for automatic classification of rodent vocalizations

Period of Performance: 09/01/2015 - 08/31/2016

Amount: $293K

Phase 1 SBIR

Recipient Firm

Biospeech, Inc.
Portland, OR 97219
Principal Investigator

Abstract

DESCRIPTION (provided by applicant): Depression, dementia, schizophrenia, substance abuse, developmental disorders, and speech-related disabilities can feature impairments in the ability to express emotions. Developing treatments for these impairments presents a formidable challenge. In this regard, several researchers who study rodent models of mental illness have recently focused on the frequency modulations associated with rodent ultrasonic vocalizations (USVs). Measures of USVs are an attractive behavioral phenotype because they model the prosodic deficits observed in these disorders. A critical impasse for researchers is that extracting USVs from background noise is difficult and time-consuming, and the approaches used to classify these calls into categories remain arbitrary. As a result, most researchers report simple tallies of the numbers of USVs produced by their rodent models.

We propose a software system that allows the user to efficiently and effectively interrogate rodent vocalization data for prosodic content. Our collaboration brings together an expert in mouse social behavior (Lahvis) and an expert in computational assessment of human prosody (van Santen). We will take advantage of two mouse strains with well-characterized differences in sociability, C57BL/6 and BALB/c, which have been tested extensively for USVs under highly controlled social conditions. Using audio recordings of these two strains under conditions of social interaction, social solicitation, and social isolation, we will develop software that automatically segments, tracks, and classifies mouse USVs from acoustic recordings.

This software will vastly improve researcher efficiency in academic and industry-based analysis of rodent vocalizations. It will extend routine USV assessments from simple counts of call rates to analyses of the frequency modulations within each call. Further, it will allow users to classify calls according to mathematical criteria rather than the arbitrary categories currently in use. This analytical capacity is not currently available despite broad demand for software with this function. The proposed software will provide researchers with a powerful new tool for determining how genetic and pharmacological manipulations modulate vocal patterns, and it will have widespread value in both academia and industry.
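As a rough illustration of the segmentation step described above, the sketch below detects candidate USVs by thresholding band-limited spectrogram energy. This is a minimal, hypothetical example, not the proposed system's actual pipeline; the function name, frequency band, thresholds, and sampling assumptions (a single-channel recording sampled at 250 kHz or higher, so the 30-110 kHz USV band sits below Nyquist) are all illustrative.

# Minimal sketch of spectrogram-based USV segmentation.
# Assumes a single-channel WAV recording sampled at >= 250 kHz;
# all parameter values are illustrative, not the proposed system's.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def segment_usvs(wav_path, band=(30_000, 110_000), thresh_db=15.0, min_dur=0.005):
    """Return (start, end) times, in seconds, of candidate USV segments."""
    fs, audio = wavfile.read(wav_path)
    audio = audio.astype(np.float64)
    # Short-time spectrogram; at 250 kHz, a 512-sample window with 50%
    # overlap gives roughly 1 ms frame spacing.
    freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    # Restrict to the ultrasonic band where mouse calls occur.
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    power_db = 10 * np.log10(sxx[in_band].sum(axis=0) + 1e-12)
    # Flag frames whose band-limited energy exceeds the noise floor,
    # estimated here as the median frame power.
    active = power_db > np.median(power_db) + thresh_db
    # Collapse runs of consecutive active frames into call segments,
    # discarding segments shorter than min_dur.
    segments, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = times[i]
        elif not on and start is not None:
            if times[i] - start >= min_dur:
                segments.append((start, times[i]))
            start = None
    if start is not None:
        segments.append((start, times[-1]))
    return segments

A tracking stage would then follow the peak frequency across frames within each detected segment to recover its frequency contour, and the classification stage could cluster those contours by mathematical criteria rather than hand-assigned categories, in line with the goals stated above.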