Researchers from the University of Rochester will describe their work at the IEEE Workshop on Spoken Language Technology this week in Miami.
"We actually used recordings of actors reading out the date of the month -- it really doesn't matter what they say; it's how they're saying it that we're interested in," said Wendi Heinzelman, professor of electrical and computer engineering, in a statement. The program designed by the engineers analyzes 12 features of speech, including pitch and volume, to identify emotional states such as sadness, happiness, fear and disgust.
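The researchers have not published their code, and the article names only two of the 12 features. Purely as an illustration of the kind of measurement involved, here is a minimal Python sketch (an assumption, not the team's implementation) of those two features: volume, approximated by root-mean-square energy, and pitch, crudely estimated from zero crossings, computed over a synthetic tone:

```python
import math

def rms_volume(samples):
    # Root-mean-square energy: a common proxy for perceived loudness.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pitch_estimate(samples, sample_rate):
    # Crude pitch estimate from zero crossings: a pure tone of
    # frequency f crosses zero about 2*f times per second.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Synthetic 220 Hz tone at half amplitude, one second at 8 kHz.
rate = 8000
tone = [0.5 * math.sin(2 * math.pi * 220 * t / rate) for t in range(rate)]

print(round(rms_volume(tone), 3))         # ~0.354 (0.5 / sqrt(2))
print(round(pitch_estimate(tone, rate)))  # ~220 Hz
```

A real classifier would track how such features vary over time in recorded speech and feed them to a trained model; this sketch only shows what "pitch" and "volume" look like as numbers.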
In tests so far, the program has proven 81% accurate. It has been developed into a prototype app that displays a happy or sad face after recording and analyzing a user's voice.
It could someday be used for everything from adjusting the colors displayed on a phone to launching music that fits the user's mood. Heinzelman and her Bridge Project team are also working with psychologists at the university to explore issues such as parent-teenager relations.
The program was built by Na Yang, one of Heinzelman's graduate students, during a stint at Microsoft Research.