Voice recognition applications do not understand black voices as reliably as they understand white ones.

The Proceedings of the National Academy of Sciences of the United States of America (PNAS) recently published a study, “Racial disparities in automated speech recognition,” which details how voice recognition apps register black voices differently.

“Automated speech recognition (ASR) systems are now used in a variety of applications to convert spoken language to text, from virtual assistants, to closed captioning, to hands-free computing. By analyzing a large corpus of sociolinguistic interviews with white and African American speakers, we demonstrate large racial disparities in the performance of five popular commercial ASR systems,” a summary of the report states. “Our results point to hurdles faced by African Americans in using increasingly widespread tools driven by speech recognition technology. More generally, our work illustrates the need to audit emerging machine-learning systems to ensure they are broadly inclusive.”

If you’re wondering why companies like Apple, Microsoft, Google and Amazon have built ASR systems that recognize the inflections of white speech so much better than those of black speech, you’re not alone.

John Rickford, a Stanford researcher who contributed to the “Racial disparities in automated speech recognition” study, echoed the point that these major companies are missing something.

“Here are probably the five biggest companies doing speech recognition, and they are all making the same kind of mistake,” Rickford said in a New York Times report. “The assumption is that all ethnic groups are well represented by these companies. But they are not.”

Many people have experienced the struggles that come with letting an app translate speech into an action or text, whether in home appliances, in-car systems or digital dictation. However, the recent study shows that black speakers encounter roughly twice as many errors as their white counterparts using the same automated speech recognition services.
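That “twice as many” figure refers to word error rate (WER), the standard yardstick for ASR accuracy: the share of words in a correct reference transcript that a system substitutes, deletes or inserts. The PNAS study reports an average WER of 0.35 for black speakers versus 0.19 for white speakers across the five systems it tested. For readers curious how the metric works, here is a minimal Python sketch of a WER calculation; it is an illustration, not the study’s code, and the example sentences are invented.

```python
# Word error rate (WER): word-level edit distance between a reference
# transcript and an ASR hypothesis, normalized by the reference length.
# Illustrative sketch only; not the PNAS study's implementation.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words: substitutions,
    # insertions and deletions each cost 1.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two substituted words out of five -> WER of 0.4.
print(wer("the weather is nice today", "the weather is not steady"))
```

On this scale, the study’s averages of 0.35 versus 0.19 mean that roughly one word in three is mistranscribed for black speakers, against roughly one in five for white speakers.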

Last fall, Eric Schmidt, the former Google chief executive and chairman, acknowledged the flaws in these artificial intelligence systems.

“We know the data has bias in it. You don’t need to yell that as a new fact,” he said in a speech. “Humans have bias in them, our systems have bias in them. The question is: What do we do about it?”

According to the research, the answer could be increasing the number of black speakers whose voices are used to train the ASR systems at these major companies.

“Our findings indicate that the racial disparities we see arise primarily from a performance gap in the acoustic models, suggesting that the systems are confused by the phonological, phonetic, or prosodic characteristics of African American Vernacular English rather than the grammatical or lexical characteristics,” the PNAS study states. “The likely cause of this shortcoming is insufficient audio data from black speakers when training the models.”
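To make the “insufficient audio data” point concrete, here is a hypothetical sketch of how a team might surface a group imbalance in a training manifest and naively rebalance it by oversampling. Every detail here, the manifest format, file names and group labels, is invented for illustration; production ASR training pipelines are far more involved.

```python
# Hypothetical sketch: check group balance in an ASR training manifest
# and oversample the underrepresented group. All data below is invented.

import random
from collections import Counter

manifest = [
    {"audio": "clip_001.wav", "group": "white"},
    {"audio": "clip_002.wav", "group": "white"},
    {"audio": "clip_003.wav", "group": "white"},
    {"audio": "clip_004.wav", "group": "black"},
]

counts = Counter(item["group"] for item in manifest)
print(counts)  # Counter({'white': 3, 'black': 1})

# Naive rebalancing: resample each smaller group up to the largest one.
target = max(counts.values())
balanced = list(manifest)
for group, n in counts.items():
    pool = [item for item in manifest if item["group"] == group]
    balanced += random.choices(pool, k=target - n)

print(Counter(item["group"] for item in balanced))  # equal group counts
```

Duplicating clips is of course no substitute for collecting more diverse recordings, which is what the researchers actually call for, but it shows how easily such an imbalance can be measured.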
