Abu Dhabi, UAE | Wednesday 21 November 2018

Not all accents are boss (great) when it comes to your voice-activated assistant

There are fears artificial intelligence could drive us all to speak the same and drop our regional accents and dialects
A customer tries the Siri voice recognition function on an iPhone 6 Plus in Hong Kong. Jerome Favre / Bloomberg

I’m an early adopter of all things tech. In the late 1990s, while browsing in a computer hardware shop, I came across a transcription application called Dragon Dictate.

The picture on the box showed a smiling young executive speaking into a headset while his words magically appeared on the computer screen. I had to buy it. I arrived home, tore open the box, set it up and began speaking to my computer.

The results ranged from dismal to absurd. Almost every word I spoke appeared on screen misspelt or was not the word I had intended.

I’m from Liverpool in the northwest of England and the speech recognition software spluttered under the weight of my regional accent.

Twenty years later, the technology has certainly improved, but a recent survey undertaken by researchers at Newcastle University in the UK suggests that four in five of us still have to adjust the way we speak so that speech recognition applications like Siri and Alexa can understand us.

The study in question was undertaken among native English speakers with regional accents, but it would be interesting to know how speakers of other languages fare. How accommodating is Arabic Siri of diverse Arabic dialects and accents, and how useful is English Siri when bilinguals bark commands at her in foreign accents?

At present, Arabic Siri is tailored to users in the UAE and Saudi Arabia but understands some modern standard Arabic terms. How Arabic speakers from other nations with different dialects get on is unknown, and even within the UAE and Saudi Arabia, accents and dialects can vary.

As speech recognition-driven applications become a more prominent part of our daily lives, one fear is that we will start to lose our regional accents altogether. Regularly adapting our speech to accommodate machines, or people who don’t share our accent, is likely to modify our pronunciation and word choice.

People who have lived in the UAE long enough might testify to moderating their pronunciation to aid the comprehension of the nation’s cosmopolitan residents. After a few years, such linguistic accommodation can start to take the edge off a regional accent. It certainly has done so with mine.

Some psycholinguists and speech scientists, however, are sceptical about whether speaking to smartphones could lead to accent loss or modification, given that we tend to spend far more time talking to each other than we do to technology.

However, that balance could very well shift in the coming decades as speech-activated artificial intelligence encroaches on more aspects of daily life, from the robo-driver to the robo-barista, robo-waiter and robo-salesperson.

Another possibility, however, is that the machine learning technology associated with speech recognition applications becomes more sophisticated and sensitive to regional variations in word pronunciation and word choice. Siri currently supports 21 languages, including Arabic, while Alexa supports three and Google Assistant can simultaneously interpret bilingual commands from its 11 languages.


But it is estimated that the world is losing languages at the rate of about one every 12 weeks. Over the past century, approximately 400 languages have become extinct and linguists project that between 50 and 90 per cent of the world’s remaining 6,500 tongues could vanish by the end of the 21st century. Might we also start losing accents and dialects too?

Some people – my old English teacher for example – would be happy to see regional accents vanish forever. People with regional accents also sometimes pay money for elocution lessons in an attempt to rid themselves of unwanted pronunciation patterns. This unenthusiastic attitude towards language diversity is generally motivated by negative stereotypes associated with certain accents.

Classic psycholinguistic research looking at British accents, for example, tends to report a hierarchy of accent prestige. At the top of this prestige scale is what is known as RP, or received pronunciation (otherwise known as the Queen’s English). Regional accents occupy a middle ground, with the accents of industrial towns traditionally at the bottom. Under experimental conditions, people tend to judge those speaking with received pronunciation as being more intelligent and confident while those with regional accents are judged to be more sincere and more kindhearted. Of course, these are stereotypes, not based in fact.

I love diversity; if nothing else, it makes the world a more interesting place. I would hate to see diverse regional accents obliterated by the rise of the robots. It would be a tragedy to see khaleeji Arabic’s distinctive “ch” and “sh” sounds replaced by the hard “k” sound of other Arabic dialects and accents, and vice versa. Siri and her like should know when fajr (dawn prayers) takes place, whether users say fajr or fayer (an alternative pronunciation).

Good technology adapts to us; we do not adapt to it. That would be a classic case of the tail wagging the dog.

Dr Justin Thomas is professor of psychology at Zayed University