One of the most striking findings from this year’s Accenture UK Financial Services Customer Survey is that, for financial services providers, the barrier to entry for using virtual assistants has never been lower. Customers are increasingly open to receiving advice and services in a way that’s entirely computer-generated – and anticipate benefits including higher speed and impartiality, along with lower costs.

Given such responses, it’s hardly surprising that artificial intelligence (AI)-enabled “bots” are now becoming a familiar part of UK consumers’ everyday experience of online financial services. But this very omnipresence creates risks of its own – not least the potential for instant, far-reaching brand damage.

To minimise this danger, we need to ensure that the virtual world we are creating reflects the standards of ethics and objectivity that we hold ourselves to in our human interactions. That means designing with the specific purpose of avoiding the perpetuation of stereotypes or bias.

No more “Press 1 for…”?

To explain why this is so important, let me start with some context. In recent years, customer-to-provider interactions have evolved dramatically, ending the days of interminable interactive voice response (IVR) options and painfully long forms. Innovations in AI and natural language processing (NLP) mean machines can now understand and participate in conversational exchanges, enabling us to escape from the closed world of menu options to an open landscape of agile and frictionless engagement.
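To make that contrast concrete, here is a minimal, purely illustrative sketch of the difference between a fixed IVR-style menu and a free-text, intent-driven exchange. It is not based on any particular vendor’s platform, and the intents and keyword rules are hypothetical – real conversational AI uses trained NLP models rather than keyword matching.

```python
# Illustrative toy example: a fixed IVR-style menu versus a free-text,
# intent-driven exchange. All intents and keywords here are hypothetical;
# production systems would use a trained NLP intent classifier instead.

IVR_MENU = {
    "1": "Check your balance",
    "2": "Report a lost card",
    "3": "Speak to an adviser",
}

# Hypothetical keyword rules standing in for an NLP intent model.
INTENT_KEYWORDS = {
    "check_balance": ["balance", "how much", "funds"],
    "lost_card": ["lost", "stolen", "card"],
    "speak_to_adviser": ["adviser", "human", "person"],
}


def ivr_route(key_press: str) -> str:
    """Old world: the customer must map their need onto a fixed menu."""
    return IVR_MENU.get(key_press, "Sorry, that is not a valid option.")


def classify_intent(utterance: str) -> str:
    """New world: the customer states their need in their own words."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"  # hand off to a human when the bot is unsure


if __name__ == "__main__":
    print(ivr_route("2"))                                 # Report a lost card
    print(classify_intent("I think I've lost my card"))   # lost_card
```

Even in this toy form, the design choices are visible: which intents exist, what language the bot recognises, and when it defers to a human are all decisions the provider makes – and, as discussed below, they say something about the organisation behind the bot.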

Similar advances are underway in other areas of customers’ personal lives, as they use an expanding range of digital assistants – Alexa, Google Home, Siri, Cortana and more – to do everything from checking the weather to ordering a pizza. And in the workplace, more and more people are using enterprise applications such as Slack that have either built-in assistants or chatbot capabilities. These daily experiences of AI at home and at work are raising customers’ expectations of other interactions – including those with financial institutions.

So, how can banks and insurers respond? Well, coinciding with moves by “GAFA” (Google, Apple, Facebook, Amazon) to open up conversation as a channel and provide ready-made assistants, we’re also seeing the emergence of customisable conversational AI platforms designed for white-labelling, such as Amelia and Nuance. Many financial services providers have started experimenting with these technologies to create their own assistants: for example, Swedbank’s Nina uses the Nuance platform.

What does your virtual assistant say about your organisation?

At the same time, a growing number of frameworks and principles have become available to help financial services providers decide which customer journeys to migrate to bot conversations, and how to design these interactions. But one area that many of these frameworks fail to explore in any significant detail is the deeper implications of what an insurer’s or a bank’s virtual assistant may say about it as an organisation.

Why is this a problem? As Accenture’s 2018 Technology Vision highlights, today’s widening array of technology solutions is fuelling an escalating debate about the impacts of AI on society. And nowhere is this issue more evident than in virtual assistants’ apparent reinforcement of gender stereotypes.

In general, gender diversity among virtual assistant identities is poor, with most defined as female through their name and voice. An analysis of over 300 AI assistants and film characters by the AI company Integrate found that 67% were female and 33% ‘genderless’. Yet even the assistants categorised as genderless – such as Siri – are perceived as female by their users: while a gender-neutral name may appear to be a good start, the voice given to most assistants has proven to imply a gender.

Stereotyping and bot bias by association?

Does AI gender inference matter? It might – the “Me Too” campaign has shown how quickly and radically the public agenda can shift. If gender stereotyping by financial services bots becomes an issue of public concern, then the banks and insurers behind those bots may be deemed guilty by association. We’ve already seen this happen in other sectors: in 2017, iQiyi had to apologise for the behaviour of its bot ‘Vivi’, which was promoted as a ‘built-in girlfriend’.

And of course, the potential for brand damage from virtual assistants extends well beyond gender stereotyping. Within 24 hours of being released onto Twitter in March 2016, Microsoft’s female millennial-style chatbot ‘Tay’ began to make racist (as well as sexist) comments, and was subsequently withdrawn amid profuse apologies.

The message is clear. The choices that financial services providers make when designing their virtual assistants will have far-reaching impacts for their customers and brand. So what can your organisation do to ensure those impacts are positive?

In my next blog, I’ll set out some core principles to apply to steer clear of the pitfalls. Stay tuned.

My thanks to Brian Fitzgerald, Kiri Pizer and Katharine Pratt for sharing their expertise on this topic.
