In my first blog in this two-part series on unconscious bias in virtual assistants, I looked at the risks that issues such as gender stereotyping in bots can pose to financial services providers’ brands and reputations. In this follow-up post, I’ll examine how banks and insurers can minimise these risks and ensure their bots reflect the behavioural standards they’d expect of their people.

My work helping many banks and insurers design and develop online customer experiences – including bots – has enabled me to define a set of five navigation-points for avoiding unconscious bias. Here they are.

  1. Set your AI design ‘North Star’ to include your core values – Avoiding programmed bias requires conscious decision-making throughout the design process. As well as using design principles to provide clarity and direction when creating user experiences, it’s vital to embed the company’s core values throughout. So your organisation should create a single design vision that guides every decision, and identify a shared, company-wide goal that ensures the wider implications are considered whenever and wherever artificial intelligence (AI) is employed.
  2. Build diversity into your design team – The appearance, tone and character of your financial services provider’s virtual assistant will influence customers’ perceptions of its brand. Apple, for example, was publicly criticised when it emerged that its HealthKit app would let users monitor their sodium intake but didn’t include any women’s health tracking. This omission was widely attributed to the homogeneous, male-dominated nature of the design team. So, to bring your design principles to life, your organisation should build a diverse team of individuals with varied skillsets and backgrounds. Microsoft’s Cortana team, for example, includes a poet, a novelist and a playwright.
  3. Ask yourself: would you hire your virtual assistant? – By their nature, virtual assistants are accessible to customers simultaneously across many channels. While this is a positive trait, it also creates the potential for them to damage many customer relationships at lightning speed. Alignment with your core values is key to your staff recruitment decisions and selection of brand ambassadors – so it should also be at the heart of your design process for virtual assistants. To achieve this, your organisation should draw on your HR and marketing teams’ skills and experience in translating your core values into hiring, training and performance management processes.
  4. Apply deliberate design – While the choices open to a designer when “gendering” an assistant may seem simple, they’re actually far from straightforward. Microsoft’s Clippit assistant was widely perceived as non-human, non-gendered and male all at the same time. More recently, many platforms have allowed users themselves to set the gender of their assistant; interestingly, many still default to female. Your organisation should consider the ideal interaction for your customers, reflect on what the bot’s gender – or lack of one – will say about your brand and core values, and design accordingly. This may include using A/B testing – comparing two versions to determine which one performs better (see the first sketch after this list) – to define a gender identity that is right for you, be it male, female, anthropomorphic or other. You might even allow users to customise the interaction through modular options.
  5. Navigate uncharted territory with your customers – Developing solutions in partnership with a representative sample of customers benefits everyone. In particular, gathering early feedback from customers on the impact of your design principles provides invaluable insight to shape the next iteration. These experiments do not need to be expensive: “Wizard of Oz” testing, for example – where the user thinks they’re engaging with a virtual assistant that is actually being controlled by a human (see the second sketch after this list) – enables designers to develop a better understanding of the desired interaction with minimal investment. Your organisation should use these types of low-cost techniques to gather insight on customers’ responses to your design principles, and then launch the bot to a small, controlled audience before expanding it to the wider customer base.
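
To make the A/B testing idea in point 4 concrete, here’s a minimal sketch in Python. The variant names, the hash-based bucketing and the task-completion metric are all illustrative assumptions, not a prescribed implementation – in practice you’d run the test through your experimentation platform and apply a proper statistical significance test before picking a winner.

```python
import hashlib

# Illustrative variant names -- your own test cells would be defined
# by your design team (e.g. different personas, voices or genders).
VARIANTS = ["assistant_variant_a", "assistant_variant_b"]

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into one test cell.

    Hashing the user ID (rather than picking at random on each visit)
    keeps the experience consistent for returning customers.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def completion_rate(outcomes: list) -> float:
    """Share of conversations that ended in a successful resolution."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

if __name__ == "__main__":
    # Simulated results: True = the customer completed their task.
    results = {
        "assistant_variant_a": [True, True, False, True],
        "assistant_variant_b": [True, False, True, True],
    }
    print(assign_variant("customer-42"))  # stable assignment per customer
    for variant, outcomes in results.items():
        print(f"{variant}: {completion_rate(outcomes):.0%} completion")
```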
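And for point 5, here’s an equally minimal “Wizard of Oz” sketch, again purely illustrative: a single console stands in for both the customer’s chat window and the hidden operator’s screen, which in a real study would of course be separate. The value lies in the logged transcript, which shows how customers actually want to converse before you invest in building the real bot.

```python
# Minimal "Wizard of Oz" prototype: the customer believes they are
# chatting with a bot, while a human operator (the "wizard") secretly
# types the replies. All names here are illustrative.

def wizard_of_oz_session() -> list:
    print("You are chatting with our virtual assistant. Type 'quit' to end.")
    transcript = []
    while True:
        customer_msg = input("Customer > ")
        if customer_msg.strip().lower() == "quit":
            break
        # In a real study the wizard sits at a separate screen; here one
        # console doubles as the wizard's hidden reply window.
        wizard_reply = input("[wizard - hidden from customer] > ")
        print(f"Assistant: {wizard_reply}")
        transcript.append((customer_msg, wizard_reply))
    return transcript

if __name__ == "__main__":
    # The logged transcript is the research output: it reveals the
    # phrasing, tone and escalation points customers actually expect.
    for customer_msg, reply in wizard_of_oz_session():
        print(f"Customer: {customer_msg} | Assistant: {reply}")
```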

In combination, these five navigation-points can help your organisation plot a safe route through the complex and ever-evolving world of virtual assistant design. Banks and insurers need to create bots that meet customers’ expectations of increasingly frictionless conversations – but, in doing so, they must be keenly aware of the risk that unconscious bias might creep in. Actions like employing diverse teams, using low-cost prototyping and co-creating with customers will help ensure your organisation considers the implications of virtual assistant design from many angles – including those you may not previously have thought of.

To sum up: conscious design tackles unconscious bias. And this makes for a better bot – one that your customers will love.  

To find out more about how to transform the conversation with customers, please download our 2018 UK Financial Services Customer Survey, and read about our recently launched ‘AI Fairness Tool’.

My thanks to Brian Fitzgerald, Kiri Pizer and Katharine Pratt for sharing their expertise on this topic.
