IBM Innovation Hub Event

Can AI and automation really help you manage vulnerable customers (and their data)?

Steve Sullivan, Performance Solutions Director, and Nev Doughty, Partnerships & Growth Director, reflect on our recent event.

Like it or not, AI is coming, and it will change how we talk to customers. But there are risks. What if those customers are vulnerable? Can AI look after them properly? And while we’re thinking about vulnerability: as AI increasingly consumes customer data, what needs to be done to look after that too?

On June 25th Customer Contact Panel hosted an event with CXReview at the IBM Innovation Studio in London on this very topic. Speakers included Gemma Woodcock of IBM, Jim Steven of Experian, Elaine Lee of Reynolds Busby Lee and Keith Shanks of CXReview.

Steve Sullivan of CCP was master of ceremonies and led the discussion. Here we outline the key takeaways from all speakers: what we can do right now to mitigate those risks and set contact centres on a path to the responsible use of AI.

Contact Centre AI: Not if but when

The event focused on two things:

  • outlining the risks and considerations when implementing AI and automation, and
  • the scope for adoption within the contact centre environment – specifically, supporting agents to deal with the ‘cognitive load’ of handling vulnerability and more challenging customer contacts, whilst maintaining quality and overall customer experience.

While there was much discussion around AI, Generative AI and the inevitable impact on our roles, we are still a long way from a time when agents will not be required in customer contact; we’ve evolved too far as humans for this to change. However, ignore AI at your peril.

“Consider those that did not take the internet seriously, those on the high street who failed to embrace it and look at what happened to them”

Gemma Woodcock of IBM

The AI data question

Ultimately, the use of AI depends upon building models that can make predictions that add value, and those predictions depend on what the source data looks like. Ensuring correct identification and use of the right material is essential; wrong information can lead at best to incorrect responses and at worst to biases or hallucinations, all of which carry considerable risk to brands and their customers.

Consider also that the questions your customers ask you may not be typical – for example, they may have arisen from an unusual event at a specific point in time – and if you treat them as typical when informing future responses, this may lead to incorrect or just plain weird answers. Moreover, if the answers to those questions relate to your own IP and you are using open-source solutions, you may be making your IP available to others.
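To illustrate that curation point, here is a minimal sketch (in Python, with an arbitrary recurrence threshold) of how one-off questions might be held back from informing future automated responses until they have recurred often enough to look typical:

    from collections import Counter

    # Minimal sketch: only treat a customer question as "typical" once it
    # has recurred often enough. The threshold of 5 is an illustrative
    # assumption, not a recommended value.
    MIN_OCCURRENCES = 5

    def select_typical_questions(questions):
        """Keep questions seen often enough to be treated as typical;
        one-off questions (e.g. driven by a single unusual event) are
        left for human review rather than informing future responses."""
        counts = Counter(q.strip().lower() for q in questions)
        return [q for q, n in counts.items() if n >= MIN_OCCURRENCES]

    history = ["Where is my order?"] * 8 + ["Why was the depot closed yesterday?"]
    print(select_typical_questions(history))  # ['where is my order?']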

Additionally, consider who you are sourcing your solution from. In March 2023 alone, 14,700 start-ups were established in the AI space, which means due diligence and an understanding of how AI solutions (which are often highly specific by nature) work together are essential.

That isn’t to say everything in AI is new. The early concepts and mathematics of AI date back to the 1950s and 60s, and over the last decade or so increases in computing power have begun to make AI more commercially accessible. AI and machine learning are already well established in models that look for patterns, as we’ve long seen in Amazon purchase recommendations or, more recently, in the introduction of machine learning and explainable AI in credit scoring.

As is often the way with AI, these are specific use cases with an enormous amount of investment behind them to make them function well, particularly to allow for the use of AI in regulated industries. One aspect they share is a dependence on properly labelled data, which in the world of credit risk is already well ordered and highly specific. But the process of labelling data, especially data that isn’t already extremely well ordered, is intensive and expensive.

Imagine, then, the complexities of labelling data in natural language and you begin to understand the significance of recent advances in Generative AI from the likes of IBM’s Watson – or the oft-referenced ChatGPT. In customer contact, these are what we call “foundation models”. They are pre-trained and need prompts, but they can’t answer questions specific to your customers without customer data and an understanding of how customers are likely to interact. These large language models can be applied to this process: they can use labelled data, and the numerous ways the same question has been asked in the past, to develop further and meet your and your customers’ needs.
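As a rough illustration of that pattern, the sketch below assembles a prompt that grounds a pre-trained model in approved, labelled snippets about your business, and routes to a human when no grounding data exists. The snippet store and the generate() call it alludes to are hypothetical placeholders, not any particular vendor’s API:

    # Minimal sketch: a foundation model is pre-trained and needs prompts,
    # so customer-specific context must be assembled and passed in with
    # the question. APPROVED_SNIPPETS is an illustrative stand-in for a
    # curated, labelled knowledge base.
    APPROVED_SNIPPETS = {
        "refund": "Refunds are available within 30 days of purchase.",
        "opening hours": "Our contact centre is open 08:00-20:00, Mon-Sat.",
    }

    def retrieve_context(question):
        """Naive keyword lookup of approved snippets (for brevity)."""
        q = question.lower()
        return [text for topic, text in APPROVED_SNIPPETS.items() if topic in q]

    def build_prompt(question):
        context = retrieve_context(question)
        if not context:
            # With no grounding data the model can only guess, so route
            # to a human agent rather than risk a hallucinated answer.
            return None
        return ("Answer the customer using ONLY the context below.\n"
                "Context:\n- " + "\n- ".join(context) + "\n"
                "Customer question: " + question + "\n")

    prompt = build_prompt("What is your refund policy?")
    print(prompt or "No grounding data - route to a human agent.")
    # `prompt` would then be passed to the model, e.g. a hypothetical
    # generate(prompt) call.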

4 key tests for effective AI

The use of Generative AI should be measured against 4 key considerations:

  1. Open: uses the best technologies and innovation
  2. Trusted: can you trust the outputs?
  3. Targeted: designed around specific use cases
  4. Empowering: augments the human role and experience rather than replacing it.

Pretraining and governance funnels need to be in place, with considerations ranging from the benefits of the use case to ensuring only the necessary inputs go into the model. Put simply, if nothing “inappropriate” goes in, then nothing inappropriate can come out. This approach also means the model should cost less to set up and take less energy (power) to run; given just how energy-intensive AI can be, that is an important consideration.
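As a rough sketch of that input-side governance, the example below screens what goes into the model, redacting personal data and rejecting out-of-scope topics before they ever reach it. The patterns and blocked topics are illustrative assumptions, not a complete policy:

    import re

    # Minimal sketch of an input-governance filter: screen what goes INTO
    # the model so inappropriate material can't come out. The patterns and
    # blocked topics are illustrative assumptions only.
    PII_PATTERNS = {
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }
    BLOCKED_TOPICS = ("medical diagnosis", "legal advice")

    def govern_input(text):
        """Return (allowed, cleaned_text): redact PII, reject blocked topics."""
        lowered = text.lower()
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            return False, ""  # route to a human agent instead
        cleaned = text
        for label, pattern in PII_PATTERNS.items():
            cleaned = pattern.sub("[" + label + " removed]", cleaned)
        return True, cleaned

    allowed, cleaned = govern_input("My card is 4111 1111 1111 1111, refund please")
    print(allowed, cleaned)
    # True My card is [card_number removed], refund please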

When it comes to risks, regulations and technological requirements, there needs to be a level of governance. During scoping of your solution, you need to consider the ongoing effort required to support it, to ensure it meets – and continues to meet – strict governance. AI is not a fire-and-forget implementation.

AI and the bad guys

“It isn’t only the good guys that are using AI and Automation, ‘mal-actors’ are using it to enhance their processes too.”

We were fortunate to be joined by Jim Steven of Experian who shared with us some of the activity he has seen from his work in managing data breach responses. This has implications on multiple levels.

  • Consolidation of data: bringing everything together to enable automation could mean that you are at greater risk if your business is breached. Additionally, using suppliers with cloud-hosted data may mean that their breach becomes your breach too.
  • Benefits of automation are not exclusive to those doing good: we are not the only ones who can leverage great technology. Those with mal-intent are using it too, and they will typically move more quickly.
  • Consideration of how those impacted by a breach find the experience: when a breach occurs, it is likely that it will be across your entire estate of stakeholders. Therefore, it is not just your customers that will be impacted – it will also be employees, former employees, pension members, contractors and suppliers.

Communication with people who have had their data compromised is personal; it needs to be managed in different cohorts and isn’t really something to automate – especially with your employees, whom you will need in order to support your customers through the situation with empathy and understanding. Not only that, it’s important not to lose sight of the millions of customers who don’t currently interact online at all. They aren’t typically digitally savvy and may not even consider that their data is an important asset that can be used for nefarious means. They must be informed and supported in ways that work best for them, cognisant that it is we who hold their data digitally.

Moreover, data breaches are not always one way – it’s not always about theft or ‘acquisition’ of data. Looping back to the theme of considering what data is held in your systems for interpretation, bear in mind that even the simplest upload of additional data can impact operations. For example, an accidental, entirely innocent upload of bad data to air traffic control systems in 2023 resulted in 98% of aircraft being grounded.

People accessing and uploading bad data to your network could equally paralyse your organisation. Be mindful that centralising your information management may introduce system vulnerabilities that could be exploited.
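A common first line of defence against that kind of incident is to validate every upload before it is ingested. Below is a minimal sketch that quarantines a whole batch when any record fails validation, rather than letting partial bad data propagate; the three-field schema is an illustrative assumption:

    # Minimal sketch of validate-before-ingest: reject an uploaded batch
    # before it can reach live systems. The schema below (three required,
    # typed fields) is an illustrative assumption.
    REQUIRED_FIELDS = {"customer_id": str, "channel": str, "risk_score": float}

    def validate_record(record):
        """Return a list of problems with one record (empty list = valid)."""
        problems = []
        for field, expected_type in REQUIRED_FIELDS.items():
            if field not in record:
                problems.append("missing field: " + field)
            elif not isinstance(record[field], expected_type):
                problems.append("bad type for " + field)
        return problems

    def ingest_batch(batch):
        """Quarantine the whole batch if any record is bad."""
        for i, record in enumerate(batch):
            problems = validate_record(record)
            if problems:
                print("record", i, "rejected:", problems, "- batch quarantined")
                return False
        print(len(batch), "records accepted")
        return True

    ingest_batch([
        {"customer_id": "C123", "channel": "phone", "risk_score": 0.2},
        {"customer_id": "C124", "channel": "email"},  # missing risk_score
    ])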

What about vulnerable customers and AI?

Elaine Lee joined us to speak about vulnerable customers and the need to ensure that we are considering them in our approach to technology and process solutions.

“40% of Companies are not doing well enough when identifying and supporting vulnerable customers.”

The FCA considers 52% of the UK adult population to be vulnerable. The cost-of-living crisis means that 25% of adults now have low financial resilience, and the drivers of vulnerability are varied, including:

  • Health
  • Capability
  • Life Events
  • Resilience
  • Equality
  • Access

Remember, vulnerability isn’t static. We may be displaying vulnerability and require additional support today, but not in a few weeks’ time.

With 40% of companies already not doing well enough when identifying and supporting vulnerable customers, how will the introduction of AI affect those customers? Of course, there is a spectrum of risk, so it’s important to understand who your customers are, where they sit on that spectrum at any given time, and the potential for harm: what, exactly, are they vulnerable to?

People may also be vulnerable through a lack of access to, or confidence with, digital tools; they may be under financial pressure through the continuing cost-of-living crisis or low financial resilience; or they may be experiencing a physical or mental health condition, or going through a life event, that changes their ability to make decisions in the usual way.

When implementing changes, ensure that you have properly tested that your solutions work effectively for vulnerable customers. Use diverse customer panels and map the possibilities for different customer types. The effort needed to get it right does have a financial impact on your business, but if you succeed it will reduce customer complaints, increase engagement and therefore increase customer value.

Use cases and how to implement AI safely in Contact Centres

CXReview’s Keith Shanks rounded off the presentations with a reminder of the history of automation and AI use in contact centres. He highlighted the predictive dialler as a key example of great technology that, when implemented badly or with mal-intent, has negatively impacted consumer perception and experience.

“Automation in Contact Centres: It isn’t new. It’s been around for years with mixed results through implementation.”

Due to the nature of customer contact operations, there are plenty of use cases that can be considered for development and implementation:

  • Ensuring consistency and pace
  • Automation of decision making
  • Increased accuracy
  • Assisting agents in dealing with complex customer contacts
  • Enabling access to the right information at the right time.

However, we need to be cognisant of the potential challenges and ensure that these are properly addressed.

  • Data privacy
  • Security
  • Vulnerabilities of both organisations and individuals
  • How we ask our people to engage with the use of technology

With great opportunity comes great responsibility

If you would like to discuss more around how you are implementing AI and automation within your organisation and what it may mean to your people, customers or processes, then please contact the team.

Additionally, we are able to provide access to content from the day if you would like to read further.

Contact us today and one of our skilled staff will assess your requirements and provide recommendations on future steps.