Banking and finance businesses can’t afford to ignore the streamlining and cost reduction benefits offered by Artificial Intelligence (AI).

Your business can’t effectively beat the competition in marketing any product in the 21st century without using big data and AI. Given the immense amount of consumer data available – and the number of channels, segments and competitors – marketers need to use AI and algorithms to operate successfully in the online environment.

But AI must be used prudently. Business managers must be meticulous in setting up rules for the algorithms’ decision making to prevent AI, which lacks a human’s inherent moral and ethical guiding force, from targeting ads towards unsuitable or vulnerable customers, or making decisions that exacerbate entrenched racial, gender, age, socio-economic, or other disparities and prejudices.

The Banking and Finance Oath’s 2021 Young Ambassadors recognised a gap in the research and delivered a report: AI driven marketing in financial services: ethical risks and opportunities. It unravels the complexities of AI’s impact across the financial services industry and government, and establishes a framework that can be applied to other contexts.

In a marketing environment, AI can be used to streamline processes by generating personalised content for customers or targeting them with individual offers; leveraging customers’ data to personalise web and app pages based on their interests; enhancing customer service with chatbots; and supporting a seamless purchasing journey from phone to PC to in-person at a storefront.

Machine learning algorithms draw from immense data pools, such as customers’ credit card transactions and social media click-throughs, to predict the likelihood of customers being interested in a product, whether to show them an ad, and which ad to show them. But there are ethical risks to navigate at every step – from the quality of the data used to how well the developers and business managers understand the business objectives.
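As a minimal sketch of what such a pipeline does, the snippet below scores a customer’s likely interest and applies a show-the-ad threshold. The feature names, weights, and threshold are all hypothetical illustrations, not values from the report; a real model would learn its weights from the transaction and click-through data described above.

```python
import math

# Hypothetical feature weights a propensity model might learn from
# transaction and click-through data (illustrative values only).
WEIGHTS = {"recent_card_spend": 0.8, "ad_clicks_30d": 1.2, "tenure_years": -0.1}
BIAS = -2.0
SHOW_AD_THRESHOLD = 0.5  # assumed business rule, not from the report

def propensity(customer: dict) -> float:
    """Score the likelihood the customer is interested in the product."""
    z = BIAS + sum(WEIGHTS[f] * customer.get(f, 0.0) for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link: maps the score into [0, 1]

def should_show_ad(customer: dict) -> bool:
    return propensity(customer) >= SHOW_AD_THRESHOLD

engaged = {"recent_card_spend": 1.5, "ad_clicks_30d": 2.0, "tenure_years": 3.0}
dormant = {"recent_card_spend": 0.0, "ad_clicks_30d": 0.0, "tenure_years": 8.0}
print(should_show_ad(engaged), should_show_ad(dormant))  # → True False
```

Note that nothing in this decision rule knows *why* a customer’s spend or click behaviour changed – which is exactly the gap the ethical risks below arise from.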

Using AI for marketing in financial services comes with two significant risks. The first is the potential for organisations to be seen as preying on people in vulnerable circumstances.

AI has no moral oversight or human awareness – it simply crunches the numbers and makes the most advantageous and profitable decision to lead to a sales conversion.

And if that decision is to target home loan ads at people going through a divorce or a loved one’s funeral, or to target credit card ads at people who are unemployed or living with addiction, without proper oversight, there’s nothing to stop it.

The other risk is the potential for data misuse and threats to privacy. Customers have a right to their own data and to know how it’s being used – and what demographics they’re being placed in. If your data’s out of date or inaccurate, or missing in sections, you’ll be targeting the wrong people.

All demographics – including racial background, socio-economic status, and individual psychological profile – have the potential to be misused by AI to reinforce gender, racial, age, economic and other disparities and prejudices.

Most ethical failings in AI-driven marketing campaigns can be traced back to issues with governance – poor management of data and lack of communication between developers and business managers. These data governance issues include: siloed databases that don’t share definitions; datasets that don’t refresh quickly enough and become outdated; customer flags that are incorrect or missing; and too many people being designated as data owners, resulting in the deferral of responsibility.
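The governance failures above – stale datasets and incorrect or missing customer flags – are checkable in code. Below is a hedged sketch of a pre-targeting data-quality gate; the 90-day refresh policy and the flag names are assumptions for illustration, not requirements from the report.

```python
from datetime import date, timedelta

MAX_RECORD_AGE = timedelta(days=90)                    # assumed refresh policy
REQUIRED_FLAGS = {"hardship", "deceased", "opt_out"}   # hypothetical customer flags

def governance_issues(record: dict, today: date) -> list:
    """Return data-quality problems that should block a record from ad targeting."""
    issues = []
    if today - record["last_updated"] > MAX_RECORD_AGE:
        issues.append("stale: record has not been refreshed recently")
    missing = REQUIRED_FLAGS - record.get("flags", {}).keys()
    if missing:
        issues.append("missing flags: " + ", ".join(sorted(missing)))
    return issues

# A record that is both outdated and missing two of the required flags:
record = {"last_updated": date(2021, 1, 5), "flags": {"opt_out": False}}
print(governance_issues(record, today=date(2021, 6, 1)))
```

A single accountable data owner would be responsible for keeping checks like these in the pipeline, rather than spreading ownership so widely that responsibility is deferred.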

In human-driven decision making, there’s a clear line of command, from the Board, to management, to the frontline team. But in AI-driven decision making, the frontline team is replaced by two teams – the AI developers and the machines themselves.

Communication gaps emerge when management is unfamiliar with the field’s highly technical nature and with how to instruct AI developers, while the developers may be unfamiliar with the jargon of the business. Training across the business can fill these gaps.

Before any business begins to integrate AI into its marketing (and overall) strategy, it’s crucial that it adopt a set of basic ethical principles:

  • Beneficence (or do good): personalise products to improve the customer’s experience and improve their financial literacy by delivering targeted advice.
  • Non-maleficence (or do no harm): ensure your AI marketing doesn’t target customers in inappropriate or harmful ways.
  • Justice: ensure your data doesn’t discriminate based on demographics and exacerbate racial, gender, age, socio-economic or other disparities or stereotypes.
  • Explicability: you need to be able to explain how your AI system makes the decisions it does and the relation between its inputs and outputs. Experts should be able to understand its results, predictions, recommendations and classifications.
  • Autonomy: at the company level, governance processes should keep humans informed of what’s happening; and at a customer level, responsible decision making should be supported through personalisation and recommendation tools.
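The non-maleficence principle in particular can be encoded directly as a targeting guardrail. The sketch below filters an audience through an exclusion list before any ad decision is made; the specific vulnerability flags are hypothetical examples drawn from the risks discussed earlier, and a real organisation would define its own under its own policy.

```python
# Hypothetical vulnerability indicators; the actual flags and their data
# sources would come from the organisation's own policy and governance.
EXCLUSION_FLAGS = {"financial_hardship", "recent_bereavement", "unemployed"}

def eligible_for_campaign(customer: dict) -> bool:
    """Non-maleficence rule: never target customers carrying a vulnerability flag."""
    return not (EXCLUSION_FLAGS & customer.get("flags", set()))

audience = [
    {"id": 1, "flags": set()},
    {"id": 2, "flags": {"recent_bereavement"}},
    {"id": 3, "flags": {"frequent_traveller"}},
]
targeted = [c["id"] for c in audience if eligible_for_campaign(c)]
print(targeted)  # → [1, 3]
```

Because the rule is explicit, it also serves explicability: anyone auditing the system can see exactly which customers were excluded and why.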

The reality is that no business can afford to ignore the benefits AI offers, but the risks are very real. By acknowledging the ethical issues, businesses can seize the opportunities while mitigating the risks, benefiting themselves and their customers.

Download a copy of the report here.