Fintech

Why financial services face huge risks from AI

Although AI is seen as a way to tap new investment and client opportunities, it could also be a minefield if safeguards are insufficient, a new study suggests.

It's not just the World Economic Forum (WEF) and central bankers that are worried about the potential threats to financial stability from artificial intelligence. Some within the investment industry are too. 

Last week, the WEF, which organises the annual high-level Davos conference of world leaders, issued a paper starkly warning that AI would create “a fundamentally different kind of financial system” that could undermine the traditional framework of markets.

If companies and regulators don’t get on top of the technology, it concluded, investors will be put at risk. 

Paul Sandhu, head of multi-asset quant solutions and client advisory at BNP Paribas Asset Management in Hong Kong, views it similarly.

As AI becomes more integrated with the financial system, he said in an interview, “it is vitally important that financial service companies adapt their internal governance, compliance and operational structures, not only to optimise the value added through AI, but also to identify, ring-fence and resolve any potential risks.”

The WEF study assessed whether AI algorithms could destabilise the financial system. It suggests that systemic risk may be more difficult to anticipate and react to in a world where numerous highly complex and opaque models interact with each other in real time.

“Rapidly shifting interlinkages and risks in the financial system make mapping systemic risks and building system resilience a moving target,” it said.

As Bundesbank official Joachim Wuermeling outlined in a speech last year, AI is both a huge opportunity and a big danger. So embrace it, the German central banker urged, but beware placing too much trust in AI systems because the very stability of financial markets may be at stake.  
 
REGULATORY HEADACHE

The challenge, according to the WEF, is to ensure new technological systems are sufficiently transparent. Often, it observed, “the enormous complexity of some AI systems makes it difficult to obtain an interpretable explanation for why the system has produced a given output.”

As a result, the WEF study questions the ability of system supervisors to interrogate the “specific aspects of an opaque model’s thought process”.  

It predicts AI will fundamentally change the role of local regulators. “Supervisory authorities will need to reinvent themselves as hubs for system-wide intelligence, lest increased system complexity erode transparency and threaten investor confidence during crises.”

Sandhu agrees that regulating AI will pose some challenges. “If we take it to the extreme, regulating an AI brain is as difficult as regulating the thought process of a human, which is an activity that has its first line of defence within the company,” he said.

The WEF’s study is an attempt to offer a template, and what it refers to as “guardrails”, to help existing systems and the humans working at the coalface deal with the disruptions that AI will bring.

“Blind reliance on AI and its enabling technologies could erode system-wide guardrails in the long run – from the skills and intuition of frontline employees to the effectiveness of monitoring mechanisms and regulatory protections,” the study said.

Avoiding these risks, it added, will require using tools such as “explainable AI, to teach and provide ongoing visibility to the humans within automated processes.”
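The study does not prescribe tooling, but one way to picture that ongoing visibility is a scorer that reports per-feature contributions alongside every output, so a frontline employee can see why a score was produced. A minimal sketch, with invented feature names and weights:

```python
# Minimal sketch of "explainable AI" visibility: a scorer that returns
# per-feature contributions with its output. The features and weights
# here are hypothetical, not drawn from the WEF study.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explained_score(applicant: dict) -> tuple[float, dict]:
    """Score an applicant and expose each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = explained_score(
    {"income": 5.0, "debt_ratio": 3.0, "years_employed": 4.0}
)
print(score)  # 0.7
print(why)    # {'income': 2.0, 'debt_ratio': -2.1, 'years_employed': 0.8}
```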

MESSY OVERLAPS

The integration of AI into the financial system is also happening at different speeds and with mixed results, further complicating the challenge.

“One of the main issues I see in the integration of AI within financial services companies will be the balance between services that are, for lack of a better word, ‘legacy’ and those that are focused on new services,” Sandhu said. “The latter can possibly adopt new technology in a more efficient manner than the former, which may lead to messy overlaps if proper governance isn’t applied.”

Using insurance policy pricing as an example, Sandhu said a legacy service uses actuarial data based on long-term studies and is updated infrequently. In contrast, a newer service might adjust pricing using real-time health and activity data supplied via a smartwatch.

“The legacy insurance will not be able to readily incorporate the new technology because it would create significant swings in the risk/price,” he said.

ROBO CHALLENGE

The WEF study also asks whether an AI can be trusted as a fiduciary – that is, to have the best interests of the client in mind at all times.

It suggests that ‘algorithmic bias’ may go undetected, since some forms of AI lack transparency on the inferences being drawn from the underlying data.

This challenge was highlighted by two robo-advice services recently shut down after a review by the Australian Securities and Investments Commission (ASIC). The review concluded that the client due diligence offered by one of these firms, Sydney-based Lime Financial Services, was insufficient and that the robo advice given was inappropriate. As a result, the firm announced the closure of the services.

ASIC’s stated position is that “the advice provided through these digital advice tools must meet the same legal obligations required of human advisers”. Lime decided to close its robo services because the cost of meeting that obligation, given the current state of the technology, was still too high.

In general, the task of costing the integration of AI into traditional financial services firms has barely begun, as Sandhu sees it.

"Currently, managers and heads of departments work with human resources to first allocate human resources and then manage those resources to maximise the unit or department's contribution to the company,” he said. "In this same way an AI mechanism should be compartmentalised, allocated and then managed. The management part should involve defining objectives, setting risk tolerances and even allocating a figure or theoretical salary to the AI component.”

© Haymarket Media Limited. All rights reserved.