AI systems are increasingly mediating financial decisions and transactions. This brings efficiency but also new risks.

Risks

Unexplainable AI

As machine learning models are retrained on ever-growing volumes of data, they can start to function as complex “black boxes” whose decision-making processes are not easily understood. This creates market risk as the use of these models proliferates, along with the potential for biased decisions when the underlying data is not accurate, clean, and up to date, or reflects historical biases.

Cybersecurity

As financial institutions deploy AI, they face a double-edged sword: AI can enhance cybersecurity (e.g. fraud detection), but it also introduces new cybersecurity vulnerabilities and threat vectors.

Cybersecurity Threat

  • Sophisticated hackers may exploit flaws in AI systems or use “adversarial” inputs to dupe them. For example, machine learning models can be tricked by manipulated data – an attacker might subtly alter transaction information to fool a bank’s fraud detection AI into approving fraudulent transfers (Zenus); a simplified illustration of this kind of evasion appears after this list. There is also the risk of “data poisoning,” where criminals compromise the training data used by an asset management firm’s trading algorithm or a bank’s credit model, skewing its decisions.

    Another growing concern is the use of model inversion and membership inference attacks, where adversaries can extract sensitive data from trained AI models; a bare-bones sketch of such an attack also follows this list. In the financial sector, this could mean that attackers who gain access to a model, such as a robo-advisor trained on high-net-worth client portfolios, could reverse-engineer personal financial data or investment strategies. A 2017 study published in the Proceedings of the IEEE Symposium on Security and Privacy demonstrated how attackers could infer whether specific individuals were part of a model’s training set, raising serious privacy concerns for banks and asset managers using AI for personalized services. In an era of increasingly stringent data protection regulations, such vulnerabilities pose not only technical risks but also legal and reputational ones (NIST).

    The SolarWinds breach in 2020 underscored how AI itself can become collateral damage in broader supply chain attacks. In that case, attackers infiltrated major IT and cloud infrastructure providers, many of which supported AI-powered analytics and decision-making systems used by financial institutions. If AI models depend on cloud services compromised by attackers, their integrity—and the decisions they inform—can no longer be trusted. Similarly, researchers at HiddenLayer revealed that AI models stored in cloud environments could be tampered with directly, enabling attackers to implant subtle changes in weights or logic without altering the underlying application code. For financial firms increasingly deploying AI via APIs or third-party platforms, this opens up a new cybersecurity frontier where the model itself is the attack vector (HiddenLayer).

  • Generative AI tools might be used for large-scale market manipulation or to provoke algorithmic trading cascades. In a forward-looking scenario, autonomous AI agents could conceivably coordinate trades or rumors that lead to flash crashes or bank runs via herd-like behavior (Roosevelt Institute). Financial regulators are acutely aware of these dangers. The White House’s 2023 executive order on AI specifically called out financial-sector AI cybersecurity, directing the U.S. Treasury to issue best practices for managing AI-related cyber risks (Skadden).

    In the EU, the new Digital Operational Resilience Act (DORA) will require banks and firms to report serious ICT (information and communications technology) incidents – which would include AI system hacks or failures – to regulators (Skadden). Industry-wide, there is a growing emphasis on “red-teaming” AI models (stress-testing them for vulnerabilities) and on sharing threat intelligence about AI-driven attacks.

    Imagine a future where loan approvals, stock trades, insurance claims, and payment flows are all decided by interlinked algorithms: a glitch or bias in one AI system could cascade through the financial network.
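
To make the adversarial-input scenario above concrete, the sketch below trains a toy fraud scorer on synthetic transaction data and then computes the smallest nudge to a flagged transaction that pushes its score back under a 0.5 alert threshold. The features, model, and threshold are all invented for illustration; real fraud systems are far more elaborate, but the basic mechanics of an evasion attack are similar.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: three invented transaction features (e.g. amount
# z-score, geo-mismatch score, velocity). Legitimate transactions cluster
# near 0, fraudulent ones around 2.5.
legit = rng.normal(0.0, 1.0, size=(500, 3))
fraud = rng.normal(2.5, 1.0, size=(500, 3))
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 500)

scorer = LogisticRegression().fit(X, y)

# A transaction the toy model flags as fraud (score above 0.5).
txn = np.array([[1.6, 1.5, 1.7]])
print("original fraud score:", scorer.predict_proba(txn)[0, 1])

# Evasion: for a linear model, the cheapest change is a step against the
# weight vector. Compute just enough of a nudge to push the score slightly
# under the 0.5 alert threshold.
w, b = scorer.coef_[0], scorer.intercept_[0]
logit = (txn @ w + b).item()                 # current decision value
delta = ((logit + 0.1) / np.dot(w, w)) * w   # minimal perturbation needed
evasive = txn - delta

print("feature tweaks:", np.round(delta, 3))
print("evaded fraud score:", scorer.predict_proba(evasive)[0, 1])  # now < 0.5
```

Because the toy model is linear, the minimal evasive change can be computed in closed form; against real, non-linear detectors, attackers typically search for such perturbations iteratively.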
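
The membership inference risk can be sketched just as briefly. The toy attack below follows the loss-threshold idea from the research literature: an overfit model tends to be unusually confident about the records it was trained on, so per-record loss alone can separate training-set members from outsiders. The data, model, and metric here are entirely synthetic and illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic records standing in for sensitive client data: eight features
# and a binary label (e.g. whether a client holds a particular product).
X = rng.normal(size=(1000, 8))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.8, size=1000) > 0).astype(int)

# The model only ever sees the first half ("members"); the rest is unseen.
X_mem, y_mem, X_non, y_non = X[:500], y[:500], X[500:], y[500:]
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    """Cross-entropy loss of the model on each individual record."""
    p_true = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p_true, 1e-12, None))

# The attacker's signal: overfit models have unusually low loss on records
# they were trained on. Score every record by (negative) loss and measure
# how well that separates members from non-members.
scores = -np.concatenate([per_record_loss(model, X_mem, y_mem),
                          per_record_loss(model, X_non, y_non)])
is_member = np.concatenate([np.ones(500), np.zeros(500)])
print("membership-inference AUC:", roc_auc_score(is_member, scores))
# An AUC near 0.5 would mean no leakage; values well above 0.5 indicate the
# model reveals which records it was trained on.
```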

Lael Brainard, Director of White House National Economic Council (2023-2025)

“As financial institutions increasingly rely on complex algorithms and machine learning, there is a risk that the models may become less interpretable, less auditable, and harder to validate.”

Michael Hsu, Acting Comptroller of the Currency (2021-2025)

“The widespread use of opaque AI models could lead to systemic blind spots in risk management, particularly if many banks rely on similar black-box systems.”

Unexplainable AI

Many advanced AI models (like deep learning networks) do not easily reveal why they made a given decision. In finance, such opaqueness conflicts with the need for accountability and customer understanding.

Insurance: The black-box issue is perhaps most sensitive in insurance underwriting and claims. AI models might set premiums or flag claims with little explanation, potentially hiding bias. This has already led to lawsuits – in 2022, State Farm was sued after a study suggested its claims algorithm discriminated against Black customers (customers with names common among African Americans experienced more claim delays) (LexisNexis). And in 2023, health insurer Cigna was hit with a class action alleging an AI system was automatically denying claims without proper human review (LexisNexis). These cases highlight how unexplainable AI can translate into real-world harm (unfair denial of coverage or payouts) and legal peril for firms.

Payments: Even in payments and fraud detection, lack of transparency can cause trouble. Customers whose transactions are blocked or flagged by an AI fraud system often get no clear explanation, which can frustrate users and damage confidence if legitimate activities are mistakenly caught by opaque models.

Financial intermediation (banking/lending): If a loan application is rejected or approved by an AI, the bank should be able to explain the key factors. Yet “nobody can be entirely sure how the AI makes its decisions” in a black-box model (LexisNexis). This isn’t just a theoretical concern: it can mask biases and make it hard to contest decisions. A notorious example was the Apple Card controversy in 2019, where an algorithm (allegedly AI-driven) offered significantly lower credit limits to women than men, sparking public outcry about potential bias.
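
Where the underlying model is linear, or a linear surrogate is available, surfacing those key factors is straightforward. The sketch below uses invented feature names and synthetic data to decompose one hypothetical applicant's approval odds into per-feature contributions relative to the average applicant; genuinely non-linear models require heavier attribution tooling, but the goal of producing stateable reasons is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical credit features: income (z-scored), debt-to-income ratio,
# and number of recent delinquencies. All data here is synthetic.
features = ["income_z", "debt_to_income", "recent_delinquencies"]
X = np.column_stack([
    rng.normal(size=1000),
    rng.uniform(0.0, 0.8, size=1000),
    rng.poisson(0.5, size=1000),
])
# Synthetic approval labels driven by those features plus noise.
signal = 1.2 * X[:, 0] - 2.5 * X[:, 1] - 1.0 * X[:, 2]
y = (signal + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Per-feature contribution to this applicant's approval log-odds,
    measured relative to the average applicant in the training data."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    return sorted(zip(features, contributions), key=lambda kv: kv[1])

applicant = np.array([-0.4, 0.55, 2.0])  # modest income, high DTI, 2 delinquencies
print("approval probability:", model.predict_proba(applicant.reshape(1, -1))[0, 1])
for name, value in explain(applicant):
    print(f"{name:>22}: {value:+.2f}")   # most negative factors listed first
```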

Asset management: Lack of explainability can also be risky in trading and portfolio management. AI-driven investment funds might make rapid-fire trades based on correlations even the developers don’t fully grasp. This can lead to unexpected losses or volatility. From a risk management perspective, banks and funds worry that if an AI model misfires (e.g., makes a huge bet based on spurious data patterns), managers may realize it too late because the model’s logic was not transparent. In 2022, the Bank of England cautioned that firms must be able to justify the trade-off if they deploy a more complex, less interpretable model over a simpler, transparent one (Skadden).
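
The Bank of England's point about justifying that trade-off can be made concrete: before adopting an opaque model, a firm can measure how much predictive lift it actually buys over a transparent baseline on the same data. The comparison below is entirely synthetic, with invented features, labels, and models, and only illustrates the shape of such an analysis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-in for an internal modelling dataset: ten features, with an
# outcome driven partly by an interaction a linear model cannot capture.
X = rng.normal(size=(3000, 10))
signal = X[:, 0] - 0.8 * X[:, 1] + 1.5 * X[:, 2] * X[:, 3]
y = (signal + rng.normal(scale=0.5, size=3000) > 0).astype(int)

candidates = [
    ("logistic regression (interpretable)", LogisticRegression(max_iter=1000)),
    ("gradient boosting (opaque)", GradientBoostingClassifier(random_state=0)),
]

# Cross-validated AUC for each candidate on the same data.
for name, model in candidates:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>38}: mean AUC = {auc:.3f}")
```

If the lift is marginal, the interpretable model may be the more defensible choice; if it is large, the firm at least has a documented basis for accepting the added opacity.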