
In the world of high finance, companies are hedging their bets on artificial intelligence (AI). Indeed, AI is already used in algorithmic trading and fraud detection. As the technology grows more powerful and autonomous, there is an urgent need to put checks and balances in place to ensure the ethical and sound use of these tools.
Research published in the International Journal of Business Information Systems now offers a structured response to these challenges, proposing a practical framework to help financial institutions implement ethical AI systems grounded in transparency, interpretability, and accountability.
The researchers explain that “explicability” is key to developing and using AI ethically in finance. Unfortunately, the term itself lacks a clear operational definition. It can be described as encompassing three interrelated dimensions: transparency (the ability to see how a decision was reached), interpretability (the capacity to understand that decision), and accountability (clarity over who is responsible). These ideas are particularly crucial in high-stakes domains such as lending and insurance, where algorithmic decisions can directly affect people’s lives as well as company profits.
There are already examples of opaque AI systems reinforcing existing inequalities: credit-scoring models and insurance-pricing algorithms trained on historical data can disadvantage women and minority groups. This is not necessarily deliberate; it can arise simply from biases inherent in the training data. Whether intentional or not, the outcomes can still be damaging, and they become harder to correct once automated decision-making is embedded in a company’s operations.
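To make the kind of disparity at issue concrete, here is a minimal, purely illustrative sketch of one common bias check: comparing approval rates across groups (a demographic-parity gap). The scores, group labels, and approval threshold are all hypothetical placeholders, not drawn from the study.

```python
# Illustrative sketch only: a minimal demographic-parity check of the kind
# a bias audit might run on a credit-scoring model. All data below are
# hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores and a sensitive attribute (0/1 group membership).
scores = rng.uniform(0, 1, size=1000)
group = rng.integers(0, 2, size=1000)
approved = scores > 0.5  # hypothetical approval threshold

# Demographic-parity gap: difference in approval rates between the groups.
rate_g0 = approved[group == 0].mean()
rate_g1 = approved[group == 1].mean()
parity_gap = abs(rate_g0 - rate_g1)

print(f"approval rate (group 0): {rate_g0:.3f}")
print(f"approval rate (group 1): {rate_g1:.3f}")
print(f"demographic parity gap:  {parity_gap:.3f}")
```

A large gap does not by itself prove discrimination, but it flags exactly the kind of pattern that, left unexamined inside an opaque system, can quietly entrench the inequalities described above.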
The study draws on interdisciplinary research and expert interviews to introduce a “maturity framework” designed to make explicability actionable. Rather than treating ethics as a checkbox exercise or a set of abstract ideals, the framework outlines incremental steps that organizations can, and perhaps should, take, based on their technological sophistication and the complexity of the AI models they employ.
The framework benefits from inherent adaptability, acknowledging from the start that a “one-size-fits-all” solution is neither realistic nor desirable. Instead, it offers a pathway tailored to different institutional contexts, encouraging continuous improvement over time. Among its recommended practices are adopting interpretable AI models, whose inner workings can be readily understood by humans; creating internal ethics committees; and conducting regular audits for bias and fairness.
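As one illustration of what an “interpretable model” can mean in practice, the sketch below fits a logistic regression whose coefficients can be read directly. The feature names and synthetic data are hypothetical, chosen only to show the idea; the study itself does not prescribe this particular model.

```python
# Illustrative sketch only: an interpretable credit-approval model whose
# coefficients a human can inspect. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]

# Hypothetical (standardised) applicant data and approval labels.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient states how a one-unit change in a feature shifts the
# log-odds of approval -- a decision rule humans can read and challenge.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
```

The point is not that simple models are always sufficient, but that a model whose decision logic is legible supports all three dimensions the researchers name: it can be seen, understood, and held to account.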
More information:
Sam Solaimani et al, Beyond the black box: operationalising explicability in artificial intelligence for financial institutions, International Journal of Business Information Systems (2025). DOI: 10.1504/IJBIS.2025.146837
Citation: New framework guides ethical use of AI in financial decision-making (2025, July 1), retrieved 1 July 2025.