AI in Loan Allocation

In the rapidly evolving landscape of financial technology, AI-based decision-making algorithms have emerged as powerful tools in many domains, including loan allocation. However, in this paper my goal is to explain why sole dependence on AI-driven algorithms for loan allocation decisions raises ethical problems. These challenges include bias, epistemic harm, the opaque nature of the algorithms, inadequate assessment of applicants, and limits on transparency imposed by financial institutions.

Consider This Scenario

YVR Bank, a financial institution, seeks to improve efficiency by implementing an advanced AI system for evaluating loan applications. The system, trained on historical data, considers factors such as credit history, income, and spending patterns to determine creditworthiness. Now consider a specific scenario in which a couple, Jack and Taylor, apply for a home loan. Although Jack, an engineer with a stable job, and Taylor, a freelance music composer, both have good individual credit scores, the AI system, biased toward traditional employment structures, categorizes Taylor’s freelance income as “less stable.” This judgment drags down Taylor’s financial profile and lowers the couple’s combined creditworthiness. Consequently, when the application is rejected, or approved on less favorable terms than anticipated, Jack and Taylor express frustration and question the fairness of the system; they believe their overall financial stability and ability to repay the loan were not accurately considered. Initially, the bank explains that it cannot provide a detailed explanation for the decision, citing the confidentiality of its “AI system,” thereby depriving the couple of their right to an explanation and a justification of fairness. This lack of transparency can be seen as a form of epistemic harm. After several requests, the bank agrees to shed light on the decision, but the explanation given is vague, unclear, and unconvincing, as even the engineers interpreting the AI’s decision-making process are unsure themselves. Now consider the implications if this were to occur on a larger scale: if a significant number of loan applications were rejected, the result could be a credit crunch in which both businesses and individuals struggle to secure financing.

Biases

AI systems rely on training data provided by humans, and this data can be biased or discriminatory, which creates a risk of unfair loan denials for individuals from marginalized or underrepresented groups. A simplified model might inadvertently overlook essential context such as historical financial hardship or structural inequality, significantly affecting loan eligibility, and deeming certain indicators irrelevant could lead to approving loans for financially incapable applicants or denying loans to deserving candidates on the basis of an oversimplified assessment. Biases enter AI systems because the data used to train these algorithms is typically sourced from human activity, whether through direct input or collected via human-designed systems (Ntoutsi et al., 2020); any existing human biases can therefore be transferred into the AI systems, and they may even be magnified by the complex dynamics of sociotechnical systems. The data, which reflects human prejudices, serves as the input to AI algorithms, influencing their decisions and potentially reinforcing those biases further. In lending, an AI system can become biased by learning from data that inadvertently captures historical discrimination or socially constructed disparities. For instance, if the training data includes patterns in which certain neighborhoods, which are highly correlated with race, were denied loans, the AI may learn to do the same. Because neighborhood can act as a proxy for race, the model can perpetuate a systemic bias in which people from those neighborhoods are unfairly judged less creditworthy. This form of bias arises even if race is not an explicit feature in the model, because correlated features can serve as proxies for sensitive attributes. Such indirect discrimination can lead to unfair outcomes, such as higher denial rates or less favorable loan terms for individuals from certain demographic groups.
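To make the proxy mechanism concrete, the following minimal Python sketch, built entirely on hypothetical column names and a tiny synthetic table rather than any real bank’s data, shows how an auditor could surface both the approval-rate disparity between groups and the strength of association between a candidate proxy (neighborhood) and a sensitive attribute (race).

```python
# Minimal, illustrative sketch of a proxy-bias audit on a hypothetical loan
# dataset. The column names (neighborhood, race, approved) and the tiny
# synthetic table are assumptions for illustration, not a real bank's schema.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency


def approval_rates_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of approved applications within each group."""
    return df.groupby(group_col)["approved"].mean()


def proxy_strength(df: pd.DataFrame, proxy_col: str, sensitive_col: str) -> float:
    """Cramér's V between a candidate proxy feature and a sensitive attribute.

    Values near 1 mean the proxy almost perfectly encodes the attribute, so
    removing the attribute from the model does not remove the information.
    """
    table = pd.crosstab(df[proxy_col], df[sensitive_col])
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))


# Tiny synthetic example: neighborhood perfectly tracks race, and approvals
# differ by neighborhood, so the disparity survives even if race is unused.
df = pd.DataFrame({
    "neighborhood": ["A", "A", "A", "B", "B", "B"],
    "race":         ["x", "x", "x", "y", "y", "y"],
    "approved":     [1,    1,   1,   0,   0,   1],
})
print(approval_rates_by_group(df, "race"))        # x: 1.00 vs. y: 0.33
print(proxy_strength(df, "neighborhood", "race")) # 1.0: a perfect proxy
```

In this toy setting the sensitive attribute never needs to enter the model for the disparity to appear; the proxy carries the information on its own, which is exactly why audits have to look at outcomes, not only at the feature list.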

Epistemic Harm

When an applicant is denied a loan, they have every right to seek an explanation and a justification of fairness. Withholding information in financial decision-making can reinforce power imbalances in several ways: when individuals or groups are not given crucial financial information, they cannot negotiate or advocate for themselves effectively. Denying applicants an explanation of the AI’s decision-making process in banking can therefore constitute an epistemic harm, because it undermines their ability to understand and potentially challenge the decisions made about them. Borrowing from the concepts of epistemic injustice and opacity discussed by Symons and Alvarado (2022), when banks use algorithmic decision-making tools and refuse to provide transparency, they strip applicants of their epistemic agency, that is, their capacity as knowers and decision-makers. This opacity denies individuals the “relevant terms, criteria, words, ideas, etc. necessary to understand or articulate that a harm has been done to them,” as the paper articulates. Without comprehension of the decision-making process, individuals cannot assess the fairness or accuracy of a decision, nor can they correct misunderstandings or biases that may be embedded in the AI’s algorithms. This creates a situation in which the bank’s AI is treated as a more credible source than the applicants themselves, diminishing the applicants’ epistemic status and reinforcing a testimonial epistemic harm. Moreover, the refusal to provide explanations can amount to a kind of “benevolent condescension” when financial institutions assume that applicants would not understand the complex processes involved, which further erodes their epistemic standing.

Black Box Problem

The term “black box” in AI refers to systems whose decision-making process is not transparent or interpretable to the user. This typically happens with complex algorithms, especially deep learning models, where it is difficult to trace how the AI arrived at a particular decision or output: their computations involve many layers and nonlinear transformations that are not intuitively understandable to humans, not even to their developers. Various interpretability and explainability techniques, such as Layer-wise Relevance Propagation (LRP), have been developed to make AI systems more transparent. Nonetheless, these techniques fall short because they typically address only parts of the model, leading to potentially misleading interpretations that fail to provide a complete picture of how decisions are made (Carabantes, 2020). For instance, neural networks with many layers carry out highly abstract computations that are not readily expressible in a form comprehensible to humans. When the internal workings of AI systems are not understandable, it is difficult for users to trust the decisions; without insight into how decisions are reached, evaluating their reliability, fairness, or bias is problematic, which is essential in high-stakes applications like loan decisions. In the event of a malfunction or a poor decision by an AI algorithm, the ability to audit and trace the decision-making process is vital for accountability, and black-box models hinder the identification of the root causes of errors, making it hard to hold any entity responsible. One solution could be to implement explainable AI (XAI) models to interpret the decisions. However, this field is fairly new, and relying excessively on XAI models introduces challenges and risks that must be addressed to ensure trust and robustness (Boukherouaa et al., 2021). Despite offering transparency, these models may struggle with reliability and validity compared to human evaluators.
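As a rough illustration of this opacity, the sketch below, which uses synthetic, loan-like data and hypothetical features rather than any real bank’s system, trains a small neural network whose only outward artifacts are a bare accept/reject label and stacks of raw weight matrices.

```python
# Minimal sketch of the black-box point, under illustrative assumptions: a
# small neural network is trained on synthetic, loan-like data, and the only
# artifact available for inspection is a stack of raw weight matrices.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(650, 80, n),         # hypothetical credit score
    rng.normal(60_000, 20_000, n),  # hypothetical annual income
    rng.uniform(0.0, 1.0, n),       # hypothetical debt-to-income ratio
])
# Synthetic ground truth: approve when the score is high and debt is low.
y = ((X[:, 0] > 640) & (X[:, 2] < 0.6)).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
)
model.fit(X, y)

applicant = np.array([[700.0, 45_000.0, 0.4]])
print("decision:", model.predict(applicant))  # a bare accept/reject label

# The network's entire internal "reasoning" is these weight matrices:
weights = model[-1].coefs_
print("weight matrix shapes:", [w.shape for w in weights])
# Thousands of real numbers with no direct mapping to reasons an applicant
# could act on, such as "freelance income judged less stable".
```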

Transparency of Algorithms

Banks may also deliberately limit transparency: the methods, data, and specifics of their algorithms are often kept confidential for competitive advantage, which limits external scrutiny and accountability, and such a lack of oversight can undermine trust in the financial institution. If people lack the necessary information, they cannot effectively advocate for themselves or challenge decisions made by those in positions of power (i.e., banks). Banks may limit transparency about their decision-making processes for a variety of reasons. One primary motivation is the protection of proprietary information: the algorithms used for financial decisions such as credit scoring and loan approval are often considered intellectual property, and revealing their details could compromise a bank’s competitive advantage, since competitors could replicate or counteract the strategies being used (Boscoe, 2019). There is also a concern that efforts to make algorithms understandable could inadvertently weaken their efficiency or accuracy, as simplification might leave out nuanced decision-making steps that are key to their performance (Boscoe, 2019). Finally, transparency may be limited because of regulatory constraints and the potential for increased liability. Exposing algorithms to external scrutiny might invite legal challenges, especially if the algorithms inadvertently reflect bias or discrimination; rather than risk lawsuits or regulatory action, banks may opt to keep the inner workings of their decision processes opaque. This, however, raises ethical questions and concerns about fairness, as it is difficult to evaluate and ensure the neutrality of these algorithms without transparency.

Effects of Mass Loan Denial

The concerns I highlighted earlier, including biases, the potential lack of explanation, the opaque nature of AI-based algorithms, inadequate tools for interpreting AI decision-making processes, and banks restricting transparency, can compound and lead to unjust loan assessments. These outcomes extend beyond the immediate impact on individuals’ loan eligibility and can yield far-reaching consequences. Repeated loan rejections not only create obstacles to future borrowing but also signal financial instability, or desperation for credit, to potential lenders, a perception that further exacerbates the challenge of obtaining financing. Moreover, the cumulative effect of numerous loan rejections can contribute to a credit crunch, in which businesses and individuals struggle to access much-needed capital. The ripple effect of reduced consumer spending can reverberate through the business sector, leading to diminished revenues and potentially triggering job losses. The difficulty of obtaining loans can also significantly affect the housing market, as decreased demand for mortgages may drive property values down. Even in the best case, where an individual manages to secure a loan after facing rejections, the lender may offer less favorable terms, such as higher interest rates, to mitigate perceived risk, further compounding the financial burden on borrowers and creating a cycle of economic adversity.

Possible Solutions & Their Implications

There are potential solutions to each of these problems. To mitigate bias, banks must ensure that the training data used to develop AI models is diverse and representative of the population the models are meant to serve; bias can arise when training data is skewed or does not adequately capture the diversity of individuals and scenarios. Banks should also implement robust techniques for detecting and evaluating bias in AI models, which involves regularly auditing models to identify disparities in outcomes across demographic groups; bias detection tools can help assess the fairness of the algorithms. In addition, fairness constraints or objectives can be built into the learning algorithm itself. Fairness can be defined in many ways, such as ensuring equal predictive performance across groups or enforcing demographic parity in the outcomes (Ntoutsi et al., 2020).

To address the black box problem, several strategies focus on post-hoc analysis and correction. Regular audits are a fundamental approach, involving periodic examination of the AI system’s decisions to identify and rectify biases. Another technique is outcome-based adjustment, which allows post-processing modifications if decisions disproportionately affect specific groups; this can be achieved through methods such as disparate impact analysis, applied in both the pre- and post-processing stages (Ntoutsi et al., 2020). Techniques within the machine learning algorithms themselves can also tackle bias directly during the learning process, for example by modifying the original data distribution: altering class labels near decision boundaries, assigning different weights to instances, or carefully sampling from each group to achieve balance. These interventions aim to mitigate biases inherent in the AI system. Explainability techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are also recommended for enhancing transparency and interpretability; they generate post-hoc explanations for black-box models, offering insights into individual predictions (Gramegna & Giudici, 2021), as sketched at the end of this section.

Furthermore, decision accuracy can be improved in a Hybrid Intelligence (HI) system by strengthening collaboration between AI and humans through interpretability and complementarity. Fleisher (2022) emphasizes the need for a balanced approach that integrates human judgment and expertise alongside XAI models. This dual approach supports fair, ethical, and accurate loan allocation, acknowledging the limitations of relying solely on AI systems in contexts that demand transparency and trustworthiness. Lastly, banks should invest in reskilling and upskilling programs to prepare the workforce for roles that complement AI technologies. Training humans to interpret AI recommendations enhances decision accuracy by actively involving them in the decision-making process (Hemmer et al., 2021): humans trained to interact dynamically with the AI can better understand its logic, use their own expertise to evaluate its recommendations, and decide when it is appropriate to override its suggestions. This collaboration between human intuition and AI’s data-driven approach can improve Complementary Team Performance (CTP), contributing to more accurate and reliable decisions in Hybrid Intelligence systems.
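As a rough illustration of the kind of post-hoc, per-applicant explanation mentioned above, the sketch below assumes a tree-based model, synthetic data, hypothetical feature names, and the availability of the shap package; it attributes a single simulated decision to individual features rather than prescribing any particular production setup.

```python
# Sketch of a post-hoc, per-applicant explanation with SHAP, assuming a
# tree-based credit model and the availability of the shap package. The
# features, data, and model are illustrative assumptions, not a production
# credit pipeline.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.normal(650, 80, n),      # hypothetical credit score
    rng.normal(0.35, 0.15, n),   # hypothetical debt-to-income ratio
    rng.integers(0, 2, n),       # hypothetical 1 = salaried, 0 = freelance
])
y = ((X[:, 0] > 640) & (X[:, 1] < 0.45)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Explain a single application: which features pushed it toward approval
# or denial, and by how much (in the model's log-odds space).
explainer = shap.TreeExplainer(model)
applicant = X[:1]
contributions = np.ravel(explainer.shap_values(applicant))

feature_names = ["credit_score", "debt_to_income", "salaried"]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
# Positive values pushed the model toward approval, negative values toward
# denial. This is an approximation of the model's behaviour, not proof that
# the underlying decision process is fair.
```

Such an explanation gives applicants and auditors something concrete to contest, but it approximates the model rather than revealing its full logic, which is why the hybrid human-AI oversight described above remains necessary.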

Works Cited

Fleisher, W. (2022). Understanding, Idealization, and Explainable AI. Episteme, 1–27. https://doi.org/10.1017/epi.2022.39

Boukherouaa, E. B., Shabsigh, G., AlAjmi, K., Deodoro, J., Farias, A., Iskender, E. S., Mirestean, A., & Ravikumar, R. (2021). Powering the digital economy: Opportunities and risks of artificial intelligence in finance. Departmental Papers, 2021(024). https://doi.org/10.5089/9781589063952.087.a001

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., & Broelemann, K. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356

Boscoe, B. (2019). Creating Transparency in Algorithmic Processes. Delphi - Interdisciplinary Review of Emerging Technologies, 2(1), 12–22. https://doi.org/10.21552/delphi/2019/1/5

Gramegna, A., & Giudici, P. (2021). SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.752558

Hemmer, P., Schemmer, M., Vössing, M., & Kühl, N. (2021). Human-AI Complementarity in Hybrid Intelligence Systems: A Structured Literature Review. PACIS 2021 Proceedings. https://aisel.aisnet.org/pacis2021/78/