Artificial Intelligence in Finance and the Case for Regulatory Transparency

By Brooklyn Siedlecki ’27

Artificial intelligence has rapidly become embedded in the infrastructure of American finance. By 2023, JPMorgan Chase had begun integrating large-language-model tools into risk management and trading analysis, part of a broader industry movement toward algorithmic underwriting, automated fraud detection, and AI-driven customer interaction. At the same time, the Consumer Financial Protection Bureau (CFPB) issued increasingly forceful warnings that creditors cannot evade federal anti-discrimination laws simply because they rely on “black-box” models. The agency emphasized that the Equal Credit Opportunity Act (ECOA), enacted in 1974, continues to require lenders to provide “specific and accurate reasons” for adverse credit actions, to prohibit discrimination based on protected characteristics such as sex, race, and age, and to ensure those explanations are meaningful to consumers, even when the underlying decisions are produced by complex algorithms. (1) By reaffirming ECOA’s nondiscrimination requirements in the context of machine-learning models, the CFPB made clear that technological complexity does not diminish lenders’ legal accountability. Together, these developments showcase both the transformative potential and the emerging dangers of AI-driven financial systems.

U.S. financial regulation was designed for a world in which human judgment guided financial decisions, making it possible to identify who was responsible. It is not well equipped for an environment in which AI systems execute trades, assess risk, and make lending decisions autonomously. Today’s machine-learning models challenge the core assumption that regulators can understand and trace how financial decisions are made, the assumption on which disclosure requirements, supervisory expectations, anti-discrimination rules, and market-stability safeguards all rest. Although statutes such as ECOA, the Fair Credit Reporting Act (FCRA), the Securities Exchange Act of 1934, and the Dodd-Frank Act technically apply to algorithmic conduct, their enforcement mechanisms presuppose transparency and stable model behavior, conditions that modern AI rarely satisfies. To restore accountability, Congress and financial regulators must adopt an explicit AI Transparency Mandate: a requirement that financial institutions document and disclose the logic, risks, and data sources underlying the AI systems that increasingly govern credit access and market activity.

Artificial intelligence is now woven into nearly every aspect of the financial sector. AI underwriting models, used by firms such as Upstart and Zest AI, promise more individualized credit assessments by leveraging machine-learning algorithms capable of detecting patterns in borrower behavior that conventional credit scores overlook. Automated trading systems, such as BlackRock’s Aladdin platform and Goldman Sachs’ algorithmic tools, process massive data sets to make nearly instant investment decisions. Banks deploy machine-learning systems, models trained on vast histories of transaction data to distinguish normal from abnormal behavior, to detect fraud, monitor suspicious transactions, and flag compliance risks. Yet the legal framework governing these systems was designed decades before such technologies existed. ECOA prohibits discrimination in credit and requires lenders to provide “specific reasons” when denying credit. (2) FCRA mandates accuracy and transparency in credit reporting. (3) In the securities markets, the Securities Exchange Act of 1934 gives the Securities and Exchange Commission (SEC) authority over market manipulation and broker-dealer conduct. (4) The Dodd-Frank Act of 2010 created systemic-risk oversight following the financial crisis, but it does not meaningfully address algorithmic or autonomous decision-making. (5) The principal supervisory guidance addressing “models” in finance, OCC Bulletin 2011-12 (issued by the Federal Reserve as SR 11-7), predates current machine-learning techniques and assumes that models are stable and documentable. (6) Contemporary AI models, by contrast, adapt continuously and resist full decomposition.

ECOA and its implementing regulation, Regulation B, require lenders to provide clear, understandable explanations for adverse credit decisions. (7) This requirement assumes that a lender can meaningfully explain the factors leading to a denial. Modern systems, however, often cannot be reverse-engineered into explanations because of their complexity and non-linear structure. In 2022, the CFPB clarified that lenders cannot comply with ECOA merely by pointing to a “complex algorithm” as the basis for their decisions. (8) While important, this guidance stops short of establishing mechanisms for verifying that lenders can provide adequate reasons in the first place. Without regulatory access to model documentation and training data, enforcement remains incomplete.
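
To see what the statute demands in concrete terms, consider a minimal sketch of how a lender using a simple, interpretable scoring model could derive the “specific reasons” Regulation B contemplates. Every name, weight, and threshold below is a hypothetical illustration, not any lender’s actual system:

```python
# Minimal sketch: deriving adverse-action "reason codes" from a simple,
# interpretable linear credit model. All names, weights, baselines, and
# applicant data are hypothetical illustrations.

# Score = intercept + sum(weight[f] * value[f]); higher is better.
WEIGHTS = {
    "debt_to_income": -40.0,          # higher DTI lowers the score
    "years_of_credit_history": 3.0,   # longer history raises it
    "recent_delinquencies": -25.0,
    "credit_utilization": -30.0,
}
INTERCEPT = 120.0
APPROVAL_THRESHOLD = 100.0

# Population averages used as the comparison baseline for explanations.
BASELINE = {
    "debt_to_income": 0.30,
    "years_of_credit_history": 8.0,
    "recent_delinquencies": 0.2,
    "credit_utilization": 0.35,
}

REASON_TEXT = {
    "debt_to_income": "Debt-to-income ratio too high",
    "years_of_credit_history": "Length of credit history too short",
    "recent_delinquencies": "Recent delinquencies on accounts",
    "credit_utilization": "Credit utilization too high",
}

def score(applicant: dict) -> float:
    return INTERCEPT + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list:
    """Rank features by how far they pulled the score below the baseline
    applicant, and return the largest negative contributors as reasons."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASON_TEXT[f] for _, f in negatives[:top_n]]

applicant = {
    "debt_to_income": 0.55,
    "years_of_credit_history": 2.0,
    "recent_delinquencies": 1.0,
    "credit_utilization": 0.80,
}

if score(applicant) < APPROVAL_THRESHOLD:
    print("Adverse action. Principal reasons:")
    for reason in adverse_action_reasons(applicant):
        print(" -", reason)
```

For a linear model like this one, ranking the factors that pulled an applicant’s score down is trivial. For a deep, non-linear model, no comparably faithful derivation may exist, which is precisely the gap the CFPB’s guidance identifies but does not close.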

Securities regulation faces parallel challenges. SEC Rule 15c3-5, the “Market Access Rule,” requires firms to maintain supervisory controls over automated trading systems. (9) These controls were designed for algorithmic trading models that, while fast, were still rule-based and interpretable. The risks of insufficient oversight were illustrated in the 2012 Knight Capital incident, in which a code deployment error triggered a malfunctioning trading algorithm that executed millions of erroneous trades, costing the firm $440 million in under an hour. The SEC fined Knight $12 million for inadequate safeguards. (10) Despite this, nothing in securities law requires firms to test AI models for data drift or emergent behavior. Regulation remains focused on procedural controls rather than on understanding algorithmic decision-making itself.
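
The kind of control the Market Access Rule requires can be sketched in a few lines. The limits, order fields, and logic below are hypothetical assumptions, meant only to show how purely procedural such checks are:

```python
# Minimal sketch of a pre-trade control of the kind Rule 15c3-5
# contemplates: block orders that breach simple financial limits before
# they reach the market. All thresholds and order fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

MAX_ORDER_NOTIONAL = 1_000_000.0   # per-order dollar limit
MAX_DAILY_NOTIONAL = 50_000_000.0  # aggregate daily limit

daily_notional_sent = 0.0

def pre_trade_check(order: Order) -> bool:
    """Return True if the order passes the controls, False to block it."""
    global daily_notional_sent
    if order.quantity <= 0 or order.price <= 0:
        return False  # malformed order, e.g. from a deployment bug
    notional = order.quantity * order.price
    if notional > MAX_ORDER_NOTIONAL:
        return False  # single order too large
    if daily_notional_sent + notional > MAX_DAILY_NOTIONAL:
        return False  # firm-wide exposure limit reached
    daily_notional_sent += notional
    return True

# A runaway algorithm emitting oversized orders is stopped at the gate:
print(pre_trade_check(Order("XYZ", 5_000, 150.0)))    # True: within limits
print(pre_trade_check(Order("XYZ", 500_000, 150.0)))  # False: notional too large
```

A gate like this caps a runaway algorithm’s financial exposure, but it evaluates nothing about whether the model’s trading decisions are sound, which is exactly the limitation the current regime leaves unaddressed.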

Banking regulators have struggled to adapt traditional model-risk frameworks to modern AI. The foundational guidance, SR 11-7, assumes a stable relationship between inputs, model logic, and outputs, conditions rarely met by continuously learning AI systems. (11) Because machine-learning models may shift as data environments change, even small divergences can lead to large-scale errors or discriminatory patterns. Regulators lack standardized audit procedures to monitor such models, and banks lack clear expectations for compliance. The result is a supervisory regime in which regulators can punish failures after the fact but cannot meaningfully evaluate models before they go into use.
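
One candidate for such a standardized audit procedure is a drift metric like the population stability index (PSI), a common measure of how far a model’s current inputs have moved from its training data. The sketch below is illustrative only; the bin edges, data, and alert threshold are assumptions rather than any regulator’s standard:

```python
# Minimal sketch of a drift check using the population stability index
# (PSI). Bin edges, data, and the alert threshold are illustrative.

import math

def psi(expected, actual, bins):
    """PSI between two samples over shared bins; higher = more drift."""
    def shares(sample):
        counts = [0] * (len(bins) - 1)
        for x in sample:
            for i in range(len(bins) - 1):
                if bins[i] <= x < bins[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(sample), 1)
        # Floor each share to avoid log(0) for empty bins.
        return [max(c / total, 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Training-time vs. current debt-to-income ratios (hypothetical data).
training = [0.20, 0.25, 0.30, 0.30, 0.35, 0.35, 0.40, 0.45]
current  = [0.40, 0.45, 0.50, 0.55, 0.55, 0.60, 0.65, 0.70]
bins = [0.0, 0.3, 0.5, 1.0]

drift = psi(training, current, bins)
# A common rule of thumb flags PSI above ~0.25 as material drift.
print(f"PSI = {drift:.2f}", "-> escalate for review" if drift > 0.25 else "-> stable")
```

A supervisor with standing access to such metrics could require escalation before a drifting model causes harm, rather than reconstructing the failure afterward.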

In contrast, the European Union’s (EU’s) Artificial Intelligence Act (2024) classifies credit scoring and algorithmic trading as “high-risk” applications, requiring documentation of model logic, human oversight, and extensive testing. (12) The U.S. has no comparable statute, leaving financial institutions with little guidance and no enforceable transparency obligations. To restore accountability and protect consumers, Congress and federal regulators should adopt a nationwide AI Transparency Mandate for financial systems. States can legislate on their own, as California did with its Transparency in Frontier AI Act, which requires high-capacity AI models to publish safety protocols and conduct risk assessments, but a national AI Transparency Mandate would ensure enforceable guidelines apply uniformly across all states. (13)

The first step would be statutory reform. Congress should amend Dodd-Frank § 165, which authorizes enhanced oversight of systemically important financial institutions, to classify high-risk AI systems as subject to enhanced prudential standards, including mandatory third-party audits, documentation of model logic, and ongoing performance monitoring. Congress should also amend ECOA and Regulation B to explicitly require lenders using algorithmic systems to maintain interpretable model documentation and to provide clear, empirically supported reasons for adverse credit decisions. It could also be helpful to establish a Joint AI Oversight Task Force modeled on the Financial Stability Oversight Council (FSOC), which coordinates federal regulators to identify and mitigate systemic risks to U.S. financial stability. A comparable interagency body could oversee algorithmic risks across the financial system.

Reforms aimed at transparency should emphasize four principles: auditability, explainability, accountability, and confidentiality protection. Auditability means regulators must have access to training data, model logic, and testing records. Explainability means models must produce simplified, understandable decision rationales. Accountability means institutions bear responsibility for model behavior, including discriminatory or unstable outputs, and must take affirmative steps to design and monitor AI systems so that such harms are minimized before they occur. Confidentiality protection means firms’ proprietary information remains secure, disclosed to regulators only through protected channels.
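
What the auditability and confidentiality principles might require in practice can be sketched as a machine-readable disclosure record filed through a secure regulatory portal. The schema and every field name below are hypothetical assumptions, not drawn from any existing rule:

```python
# Minimal sketch of a machine-readable disclosure record a transparency
# mandate might require institutions to file with regulators. The schema
# and every field name are hypothetical, not drawn from any existing rule.

from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    model_id: str
    purpose: str                      # e.g., consumer credit underwriting
    training_data_sources: list      # provenance of training data
    logic_summary: str               # plain-language account of model logic
    known_risks: list                # e.g., discrimination, drift, instability
    fairness_tests: dict             # test name -> most recent result
    human_oversight: str             # who can override the model, and how
    last_audit_date: str

record = ModelDisclosure(
    model_id="underwriting-v4",
    purpose="Unsecured consumer lending decisions",
    training_data_sources=["internal loan performance 2015-2024",
                           "credit bureau tradeline data"],
    logic_summary="Gradient-boosted trees over 212 borrower features; "
                  "top factors: utilization, DTI, delinquency history.",
    known_risks=["proxy discrimination via ZIP-correlated features",
                 "score drift under changing rate environments"],
    fairness_tests={"adverse_impact_ratio": 0.87},
    human_oversight="Credit officers may override denials with documented cause.",
    last_audit_date="2025-06-30",
)

# Serialized for submission through a secure regulatory disclosure portal.
print(json.dumps(asdict(record), indent=2))
```

Standardizing even a minimal record like this would give examiners a common starting point for audits while keeping the underlying model weights and code out of public view.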

The rapid integration of artificial intelligence into financial services has outpaced the legal frameworks designed to ensure fairness and stability. Existing laws technically apply to AI, but they presume a world of human decision-makers capable of explaining their choices. Modern AI systems, particularly complex machine-learning models, undermine this foundation. To maintain trust in the financial system, regulators must be able to understand the logic and behavior of the models themselves. That requires an AI Transparency Mandate: a clear, enforceable, cross-agency framework that establishes algorithmic accountability while preserving room for innovation. As financial institutions increasingly rely on models that make billions of decisions each day, the legal system must evolve to ensure that those decisions remain explicable and worthy of public confidence.

Endnotes

1. Consumer Financial Protection Bureau, “CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms,” press release, May 26, 2022, https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/.

2. Equal Credit Opportunity Act, 15 U.S.C. § 1691 et seq. 

3. Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq. 

4. Securities Exchange Act of 1934, 15 U.S.C. § 78a et seq. 

5. Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203 (2010). 

6. Office of the Comptroller of the Currency, Supervisory Guidance on Model Risk Management, OCC Bulletin 2011–12, April 4, 2011, https://www.occ.gov/news-issuances/bulletins/2011/bulletin-2011-12.html.

7. 12 C.F.R. § 1002.9. 

8. Consumer Financial Protection Bureau, Consumer Financial Protection Circular 2022-03, “Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms,” May 26, 2022.

9. Securities and Exchange Commission, “Market Access Rule,” 17 C.F.R. § 240.15c3-5.

10. Securities and Exchange Commission, Order Instituting Administrative and Cease-and-Desist Proceedings, In the Matter of Knight Capital Americas LLC, Release No. 70694 (Washington, D.C.: SEC, October 16, 2013).

11. Office of the Comptroller of the Currency, Supervisory Guidance on Model Risk Management, OCC Bulletin 2011–12. 

12. European Parliament and Council of the European Union, Artificial Intelligence Act, Article 2, accessed December 2025, https://artificialintelligenceact.eu/article/2/.

13. Office of Governor Gavin Newsom, “Governor Newsom Signs SB 53, Advancing California’s World-Leading Artificial Intelligence Industry,” press release, September 29, 2025, https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry/.
