Artificial intelligence is rapidly reshaping how we assess risk and grant credit, especially for small businesses. The rise of impact-based AI is changing the rules of lending, using behavioral and third-party data to judge financial worthiness. That’s not just a tech shift—it’s a governance earthquake.
As professionals entrusted with security, compliance, and strategy, CISOs and board members must now wrestle with questions far beyond firewalls and frameworks:
Can we trust algorithms that we can’t explain?
Who is accountable when the system makes a discriminatory call?
Are we protecting the very businesses we claim to serve?
Let’s break this down.
Traditional credit models rely on what you’d expect: income, credit history, payment behavior. These are regulated, auditable, and (mostly) explainable.
Impact-based AI, however, pulls in a sea of signals—location data, device behavior, browsing patterns, third-party app interactions. It’s powered by machine learning, making it adaptive and dynamic... but also opaque (Hurley & Adebayo, 2017).
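To make the contrast concrete, here is a minimal sketch. Everything in it is invented for illustration (the feature names, weights, and thresholds are hypothetical, not drawn from any real lender): a traditional scorecard whose every input is disclosed and auditable, next to a behavioral score built from signals the applicant may never see.

```python
# Hypothetical sketch: an auditable traditional scorecard vs. a
# behavioral score from passively collected signals.
# All features, weights, and cutoffs are invented for illustration.

def traditional_score(income: float, on_time_payment_rate: float,
                      years_of_credit_history: float) -> float:
    """Every input is disclosed, regulated, and explainable to the applicant."""
    return (0.4 * min(income / 100_000, 1.0)
            + 0.4 * on_time_payment_rate
            + 0.2 * min(years_of_credit_history / 10, 1.0))

def behavioral_score(signals: dict) -> float:
    """Inputs come from trackers and third-party apps; the applicant
    typically cannot see, verify, or contest them."""
    weights = {                            # opaque, model-learned weights
        "night_browsing_ratio": -0.3,
        "device_change_frequency": -0.2,
        "app_engagement_index": 0.5,
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

applicant_signals = {"night_browsing_ratio": 0.8,
                     "device_change_frequency": 0.6,
                     "app_engagement_index": 0.4}

print(round(traditional_score(60_000, 0.95, 4), 3))
print(round(behavioral_score(applicant_signals), 3))
```

The point of the contrast: the first function can be read, audited, and explained line by line; the second depends on weights a model learned from data the borrower never consented to in any meaningful sense.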
This isn’t hypothetical. These systems already influence lending decisions for small businesses, especially those with “thin” credit files. That’s a game-changer for underserved entrepreneurs. But it comes with major caveats:
Behavioral AI requires massive, intimate datasets, often collected passively through trackers, plugins, and third-party sources (Solove & Schwartz, 2021). Many users aren’t aware of what’s being collected, much less how it’s being used to determine their financial fate.
These systems also risk replicating historical and structural bias. As Virginia Eubanks (2018) warns, data-driven tools can “automate inequality,” reinforcing social hierarchies under the guise of objectivity.
Then there is the cybersecurity threat. AI systems are vulnerable to adversarial inputs (small, deliberately crafted changes in data that manipulate outcomes) and to poisoned training datasets. For small business applicants, the stakes are high, while the legal and technical resources to fight back are low.
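The adversarial risk can be shown in a few lines. In this hypothetical sketch (the model, weights, and numbers are all invented), a tiny nudge to each input, aligned with the model's weights in the style of gradient-based evasion attacks, flips a linear credit model's decision while leaving the data looking essentially unchanged.

```python
# Hypothetical sketch: an adversarial input against a linear credit model.
# A small perturbation aligned with the model's weights flips the decision.

weights = [0.9, -0.7, 0.4]          # invented model weights
threshold = 0.5                      # approve if score >= threshold

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

def decide(x):
    return "approve" if score(x) >= threshold else "deny"

honest = [0.6, 0.3, 0.2]            # legitimate applicant features
print(decide(honest))                # score 0.41: below threshold

# Craft a perturbation in the direction of the weights, bounded by
# epsilon per feature (the "sign trick" used in evasion attacks).
epsilon = 0.12
adversarial = [xi + epsilon * (1 if w > 0 else -1)
               for xi, w in zip(honest, weights)]
print(decide(adversarial))           # same applicant, flipped outcome
```

A 0.12 shift per feature is barely visible in the data, yet it is enough to cross the cutoff; real models are higher-dimensional, which generally makes such attacks easier, not harder.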
In July 2025, the U.S. passed the One Big Beautiful Bill Act as part of a federal reconciliation package. The original version included a shocking provision:
🛑 A ten-year moratorium on state and local AI regulation.
That would have gutted state laws regulating algorithmic bias, transparency, and accountability in commercial AI—particularly in high-impact sectors like lending, housing, and hiring (Brenner & Slowik, 2025).
Fortunately, the Senate struck that provision in a near-unanimous vote before passage, preserving state authority to regulate AI (Van Demark & Bank, 2025). California's AB 331 and Colorado's SB 205 are examples of state-driven innovation in AI ethics, mandating audits, anti-discrimination standards, and more.
🟢 That’s a win for civil rights and democratic governance.
🔴 But it leaves businesses navigating a regulatory patchwork nightmare.
Let’s contrast this with the European Union’s GDPR, particularly Article 22, which restricts fully automated decisions with significant legal effects, like denying a loan. GDPR requires:
A valid legal basis (explicit consent, contractual necessity, or authorization by law) before such a decision may be fully automated.
Safeguards for the data subject, including the right to obtain human intervention, to express their point of view, and to contest the decision.
Meaningful information about the logic involved, disclosed to the data subject.
That’s the kind of structured governance the U.S. lacks. We have sector-specific rules (like ECOA) but no comprehensive federal AI framework. Consumers in the U.S. have limited recourse, and businesses face fragmented compliance demands.
Still, GDPR isn’t without cost: it is compliance-heavy. But it fosters trust, clarity, and enforceable rights, all essential in high-stakes decisions (Wachter, Mittelstadt, & Floridi, 2017).
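As a sketch of what an Article 22-style safeguard looks like in practice (a hypothetical pipeline, not real compliance code; the names and cutoff are invented), an adverse automated decision is blocked from becoming final until a human reviews it, and the reasons are preserved so the applicant can contest them.

```python
# Hypothetical sketch of an Article 22-style safeguard: a fully
# automated adverse decision is routed to human review rather than
# taking effect immediately.
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                          # "approve" or "deny"
    reasons: list = field(default_factory=list)
    needs_human_review: bool = False

def automated_decision(score: float) -> Decision:
    """Invented cutoff for illustration only."""
    if score >= 0.5:
        return Decision("approve", ["score above cutoff"])
    return Decision("deny", ["score below cutoff"])

def article22_gate(decision: Decision) -> Decision:
    """Adverse automated outcomes must not be final: flag them for a
    human reviewer and keep the reasons so the applicant can contest."""
    if decision.outcome == "deny":
        decision.needs_human_review = True
    return decision

final = article22_gate(automated_decision(0.41))
print(final.outcome, final.needs_human_review, final.reasons)
```

The design choice worth noting: the gate sits outside the model, so it works the same whether the underlying score comes from a transparent scorecard or an opaque behavioral model.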
Let’s be blunt: AI isn’t coming. It’s already here. The challenge is governance.
For CISOs: treat AI models as part of the attack surface. Vet vendors, demand explainability, and monitor for adversarial manipulation and poisoned training data.
For Boards: own accountability for algorithmic outcomes. Ask who audits these systems, how bias is measured, and what recourse denied applicants have.
Impact-based AI can uplift, but without guardrails, it can just as easily entrench injustice.
The failure of OBBBA’s moratorium is a call to action—not complacency.
As a panel, we invite your commentary.
📩 Add your voice. Let’s shape the governance future we want—before it’s too late.
Brenner, G., & Slowik, J. (2025). Big Beautiful Bill leaves AI regulation to states and localities… for now. Law and the Workplace. https://www.lawandtheworkplace.com/2025/07/big-beautiful-bill-leaves-ai-regulation-to-states-and-localities-for-now/
Brundage, M., Avin, S., Wang, J., Krueger, G., Hadfield, G., Khlaaf, H., ... & Amodei, D. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv:2004.07213.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
European Parliament and Council. (2016). General Data Protection Regulation (EU Regulation 2016/679). https://eur-lex.europa.eu/eli/reg/2016/679/oj
Hurley, M., & Adebayo, J. (2017). Credit scoring in the era of big data. Yale Journal of Law and Technology, 18(1), 148–216.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Solove, D. J., & Schwartz, P. M. (2021). Information privacy law (7th ed.). Aspen Publishing.
Van Demark, D., & Bank, R. (2025). No state AI law moratorium in One Big Beautiful Bill Act. McDermott Will & Emery. https://www.mwe.com/insights/no-state-ai-law-moratorium-in-one-big-beautiful-bill-act/
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005