Friction-Led Intelligence · AI Governance Architecture · Working Prototype
Friction-Led Intelligence (FLI) is a governance framework for lending institutions: it surfaces the gap between institutional credit classification and customer-side representation — typed, traced, and not reconstructed after the fact.
What should a credit system record when it rejects an application?
The origin
Systems encode the assumptions of whoever built them.
Inside a large enterprise system, a small backend comment can reveal something larger than a naming convention:
// Field mapping note:
// BUDAT = "Posting Date"
// Not "Budget Date"
String budat = resultSet.getString("BUDAT");
Enterprise platforms often carry the language and assumptions of their original context. A field such as BUDAT derives from the German Buchungsdatum ("posting date"), while developers working elsewhere encounter it as an inherited abbreviation inside critical infrastructure. The assumptions of the original architects are encoded into the system. Everyone downstream navigates them, mostly without noticing.
The lesson is simple: a schema is not neutral. It carries the worldview of whoever designed it — their language, their categories, their blind spots. The structure of a system shapes what it can see — and what it cannot. In credit decisioning, that has a direct consequence: the institution carries a model of the customer built from historical data and prior assumptions. Customer-side evidence may not fit that model. When those two diverge, most systems do not notice.
That gap — silent, unstructured, ungoverned — is what Friction-Led Intelligence is designed to surface.
Financial institutions have sophisticated credit decisioning systems. What they do not yet have is a structured way to govern what happens when the institution’s model of a customer and the customer-side representation diverge.
Today, a credit officer can still notice context that falls outside a rule. As AI agents begin managing applications and credit workflows, decisions will move faster and with less manual review. Without a dedicated governance layer, important differences between the system’s view and the customer’s reality are simply not recorded.
Post-hoc methods reconstruct a probable account of a decision after it is made. They do not record the actual conditions that produced it. In a regulated lending environment, the distinction between approximation and audit trail is a compliance question — not an academic one.
The architecture of a credit rule and the architecture of an ontology share the same shape. Traceability is not bolted on afterwards; it is native to the structure.
Every lending institution maintains an operational model of the customer. Customer-side data and agents can bring a separate representation of financial context. These representations are never identical — and in credit decisioning, that gap is consequential. FLI starts there. Instead of ignoring that divergence, it captures it, types it, and surfaces it as a structured governance signal. Not a failure. A fact.
FLI is a dual-ontology architecture: the institution maintains one structured representation of the customer, while a separate customer-side ontology captures asserted context. When these two representations diverge — and they always diverge, somewhere — the system does not silently decide. It surfaces the conflict as a typed, traceable governance event.
Every time customer-side representation does not match institutional classification, a friction signal is emitted. These signals are structured, typed, and logged — not suppressed. They give risk, compliance, and product teams a concrete record of where the decision architecture may be incomplete.
In FLI, the customer-side ontology is a structured representation of financial context. Today it can be self-declared. In the near future, it may be maintained by a customer’s own AI agent and presented during automated credit workflows. For institutions, that representation becomes an explicit input to governance, review, and policy refinement.
The framework surfaces four types of friction signal, each pointing to a different kind of structural gap:
semantic_scope_gap: The customer profile combination matches no existing rule. The signal points to an uncovered policy case rather than an unexplained rejection.
threshold_context_gap: The application is rejected on numeric thresholds, while customer-side context indicates the rule may be missing material evidence.
semantic_profile_conflict: The customer-side assertion differs from the institutional classification. Two representations of the same applicant require review.
epistemic_conflict: The ML model assigns low risk while the policy rule rejects. The signal exposes an internal conflict between statistical inference and policy logic.
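The taxonomy above can be sketched as a small typed structure. This is an illustrative sketch only, not the prototype's actual classes: the type names follow the signal identifiers used later on this page, while the record fields (application ID, detail, timestamp) are assumptions about what a logged governance event might carry.

```java
import java.time.Instant;

public class FrictionSignalDemo {

    // The four friction types named in the FLI framework.
    enum FrictionType {
        SEMANTIC_SCOPE_GAP,        // profile matches no existing rule
        THRESHOLD_CONTEXT_GAP,     // numeric rejection; context suggests missing evidence
        SEMANTIC_PROFILE_CONFLICT, // customer-side assertion vs. institutional classification
        EPISTEMIC_CONFLICT         // ML risk score disagrees with policy rule
    }

    // A structured, typed governance event (field names are assumptions).
    record FrictionSignal(FrictionType type, String applicationId,
                          String detail, Instant emittedAt) {}

    public static void main(String[] args) {
        FrictionSignal signal = new FrictionSignal(
                FrictionType.EPISTEMIC_CONFLICT,
                "APP-1024",
                "ML model: low risk; policy rule: reject",
                Instant.now());
        // The signal is emitted as a record, not suppressed.
        System.out.println(signal.type() + " on " + signal.applicationId());
    }
}
```

Because each signal is a typed value rather than free text, downstream risk and compliance tooling can filter, count, and route signals by type.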
“Metadata infrastructure is not just about storage and interoperability. It is a system of narrative and values.”
Kadel, A. (2025). Friction-Led Intelligence. TechRxiv. doi:10.36227/techrxiv.175099813
The result is not just an explanation after rejection. It is a traceable record of where the system and the user’s representation diverged.
In a regulated system, this changes what a decision means.
A rejection is no longer just an outcome. It becomes a structured record of where the system may be incomplete.
That record can be reviewed, audited, and improved.
As credit workflows become more automated, institutions need more than model accuracy. They need evidence that decisions are traceable, reviewable, and capable of exposing their own coverage limits.
epistemic_conflict signals surface where the ML risk model accepts an applicant but the policy rule rejects: an internal disagreement that would otherwise pass unnoticed.
semantic_scope_gap and threshold_context_gap signals convert recurring edge cases into structured evidence for updating rule coverage, thresholds, and product policy.
semantic_profile_conflict signals surface where the institution’s classification and the customer’s own representation diverge, identifying blind spots before they become operational, compliance, or reputational risk.
This is not a concept. A working prototype has been built and tested.
FLI is implemented as a working proof of concept. Every component of the architecture exists in code, produces traceable outputs, and is tested against realistic decision scenarios.
The principle is strict: decision logic lives in the ontology, not in application code. Every decision is traceable to exact inputs and conditions. Nothing is reconstructed after the fact.
Example scenario: a freelance UX designer, three years self-employed, with documented customer retainers. The application is evaluated for a FlexiCard.
Two ontologies — the institution’s domain ontology and the customer-side ontology — remain intentionally separate. Customer-side representation participates in the evaluation as a governed input. When the two diverge, the system does not silently decide. It surfaces the conflict as a typed governance signal.
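The divergence check described above can be sketched in a few lines. Everything here is a toy assumption for illustration: the field name, the two classification values, and the flat map representation stand in for the prototype's actual OWL ontologies, but the governance behavior is the same: a mismatch is never resolved silently, it becomes a typed event.

```java
import java.util.Map;
import java.util.Optional;

public class DualOntologyDemo {

    // Institutional classification, e.g. derived from historical data.
    static final Map<String, String> institutional =
            Map.of("employmentStatus", "UNEMPLOYED");

    // Customer-side asserted context, e.g. self-declared or agent-maintained.
    static final Map<String, String> customerSide =
            Map.of("employmentStatus", "SELF_EMPLOYED_FREELANCER");

    // Diverging values are surfaced, never silently overridden.
    static Optional<String> divergence(String field) {
        String a = institutional.get(field);
        String b = customerSide.get(field);
        if (a != null && b != null && !a.equals(b)) {
            return Optional.of("semantic_profile_conflict: " + field
                    + " institutional=" + a + " customer=" + b);
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        divergence("employmentStatus").ifPresent(System.out::println);
    }
}
```

The design choice matters more than the code: neither representation overwrites the other, so the conflict itself remains a first-class, reviewable fact.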
The LLM explanation layer reads stored OWL facts, not a reconstructed narrative. Because the explanation is grounded in the same ontology that made the decision, it cannot introduce facts that were never recorded.
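The grounding constraint can be illustrated with a toy rendering step. This is not the prototype's LLM pipeline; the fact triples and predicate names are invented for the example. The point is that the explanation text is assembled only from facts recorded at decision time.

```java
import java.util.List;

public class GroundedExplanationDemo {

    // A stored fact, as it might be read back from the decision ontology.
    record Fact(String subject, String predicate, String value) {}

    public static void main(String[] args) {
        // Only facts actually recorded at decision time are available.
        List<Fact> stored = List.of(
                new Fact("APP-1024", "matchedRule", "none"),
                new Fact("APP-1024", "frictionType", "semantic_scope_gap"));

        // The explanation is a direct rendering of stored facts; nothing
        // is asserted beyond what the ontology contains.
        StringBuilder explanation = new StringBuilder("Decision basis:");
        for (Fact f : stored) {
            explanation.append(' ').append(f.predicate())
                       .append('=').append(f.value()).append(';');
        }
        System.out.println(explanation);
    }
}
```

In the real architecture an LLM would phrase this more naturally, but the same constraint applies: its input is the stored fact set, not a post-hoc reconstruction.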
FLI does not stop at detection. The governance dashboard below shows what happens next: once a divergence is classified, it enters a review workflow where the institution can intervene, refine ontology coverage, and adjust the policy logic that produced it.
The customer profile falls outside rule coverage or conflicts with institutional classification.
The mismatch is recorded as a governance event: coverage gap, threshold issue, profile conflict, or ML-policy conflict.
The signal enters the admin queue, where governance can update ontology coverage, thresholds, or policy logic.
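The three-step loop above can be sketched as a minimal review queue. The queue, resolution names, and routing rules are assumptions for illustration, not the dashboard's actual logic; they show how a typed signal maps to a concrete governance action.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class GovernanceQueueDemo {

    // Possible policy-side actions a reviewer can record (illustrative names).
    enum Resolution { EXTEND_RULE_COVERAGE, ADJUST_THRESHOLD, RECLASSIFY_PROFILE }

    record QueuedSignal(String type, String applicationId) {}

    public static void main(String[] args) {
        Queue<QueuedSignal> adminQueue = new ArrayDeque<>();
        adminQueue.add(new QueuedSignal("semantic_scope_gap", "APP-1024"));

        // A reviewer drains the queue; each signal type suggests a default action.
        while (!adminQueue.isEmpty()) {
            QueuedSignal s = adminQueue.poll();
            Resolution action = switch (s.type()) {
                case "semantic_scope_gap"     -> Resolution.EXTEND_RULE_COVERAGE;
                case "threshold_context_gap"  -> Resolution.ADJUST_THRESHOLD;
                default                       -> Resolution.RECLASSIFY_PROFILE;
            };
            System.out.println(s.applicationId() + " -> " + action);
        }
    }
}
```

Closing the loop this way is what turns a friction signal from a log entry into an input for policy refinement.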
This is a research prototype — honest about what it is, precise about what it proves.
The architecture is implemented. The signals are generated. The governance loop closes.
The next step is to take this into the field — to work with a financial institution or fintech willing to ask the harder question: not just “does the model work?” but “does the system know where its coverage ends?”
For institutions deploying AI in consequential credit workflows, accuracy is not enough if the system cannot account for its own limits. The question is not only whether a decision can be made, but whether the conditions behind it can be traced, reviewed, and improved.
That is what responsible AI requires in practice: not principles alone, but operational infrastructure that surfaces blind spots before they become unmanaged risk.