Friction-Led Intelligence · AI Governance Architecture · Working Prototype

FLI makes credit decisions traceable, auditable, and contestable.

Friction-Led Intelligence (FLI) is a governance framework for lending institutions: it surfaces the gap between institutional credit classification and customer-side representation — typed, traced, and recorded at decision time rather than reconstructed after the fact.

See how it works

What should a credit system record when it rejects an application?

The observer is the observed.

Systems encode the assumptions of whoever built them.

Inside a large enterprise system, a small backend comment can reveal something larger than a naming convention:

// Field mapping note:
// BUDAT = "Posting Date"
// Not "Budget Date"
String budat = resultSet.getString("BUDAT");

Enterprise platforms often carry the language and assumptions of their original context. A field such as BUDAT abbreviates the German Buchungsdatum ("posting date"), yet developers working elsewhere encounter it only as an inherited abbreviation inside critical infrastructure. The assumptions of the original architects are encoded into the system. Everyone downstream navigates them, mostly without noticing.

The lesson is simple: a schema is not neutral. It carries the worldview of whoever designed it — their language, their categories, their blind spots. The structure of a system shapes what it can see — and what it cannot. In credit decisioning, that has a direct consequence: the institution carries a model of the customer built from historical data and prior assumptions. Customer-side evidence may not fit that model. When those two diverge, most systems do not notice.

That gap — silent, unstructured, ungoverned — is what Friction-Led Intelligence is designed to surface.

Credit decisioning needs a governance layer for representation gaps.

Financial institutions have sophisticated credit decisioning systems. What they do not yet have is a structured way to govern what happens when the institution’s model of a customer and the customer-side representation diverge.

The governance gap

Today, a credit officer can still notice context that falls outside a rule. As AI agents begin managing applications and credit workflows, decisions will move faster and with less manual review. Without a dedicated governance layer, important differences between the system’s view and the customer’s reality are simply not recorded.

The explainability gap

Post-hoc methods reconstruct a probable account of a decision after it is made. They do not record the actual conditions that produced it. In a regulated lending environment, the distinction between approximation and audit trail is a compliance question — not an academic one.

Post-hoc explainability
· Approximates feature importance
· Applied after the decision is made
· Cannot trace the exact rule that fired
· Explanation is reconstructed, not recorded

Ontology-based traceability
· Decision logic lives in the ontology
· Every condition is recorded as it fires
· Audit trail is native to the architecture
· Traceability is not added on — it is the structure

The architecture of a credit rule and the architecture of an ontology are the same shape. The traceability is not added on. It is native.
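
To make the "same shape" claim concrete, here is a minimal sketch in Python with rdflib. The namespace and every class and property name (EligibilityRule, FlexiCardRule, hasMinDisposableIncome, firedRule) are illustrative placeholders, not the prototype's actual vocabulary.

# Minimal sketch: a credit rule, and the decision it produces, recorded as
# ontology facts. All names below are illustrative placeholders.
from rdflib import RDF, Graph, Literal, Namespace

FLI = Namespace("http://example.org/fli#")
g = Graph()

# The rule itself is data in the ontology, not logic buried in application code.
g.add((FLI.FlexiCardRule, RDF.type, FLI.EligibilityRule))
g.add((FLI.FlexiCardRule, FLI.hasMinDisposableIncome, Literal(50000)))

# When the rule fires, the outcome is written into the same graph, so the
# audit trail is the structure itself rather than an add-on.
g.add((FLI.decision_001, RDF.type, FLI.Rejection))
g.add((FLI.decision_001, FLI.firedRule, FLI.FlexiCardRule))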

Friction-Led Intelligence.

Every lending institution maintains an operational model of the customer. Customer-side data and agents can bring a separate representation of financial context. These representations are never identical — and in credit decisioning, that gap is consequential. FLI starts there. Instead of ignoring that divergence, it captures it, types it, and surfaces it as a structured governance signal. Not a failure. A fact.

FLI is a dual-ontology architecture: the institution maintains one structured representation of the customer, while a separate customer-side ontology captures asserted context. When these two representations diverge — and they always diverge, somewhere — the system does not silently decide. It surfaces the conflict as a typed, traceable governance event.
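
A minimal Python sketch of that contract, using dictionary-shaped profiles for brevity; the field names here are invented, and the actual prototype would compare ontology assertions rather than dictionaries.

from dataclasses import dataclass

@dataclass
class GovernanceEvent:
    applicant_id: str
    field: str
    institutional_value: str
    customer_value: str

def compare(applicant_id: str, institutional: dict, customer: dict) -> list:
    # Surface every divergence as a typed event; never resolve one silently.
    return [
        GovernanceEvent(applicant_id, f, institutional[f], customer[f])
        for f in institutional.keys() & customer.keys()
        if institutional[f] != customer[f]
    ]

events = compare(
    "app-001",
    institutional={"employment": "Freelancer", "risk_band": "MediumRisk"},
    customer={"employment": "SelfEmployed", "risk_band": "MediumRisk"},
)
# -> one event: employment diverges (Freelancer vs SelfEmployed)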

01

Friction is data, not failure.

Every time customer-side representation does not match institutional classification, a friction signal is emitted. These signals are structured, typed, and logged — not suppressed. They give risk, compliance, and product teams a concrete record of where the decision architecture may be incomplete.

02

Customer representation becomes a governance input.

In FLI, the customer-side ontology is a structured representation of financial context. Today it can be self-declared. In the near future, it may be maintained by a customer’s own AI agent and presented during automated credit workflows. For institutions, that representation becomes an explicit input to governance, review, and policy refinement.

FLI ontology flow: customer-submitted context enters the customer-side ontology; it is compared with the institutional ontology; when the two diverge, a typed friction signal (a governance event) is emitted; the signal routes into a governance loop for review, intervention, and refinement.

The framework surfaces four types of friction signal, each pointing to a different kind of structural gap; a minimal code sketch of this taxonomy follows the examples below:

Gap in rule coverage
semantic_scope_gap

The customer profile combination matches no existing rule. The signal points to an uncovered policy case rather than an unexplained rejection.

e.g. A freelancer with stable offshore income applies. No rule exists for that employment + income combination. The gap is logged as rule coverage debt.

Missing context at the threshold
threshold_context_gap

The application is rejected on numeric thresholds, while customer-side context indicates the rule may be missing material evidence.

e.g. Disposable income falls ₹3,000 below the threshold, but the customer has three years of documented retainer contracts the rule does not account for.

Two representations of the same person
semantic_profile_conflict

Customer-side assertion differs from institutional classification. Two representations of the same applicant require review.

e.g. The institution classifies the applicant as Freelancer. The customer-side agent asserts SelfEmployed with a registered business. Different risk treatment; same person.

The model and the rule disagree
epistemic_conflict

The ML model assigns low risk while the policy rule rejects the application. The signal exposes an internal conflict between statistical inference and policy logic.

e.g. ML assigns LowRisk. Policy requires Salaried employment. The applicant is a gig worker. The model and the rule disagree, creating a reviewable governance event.
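
Taken together, the four categories form a small, closed taxonomy. A minimal Python sketch: the enum values are taken from the identifiers above, while the dataclass fields and the example are assumptions for illustration.

from dataclasses import dataclass
from enum import Enum

class FrictionType(Enum):
    SEMANTIC_SCOPE_GAP = "semantic_scope_gap"                # no rule covers the profile
    THRESHOLD_CONTEXT_GAP = "threshold_context_gap"          # threshold missed, evidence unconsulted
    SEMANTIC_PROFILE_CONFLICT = "semantic_profile_conflict"  # two representations of one person
    EPISTEMIC_CONFLICT = "epistemic_conflict"                # model and rule disagree

@dataclass
class FrictionSignal:
    signal_type: FrictionType
    applicant_id: str
    detail: str  # what diverged, stated in reviewable terms

signal = FrictionSignal(
    FrictionType.SEMANTIC_PROFILE_CONFLICT,
    "app-001",
    "institution: Freelancer / customer-side agent: SelfEmployed",
)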

“Metadata infrastructure is not just about storage and interoperability. It is a system of narrative and values.”

Kadel, A. (2025). Friction-Led Intelligence. TechRxiv. doi:10.36227/techrxiv.175099813

What this looks like in practice

1
Decision happens
A user applies for a loan and is rejected by an AI-assisted credit system.
2
Two realities are compared
The institution’s structured view is compared with the user’s own financial reality.
3
Friction is logged
If the mismatch affects the decision, FLI records it as a typed friction signal in the audit trail.

The result is not just an explanation after rejection. It is a traceable record of where the system and the user’s representation diverged.

In a regulated system, this changes what a decision means.

A rejection is no longer just an outcome. It becomes a structured record of where the system may be incomplete.

That record can be reviewed, audited, and improved.

Why this matters for banks and fintechs.

As credit workflows become more automated, institutions need more than model accuracy. They need evidence that decisions are traceable, reviewable, and capable of exposing their own coverage limits.

Model risk governance

epistemic_conflict signals surface where the ML risk model accepts an applicant but the policy rule rejects the application — an internal disagreement that would otherwise pass unnoticed.

Audit readiness

The 32-column decision record logs every condition that fired, every threshold checked, every rule evaluated. Not reconstructed after the fact — written at the moment of decision. A sketch of such a record follows this section.

Policy refinement

semantic_scope_gap and threshold_context_gap signals convert recurring edge cases into structured evidence for updating rule coverage, thresholds, and product policy.

Responsible AI deployment

semantic_profile_conflict signals surface where the institution’s classification and the customer’s own representation diverge — identifying blind spots before they become operational, compliance, or reputational risk.
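
What a decision record might look like at write time. This is a hedged sketch only: it shows ten of the 32 columns, and the column names and CSV sink are assumptions, not the prototype's actual schema.

import csv
from datetime import datetime, timezone

COLUMNS = ["timestamp", "applicant_id", "employment_type", "risk_band",
           "matched_rule", "threshold_checked", "threshold_value",
           "observed_value", "outcome", "friction_signal"]

def write_decision_record(path: str, record: dict) -> None:
    # Written at the moment of decision, never reconstructed afterwards.
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=COLUMNS).writerow(record)

write_decision_record("audit_log.csv", {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "applicant_id": "app-001",
    "employment_type": "Freelancer",
    "risk_band": "MediumRisk",
    "matched_rule": "FlexiCardRule",
    "threshold_checked": "disposable_income",
    "threshold_value": 50000,
    "observed_value": 47000,
    "outcome": "rejected",
    "friction_signal": "threshold_context_gap",
})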

Built, not just proposed.

This is not a concept. A working prototype has been built and tested.

FLI is implemented as a working proof of concept. Every component of the architecture exists in code, produces traceable outputs, and is tested against realistic decision scenarios.

The principle is strict: decision logic lives in the ontology, not in application code. Every decision is traceable to exact inputs and conditions. Nothing is reconstructed after the fact.

Worked example — Maya

Freelance UX designer. 3 years self-employed. Documented client retainer contracts. Application evaluated for a FlexiCard.

1
Profile enters
Profile submitted: Freelancer, MediumRisk, disposable income ₹3,000 below the FlexiCard threshold.
2
Institution classifies
Domain ontology: MediumRisk, Freelancer. FlexiCard rule requires disposable income ≥ ₹50,000. Threshold missed. Rejected.
3
Context asserted
Customer-side ontology holds 3 retainer contracts — consistent income that clears the threshold when documented correctly.
4
Signal emitted
threshold_context_gap
Flagged for governance review. 32-column audit record written. Every condition traced to the exact rule that fired.
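
Maya's path through the rule, written out as a sketch. The threshold and income follow the numbers above (₹3,000 below ₹50,000 gives ₹47,000); the evidence check and signal wiring are illustrative assumptions.

FLEXICARD_MIN_DISPOSABLE_INCOME = 50000  # rule requires >= 50,000

def evaluate_flexicard(disposable_income: int, customer_evidence: list):
    if disposable_income >= FLEXICARD_MIN_DISPOSABLE_INCOME:
        return "approved", None
    # Rejected on the threshold. If the customer-side ontology holds evidence
    # the rule never consults, the rejection also emits a friction signal.
    if customer_evidence:
        return "rejected", "threshold_context_gap"
    return "rejected", None

outcome, signal = evaluate_flexicard(
    disposable_income=47000,  # 3,000 below the threshold
    customer_evidence=["retainer_2022", "retainer_2023", "retainer_2024"],
)
# outcome == "rejected", signal == "threshold_context_gap" -> routed to review
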
1
Intake & Semantic Profiling
Application Intake
Customer financial profile enters the evaluation workflow
Domain Ontology
Customer mapped into the institutional knowledge graph
SPARQL Reasoning
Semantic queries extract the structured financial profile (a query sketch follows this overview)
2
Risk Assessment & Decision
Risk Model
ML classifier assigns risk band from historical patterns
Rule Matching
Policy rules evaluated against the semantic profile
Decision Ontology
Outcome recorded as traceable OWL facts — not approximated
3
Governance & Friction Layer
Feedback Loop
Every decision logged to a 32-column audit trail
Friction Signals
Divergence between customer-side evidence and system model is typed and surfaced
Admin Detector
Governance dashboard surfaces patterns for human review
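
A sketch of the Stage 1 profile extraction referenced above, assuming Python with rdflib; the ontology file, prefix, and property names are placeholders rather than the prototype's real vocabulary.

from rdflib import Graph

g = Graph()
g.parse("institutional_ontology.ttl")  # hypothetical ontology file

PROFILE_QUERY = """
PREFIX fli: <http://example.org/fli#>
SELECT ?applicant ?employment ?riskBand ?income WHERE {
    ?applicant fli:hasEmploymentType ?employment ;
               fli:hasRiskBand ?riskBand ;
               fli:hasDisposableIncome ?income .
}
"""

# Each row is one applicant's structured profile, extracted semantically
# rather than read from hard-coded application fields.
profiles = [
    {"applicant": str(row.applicant),
     "employment": str(row.employment),
     "risk_band": str(row.riskBand),
     "income": int(row.income)}
    for row in g.query(PROFILE_QUERY)
]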

Two ontologies — the institution’s domain ontology and the customer-side ontology — remain intentionally separate. Customer-side representation participates in the evaluation as a governed input. When the two diverge, the system does not silently decide. It surfaces the conflict as a typed governance signal.

The LLM explanation layer reads stored OWL facts — not a reconstructed narrative. The explanation is grounded in the same ontology that made the decision, so every statement it produces can be checked against a recorded fact.
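
One plausible shape for that grounding, sketched in Python. The fact names and prompt wording are assumptions, not the prototype's implementation; the point is that the prompt is assembled only from stored decision facts.

def build_explanation_prompt(facts: dict) -> str:
    # The model sees nothing but recorded decision facts, so every sentence
    # of its explanation can be checked against a stored fact.
    fact_lines = "\n".join(f"- {key}: {value}" for key, value in facts.items())
    return (
        "Explain this credit decision to the applicant in plain language.\n"
        "Use ONLY the recorded facts below; do not infer or add anything.\n"
        + fact_lines
    )

prompt = build_explanation_prompt({
    "matched_rule": "FlexiCardRule",
    "threshold": "disposable_income >= 50000",
    "observed": "disposable_income = 47000",
    "outcome": "rejected",
    "friction_signal": "threshold_context_gap",
})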

2 · Separate ontologies — institutional and customer-side — intentionally never merged
4 · Typed friction signal categories surfacing distinct governance gaps
32 · Columns in every decision record — employment type, risk band, matched rule, exact threshold conditions. Fully auditable.
Operational proof

From signal to review

FLI does not stop at detection. The governance dashboard below shows what happens next: once a divergence is classified, it enters a review workflow where the institution can intervene, refine ontology coverage, and adjust the policy logic that produced it.

Prototype governance dashboard showing friction signal counts, breakdowns, and open governance loops awaiting review.
1
Divergence detected

The customer profile falls outside rule coverage or conflicts with institutional classification.

2
Typed signal emitted

The mismatch is recorded as a governance event: coverage gap, threshold issue, profile conflict, or ML-policy conflict.

3
Review loop opened

The signal enters the admin queue, where governance can update ontology coverage, thresholds, or policy logic.
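
The loop itself can be read as a small state machine. A sketch under assumed names; the states and queue shape are illustrative, not taken from the prototype.

from collections import deque
from enum import Enum

class ReviewState(Enum):
    OPEN = "open"                  # signal detected, awaiting review
    UNDER_REVIEW = "under_review"  # governance team investigating
    RESOLVED = "resolved"          # coverage, threshold, or policy updated

review_queue = deque()

def open_review(signal: dict) -> None:
    signal["state"] = ReviewState.OPEN
    review_queue.append(signal)

def resolve(signal: dict, action: str) -> None:
    # e.g. action = "extended rule coverage: freelancer + offshore income"
    signal["state"] = ReviewState.RESOLVED
    signal["action"] = action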

Where this comes from.

This is a research prototype — honest about what it is, precise about what it proves.

Published
TechRxiv preprint, 2025 — DOI 10.36227/techrxiv.175099813. Peer review in progress.
Origin
Built from direct experience inside enterprise systems, where categories, schemas, and decision structures shape how complex human situations are interpreted in practice.
Available for review
Available for institutional review on request. The full prototype: both ontologies, the decision engine, the friction signal pipeline, the 32-column audit log — everything.

Version 1 is working.
Now it needs to run in the real world.

The architecture is implemented. The signals are generated. The governance loop closes.

The next step is to take this into the field — to work with a financial institution or fintech willing to ask the harder question: not just “does the model work?” but “does the system know where its coverage ends?”

The governance layer is the point.

For institutions deploying AI in consequential credit workflows, accuracy is not enough if the system cannot account for its own limits. The question is not only whether a decision can be made, but whether the conditions behind it can be traced, reviewed, and improved.

That is what responsible AI requires in practice: not principles alone, but operational infrastructure that surfaces blind spots before they become unmanaged risk.
