Friction-Led Intelligence · AI Governance Architecture · Working Prototype

FLI makes credit decisions traceable, auditable, and contestable.

Friction-Led Intelligence (FLI) is a governance framework for lending institutions: it surfaces the gap between institutional credit classification and customer-side representation — typed, traced, and not reconstructed after the fact.


Credit decisioning needs a governance layer for representation gaps.

Financial institutions have sophisticated credit decisioning systems. What they do not yet have is a structured way to govern what happens when the institution’s model of a customer and the customer-side representation diverge.

The governance gap

Today, a credit officer can still notice context that falls outside a rule. As AI agents begin managing applications and credit workflows, decisions will move faster and with less manual review. Without a dedicated governance layer, important differences between the system’s view and the customer’s reality are simply not recorded.

The explainability gap

Post-hoc methods reconstruct a probable account of a decision after it is made. They do not record the actual conditions that produced it. In a regulated lending environment, the distinction between approximation and audit trail is a compliance question — not an academic one.

Post-hoc explainability
Approximates feature importance
Applied after the decision is made
Cannot trace the exact rule that fired
Explanation is reconstructed, not recorded
Ontology-based traceability
Decision logic lives in the ontology
Every condition is recorded as it fires
Audit trail is native to the architecture
Traceability is not added on — it is the structure

The architecture of a credit rule and the architecture of an ontology are the same shape. The traceability is not added on. It is native.

Friction-Led Intelligence.

Every lending institution maintains an operational model of the customer. Customer-side data and agents can bring a separate representation of financial context. These representations are never identical — and in credit decisioning, that gap is consequential. FLI starts there. Instead of ignoring that divergence, it captures it, types it, and surfaces it as a structured governance signal. Not a failure. A fact.

FLI is a dual-ontology architecture: the institution maintains one structured representation of the customer, while a separate customer-side ontology captures asserted context. When these two representations diverge — and they always diverge, somewhere — the system does not silently decide. It surfaces the conflict as a typed, traceable governance event.

01

Friction is data, not failure.

Every time customer-side representation does not match institutional classification, a friction signal is emitted. These signals are structured, typed, and logged — not suppressed. They give risk, compliance, and product teams a concrete record of where the decision architecture may be incomplete.
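As a minimal sketch of what a typed, logged friction signal could look like, the field names below are illustrative assumptions, not the prototype's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: field names are assumptions, not the prototype's schema.
@dataclass
class FrictionSignal:
    signal_type: str          # e.g. "semantic_scope_gap", "threshold_context_gap"
    application_id: str
    institutional_view: dict  # what the institutional ontology asserted
    customer_view: dict       # what the customer-side ontology asserted
    emitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

signal = FrictionSignal(
    signal_type="semantic_profile_conflict",
    application_id="APP-001",
    institutional_view={"employment": "Freelancer"},
    customer_view={"employment": "SelfEmployed"},
)
```

Because the signal carries both representations side by side, risk and compliance teams see the divergence itself, not just the decision outcome.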

02

Customer representation becomes a governance input.

In FLI, the customer-side ontology is a structured representation of financial context. Today it can be self-declared. In the near future, it may be maintained by a customer’s own AI agent and presented during automated credit workflows. For institutions, that representation becomes an explicit input to governance, review, and policy refinement.

FLI ontology flow: submitted customer context enters the customer-side ontology (self-representation) and is compared with the institutional ontology (institutional classification); when the two diverge, a typed friction signal (governance event) is emitted and routed into the governance loop for review, intervention, and refinement.

The framework surfaces four types of friction signal, each pointing to a different kind of structural gap:

Gap in rule coverage
semantic_scope_gap

The customer profile combination matches no existing rule. The signal points to an uncovered policy case rather than an unexplained rejection.


e.g. A freelancer with stable offshore income applies. No rule exists for that employment + income combination. The gap is logged as rule coverage debt.

Missing context at the threshold
threshold_context_gap

The application is rejected on a numeric threshold, while customer-side context indicates the rule may be missing material evidence.


e.g. Disposable income falls ₹3,000 below the threshold, but the customer has three years of documented retainer contracts the rule does not account for.

Two representations of the same person
semantic_profile_conflict

Customer-side assertion differs from institutional classification. Two representations of the same applicant require review.


e.g. The institution classifies the applicant as Freelancer. The customer-side agent asserts SelfEmployed with a registered business. Different risk treatment; same person.

The model and the rule disagree
epistemic_conflict

The ML model assigns low risk while the policy rule rejects. The signal exposes an internal conflict between statistical inference and policy logic.


e.g. ML assigns LowRisk. Policy requires Salaried employment. The applicant is a gig worker. The model and the rule disagree, creating a reviewable governance event.
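The routing between the four signal types can be sketched as a single decision function. Everything here is an illustrative assumption (argument names, precedence order), not the prototype's code:

```python
from typing import Optional

# Hypothetical routing logic for the four friction signal types described above.
# Names and the precedence order are illustrative assumptions.
def classify_friction(rule_matched: bool,
                      threshold_missed: bool,
                      customer_evidence: bool,
                      profile_conflict: bool,
                      ml_accepts: bool,
                      policy_rejects: bool) -> Optional[str]:
    if profile_conflict:
        return "semantic_profile_conflict"  # two representations of the same person
    if ml_accepts and policy_rejects:
        return "epistemic_conflict"         # the model and the rule disagree
    if not rule_matched:
        return "semantic_scope_gap"         # no rule covers this profile
    if threshold_missed and customer_evidence:
        return "threshold_context_gap"      # rejection may be missing material evidence
    return None                             # no divergence to surface
```

For instance, a threshold rejection where the customer-side ontology holds documented counter-evidence routes to `threshold_context_gap`, while a profile that matches no rule at all routes to `semantic_scope_gap`.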

“Metadata infrastructure is not just about storage and interoperability. It is a system of narrative and values.”

Kadel, A. (2025). Friction-Led Intelligence. TechRxiv. doi:10.36227/techrxiv.175099813

Why this matters for banks and fintechs.

As credit workflows become more automated, institutions need more than model accuracy. They need evidence that decisions are traceable, reviewable, and capable of exposing their own coverage limits.

Model risk governance

epistemic_conflict signals surface where the ML risk model accepts an applicant but the policy rule rejects — an internal disagreement that would otherwise pass unnoticed.

Audit readiness

The 32-column decision record logs every condition that fired, every threshold checked, every rule evaluated. Not reconstructed after the fact — written at the moment of decision.
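This page does not enumerate the 32 columns, so the fragment below shows only a handful of assumed field names to illustrate a decision-time write, not the prototype's actual record layout:

```python
import csv
import io

# Illustrative fragment of a decision-time audit row. The prototype logs 32
# columns; the field and rule names here are assumptions for illustration.
row = {
    "application_id": "APP-001",
    "rule_id": "FlexiCard_MinDisposableIncome",  # exact rule that fired (assumed name)
    "condition": "disposable_income >= 50000",
    "observed_value": 47000,
    "outcome": "Rejected",
    "friction_signal": "threshold_context_gap",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=row.keys())
writer.writeheader()
writer.writerow(row)  # written at the moment of decision, not reconstructed later
```

The point is the timing: the row is appended by the pipeline as the condition fires, so the audit trail records what happened rather than a later approximation of it.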

Policy refinement

semantic_scope_gap and threshold_context_gap signals convert recurring edge cases into structured evidence for updating rule coverage, thresholds, and product policy.

Responsible AI deployment

semantic_profile_conflict signals surface where the institution’s classification and the customer’s own representation diverge — identifying blind spots before they become operational, compliance, or reputational risk.

Built, not just proposed.

This is not a concept. A working prototype has been built and tested.

FLI is implemented as a working proof of concept. Every component of the architecture exists in code, produces traceable outputs, and is tested against realistic decision scenarios.

The principle is strict: decision logic lives in the ontology, not in application code. Every decision is traceable to exact inputs and conditions. Nothing is reconstructed after the fact.

Worked example — Maya

Freelance UX designer. 3 years self-employed. Documented client retainer contracts. Application evaluated for a FlexiCard.

1
Profile enters
Profile submitted: Freelancer, MediumRisk, disposable income ₹3,000 below the FlexiCard threshold.
2
Institution classifies
Domain ontology: MediumRisk, Freelancer. FlexiCard rule requires disposable income ≥₹50,000. Threshold missed. Rejected.
3
Context asserted
Customer-side ontology holds 3 retainer contracts — consistent income that clears the threshold when documented correctly.
4
Signal emitted
threshold_context_gap
Flagged for governance review. 32-field audit log written. Every condition traced to the exact rule that fired.
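The threshold arithmetic in Maya's case can be sketched directly from the figures above (the variable names are assumptions; the values come from the example):

```python
THRESHOLD = 50_000                     # FlexiCard disposable-income threshold (₹)
disposable_income = THRESHOLD - 3_000  # Maya sits ₹3,000 below the threshold
retainer_contracts = 3                 # documented customer-side evidence

passes_rule = disposable_income >= THRESHOLD   # False: the rule rejects
has_counter_evidence = retainer_contracts > 0  # customer-side ontology holds context

# Divergence between the rule outcome and customer-side evidence emits the signal.
signal = (
    "threshold_context_gap"
    if (not passes_rule and has_counter_evidence)
    else None
)
```

The rejection itself is unremarkable; what FLI adds is the second check, which turns the undocumented retainer income into a reviewable governance event instead of a silent edge case.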
1
Intake & Semantic Profiling
Application Intake
Customer financial profile enters the evaluation workflow
Domain Ontology
Customer mapped into the institutional knowledge graph
SPARQL Reasoning
Semantic queries extract structured financial profile
2
Risk Assessment & Decision
Risk Model
ML classifier assigns risk band from historical patterns
Rule Matching
Policy rules evaluated against the semantic profile
Decision Ontology
Outcome recorded as traceable OWL facts — not approximated
3
Governance & Friction Layer
Feedback Loop
Every decision logged to a 32-column audit trail
Friction Signals
Divergence between customer-side evidence and system model is typed and surfaced
Admin Detector
Governance dashboard surfaces patterns for human review
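The three stages above can be sketched as a simple function pipeline. Every name and condition here is an illustrative assumption; the prototype's stages run over ontologies, an ML model, and SPARQL, not these stand-ins:

```python
# Toy sketch of the three-stage flow; all names are illustrative assumptions.
def intake(profile: dict) -> dict:
    """Stage 1: map the submitted profile into the institutional ontology."""
    return {**profile, "classified": True}

def decide(semantic_profile: dict) -> dict:
    """Stage 2: risk band and rule matching, recorded as a traceable outcome."""
    meets_threshold = semantic_profile.get("disposable_income", 0) >= 50_000
    outcome = "Approved" if meets_threshold else "Rejected"
    return {**semantic_profile, "outcome": outcome}

def govern(decision: dict) -> dict:
    """Stage 3: log the decision and surface divergence as a typed signal."""
    if decision["outcome"] == "Rejected" and decision.get("customer_evidence"):
        decision["friction_signal"] = "threshold_context_gap"
    return decision

result = govern(decide(intake({"disposable_income": 47_000,
                               "customer_evidence": True})))
```

Feeding Maya's figures through the sketch ends with a rejected outcome plus an attached friction signal, mirroring the worked example above.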
Article 86 compliance output

The document the system produces.

When a customer is rejected, FLI generates this compliance record automatically — at the moment of decision, from the same ontology that made it. No human drafts it. Nothing is reconstructed after the fact.

It satisfies EU AI Act Article 86(1) by construction: every condition tested, the exact threshold compared, the customer’s counter-evidence, and the OWL property linking their dispute to the specific decision node — all in one downloadable document. Unlike a PDF letter, the record is backed by OWL triples. A regulator can run a structured query across every decision in the portfolio — not read them one by one.

Generated at decision time

Not drafted by a compliance team after the fact. Produced by the pipeline at the moment of rejection.

Cross-party record

The customer’s challenge is formally linked to the institution’s assessment node via challengesAssessment — not a separate complaint thread.

Regulator-queryable

Backed by OWL triples, not just human-readable text. A regulator can query across all decisions in the portfolio without reading each one.
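To illustrate what "regulator-queryable" means, here is a toy triple store in plain Python. The property name challengesAssessment comes from the text above; everything else (prefixes, identifiers) is an assumption, and the prototype itself uses OWL and SPARQL rather than this stand-in:

```python
# Toy stand-in for an OWL triple store; the prototype uses OWL/SPARQL.
# Identifiers and prefixes are illustrative assumptions.
triples = [
    ("decision:001", "rdf:type", "fli:CreditDecision"),
    ("decision:001", "fli:outcome", "Rejected"),
    ("decision:001", "fli:frictionSignal", "threshold_context_gap"),
    ("dispute:001",  "fli:challengesAssessment", "decision:001"),
    ("decision:002", "rdf:type", "fli:CreditDecision"),
    ("decision:002", "fli:outcome", "Approved"),
]

# "Which rejected decisions are formally challenged?" -- one structured query
# across the whole portfolio, instead of reading records one by one.
challenged = {obj for subj, pred, obj in triples
              if pred == "fli:challengesAssessment"}
rejected = {subj for subj, pred, obj in triples
            if pred == "fli:outcome" and obj == "Rejected"}
challenged_rejections = sorted(challenged & rejected)
```

In the real system the same question would be a SPARQL query over the decision ontology; the structural point is identical: the cross-party link is a fact in the graph, so it can be filtered and joined like any other.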

article86_Demo_001_PersonalLoan.html

Two ontologies — the institution’s domain ontology and the customer-side ontology — remain intentionally separate. Customer-side representation participates in the evaluation as a governed input. When the two diverge, the system does not silently decide. It surfaces the conflict as a typed governance signal.

The LLM explanation layer reads stored OWL facts — not a reconstructed narrative. The explanation is grounded in the same ontology that made the decision. It cannot confabulate.

Operational proof

From signal to review

FLI does not stop at detection. The governance dashboard below shows what happens next: once a divergence is classified, it enters a review workflow where the institution can intervene, refine ontology coverage, and adjust the policy logic that produced it.

Prototype governance dashboard showing friction signal counts, breakdowns, and open governance loops awaiting review.
1
Divergence detected

The customer profile falls outside rule coverage or conflicts with institutional classification.

2
Typed signal emitted

The mismatch is recorded as a governance event: coverage gap, threshold issue, profile conflict, or ML-policy conflict.

3
Review loop opened

The signal enters the admin queue, where governance can update ontology coverage, thresholds, or policy logic.

Version 1 is working.
Now it needs to run in the real world.

The architecture is implemented. The signals are generated. The governance loop closes.

The next step is to take this into the field — to work with a financial institution or fintech willing to ask the harder question: not just “does the model work?” but “does the system know where its coverage ends?”

The governance layer is the point.

For institutions deploying AI in consequential credit workflows, accuracy is not enough if the system cannot account for its own limits. The question is not only whether a decision can be made, but whether the conditions behind it can be traced, reviewed, and improved.

That is what responsible AI requires in practice: not principles alone, but operational infrastructure that surfaces blind spots before they become unmanaged risk.
