
AI-Driven Fraud Detection in Indian Banking


May 12, 2026 · 11 min read · Financials

Q1. Could you start by giving us a brief overview of your professional background, particularly focusing on your expertise in the industry?

I've spent close to 15 years working at the intersection of technology and financial services, beginning my career in software engineering at Computer Sciences Corporation before transitioning into banking. Today I serve as Chief Manager in the Fraud Risk Management Department at Bank of Baroda, where I lead our Enterprise Fraud Risk Management System covering more than 20 digital and payment channels — cards, UPI, internet banking, mobile, IMPS, AEPS, and others.

My work sits at the convergence of three areas: fraud risk operations, AI/ML governance, and digital payments regulation in India. I'm an ISO/IEC 42001 Lead Implementer (AI Management Systems), an ISO 22301 Lead Implementer (Business Continuity), and an IBM-certified Qiskit developer with a focus on quantum-safe cryptography for payments, and I hold certifications in risk and cybersecurity from CISI and Google. I also serve as a single point of contact for several industry initiatives, including the Digital Payments Intelligence Platform (RBIH), NPCI's federated AI program, and the I4C/CFCFRMS ecosystem for cybercrime coordination.

Beyond the day job, I write and consult on financial crime, AI governance in BFSI, and the emerging intersection of quantum technology and payment security.

 

Q2. How are banks rethinking fraud risk management systems as they move from rule-based monitoring to AI- and behaviour-driven detection?

The honest answer is that banks aren't replacing rules with AI — they are layering them. Rule engines remain the foundation because they are explainable to regulators, fast to deploy, and deterministic in their behaviour. What has changed is that rules alone can no longer keep pace with the velocity and morphology of modern fraud. A rule, by definition, is reactive — it encodes what you already know.

The shift underway has three dimensions. First, machine learning is now the learning layer on top of rules, consuming the same transaction stream but surfacing outliers the rule writer hasn't imagined yet — novel mule account clusters, unusual beneficiary graphs, first-time-seen merchant patterns. Second, the behavioural layer has emerged as a genuinely new signal class: device telemetry, session navigation, typing cadence, swipe pressure — passive signals that don't depend on transaction metadata at all. Third, and uniquely powerful in the Indian context, external intelligence has become an operational input rather than a strategic one. Banks now ingest real-time feeds from MHA's Indian Cyber Crime Coordination Centre (I4C) through the CFCFRMS and the 1930 helpline, which routes citizen fraud complaints directly to issuing and beneficiary banks for immediate action on suspect accounts. The Department of Telecommunications' Digital Intelligence Platform (DIP) adds the telecom-layer view — flagging disconnected or revoked mobile numbers, impersonation patterns, stolen devices, and numbers associated with known cybercrime. Alongside these, federated-learning initiatives like RBI Innovation Hub's DPIP (Digital Payments Intelligence Platform), Mulehunter.AI, and NPCI's federated AI program allow banks to train on industry-wide fraud patterns without any bank having to share raw customer data.
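To make the layering concrete, here is a minimal sketch of a deterministic rule layer with a statistical anomaly layer on top. The transaction fields, rule names, thresholds, and the toy z-score "model" are all invented for illustration — real systems use far richer features and trained models — but the control flow shows the point: rules fire first and stay explainable, while the learning layer catches outliers no rule anticipated.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float
    new_beneficiary: bool
    hour: int  # 0-23

# Deterministic layer: named, auditable rules (hypothetical examples).
RULES = [
    ("high_value_new_beneficiary", lambda t: t.new_beneficiary and t.amount > 100_000),
    ("odd_hour_large_transfer",    lambda t: t.hour < 5 and t.amount > 50_000),
]

def rule_hits(txn):
    """Return the names of every rule the transaction trips."""
    return [name for name, pred in RULES if pred(txn)]

def anomaly_score(txn, history_amounts):
    """Toy learning layer: z-score of amount vs. the customer's own history."""
    if len(history_amounts) < 2:
        return 0.0
    m = sum(history_amounts) / len(history_amounts)
    var = sum((a - m) ** 2 for a in history_amounts) / (len(history_amounts) - 1)
    std = var ** 0.5 or 1.0
    return abs(txn.amount - m) / std

def decide(txn, history_amounts, z_threshold=3.0):
    """Rules first (explainable to the regulator); anomaly score as backstop."""
    hits = rule_hits(txn)
    if hits:
        return ("alert", hits)
    if anomaly_score(txn, history_amounts) > z_threshold:
        return ("alert", ["anomaly_score"])
    return ("pass", [])
```

A transaction that trips no rule can still alert on the anomaly score alone — which is exactly the "outliers the rule writer hasn't imagined" case.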

The governance burden has increased correspondingly. Banks now have to evidence model explainability, bias monitoring, and drift detection to the regulator — disciplines that most fraud functions were not resourced to handle three years ago.
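One common way to evidence the drift-detection discipline mentioned above is the Population Stability Index (PSI), which compares a model's score distribution at training time with the live distribution. This is a generic sketch, not any bank's implementation; the ten-bucket layout and the 0.2 alert threshold are conventional rules of thumb rather than a regulatory requirement.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a training-time score sample
    (expected) and a live sample (actual). PSI > 0.2 is often read as
    material drift worth escalating."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / buckets or 1.0

    def distribution(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / width), buckets - 1)
            counts[idx] += 1
        # Floor each share at a tiny epsilon so log() stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = distribution(expected), distribution(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions give a PSI near zero; a shifted live distribution pushes it well past the 0.2 convention, which is the signal to retrain or investigate.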

 

Q3. What role do technologies like behavioural biometrics and anomaly detection play in improving fraud prevention outcomes?

Behavioural biometrics and anomaly detection address different parts of the same problem — the fact that credentials alone no longer tell you who is operating an account.

Behavioural biometrics works as a silent, continuous authentication layer. It looks at how a user interacts with a device — grip angle, keystroke dynamics, swipe velocity, scroll patterns — and compares that to a learned profile. Its real value is against social-engineering fraud, which is now the dominant vector in India. When a genuine customer is being coached by a fraudster over the phone, the device is legitimate, the credentials are legitimate, and conventional authentication passes. But the behavioural signals under duress — hesitation, atypical navigation, copy-paste patterns — diverge materially from the baseline. That is a signal nothing else can give you.
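A toy sketch of what "compares that to a learned profile" can mean in practice: learn per-feature means and spreads from a user's historical sessions, then score a live session by its average deviation. The feature names and any threshold you would set on the score are illustrative assumptions — production systems use far more signals and trained models.

```python
from statistics import mean, stdev

# Hypothetical session features for the sketch.
FEATURES = ["inter_key_ms", "swipe_velocity", "paste_events"]

def build_profile(sessions):
    """Learn per-feature (mean, std) from the user's historical sessions."""
    return {
        f: (mean(s[f] for s in sessions), stdev(s[f] for s in sessions) or 1.0)
        for f in FEATURES
    }

def session_risk(profile, live):
    """Average absolute z-score across features. Duress or remote coaching
    tends to push several features off-baseline at once, so the average
    separates a coached session from ordinary day-to-day variation."""
    zs = [abs(live[f] - mu) / sd for f, (mu, sd) in profile.items()]
    return sum(zs) / len(zs)
```

A session with slow, hesitant typing and unusual paste activity scores far above the same user's normal sessions, even though device and credentials check out.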

Anomaly detection complements this on the transaction side. Unsupervised and semi-supervised models flag deviations from the customer's own behavioural history — beneficiary novelty, amount-velocity patterns, time-of-day, device-location mismatches — without needing a labeled fraud example.
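The unsupervised idea can be sketched without any fraud labels at all: maintain a per-customer profile from their own history and score each new transaction by how far it sits outside that pattern. The two signals here (beneficiary novelty, hour-of-day rarity) and the blending weights are invented for the sketch.

```python
from collections import Counter

class CustomerProfile:
    """Unsupervised per-customer profile: learns only from the customer's
    own history, never from labeled fraud cases."""

    def __init__(self):
        self.beneficiaries = set()
        self.hour_counts = Counter()
        self.n = 0

    def update(self, beneficiary, hour):
        self.beneficiaries.add(beneficiary)
        self.hour_counts[hour] += 1
        self.n += 1

    def deviation(self, beneficiary, hour):
        """0 = fully in-pattern, 1 = maximally novel (illustrative weights)."""
        novelty = 0.0 if beneficiary in self.beneficiaries else 1.0
        hour_rarity = 1.0 - self.hour_counts[hour] / self.n if self.n else 1.0
        return 0.6 * novelty + 0.4 * hour_rarity
```

A first-time beneficiary at an hour the customer has never transacted scores near the maximum, with no labeled example of that fraud pattern ever having existed.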

The trade-offs are real: behavioural models need a cold-start period, they raise genuine data-minimization questions under the DPDP Act, and false positives translate directly into customer friction. Regulatory push — UAE's CBUAE notification 3057, for example, and emerging RBI thinking — is making this less optional and more standardized.

 

Q4. How are banks leveraging data at scale to enable real-time fraud detection without impacting customer experience?

The scale problem in India is genuinely unique. UPI alone processes over 18 billion transactions a month, and fraud decisioning has to be completed within roughly 100 milliseconds end-to-end. That constraint dictates architecture.

Three things have to work together. The first is a streaming data backbone — Kafka, Flink, or equivalent — that moves feature computation from an overnight batch into real-time. Customer risk profiles, merchant reputation scores, network-graph attributes, and device fingerprints all need to be queryable in single-digit milliseconds. The second is risk-based authentication, which is where customer experience is actually protected. The third, increasingly, is graph analytics for mule-network detection — harder to run in real time, but essential for disrupting the ecosystem rather than just individual transactions.
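The graph-analytics piece can be illustrated with a minimal clustering sketch: treat accounts linked by a shared device or common beneficiary as edges, and surface connected components large enough to look like an organised ring. The edge sources and the cluster-size threshold are illustrative assumptions; production systems score edges and run on far larger graphs.

```python
class UnionFind:
    """Connected components over account-link edges via union-find."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def mule_clusters(edges, min_size=3):
    """edges: (account, account) pairs linked by shared device or common
    beneficiary. Returns account clusters at or above min_size — the
    ecosystem-level view, rather than one transaction at a time."""
    uf = UnionFind()
    for a, b in edges:
        uf.union(a, b)
    groups = {}
    for node in uf.parent:
        groups.setdefault(uf.find(node), set()).add(node)
    return [g for g in groups.values() if len(g) >= min_size]
```

Blocking one account in a four-account cluster barely slows the ring; surfacing the whole component is what lets the bank disrupt the ecosystem.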

The customer experience question is really a question about false positives. Every incremental OTP, step-up, or blocked transaction on a genuine customer has a cost — not just in complaint volumes but in customers quietly shifting their primary banking relationship elsewhere. The practical shift is that detection rate alone is no longer the headline metric. False-positive rate, genuine-transaction decline rate, and customer complaints on wrongly blocked transactions are now tracked alongside it — because those are where customer trust actually leaks.
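The broadened metric set described above is simple arithmetic over outcomes, but making it explicit shows why two banks with the same detection rate can differ so much in customer impact. The counts in the example are invented.

```python
def fraud_ops_metrics(outcomes):
    """outcomes: list of (flagged: bool, actually_fraud: bool) per transaction.
    Returns the detection rate alongside the friction metrics that now
    sit next to it on the dashboard."""
    tp = sum(1 for f, y in outcomes if f and y)          # fraud caught
    fp = sum(1 for f, y in outcomes if f and not y)      # genuine blocked
    fn = sum(1 for f, y in outcomes if not f and y)      # fraud missed
    tn = sum(1 for f, y in outcomes if not f and not y)  # genuine passed
    total = len(outcomes)
    return {
        "detection_rate":       tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate":  fp / (fp + tn) if fp + tn else 0.0,
        "genuine_decline_rate": fp / total if total else 0.0,
    }
```

A 90% detection rate can coexist with blocking one genuine transaction in every two hundred — and at UPI volumes that is where trust, and the primary banking relationship, quietly leaks.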

 

Q5. How is the competitive landscape evolving in terms of in-house systems vs external platforms and partnerships?

The market has moved decisively toward hybrid architectures, and I think that's the right equilibrium.

Historically, Indian banks bought commercial off-the-shelf platforms — SAS, FICO, Fiserv, ACI, Clari5 and similar — for the core fraud engine. Building the transaction-processing and case-management core in-house rarely makes sense: the IP cost, operational risk, and talent gap are prohibitive, and a regulator-audited live system is not a place to experiment. What banks are increasingly building in-house is the intelligence layer on top — custom ML models, bespoke feature stores, internal data-science workbenches that plug into the vendor platform.

Alongside this, a specialist partnership ecosystem has emerged. Behavioural biometrics (BioCatch, Callsign, NuData), device intelligence (ThreatMetrix, Sift), identity-graph providers, and consortium-data vendors are all typically bought rather than built. The market structure for financial crime now looks like a stack of four or five interoperating vendors plus internal ML.

A structural change worth watching is the rise of utility-style infrastructure. NPCI and RBIH increasingly provide ecosystem-level fraud services — negative beneficiary registries, mule-account databases, federated detection — and MHA's I4C and the DoT Digital Intelligence Platform provide the cybercrime and telecom-layer intelligence that no single bank could build on its own. This partially disintermediates vendors and changes the make-buy economics for banks. Vendor lock-in remains a real concern — migrating a live enterprise fraud system is a multi-year undertaking, and that sits behind many procurement decisions that outsiders find hard to explain.

 

Q6. Where do you see the biggest disconnect between investment in fraud technology and actual reduction in fraud incidents?

The largest disconnect is operational, not technological. Banks buy sophisticated platforms and then under-operationalize them — incomplete rule libraries, ML models that aren't retrained on drifting fraud patterns, alert volumes that overwhelm investigator capacity, and case-management workflows still exported to Excel. The tool is a necessary condition; it is nowhere near sufficient.

The second disconnect is organisational. Fraud, cyber, operations, product, and digital banking each own a piece of the customer-risk surface, and end-to-end ownership is often nobody's mandate. A customer onboarded with weak identity verification, authenticating through a compromised device, into a mule network that product teams never had sight of, will not be saved by any amount of model sophistication at the transaction layer.

The third is measurement. Detection rate is the metric that gets reported, but customer-recovery time, dispute resolution latency, mule-account takedown speed, and law-enforcement liaison quality are where losses actually compound. Banks with comparable detection rates can have materially different customer outcomes based on these downstream metrics.

Finally, fraud migrates. Hardening card-not-present pushes fraud to UPI; hardening UPI pushes it to social engineering; hardening that pushes it to account-opening fraud upstream. Investment in any single layer without a whole-of-journey view produces exactly the disappointment the question describes.

 

Q7. If you were an investor looking at companies within the space, what critical question would you pose to their senior management?

I would ask: "What percentage of the fraud losses you absorbed last year came from attack vectors that did not exist two years ago — and walk me through how your organisation detected the first instance of each."

The answer tells you a lot. A company that can answer it confidently is one that spots new fraud patterns early, updates its models quickly, works closely with industry and law enforcement on emerging threats, and listens to its own frontline analysts when they flag something unusual. A company that struggles with the question is usually one still solving last year's fraud problem. In this business, attackers change their methods every few weeks while most fraud platforms take years to upgrade — and that gap is where the real losses sit.

 

