How Top Banks Are Engineering AI At Scale

Q1. Could you start by giving us a brief overview of your professional background, particularly focusing on your expertise in the industry?
I began by managing data lakes for consumer banking, which laid the foundation for my next challenge—leading the development of our cloud-native AI factory. Today, it powers over 1,200 internal models across AML, credit, and hyper-personalized customer nudges.
Q2. What makes an enterprise truly “AI-ready” today—maturity in data infrastructure, cloud strategy, org design, or something else?
AI-ready = “data product” discipline + cloud + trust fabric.
We can spin an idea into a governed feature-store-backed micro-service in 48 hours because:
- Every dataset is already catalogued as a product with SLAs
- We’re 95 % on GCP with Anthos for hybrid burst
- Our MAS Tech Risk Guidelines are codified into policy-as-code so compliance gates are automated
Q3. How are leading banks approaching the shift toward in-house GenAI platforms or fine-tuned LLMs?
We took Gemini 1.5, added RAG on top of our own knowledge graph, then fine-tuned on 2.5 M anonymised customer conversations. Instead of “buy or build,” we did “buy-then-shrink-wrap”: the core model is OpenAI, the orchestration layer and guardrails are ours, running inside our VPC. Result: “DBS-GPT” went live in 6 weeks, used by 7,000 staff.
Q4. What are the biggest lessons from scaling GenAI in highly regulated banking environments?
- Regulators want evidence, not promises, so every prompt/response pair is logged immutably on our blockchain audit trail
- Hallucinations hurt twice, once in customer trust, once in capital-markets disclosures, so we auto-generate “explain cards” that map every answer back to source paragraphs
- Talent: you need bilingual translators, people who speak both Basel III and BERT. We created a 40-person “AI Risk Guild” that sits between model owners and audit
Q5. Which categories of AI startups or tech vendors are gaining traction within banks?
Vendors that embed policy-as-code (e.g., Holistic AI, Cranium), GPU-aware feature stores (Tecton), and privacy-preserving synthetic data (Gretel) are on every bank’s short-list.
We’re also piloting two “LLM firewall” startups—Patronus and CalypsoAI—to block prompt injection before it hits our core models.
Q6. Are there any notable enterprise GenAI startups that are becoming “default choices” for specific banking use cases?
For call-centre augmentation, it’s Observe.AI—fastest to integrate with our Avaya stack.
For code-generation, it’s Poolside—beats GitHub Copilot on COBOL-to-Java translation, which matters when you’re modernising 40-year-old core-banking routines.
Q7. If you were an investor looking at companies within the space, what critical question would you pose to their senior management?
Show me your single-tenant deployment bill-of-materials and the exact latency hit when your guardrails are toggled on.” If they can’t give me a 5-slide architecture diagram with cold-start P99 latency <800 ms, I walk.
Comments
No comments yet. Be the first to comment!