AI in Enterprise: From Hype to Impact
Q1. Looking back, how has your career evolved and shaped the way you approach your current role?
I started as a Computer Scientist (Ph.D.), focused on the theoretical limits of data. Over 21 years at ADP and IBM, my focus shifted from 'Code' to 'Commercials.' Today, I operate as a Scientist-Executive: I use deep technical rigor to validate whether a product roadmap will actually drive P&L impact, rather than just technical novelty.
Q2. Generative AI has shifted quickly from experimentation to board-level priority. Where do you see the biggest gap between executive expectation and enterprise readiness?
Executives expect GenAI to be a 'Plug-and-Play' employee. The reality is that Enterprise Data is messy. The biggest gap is the 'Data Basement': frontier models cannot reason effectively over unstructured, siloed legacy data without a massive upfront investment in vectorization and governance.
Q3. Many organizations pursue AI pilots that demonstrate capability but not scalability. What signals indicate a product is still a vitamin rather than a painkiller?
A 'Vitamin' AI generates text (e.g., 'Draft this email'). A 'Painkiller' AI executes a workflow (e.g., 'Reconcile this invoice and update SAP'). If the AI requires the human to check its work every time, it’s a toy. If it removes the human from the loop for 80% of transactions, it’s a product.
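The 'Painkiller' test above reduces to one measurable number: the fraction of transactions completed end-to-end with no human touch. A minimal sketch of that metric, with hypothetical field names:

```python
# Illustrative 'painkiller' check: what share of transactions finished
# without a human in the loop? The record structure is an assumption,
# not taken from any specific product's telemetry.

transactions = [
    {"id": 1, "human_review": False},
    {"id": 2, "human_review": False},
    {"id": 3, "human_review": True},   # human had to check the AI's work
    {"id": 4, "human_review": False},
    {"id": 5, "human_review": False},
]

# Count transactions the AI completed autonomously.
autonomous = sum(not t["human_review"] for t in transactions)
autonomous_rate = autonomous / len(transactions)

print(f"autonomous completion rate: {autonomous_rate:.0%}")  # prints 80%
```

At or above the 80% threshold described here, the product is removing work rather than generating drafts for humans to verify.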
Q4. As multi-agent systems and orchestration layers become more common, what new forms of risk emerge that traditional product metrics fail to capture?
Infinite Loop Spend. Unlike traditional software, Agentic AI consumes compute (tokens) for every 'thought.' A poorly architected agent can enter a reasoning loop that burns $10,000 in API costs overnight without solving the user's problem. Unit economics governance is the new cybersecurity.
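The guardrail implied here is a hard per-task spend cap enforced inside the agent's loop. A minimal sketch, assuming a flat token price; the class and method names (`AgentBudget`, `record_call`) are illustrative, not from any real framework:

```python
# Sketch of a per-task spend cap for an agentic reasoning loop.
# Pricing is a stand-in assumption ($0.01 per 1K tokens).

class BudgetExceeded(Exception):
    """Raised when cumulative spend crosses the hard cap."""

class AgentBudget:
    def __init__(self, cap_usd: float, price_per_1k_tokens: float = 0.01):
        self.cap_usd = cap_usd
        self.price_per_1k = price_per_1k_tokens
        self.spent_usd = 0.0

    def record_call(self, tokens: int) -> None:
        # Accrue cost for this 'thought' and abort the run if over cap.
        self.spent_usd += tokens / 1000 * self.price_per_1k
        if self.spent_usd > self.cap_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} > cap ${self.cap_usd:.2f}"
            )

budget = AgentBudget(cap_usd=5.00)
try:
    while True:  # stand-in for a poorly architected reasoning loop
        budget.record_call(tokens=50_000)  # every 'thought' burns tokens
except BudgetExceeded as exc:
    print("Aborted run:", exc)
```

Without the cap, this loop runs forever and the meter keeps spinning; with it, a runaway agent is killed at a bounded, pre-agreed cost.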
Q5. Enterprise clients increasingly demand measurable ROI from AI investments. In your experience, where is value most frequently overstated during early go-to-market cycles?
In L1 Support Deflection. Vendors claim '90% deflection,' but often they are just frustrating customers who eventually call in anyway (churn). True value is in Backend Operations (e.g., Financial Reconciliation) where customers don't see the AI, but the cost-to-serve drops by 40%.
Q6. When modernizing legacy platforms, what trade-offs arise between short-term revenue protection and long-term architectural renewal?
The trade-off is 'Wrapper vs. Refactor'. Short-term revenue protection favors wrapping an API around a legacy mainframe (Fast, but technical debt remains). Long-term architectural renewal requires strangling the monolith, which pauses feature velocity but reduces TCO by 20-30% (as we did at Vitech).
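The 'strangling' step can be pictured as a thin facade that routes migrated domains to a new service while everything else falls through to the monolith. A hedged sketch; the handlers and routes are hypothetical, not from any actual Vitech system:

```python
# Strangler-fig routing sketch: carve domains out of the monolith one
# prefix at a time. All names here are illustrative.

def legacy_monolith(path: str) -> str:
    return f"legacy handled {path}"

def new_billing_service(path: str) -> str:
    return f"modern service handled {path}"

# Grows as each domain is migrated; the monolith shrinks behind it.
MIGRATED_PREFIXES = ("/billing",)

def facade(path: str) -> str:
    if path.startswith(MIGRATED_PREFIXES):
        return new_billing_service(path)
    return legacy_monolith(path)

print(facade("/billing/invoices"))  # routed to the new service
print(facade("/claims/123"))        # still served by the monolith
```

The wrapper approach stops at the facade and leaves the monolith untouched; the refactor approach keeps moving prefixes into `MIGRATED_PREFIXES` until the legacy system can be retired.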
Q7. If you were advising senior leadership evaluating long-term opportunity in enterprise Agentic AI platforms, what structural signals would you examine to determine whether the ecosystem is truly positioned for durable, scalable value creation rather than cyclical hype-driven growth?
Look for 'Proprietary Context', not just 'Proprietary Models.' Models are commodities (GPT-4 vs Claude). The durable moat is the Orchestration Layer—does the platform have unique integrations and permission sets that allow it to act on data in a way a generic LLM cannot?