From Pilot to Production:

A First-Hand Account of the DSW Enterprise AI Symposium 2025

Victor del Rosal | Panel Speaker & Moderator

27 November 2025 | National College of Ireland, Dublin

TL;DR

Core thesis: Enterprise AI is now a systems conversation, not a models conversation. The winning formula is Autonomy + Coordination + Governance = Enterprise Value.

Key insights from speakers:

  • Prag Sharma (Citi): Trust architecture requires parallel evolution of tech, controls, and mindset

  • Patricia Rojas (Bank of Ireland): The "valley of scaling" kills most pilots before production

  • Marco Blasio (IBM): Governance should be executable code, not documentation

  • Sudarshan Deshmukh (Laya): Production AI demands SLAs, resilience, and serious change management

  • Raj Subbian (Golden Bear): Real-world wins come with real challenges like "agent drift"

  • One of the biggest gaps isn't technology; it's mental models. Leaders must unlearn micro-management and learn to collaborate with AI as a colleague, not direct it as a tool.

Opportunity for Ireland: The country's compact, interconnected ecosystem positions it as a potential global testbed for responsible enterprise AI.

Bottom line for leaders: Audit readiness across all dimensions (not just tech), embed governance from day one, invest in AI literacy at the top, and design for human-AI collaboration rather than replacement.

Opening Reflections

On the evening of 27 November 2025, the third-floor conference room at the National College of Ireland transformed into a crucible of ideas about the future of enterprise AI. As both a panellist on the first discussion and moderator for the second, I found myself at the intersection of technology vision and practical implementation, exactly where the conversation about Agentic AI needs to be.

The DSW Enterprise AI Symposium 2025, organised by Data Science Wizards in collaboration with NCI, brought together a remarkable assembly of voices from Citi, Bank of Ireland, IBM, AmTrust International, Laya Healthcare, and Instech.ie. What struck me most was not the impressive credentials in the room, but the candour with which leaders shared both their successes and their struggles in moving AI from laboratory curiosities to production-grade systems.

The Central Thesis: AI as System, Not Model

If there was one message that reverberated through every presentation and panel discussion, it was articulated most clearly by Sandeep Khuperkar, founder of DSW: "AI in production is no longer a model or agent conversation, it's a system conversation."

This framing fundamentally shifts how we must think about enterprise AI adoption. We've spent years obsessing over model performance: accuracy metrics, benchmark scores, inference speed. But as Prag Sharma from Citi's Future of Finance reminded us, the shift from predictive models to autonomous action changes everything. The question isn't whether your model can achieve 95% accuracy; it's whether your entire system (technology stack, control frameworks, and organisational mindset) can support AI that makes decisions and takes actions with real-world consequences.

The Triple Constraint of Enterprise AI

The symposium crystallised a formula that I believe will define successful AI implementations for years to come:

Autonomy + Coordination + Governance = Enterprise Value

Remove any element, and the equation fails. Autonomy without governance creates unacceptable risk, especially in financial services where a single errant decision can have cascading consequences. Governance without coordination creates friction that strangles innovation. And neither autonomy nor governance matters if the system cannot coordinate across business units, data sources, and decision points.

Key Insights from the Speaker Sessions

Prag Sharma: The Architecture of Trust

Dr. Prag Sharma opened the substantive sessions with a crucial insight: when AI shifts from models to autonomous action, organisations must simultaneously transform their technology stacks, control frameworks, and organisational mindset. His framework, focusing equally on People, Process, Technology, Architecture, Data, and Governance, provides a comprehensive lens for evaluating AI readiness.

What resonated most was his warning about the seduction of model deployment without systemic preparation. As he noted, focusing just on deploying models will not lead to sustainable value. The infrastructure that supports AI, both technical and human, must evolve in parallel.

Patricia Rojas: The Valley of Scaling

Patricia Rojas from Bank of Ireland brought the sobering reality of enterprise AI adoption into sharp focus. Her research suggests that many agentic AI projects stall at the pilot stage, a phenomenon she described as the "valley of scaling." The path from proof-of-concept to enterprise value is littered with initiatives that demonstrated technical promise but failed to achieve organisational integration.

Her prescription centres on realistic value measures and careful attention to the operating model. It's not enough to prove that AI can work; organisations must redesign workflows, redefine roles, and rethink how value is created and measured.

Marco Blasio: Governance as Execution Layer

Marco Blasio from IBM deepened the governance conversation with a provocative reframing: governance should not be a post-hoc audit function but an execution layer embedded in real-time operations. His concept of "compliance as executable code" rather than "compliance as documentation" points toward a future where regulatory requirements are built into AI systems rather than retrospectively verified.

This has profound implications for how we design AI systems. If governance must operate in real-time, it cannot be a separate layer bolted onto existing infrastructure. It must be woven into the fabric of the system itself.
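To make the idea concrete, here is a minimal sketch of what "compliance as executable code" might look like in practice. The rule names, thresholds, and functions below are hypothetical illustrations, not anything presented at the symposium or any IBM product: the point is simply that policy checks run inline, before an agent's action executes, rather than in a retrospective audit.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # e.g. "payout", "coverage_change"
    amount: float      # monetary value of the proposed action
    explanation: str   # human-readable rationale, kept for the audit trail

# Hypothetical policy rules expressed as code. Each rule is a named
# predicate the system evaluates at execution time.
POLICY_RULES = [
    ("amount_within_limit", lambda a: a.amount <= 10_000),
    ("explanation_present", lambda a: bool(a.explanation.strip())),
]

def check_policy(action: Action) -> list:
    """Return the names of any rules the proposed action violates."""
    return [name for name, rule in POLICY_RULES if not rule(action)]

def execute(action: Action) -> str:
    """Run the governance layer inline: block violating actions in
    real time instead of flagging them in a later audit."""
    violations = check_policy(action)
    if violations:
        return "blocked: " + ", ".join(violations)
    return "executed"
```

Under this pattern, a payout of 25,000 with a rationale attached would be blocked by the amount rule before it ever reaches a downstream system, while a routine in-limit action passes straight through; the rule names themselves become the audit record.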

Sudarshan Deshmukh: From Pilot to Production

Sudarshan Deshmukh from Laya Healthcare brought the perspective of someone building production AI systems in a heavily regulated environment. His session on "Scaling Agentic AI: From Pilot to Production" addressed the practical challenges that don't appear in vendor presentations: service level agreements, resilient architecture, cross-domain coordination, and the human change management that makes or breaks enterprise adoption.

Raj Subbian: Live Agentic Workflows

Raj Subbian from Golden Bear Insurance shared real-world case studies of agentic workflow deployment in insurance. His honest assessment of both the achievements (improved throughput, reduced error rates, enhanced customer satisfaction) and the challenges (integration complexity, human acceptability issues, and the phenomenon of "agent drift") provided invaluable lessons for organisations embarking on similar journeys.

Panel 1: From PoC to Enterprise Value

The first panel brought together Prag Sharma, Patricia Rojas, Marco Blasio, and myself, with Sandeep Khuperkar moderating. The discussion ranged from foundational questions about readiness to practical challenges of implementation.

When asked about the biggest gap in Agentic AI adoption, I found myself pointing not to technology but to mental models. Agentic AI requires us to think about AI not as a tool that we direct, but as a colleague that we collaborate with. This shift demands new approaches to leadership, workforce development, and process design.

The second question posed to me, what leaders must unlearn, struck at the heart of organisational transformation. We must unlearn the habit of micro-management that comes naturally when AI was merely a tool executing our instructions. In a world where AI becomes an intelligent collaborator, leaders must learn to set direction, establish guardrails, and then trust the system to operate within those constraints.

Panel 2: Reimagining Insurance with Agentic AI

Moderating the second panel, featuring Ajay Pathak from AmTrust International, Sudarshan Deshmukh from Laya Healthcare, Gary Leyden from Instech.ie, and Sandeep Khuperkar, I had the privilege of steering a conversation specifically focused on the insurance sector's AI transformation.

Ajay Pathak brought the COO's perspective on ecosystem and policy gaps. His insights on "smart regulation" (guardrails that protect customers without strangling operational agility) resonated with everyone grappling with the tension between innovation and compliance. The insurance sector, perhaps more than any other, must navigate this balance carefully.

Gary Leyden, as CEO of Instech.ie (Ireland's insurtech hub), painted a picture of Ireland as a potential global AI testbed for insurance. His emphasis on collaborative models, bringing together startups, insurers, academia, and policymakers, suggests that Ireland's relatively small, interconnected ecosystem could be an advantage rather than a limitation.

The conversation repeatedly returned to the theme of explainability. When an agentic system makes a decision affecting a customer's claim or coverage, how do we design for both explainability and autonomy? The tension is real: too much constraint, and the system loses its ability to adapt and improve; too little, and we cannot maintain the trust required in financial services.

Key Takeaways and Lessons Learned

•       AI readiness is systemic, not technical. The organisations that will succeed with Agentic AI are not necessarily those with the best models, but those that have prepared their people, processes, and governance structures for autonomous systems.

•       The pilot trap is real. Many organisations have demonstrated AI capabilities in controlled environments but struggle to cross the valley to production. The skills required for experimentation are different from those required for enterprise integration.

•       Governance must be embedded, not bolted on. Real-time AI decision-making requires real-time governance. This demands new approaches to compliance that treat regulatory requirements as executable code rather than documentation.

•       Human change management remains critical. The most sophisticated AI system will fail if the humans who must work with it don't trust it, understand it, or feel threatened by it.

•       Ireland has a unique opportunity. The combination of a strong financial services sector, supportive academic institutions like NCI, and a collaborative ecosystem creates conditions for Ireland to become a global testbed for responsible, production-grade AI.

•       Open standards matter. Sandeep's advocacy for open ecosystems and platform thinking addresses a real concern: enterprises need freedom to innovate without vendor lock-in while maintaining governance and compliance.

Actionable Insights for Enterprise Leaders

Based on the symposium discussions, here are concrete steps organisations should consider:

1.    Audit your AI readiness across all dimensions: not just technology, but people, processes, data infrastructure, and governance frameworks. The gaps in non-technical areas are usually more critical.

2.    Start with a statement of business purpose: view POCs and pilots through the lens of clear business objectives, not technical experimentation. Every pilot should have a defined path to production.

3.    Build governance into the design phase: don't wait until deployment to consider compliance. Embed audit trails, explainability mechanisms, and policy constraints from day one.

4.    Invest in AI literacy at the leadership level: executives need to understand enough about AI capabilities and limitations to make informed strategic decisions without being distracted by vendor hype.

5.    Develop hybrid human-AI workflows: rather than viewing AI as a replacement for human judgment, design systems that leverage the strengths of both. Human-in-the-loop isn't a limitation; it's a feature.

6.    Engage with the ecosystem: the challenges of enterprise AI are too complex for any organisation to solve alone. Participate in industry forums, academic partnerships, and regulatory dialogues.
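The hybrid human-AI workflow point can be sketched in a few lines. This is a hypothetical routing rule of my own construction, not a design shown by any speaker: the idea is that an agent's decision is auto-approved only when its confidence is high and the business impact is low, and escalates to a human reviewer otherwise.

```python
def route_decision(confidence: float, impact: str,
                   threshold: float = 0.9) -> str:
    """Route an AI decision: auto-approve only when the model is
    confident AND the impact is low; escalate everything else to a
    human reviewer. Human-in-the-loop as a feature, not a fallback."""
    if impact == "high" or confidence < threshold:
        return "human_review"
    return "auto_approve"
```

The threshold and the impact categories are the interesting design levers: they are business decisions, set by the governance function, not tuning parameters left to the engineering team.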

Looking Forward: BFSI in 2030

The symposium closed with a provocative question: What does Irish Banking, Financial Services, and Insurance look like in 2030 if we get agentic AI right?

The answers varied, but common themes emerged: more personalised customer experiences, faster and fairer claims processing, risk management that anticipates rather than reacts, and financial services that are more accessible and inclusive. Perhaps most importantly, several panellists envisioned a world where AI handles the routine, freeing human professionals to focus on the complex, the creative, and the compassionate.

But getting there requires us to navigate the coming years wisely. The decisions we make today about governance frameworks, talent development, and ecosystem collaboration will determine whether AI becomes a force for broadly shared prosperity or a source of competitive advantage for a few.

Closing Thoughts

As I left Room S3.04 that evening and stepped into the Dublin night, I found myself reflecting on the privilege of being part of these conversations. The symposium reminded me why I do what I do, why teaching AI to business leaders matters, why writing about the human dimensions of AI transformation is important, and why events like this are essential for building shared understanding across sectors.

My gratitude goes to Dr. Anu Sahni and NCI for hosting us, to Sandeep Khuperkar and the entire DSW team, Pritesh Tiwari, Shivam Thakkar, Sandhya Oza, and everyone who made this possible, and to all the speakers and attendees who brought their insights, questions, and genuine curiosity about where we're heading.

The work of building responsible, production-grade AI is far from complete. But if the DSW Enterprise AI Symposium 2025 is any indication, we have the right people thinking about the right questions. Now the task is to translate those conversations into action.

Victor del Rosal

Chief AI Officer, fiveinnolabs

Adjunct Professor, National College of Ireland

Author, HUMANLIKE: The AI Transformation

Source:

https://www.linkedin.com/feed/update/urn:li:activity:7400156052224266240/