In this episode of Re(AI)magine Conversations, Akshay Chitlangia, VP Technology at Persistent Systems, speaks with Simon Thornell, Field CTO at TrustLogix, about one of the biggest challenges emerging with agentic AI: how enterprises can move fast without losing control.

AI agents are no longer just assisting users. They are accessing tools, triggering workflows, touching sensitive data and making decisions at machine speed. But most enterprise governance models were not built for that pace. As organizations move from AI pilots to production-grade agentic systems, the conversation explores why trust can no longer be added later. It must be designed into the way AI agents access data, inherit permissions, use tools and operate across the enterprise.

The discussion begins with the AI velocity gap. Enterprises are adopting autonomous agents faster than traditional controls can keep up. Standing privileges, manual approvals and legacy logging mechanisms can quickly break down when agents begin making real-time requests across multiple systems. This creates a difficult choice for organizations: slow down innovation or accept unmanaged risk. The episode makes clear that neither path is sustainable.

A key focus of the conversation is the hidden risk of AI super users. When agents operate through broad service-account credentials, they can gain access far beyond what the requesting human user should have. With Model Context Protocol (MCP) sprawl, this risk expands even further as every connector, tool server, API and gateway increases the enterprise attack surface. One poorly governed workflow can expose sensitive data across systems.
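To make the super-user risk concrete, here is a minimal sketch (not a TrustLogix API; the scope names are hypothetical) of the identity-propagation pattern the episode alludes to: an agent acting on behalf of a user is limited to the intersection of its own scopes and that user's scopes, rather than using a broad service-account credential wholesale.

```python
# Illustrative sketch, assuming a simple scope-based permission model.
# An agent should never exceed the permissions of the human it acts for.

def effective_scopes(user_scopes: set[str], agent_scopes: set[str]) -> set[str]:
    """The agent may only use scopes that BOTH identities hold."""
    return user_scopes & agent_scopes

# A service account might hold broad access across systems...
agent_scopes = {"crm:read", "crm:write", "hr:read", "finance:read"}
# ...but the requesting user is only entitled to CRM reads.
user_scopes = {"crm:read"}

print(effective_scopes(user_scopes, agent_scopes))  # {'crm:read'}
```

Without this intersection step, the agent inherits the full reach of the service account, which is exactly the "AI super user" failure mode described above.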

The episode also examines why visibility is becoming central to AI-era governance. Without deterministic logs, audit trails and runtime observability, AI agents can quickly become black boxes. Security, compliance and data governance teams need to know what an agent requested, what data it accessed, which tool it used, what decision was made and why. That visibility is what turns governance from theory into an operating model.
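As a rough illustration of what such deterministic logging can look like, here is a sketch of a structured audit record capturing the elements the episode lists: what the agent requested, what data it accessed, which tool it used and the decision. The field names are assumptions for illustration, not a standard schema.

```python
# Illustrative sketch: one audit record per agent action, structured so
# security and compliance teams can query and replay it deterministically.
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, user_id: str, tool: str,
                 resource: str, decision: str, reason: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,      # which agent acted
        "on_behalf_of": user_id,   # the human identity propagated
        "tool": tool,              # tool or connector invoked
        "resource": resource,      # data the agent touched
        "decision": decision,      # allow / deny
        "reason": reason,          # why, for audit and review
    }

record = audit_record("agent-42", "analyst@example.com",
                      "crm.search", "crm://accounts", "allow",
                      "scope crm:read held by both user and agent")
print(json.dumps(record, indent=2))
```

Emitting a record like this on every tool call is what keeps the agent from becoming a black box.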

Another important theme is safe enablement. The answer is not to slow innovation with more approval layers. It is to embed governance into the engineering ecosystem through platform controls, policy-as-code, reusable identity patterns, least-privilege access, tool-access guardrails and audit-ready telemetry. This allows AI agents to operate at speed, but only within trusted boundaries.
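A policy-as-code guardrail of the kind described above can be sketched as a deny-by-default rule table evaluated at request time. The rule shape and tool names here are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative policy-as-code sketch: declarative rules checked before any
# tool call, with least privilege as the default (unknown requests are denied).

POLICIES = [
    {"agent": "support-bot", "tool": "crm.search", "effect": "allow"},
    {"agent": "support-bot", "tool": "crm.delete", "effect": "deny"},
]

def authorize(agent: str, tool: str) -> bool:
    """Deny by default; the first matching rule wins."""
    for rule in POLICIES:
        if rule["agent"] == agent and rule["tool"] == tool:
            return rule["effect"] == "allow"
    return False  # least privilege: no rule means no access

print(authorize("support-bot", "crm.search"))  # True
print(authorize("support-bot", "crm.delete"))  # False
print(authorize("support-bot", "hr.export"))   # False (no rule, so denied)
```

Because the rules live in code, they can be versioned, reviewed and reused across teams, which is what lets agents move fast while staying inside trusted boundaries.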

The episode closes with a clear message: scaling agentic AI requires more than another governance tool. It requires an architecture and operating model that can hold up as enterprises move from a handful of pilots to hundreds of AI use cases. The organizations that succeed will be the ones that bring identity, access, observability and runtime intelligence together before risk scales faster than trust.

Tune in to learn how enterprises can move from AI velocity to AI trust by securing agentic AI systems with real-time authorization, identity propagation, least-privilege access, runtime observability and governance that keeps pace with innovation.

Join the conversation. Contact us at podcasts@persistent.com.

Speakers

Akshay Chitlangia, VP Technology, Persistent Systems

Simon Thornell, Field CTO, TrustLogix