
Matthew Carroll Explains Why Data Governance Needs a Reset for AI Workflows

  • April 9, 2026
  • Data Governance & Quality

AI is scaling your data mistakes in real time.

Matthew Carroll, Co-Founder & CEO of Immuta, highlights how flawed access models don’t just persist; they amplify under AI. What once caused delays now creates risk at scale, as machines act faster than governance can respond. He outlines how shifting to precise, context-aware provisioning can turn that risk into controlled acceleration.


Matthew, you’ve spent years working across data, security, and governance. What specific gap pushed you to start Immuta, and why did it feel urgent even back in 2015?

What pushed us to start Immuta was a problem we saw firsthand in the US government.

An organization we worked for was hiring data scientists to solve cross-cutting problems, but the data they needed lived in different systems, under different rules, and was owned by different groups. Even when it was fundamentally the same kind of data, each source had its own approval process, its own restrictions, and its own interpretation of who could use it and for what purpose.

So every new task turned into the same mess: request access, chase exceptions, wait for approvals, and then do it all over again for the next source. It was slow, manual, and incredibly hard to scale.

The problem got even worse once people needed to combine data. At that point, the basic questions became hard to answer: Who owns the resulting dataset? Which policies still apply? Whose rules take precedence? And who is actually accountable for how that combined data gets used?

That was the gap. Policy was trapped inside individual platforms and individual teams, which meant governance could not keep up with how data was actually being used.

What felt urgent to me, even back then, was that this was not a niche problem. It was the future arriving early. As soon as organizations started treating data as a shared strategic asset rather than something locked inside silos, they were going to need a way to separate policy from the underlying platforms, apply it consistently across environments, and let the people who actually understand the rules oversee them without depending entirely on technical teams.

That was the founding idea behind Immuta: governed data provisioning has to be consistent, portable, and understandable by the people responsible for the policy, not just the people running the infrastructure.

 

Before AI became mainstream, what were enterprises fundamentally getting wrong about data access, and in what ways are those same assumptions still holding them back today?

Before AI, most enterprises made three bad assumptions: that data consumers would stay relatively few and technical, that access requests would be occasional, and that governance could happen outside the flow of work.

That led to a model built around tickets, static entitlements, and delayed approvals. It was treated like a back-office process rather than a core part of how the business operates.

Those assumptions are still holding companies back because AI has turned data consumption from an occasional event into a continuous one. When every employee can ask questions in natural language, and agents can execute multi-step workflows across systems, the old model breaks down quickly.

The bigger mistake was thinking about access as a permissioning problem instead of a provisioning problem. The real question is not just whether someone can get to data. It is what data should be delivered to which actor, in what form, for what purpose, and under what constraints. If you still operate on the old assumptions, AI simply magnifies both the friction and the risk.
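The distinction Carroll draws can be made concrete. A minimal sketch, with invented names and rules (this is not Immuta's API), of why a provisioning decision has more inputs and outputs than a yes/no permission check:

```python
from dataclasses import dataclass

# Illustrative only: a provisioning decision weighs actor, purpose, and
# delivery form, and its output is a shape of data, not just an allow/deny.

@dataclass
class ProvisioningRequest:
    actor: str    # human user or agent identity
    dataset: str
    purpose: str  # declared reason for the access

def provision(req: ProvisioningRequest) -> str:
    """Decide what form of the data to deliver, not just whether to allow it."""
    if req.purpose == "fraud_investigation":
        return f"deliver {req.dataset} as raw to {req.actor}"
    if req.purpose == "analytics":
        return f"deliver {req.dataset} as masked to {req.actor}"
    return f"deny {req.dataset} for purpose '{req.purpose}'"

print(provision(ProvisioningRequest("analyst_1", "claims", "analytics")))
# prints: deliver claims as masked to analyst_1
```

A pure permissioning model would collapse all three branches into one boolean; the purpose and delivery form are exactly what gets lost.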

 

Despite modern data stacks, access to data is still slow, manual, and fragmented. Where exactly does the system break – process, ownership, tooling, or mindset?

The system breaks first at ownership. Process is just where people feel it.

In most companies, discovery happens in one place, requests happen somewhere else, approvals sit with another team, and enforcement happens separately inside each platform. Data teams own the platforms. Security owns controls. Compliance wants evidence. Business teams own the use case. Everyone owns part of the outcome, which means no one owns the end-to-end provisioning decision.

So yes, the symptoms show up in process, but the underlying problem is the operating model.

Tooling matters, but I do not think this is mainly a tooling problem. Many organizations have modernized storage, compute, and catalogs, but they are still managing access one platform at a time. That is a legacy model. It does not hold up when data is spread across warehouses, BI tools, catalogs, applications, and now AI systems generating their own access patterns.

That is why environments that look modern on paper still feel slow in practice. The stack changed. The access model often did not.

 

As AI agents start operating across enterprise systems, what’s the most overlooked risk – not from a security lens, but from an identity and accountability standpoint?

The most overlooked risk is blurred agency.

Most enterprise systems were designed around a simple assumption: the actor is a human, so identity and accountability travel together. Agents break that assumption. An agent may be initiated by a user, bounded by an application, and then make its own decisions about how to complete a task, what data to retrieve, and which systems to touch.

If you do not model that explicitly, you lose the chain of responsibility. Who authorized the work? Who actually executed it? What purpose justified the data access? Where did the agent stay within scope, and where did it go beyond it?

That is not just a security problem. It is an accountability problem. Once an organization can no longer clearly explain its own behavior, governance becomes much harder, and trust erodes very quickly.

 

Many agents today inherit human credentials as a shortcut to get things done. What kinds of attribution gaps or false signals does this create in audit trails?

When agents inherit human credentials, the audit trail stops describing reality.

On paper, it looks like the user did everything. In practice, some actions were initiated directly by the user, some were delegated to the agent, and some were autonomous decisions the agent made in the middle of a workflow. Those are not the same thing, but the logs flatten them into one identity.

That creates false confidence. It looks like you have attribution, but you really have attribution theater.

It also pushes organizations toward broader permissions because teams do not want agents to fail halfway through a task. So permissions widen, signal quality drops, and the audit trail becomes less trustworthy right when you need it to become more precise.

The problem is not that there are no logs. The problem is that the logs become misleading. They may be precise in format, but wrong in substance.

 

When you call this a “not if, but when” issue, what’s the tipping point—scale of agents, sensitivity of data, or something else—that makes it unmanageable?

The tipping point is not a specific number of agents. It is the moment the organization is still governed on a human clock, while work is being executed on a machine clock.

You see it when access decisions stop looking like occasional exceptions and start looking like continuous system behavior. Agents do not submit one request and wait two days. They discover data, invoke tools, chain tasks, cross systems, and keep going.

If your control model still relies on manual review, approval queues, or broad-standing entitlements, it will fail in one of two ways. Either it becomes a bottleneck that slows the business, or people widen access to keep work moving and lose precision in the process.

So the tipping point is operational, not numerical. It is when the pace and shape of data use no longer match the pace and shape of your governance model.

 

As both humans and machines consume data at machine speed, how should organizations rethink visibility—moving beyond “who accessed what” to something more contextual and real-time?

Visibility has to move from events to decisions.

“Who accessed what” was a useful audit question in a slower world. It is not enough anymore. In an environment with humans and agents acting continuously, organizations need to know who or what acted, on whose behalf, for what purpose, under which policy, against which data, and what happened next.

In other words, they need the chain of justification, not just the access event.
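The record Carroll describes can be sketched as a structured event. The field names below are assumptions made for illustration, not any product's schema; the point is that every element of the justification chain is a first-class field rather than something inferred later from flat access logs:

```python
import json

# Illustrative sketch of a "chain of justification" record: who or what
# acted, on whose behalf, for what purpose, under which policy, against
# which data, and what happened next. All field names are invented.

decision = {
    "actor": "agent:forecast-bot",              # who or what acted
    "on_behalf_of": "user:alice",               # whose authority it carried
    "purpose": "quarterly_forecast",            # declared justification
    "policy": "finance-masking-v7",             # which policy governed it
    "dataset": "warehouse.sales_q3",            # against which data
    "decision": "allow_masked",                 # what the policy produced
    "outcome": "returned masked rows",          # what happened next
}

print(json.dumps(decision, indent=2))
```

With agents in the loop, `actor` and `on_behalf_of` routinely differ, which is precisely what a "who accessed what" log cannot express.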

And that visibility has to be real-time. Static logs are useful for forensics, but they do not help you govern in the moment. By the time you review them, the workflow has already run, data may have moved, and the context is gone.

Real visibility means being able to see whether the current use still makes sense as conditions change, and whether policy still fits what the human or machine is actually doing. Otherwise, you are not governing live systems. You are documenting them after the fact.

 

Looking ahead, as AI-driven workflows become the default, how do you see the balance between speed and control evolving, and where does Immuta aim to redefine that equation?

I think the old tradeoff between speed and control is mostly a symptom of bad architecture.

If the only way to move fast is to loosen controls, or the only way to control risk is to slow everything down, then the model itself is broken.

In AI-driven workflows, speed and control have to be designed together. The answer is not broader standing access plus better monitoring later. It is more precise, dynamic provisioning at the point of use, where policy can be applied in context and adapt as conditions change.
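One way to picture "policy applied in context at the point of use": the same query returns a different shape of data depending on the declared purpose, with no standing entitlement involved. The policy names and masking rules below are invented for the sketch:

```python
# Minimal sketch: purpose-driven masking evaluated at query time.
# Policies and column names are illustrative assumptions.

POLICIES = {
    "marketing": {"mask": ["email", "ssn"]},
    "fraud_review": {"mask": ["ssn"]},
}

def apply_policy(rows, purpose):
    """Mask columns at the point of use based on the caller's declared purpose."""
    policy = POLICIES.get(purpose)
    if policy is None:
        raise PermissionError(f"no policy for purpose '{purpose}'")
    masked = set(policy["mask"])
    return [
        {col: ("***" if col in masked else val) for col, val in row.items()}
        for row in rows
    ]

rows = [{"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}]
print(apply_policy(rows, "marketing"))
# [{'name': 'Ada', 'email': '***', 'ssn': '***'}]
```

Because the decision runs inside the request path, access stays fast while remaining precise; there is no broad grant to widen and no queue to wait in.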

That is the equation we are trying to change at Immuta. Not by adding more gates around the process, but by making policy part of how data is provisioned in the first place. The goal is to make governed access fast enough for modern workflows without giving up precision, accountability, or trust.

Data Governance
AI
Data Security
Data Access
Data Management
Enterprise AI
AI Governance
Data Provisioning

As the co-founder and CEO of Immuta, Matthew’s mission is to make the future of data secure. He is known for building and securing scalable data systems, as well as his service and innovation within the US federal government. Matthew is passionate about data policy and the future of risk management. Since its founding in 2015, Immuta has quickly become the leader in data security and data access.

Before co-founding Immuta, Matthew served honorably as an intelligence officer in the US Army, including tours in Iraq and Afghanistan. After his military service, Matthew served as CTO of CSC’s Defense Intelligence Group, where he led data fusion and analytics programs and advised the US Government on data management and analytics issues. While supporting the US intelligence community, Matthew analyzed some of the world’s most complex data sets and solved challenging data management, analytics, and intelligence problems in the field as a forward-deployed engineer.

More about Immuta:

Immuta is the expert in data access and provisioning. Since 2015, Immuta has given Fortune 500 companies and government agencies around the world the power to put their data to work – faster and more safely than ever before. Our platform delivers data security, governance, and continuous monitoring across complex data ecosystems – de-risking sensitive data at enterprise scale. From BI and analytics to data marketplaces, AI, and whatever comes next, Immuta accelerates safe data discovery, collaboration, and innovation.

Learn more at immuta.com