
Why CDAIOs Are Losing Sleep—Top 3 Priorities for 2025

By Barr Moses

The data lake is a swamp! We’re out of tokens! The agents are hallucinating!

Across enterprise organizations, data + AI leaders have found themselves center stage for a generational leap in technology. And with that newfound spotlight comes the privilege of experiencing everything that’s going right with those data + AI initiatives…and everything that’s going wrong.

Over the last few months I’ve spoken with over 100 data + AI leaders—from 1:1 conversations and roundtables to more than a few conferences—to discover what’s on their minds and agendas for 2025.

And in the race to production-ready AI, three core priorities have risen to the surface.

From productivity to visibility, let’s take a look at what’s burning for data + AI leaders—and how you can navigate those challenges for your own teams.

If any of these resonate with you, reach out below and let me know!

Accelerate Data + AI Teams

The mandate to functional business leaders is clear: deliver more value. And if that’s a gentle memo for the broader organization, it’s a flashing neon sign for data + AI leaders.

Executives everywhere are demanding greater returns and faster product revolutions—and that makes data + AI team productivity one of the first problems to solve for 2025. Fortunately, “more AI” appears to be both the problem and the solution.

As Pavilion stated eloquently in a recent newsletter, it’s no longer about merely adopting AI — it’s about harnessing it to fundamentally transform what your organization can accomplish. The companies that don’t won’t just be disrupted—they’ll be irrelevant.

Which explains why I’m seeing more and more data + AI leaders prioritizing table-stakes solutions like AI-assisted coding to accelerate productivity, with one leader I spoke with in the pharma space targeting an 80% adoption rate across all data + AI teams by year’s end.

And that makes sense! AI is really good at SQL, and even better with a human in the loop. In fact, early enterprise adopters are reporting anywhere from 25% to 200% productivity gains, depending on how impact is measured.
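What does that human-in-the-loop actually look like? Here’s a minimal sketch of the review gates, assuming a hypothetical `ask_llm_for_sql` stub in place of whatever AI coding assistant your team actually uses:

```python
# Minimal human-in-the-loop sketch. `ask_llm_for_sql` is a hypothetical
# placeholder for an LLM call; the review gates are the point.
import sqlite3

def ask_llm_for_sql(question: str) -> str:
    # Placeholder: in practice this calls your LLM of choice.
    return "SELECT region, SUM(amount) AS total FROM orders GROUP BY region"

def run_with_review(conn: sqlite3.Connection, question: str):
    candidate = ask_llm_for_sql(question)
    # Gate 1: only read-only statements pass through this path.
    if not candidate.lstrip().upper().startswith("SELECT"):
        raise ValueError(f"Refusing non-SELECT statement: {candidate!r}")
    # Gate 2: dry-run the plan so a human can sanity-check it first.
    plan = conn.execute("EXPLAIN QUERY PLAN " + candidate).fetchall()
    print("Proposed SQL:", candidate)
    print("Query plan:", plan)
    # Gate 3: explicit human sign-off before anything executes.
    if input("Run this query? [y/N] ").strip().lower() != "y":
        raise RuntimeError("Query rejected by reviewer")
    return conn.execute(candidate).fetchall()

# Toy usage with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.execute("INSERT INTO orders VALUES ('NE', 120.0), ('SW', 80.0)")
print(run_with_review(conn, "Total order value by region?"))
```

The gates are deliberately boring: read-only by default, a visible query plan, and an explicit human yes before anything touches data.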

As exciting as those productivity gains are, faster coding is only half the battle. For data + AI leaders, it really doesn’t matter how fast you can code a pipeline if you can’t validate its reliability at the same velocity.

A data + AI team’s productivity will always be determined by its most repetitive workflows, and monitoring and incident management processes are right at the top of that list.

Many of our data catalog partners are already offering AI solutions to help with documentation. What’s more, we recently announced our own observability agents to drive data + AI reliability, beginning with our monitoring and troubleshooting agents designed to dramatically accelerate incident detection and resolution for data + AI applications.

Deliver Reliable Data for AI Applications

Look, let’s call a spade a spade—companies are struggling with hallucinations. And the painful reality is, there are a lot of ways AI can go bad. (If you need a refresher, check out this article from Shane Murray on the top 5 AI reliability pitfalls.) 

The problem with a black box like AI is that even if you can identify when something goes wrong through some combination of evaluators or testing (and that’s a big if), you’re at a loss to determine why or how to solve it.

And that makes carefully curating and validating the first-party context data that’s feeding those pipelines all the more important.

In other words, we need to get the data “AI-ready.” While there are a variety of definitions for what it means to deliver “AI-ready data,” I find Gartner’s definition to be the most helpful: 

  • Consolidate your data into a modern data platform
  • Give your data semantic meaning so it can be governed and activated
  • Ensure your consolidated data source is trusted and reliable 

While all three factors are critical for AI-readiness, it’s that third factor where leaders can have the biggest impact—for better or worse.

The biggest misconception I hear about AI reliability is that the job is done once the data quality monitors are in place. Traditional data quality methods are an important first step—but they’re just that, a first step. 

The actual key to success—the effort that truly moves the needle—is operationalizing incident management.

  • Who gets what alerts?
  • How do you establish ownership and prevent duplicate fire drills when failures cascade?
  • How do you separate the signal from the noise?
  • How do you give the team as much context as possible to accelerate root cause analysis?
  • How do you define and track time-to-respond and time-to-fix SLAs that correspond to the business value of each data + AI product?

And the truth is, most teams don’t have good answers to these questions. 
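Part of the problem is that the answers rarely live anywhere explicit. As a purely illustrative sketch, with hypothetical product names, owners, and SLA tiers rather than any real configuration, even a small routing table that encodes ownership, business-value tiers, and lineage-aware alert suppression forces those decisions out of tribal knowledge:

```python
# Hypothetical sketch: codify alert routing, ownership, and SLAs.
# All names and tiers below are illustrative, not a real configuration.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    owner: str            # one accountable on-call owner, not a shared channel
    tier: str             # business value drives the SLA, not alert volume
    upstream: list[str]   # lineage, used to suppress cascading duplicates

# Time-to-respond targets in minutes, by business-value tier.
SLA_MINUTES = {"critical": 30, "standard": 240, "best_effort": 1440}

PRODUCTS = {
    "raw_orders": DataProduct("raw_orders", "data-platform", "standard", []),
    "orders_fct": DataProduct("orders_fct", "data-platform", "critical", ["raw_orders"]),
    "revenue_dashboard": DataProduct("revenue_dashboard", "analytics-eng", "critical", ["orders_fct"]),
}

def route_alerts(failed: set[str]) -> list[tuple[str, str, int]]:
    """Page only the most upstream failures: if a dependency also failed,
    the downstream alert is context, not a separate incident."""
    pages = []
    for name in sorted(failed):
        product = PRODUCTS[name]
        if any(dep in failed for dep in product.upstream):
            continue  # cascading failure: one drill, run at the root
        pages.append((name, product.owner, SLA_MINUTES[product.tier]))
    return pages

print(route_alerts({"orders_fct", "revenue_dashboard"}))
# -> [('orders_fct', 'data-platform', 30)]  # the dashboard alert is suppressed
```

The specifics will vary, but the design choice matters: ownership, SLAs, and suppression rules become something versioned and reviewable rather than improvised during each incident.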

As JetBlue Senior Manager of Data Engineering Ashley Van Name describes it, “An observability product without an operational process to back it is like having a phone line for 911 without any operators to receive the calls.”


Which brings us to data + AI leaders’ third and final problem to solve this year…

Drive AI Adoption

I’ve heard dozens of companies claim they have ‘hundreds of AI agents in production.’ What I don’t hear many saying is that they’re also reliable and well-adopted. 

I’ll be the first to admit this isn’t a solved problem. We’re still figuring this out as an industry, and there’s still a lot of work to be done. But of the few success stories I have heard, there’s always one common thread. 

“Our AI model looks at volume metrics, trends, and things like that… We’re putting trust into a machine for something that humans used to do, and having [delivering trustworthy data] lends that trust to the business that AI is going to do the right thing.”

That’s a real quote I heard from a CDAIO at an enterprise financial services company, describing how his team was driving adoption. Notice that keyword? Trust.

If we want our AI products to be well-adopted, they first need to be well-trusted.

The internet was a domain for cat videos before it was the de facto solution for banking. Analytics were confined to boardrooms before they were automating trillions of dollars in business operations. What transformed these technologies from novelties to business needs wasn’t their capabilities—it was their reliability.

And that level of reliability is only possible when you have visibility into, and control over, the entire system end to end.

It’s not enough to simply make your structured data AI-ready.

  • Your unstructured data needs to be tagged with metadata and monitored for quality. 
  • Your embeddings from your vector databases need to be monitored for accuracy and completeness.
  • Your model responses and costs need to be monitored for performance and alignment.

And all that and more needs to be monitored and managed in a comprehensive system that doesn’t just identify problems but makes them actionable at every level of an agentic application—the data, system, code, and model levels.

That’s the foundation of data + AI observability. And at Monte Carlo, we’re working with dozens of data + AI leaders to deliver on that vision.
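To make just one of those layers concrete, here’s a hedged sketch of the embedding checks described above. `fetch_embeddings`, the expected dimension, and the document IDs are all hypothetical placeholders for your own vector store and embedding model:

```python
# Hedged sketch of data-level checks on embeddings: completeness,
# dimensionality, NaNs, and all-zero (failed) vectors.
import numpy as np

EXPECTED_DIM = 1536  # assumption: match whatever your embedding model emits

def fetch_embeddings() -> tuple[list[str], np.ndarray]:
    # Placeholder: pull (doc_id, vector) pairs from your vector database.
    ids = ["doc-1", "doc-2"]
    return ids, np.random.rand(len(ids), EXPECTED_DIM)

def check_embeddings(ids, vectors, source_doc_ids):
    issues = []
    # Completeness: every source document should have an embedding.
    missing = set(source_doc_ids) - set(ids)
    if missing:
        issues.append(f"{len(missing)} documents missing embeddings")
    # Validity: right dimensionality, no NaNs, no all-zero vectors.
    if vectors.shape[1] != EXPECTED_DIM:
        issues.append(f"dimension {vectors.shape[1]} != expected {EXPECTED_DIM}")
    if np.isnan(vectors).any():
        issues.append("NaN values found in embeddings")
    zero_rows = int((~vectors.any(axis=1)).sum())
    if zero_rows:
        issues.append(f"{zero_rows} all-zero vectors")
    return issues

ids, vectors = fetch_embeddings()
print(check_embeddings(ids, vectors, source_doc_ids=["doc-1", "doc-2", "doc-3"]))
# -> ['1 documents missing embeddings']
```

The same pattern, checks that are cheap to run and explicit about what “healthy” means, extends to the system, code, and model levels.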

Excited to share more in future articles!
