AI Observability, AI Culture · Updated Oct 14, 2025

In the AI Kingdom, Experience is King

AUTHOR | Barr Moses

Just in case you’ve been living under a rock, the emergence of AI as a technological super-serum is upending the enterprise economy. 

  • The teams who don’t have agents in production want them. 
  • 95% of the agents that are in production fail to deliver any P&L impact.
  • And almost anything profitable can seemingly be commoditized overnight by the very model providers encouraging organizations to build it. 

The announcement of OpenAI’s AgentKit provided a poignant example of the volatility that AI is creating in the development world. Almost as soon as the announcement came down about OpenAI’s latest and greatest, LinkedIn was already writing the epitaphs for some of AI’s newest darlings. 

Transformative one day. Archaic the next. 

So, if we’re living on a technological fault-line where any new project could be made irrelevant overnight, where does it make sense to invest your time and resources? 

I recently spoke with a long-time friend, and current Head of Data and AI at Getz Ventures, Guy Fighel, and what he had to say got me thinking—is the world really so different? Or have the lines of enterprise value just settled within their perennial limits?

In this piece, I’ll discuss how AI is clarifying the value of data and AI teams, where the lines of value are being drawn up today, and how to identify the right projects and partners to make the most out of your resources.

Let’s talk.

The solution is the problem

Before I expand on my own observations, I want to share some of Guy’s. 

“I remember when I built dedicated machine learning models for anomaly detection, as a classifier, or for semantic understanding…and it was really hard. Months of data science work gathering the data, and cleaning the data, and understanding systems, and it was all in Python, or core C++… none of the libraries today existed.

I’m telling you all of that, not just because I’m like an old dinosaur, but because I have a perspective. I’ve done that, and I’m still doing that today with all the new technologies. And I think the main differentiation is that it became just so much easier to iterate super fast on a lot of the ideas that we have.”

On one hand, that’s great news! But there’s a catch-22 embedded in there (pun somewhat intended): if anyone can vibe-code anything anyway, then what could possibly be worth investing in?

In the land of the blind, the one-eyed man is king. But in a world dominated by AI, experience still wins the day. 

Choosing the right AI pilots

AI may be accelerating development cycles, but as I alluded to in the intro, the real value behind what’s being built is right where it’s always been.

Whether we’re talking about applications or agents, it starts with understanding your business (*cough* talk to your business users) to understand their pain and how a data and AI initiative will drive meaningful value. Whether that’s on the platform development side, the AI/ML side, data governance, or all of the above.

Once you know what they need, you’re basically still in a classic build versus buy scenario. And in that scenario, it always makes sense to build where you can add unique value.

It’s just that “unique” part that’s becoming a little more challenging these days.

Here’s my hot take: don’t use AI to solve the problems that other people can solve—use AI to solve the problems that only you can solve. 

In other words, focus on pilots and projects where you have the skills and the data to make projects uniquely valuable for your business users. If that was true before AI, it’s even more salient today. 

Build where you have experience and first-party data

Choosing the right AI pilot isn’t about creating the flashiest demo—it’s about bringing your unique experience to bear on a technology that can optimize it. 

Focus your energy where you can establish value based on what’s proprietary to your organization, which today is quantified by the context data it owns and the team that supports it.

If you’re a marcom solution, that value might be hidden in years of performance data or ad creatives. What campaigns and channels worked? When? What was the spend? For what industries? What do you know about your customers?

In the age of AI, it’s the context that we bring to the equation (both explicit and implicit) that determines the value of our AI projects—not the speed of the model that’s used to vibe-code it.

Which brings me neatly to the other side of that coin — choosing the right partners. 

Choosing the right partners

If you aren’t building a solution, you’re probably buying it from somewhere (or going without it until the budget gets cleared). But as AI tooling proliferates, longtime legacy solutions and even some modern platforms are being challenged at a staggering pace—and the question of where to invest is getting a whole lot messier in the process.

It goes without saying (but I’ll say it anyway): choosing a partner is about more than plugging the right gap for today. It’s about making a bet that what you’re investing the time to stand-up now will still be solving the right problems 5 years from now.

Make the wrong bet and you’ll find yourself rearchitecting a lot sooner than you anticipated. And in today’s climate, “a lot sooner” really is a lot sooner. Let’s look at a few criteria.

Top 3 criteria for a strong platform partnership

In the same way that it matters how you pick your pilots, it matters how you pick your partners. And again, my conversation with Guy gives me something to chew on.

“I really love teams that are coming with a lot of expertise—that have seen the problems over and over and over again and have access to unique data sets. Bringing all of that experience and creating specific data models, and small models, and creating a solution that grows even when the public models are growing. That’s the ideal company.”

Sound familiar? Let’s break it down:

  • Their team provides experience that makes them uniquely suited to solve a given problem.
  • Their platform has access to unique first-party data that can’t be replicated by existing foundational AI models. 
  • Their product grows with AI but offers value that’s independent from it (i.e. established integrations, proprietary models, etc.)

Just like what your team is building, the right platform partner isn’t solving just any problem—they’re solving a problem that only they can solve.

However, I would amend Guy’s original statement to include one more criterion:

  • Their product team has a specific and experienced point-of-view on the problem they’re solving that will define how the platform is built and maintained in the future. 

Again, you don’t just want a partner who can solve part of one problem today. You want a partner who deeply understands the customer pain points and who’s going to continue building to solve the whole problem—and whatever that problem evolves into in the years to come.

In the AI age, anyone can build anything—and quickly. But what you can’t acquire quickly is context. What you can’t acquire quickly is perspective. If you’re looking for a partner that’s still going to exist in 5 years, look for companies that check those boxes.  

How Monte Carlo is using our experience as the data observability leader to build better AI observability

You might think that the latest and greatest AI-native evaluation tool is the right solution to make AI reliable… but you’d be wrong.

It’s not because tools like evaluations don’t solve part of the problem—it’s because those solutions don’t have enough visibility to solve the whole problem.

Unlike traditional data products, reliable AI isn’t the product of a single pipeline: it’s the byproduct of multiple interdependent technologies, all being accessed, analyzed, and embedded in real time.

It’s part determinism, part improvisation. But it all needs to be managed with intention.

Recency bias tells us that the latest AI observability solution is the right answer simply because it’s the newest. But siloed AI observability tools like evaluations will never be sufficient to make AI reliable for the same reason that data testing was never sufficient to make data products reliable. And that becomes all too clear when an agent moves into production.

The reality is, AI can go wrong in all kinds of ways—and the model is just one of them. You need a system that can cover the entire agent lifecycle end-to-end (data, system, code, and model), to understand not just when an agent delivers a bad output in production, but also why and how to fix it.

If a reliable agent response is the finish-line, then AI observability is only the last mile of the race. 

Bringing data and AI together with agent observability

At Monte Carlo, we’ve helped data and AI teams catch and resolve hundreds of thousands of reliability issues in production based on deep integrations across the data and AI stack. And we don’t just have all that experience baked into our team—we have it baked into products like our Troubleshooting Agent as context data to help teams understand and resolve incidents even faster. 

And now we’re bringing that same expertise to Agent Observability. On its own, AI observability is occasionally helpful. As part of a complete data and AI solution, it’s transformative. That’s how experience and context data come together to create a product that’s growing with AI, not being replaced by it. 

Experienced, context-enabled, AI-powered, and with a north-star point of view on the problem to solve. 

Speed creates opportunity—but only experience creates value

In an age where categories can be upended in a single keynote, where we invest our time and resources matters more than ever. 

Vibe-coding might help you build and iterate faster, but faster doesn’t mean more valuable. Whether you’re building pilots internally or evaluating partners externally, reliable proprietary context data—and the expertise to know how to use it—is the first and only moat for data and AI.

A couple key takeaways:

  • Focus your energy on solving problems that only your team can solve based on proprietary data and experience.
  • Evaluate partners based on the depth of their solution, the experience of their team, and the value of the proprietary data that supports it. 
