AI FOMO is Tearing Your Company Apart

FOMO can be a powerful motivator.
For years, the fear of missing out has driven enterprising leaders to dive headlong into all manner of initiatives they weren’t prepared or equipped to handle. And the FOMO driving AI forward might just be the strongest of all.
For the executive class, the AI revolution is already underway, and joining the fight today is all but a foregone conclusion. According to a recent survey by Wakefield Research, almost all data leaders surveyed (91% of those at the VP level or above) had built or were actively building a GenAI product. More than that, according to a recent survey by Axios, three-quarters (75%) of C-suite executives feel their company’s AI rollout has been successful.
But what about the data teams responsible for this AI development? How are they feeling?
Turns out they aren’t quite as confident.
Executives want more AI. Employees don’t want AI that won’t drive value. And no one wants AI that doesn’t work. In this piece, we’ll look at some of the latest data from Axios, Wakefield, and Stanford to discuss the disparity between executives and data teams, what’s driving the conflict, and what data teams can do about it.
Let’s dive in.
Survey Says: Executives Want More AI.
It’s a law of the enterprise that being first to market is the single greatest contributor to product adoption. In an effort to secure their spots in the annals of corporate history, the majority of executives are pushing hard in the direction of AI. But despite all their positive affirmations (and existential fear-mongering) to build faster, most executives still appear to be disappointed with the outcomes.
According to Axios, nearly all (94%) of the C-suite executives they surveyed said they weren’t satisfied with their current AI solution.
More than that, over 72% of C-suite leaders said their company had faced “at least one challenge” in adopting AI, and 71% of these leaders complained that their AI applications were being created “in a silo.”
Many executives were so dissatisfied with the perceived level of progress that more than 59% were actively seeking new opportunities with companies they believed to be more “innovative” with their AI.
Of course, it’s one thing to give the orders. It’s another thing to carry them out. And unsurprisingly, this single-minded—dare I say obsessive—focus on shipping AI features is having a demonstrable impact on the data and AI teams responsible for delivering them.
Data Teams Want More AI-lignment.
While 75% of the executives in the Axios survey felt that their company’s AI rollout had been successful, that number drops to a staggering 45% for the teams on the ground.
Responding to the call of AI is important for any data and AI team in 2025. Unfortunately, the pace demanded by many enterprise organizations, combined with their relative lack of preparedness, has left data and AI teams delivering their latest AI applications in a pressure cooker, with executives unaware of the risks waiting in the wings.
According to results from our own survey of data teams, roughly 90% of data practitioners revealed that their leadership had unrealistic expectations for the technical feasibility or business value of their GenAI initiatives.
Difficult to build and not valuable is a bad enough combination in its own right. But what’s worse is that data teams aren’t just doubtful about deliverability or outcomes; they’re also concerned about the fundamental data powering their applications. Over two-thirds of Wakefield respondents admitted they don’t fully trust their data for use in AI, and roughly half admitted to leveraging only limited manual testing (if any at all) to validate the data feeding their AI pipelines.
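For a sense of what moving beyond limited manual testing could look like, here’s a minimal sketch in Python (using pandas, with hypothetical file and column names and purely illustrative thresholds) of automated checks run on a dataset before it feeds an AI pipeline:

```python
from datetime import datetime, timedelta, timezone

import pandas as pd


def validate_feature_table(df: pd.DataFrame) -> list[str]:
    """Basic freshness, volume, and null-rate checks on data bound for an
    AI pipeline. Column names and thresholds are hypothetical examples."""
    issues: list[str] = []

    # Freshness: the newest record should be under a day old.
    # (Assumes the `updated_at` column holds timezone-aware timestamps.)
    latest = df["updated_at"].max()
    if datetime.now(timezone.utc) - latest > timedelta(hours=24):
        issues.append(f"stale data: latest record is from {latest}")

    # Volume: a near-empty extract usually signals an upstream failure.
    if len(df) < 1_000:
        issues.append(f"low volume: only {len(df)} rows")

    # Completeness: fields feeding prompts or retrieval should be populated.
    for col in ("document_id", "content"):
        null_rate = df[col].isna().mean()
        if null_rate > 0.01:
            issues.append(f"{col}: {null_rate:.1%} nulls exceeds 1% threshold")

    return issues


if __name__ == "__main__":
    df = pd.read_parquet("documents.parquet")  # hypothetical source file
    for issue in validate_feature_table(df):
        print("DATA QUALITY ISSUE:", issue)
```

Checks like these are only a floor, but the point is that they run on every load, not just when someone remembers to look.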
Unfortunately, when it comes to AI, time and pressure don’t always make diamonds, and what they do make tends to be a whole lot messier.
The Result: AI-related incidents are on the rise.
With foundational data that’s unfit for purpose, outdated quality practices failing to keep pace with the scale and complexity of AI, and executives continuing to push forward with sweeping AI initiatives regardless of (perhaps in spite of) everything I just mentioned, it’s no surprise that costly incidents are on the rise.
Two-thirds of our Wakefield survey respondents reported experiencing a data incident that cost their organization $100,000 or more within the six months preceding the survey, and results from Stanford’s latest AI Index Report found that the number of specifically AI-related incidents is climbing as well.
An anemic approach to reliability, paired with executive energy focused more on delivery than viability, means more issues are slipping through the cracks than ever. Those issues are multiplying hallucinations, diminishing the safety of outputs, and eroding confidence in both the AI products and the companies that build them. Not to mention delivering a whole host of AI applications and chatbots no one asked for, and no one plans to use.
Executives are pushing AI as an inevitable revolution, but the employees closest to the data are pushing right back. So, where do we go from here? How do we bridge the gap between the leaders pursuing AI and the teams responsible for building valuable AI that a consumer can use with confidence?
I believe the solution is twofold:
Step 1. Get your data team closer to the business.
Step 2. Invest in data + AI quality practices that allow teams to develop safer and faster.
Step 1: Get the data + AI team closer to the business
If you want to build something that’s successful, you need to start by building something that’s useful.
Before the first line of code gets written (generated?), data teams, and the executives guiding them, need to align on what’s valuable to their stakeholders, and then scope the kind of AI tools that can deliver it.
Take a step back. Ask yourself: Who is this AI product for? What are their current pain points? What real-world problems can we solve?
If you can’t answer these three questions, then it won’t matter how shiny, curated, or well-prompted your AI application is, to say nothing of the opportunity cost of building it.
Building valuable (adoptable) AI applications starts by getting to know your stakeholders. That means understanding and anticipating their needs, modeling the data to meet them, asking for feedback, and iterating accordingly to build a product that consistently delivers meaningful ROI.
Step 2: Invest in your data + AI quality
Aligning development to value is the right first step. But no AI rollout will be successful, no agent will be technically feasible, and no product will be broadly adoptable if it can’t first be trusted.
An AI application that delivers unreliable outputs is an AI application that’s destined for deprecation—unfortunately, not before it wastes a whole lot of money, time, and reputational goodwill in the process. And that’s not a problem you can solve with a few data quality monitors or a random human in the loop.
Data and AI are a single system, and they need to be managed that way. Traditional manual testing, generic governance practices, and a couple of point solutions for quality or model evaluation are not enough to meet the demands of a black-box, potentially autonomous solution like AI.
Yes, it’s important that the data is well-curated, governed, and fit for purpose. But none of that work will matter if you can’t trust the outputs in the first place (or at least know when not to).
That’s why modern approaches to quality management, like data + AI observability, are so critical. They provide comprehensive end-to-end visibility into the entire data + AI ecosystem (data, system, code, and model), making it possible not only to detect issues anywhere in that ecosystem but to actually manage and resolve them efficiently.
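As a rough, vendor-neutral illustration of what end-to-end visibility means in practice (every name and threshold below is hypothetical, not any particular product’s API), consider checks that span the data, system, and model layers of a single pipeline:

```python
from dataclasses import dataclass, field


@dataclass
class PipelineHealth:
    """Collects signals from each layer of a data + AI pipeline so an issue
    can be traced to its source. Thresholds here are purely illustrative."""
    alerts: list[str] = field(default_factory=list)

    def check_data(self, row_count: int, expected_min: int) -> None:
        # Data layer: did the upstream tables arrive in full?
        if row_count < expected_min:
            self.alerts.append(f"data: {row_count} rows < {expected_min} expected")

    def check_retrieval(self, context_chunks: list[str]) -> None:
        # System layer: did retrieval return any grounding context?
        if not context_chunks:
            self.alerts.append("system: retrieval returned no context; "
                               "the model will answer ungrounded")

    def check_output(self, answer: str) -> None:
        # Model layer: cheap guardrails on the response itself.
        if not answer.strip():
            self.alerts.append("model: empty response")
        elif len(answer) > 10_000:
            self.alerts.append("model: response length suggests runaway generation")


# Usage: record signals at each stage rather than trusting the final output alone.
health = PipelineHealth()
health.check_data(row_count=120, expected_min=1_000)
health.check_retrieval(context_chunks=[])
health.check_output(answer="Paris is the capital of France.")
for alert in health.alerts:
    print("ALERT:", alert)
```

The value isn’t in any individual check; it’s that signals from every layer land in one place, so an empty retrieval result can be traced back to a data problem instead of being misdiagnosed as a model failure.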
Gartner believes that 50% of enterprise companies will implement data + AI observability by 2026…and I think they’re right.
What’s more, modern quality solutions don’t just support AI; they’re powered by it. A scalable and future-proof data + AI reliability solution should leverage AI tools of its own to meet the expanding scale and complexity of AI agents in production.
For example, tools like Monte Carlo’s Monitoring and Troubleshooting Agents drastically accelerate monitor creation and incident resolution workflows by leveraging LLMs to abstract away time-intensive administration and redirect focus to genuinely value-additive activities.
By unifying data + AI into a single agentically observed system, data + AI observability platforms like Monte Carlo are empowering data leaders to scale trust, reduce cost, and deliver reliable AI for the enterprise and beyond.
Just because you’re ready for AI doesn’t mean you’re AI-ready
I might be ready for an extravagant vacation on the French Riviera, but that doesn’t mean my bank account is ready to pay for it.
Vibe coding is great. I love vibes. But vibe in the right direction. If we want to realize the real-life (not hype-tastic) future of production AI, we need to get on the same page about what we’re building and how we’re building it, and that takes some intentionality.
That means executives have to overcome their AI FOMO and get on the same page with their data teams to understand what’s realistic, who it’s realistic for, and what steps need to be taken to make it happen safely. In other words, prioritize, first and foremost, the components that contribute to AI reliability (stakeholder alignment, proper governance, and comprehensive observability) before diving headfirst into a development sprint.
Executives are right—the AI revolution is happening today. But you need to take some time to suit up before you head out to the battlefield.
Ready to see how data + AI observability can help your team build reliable AI at scale? Let’s chat.
Our promise: we will show you the product.