Are We In An AI Bubble?

I don’t know if we are in an AI bubble, but there are signs suggesting it would be prudent for teams to start battening down the hatches.
Let’s be clear: I’m the furthest thing from an AI skeptic. It’s one of the defining technologies of our generation, and I’ve already proclaimed 2026 to be the year of data + AI observability.
But the internet was also a defining technology and eToys.com, Pets.com and a lot of other coms went bankrupt long before Amazon launched S3. Revolutions are rarely linear.
People often focus on the risk of being a laggard, especially when they want to sell you something. And yes, being an AI laggard is a clear and existential risk, but there is also the much less discussed risk of moving too recklessly.
Traveling back to the start of the millennium, we saw:
- Internet startups spending lavishly on flashy advertisements with bold promises, despite having only a slide deck to back them up.
- Large strategic organizations cutting red tape and putting aside governance processes so they could buy that vaporware.
- People getting hurt on both ends as the boom went bust.
However, we also saw organizations adopt new technology, integrate new processes and successfully position themselves to thrive post-pop.
Here are 5 recommendations for how data + AI teams can build sustainable value for the long haul.
Don’t outkick your coverage
New revenue streams are exciting and worth investment. However, bubble exuberance can lead to too large an investment too early.
During the .com bubble, grocery delivery company Webvan quickly expanded into multiple cities chasing market share…and then quickly went bust.
The equivalent for data + AI teams would be spending too many tokens, building what will become commercial-off-the-shelf features, or dedicating too much of their engineering team to new initiatives while ignoring the items that keep the lights on.
In this area, I’ve seen data + AI teams proceed with caution. Teams are perhaps too small and siloed. If a team is monitoring any component of their data + AI system it is most likely their token usage and cost.
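To make that monitoring concrete, here is a minimal sketch of the kind of token-and-cost tracking teams are doing. The per-1K-token prices are placeholders (substitute your provider’s actual rates), and the usage dicts simply mimic the `prompt_tokens`/`completion_tokens` shape most chat-completion APIs return alongside each response:

```python
class TokenCostTracker:
    """Aggregate token usage and estimated spend across LLM calls."""

    def __init__(self, input_price_per_1k, output_price_per_1k):
        # Placeholder prices per 1,000 tokens; swap in your provider's rates.
        self.input_price = input_price_per_1k
        self.output_price = output_price_per_1k
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, usage):
        # `usage` mimics the {"prompt_tokens": ..., "completion_tokens": ...}
        # dict most LLM APIs attach to each response.
        self.input_tokens += usage["prompt_tokens"]
        self.output_tokens += usage["completion_tokens"]

    @property
    def estimated_cost(self):
        # Linear price model: tokens / 1,000 * price-per-1K, per direction.
        return (self.input_tokens / 1000 * self.input_price
                + self.output_tokens / 1000 * self.output_price)


tracker = TokenCostTracker(input_price_per_1k=0.01, output_price_per_1k=0.03)
tracker.record({"prompt_tokens": 1200, "completion_tokens": 400})
tracker.record({"prompt_tokens": 800, "completion_tokens": 200})
# 2,000 input tokens and 600 output tokens recorded so far
```

Even a tracker this simple, hooked into every call path, is enough to spot a runaway pilot before the invoice does.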
A prominent industry example of this restraint is Microsoft, a company that not only survived but thrived through both the early internet and SaaS transitions (remember Clippy?).
They recently pulled back on OpenAI investment and data center construction. Microsoft AI CEO Mustafa Suleyman says their strategy “…is to play a very tight second, given the capital intensiveness of these models.”
Many organizations should take cues from this strategy which The Register notes, “Along with being cheaper, Suleyman’s strategy also means Microsoft can focus more of its energy on building applications and other systems around large language models rather than finding new ways to wrangle neural nets.”
In other words, channel your focus on your data and the infrastructure surrounding AI, that is where your competitive advantage will be.
Have a business case
The main reason many data + AI teams haven’t outkicked their coverage yet seems to be concern about their pilots’ reliability at scale. Teams should be just as concerned that very few of them have a business case, or plans for one.
In any technology venture or investment there is an understanding that it’s not going to be profitable tomorrow. The unit economics may not make sense right away either. That’s OK, but only if there is a timeline, roadmap and reasonable path toward long-term feasibility (AKA a business case).
Initiatives without a business case are not pilots, they are skunkworks. Those have a time and place–our head of AI development even wrote a post discussing the need to separate AI research from product research–but that time and place can’t be always and everywhere.
One data + AI leader responded to our question on this topic saying, “It’s not so much the ROI. I think it’s more like we’ve got a bunch of learnings from [the pilot]. I think it gives us a foothold to build on top of it.”
Just to say it again, this isn’t a bad thing until it’s the only thing you can say about all or most of your pilots. The same data + AI leader recognized the groundwork required for scale, “There’s a level of maturity that we have to gain in terms of how we think about automation workflows before we could apply GenAI on top of that.”
To use another Y2K era example, Walmart and Kmart treated their e-commerce initiatives with distinct strategies. Walmart integrated it into the fabric of their operations and treated it like a business initiative. Kmart launched BlueLight.com as a siloed experiment with Yahoo and included free internet in their offerings as a gimmick. Only one mart is still here today.
Build a business case for your pilots.
Figure out how to measure and govern it
I wrote an article a couple of years ago to let people know I don’t care how big their data is.
Back then, every data conference involved VPs bragging about their terabytes. The point I was making is that it isn’t about the size of your data, it’s about how well you govern and monetize it (and of course ensure its quality and reliability).
I couldn’t help but get a sense of deja vu at the most recent conferences I attended. This time instead of telling me about how many terabytes they could bench press, I heard about the number of AI pilots across the company.
Again, I feel the need to hedge here so my point isn’t misunderstood. Yes, responsible companies should be placing bets on multiple pilots across multiple teams. Some will fail, but the ones that win will win big.
The bubble behavior starts to creep in when the number of pilots becomes the end goal and metric for success.
I enjoyed this recent post from Hamel Husain on rapidly improving AI products. In it he cites the capability funnel approach of Bryan Bischof, the former head of AI at Hex (yes, you are now getting this wisdom fourth hand–it’s that good). In other words, instead of counting pilots, focus on how far they have progressed and the level of utility they provide.
I’ll take the approach a step further and suggest we need governance and reliability funnels too. I’ve heard the “hundreds of AI pilots in production” story dozens of times. The “we have a highly-reliable, well-governed, and strongly adopted AI feature” story, not so much. That is a story worth a keynote.
My team and I have formally interviewed more than three dozen data + AI leaders, and informally spoken with hundreds on their AI initiatives. In all of these conversations, only ONE was able to articulate any quantifiable measurement of the effectiveness or reliability of their AI application.
In case you’re curious (and I hope you are), it was a project that required an agent to reference and apply responses from a source of truth, and he cited specific improvements in precision and recall.
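As an illustration of what that kind of measurement can look like (this is not the team’s actual methodology, which wasn’t shared), precision and recall for an agent that cites a source of truth can be computed by comparing what the agent referenced against a ground-truth set:

```python
def precision_recall(retrieved, relevant):
    """Score an agent's citations against a ground-truth set.

    precision = |retrieved ∩ relevant| / |retrieved|  (how much of what it
                cited was actually relevant)
    recall    = |retrieved ∩ relevant| / |relevant|   (how much of what was
                relevant it actually cited)
    """
    retrieved, relevant = set(retrieved), set(relevant)
    if not retrieved or not relevant:
        return 0.0, 0.0
    hits = len(retrieved & relevant)
    return hits / len(retrieved), hits / len(relevant)


# Hypothetical run: the agent cited docs 1, 2, 3; the source of truth
# says docs 2, 3, 4, 5 were the relevant ones.
p, r = precision_recall([1, 2, 3], [2, 3, 4, 5])
# precision = 2/3, recall = 2/4
```

Tracked release over release, numbers like these are exactly the kind of quantifiable reliability evidence that was missing from almost every conversation.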
Listen to your experts
There is a strange phenomenon in the AI space. It’s the divergence between the boardroom and the breakroom.
The people who are working with data and AI day in and day out, the practitioners, have a more pessimistic, grounded view of current initiatives than company leadership.
We covered this in a recent article about AI FOMO, where a survey showed “Less than half (45%) of employees — versus 75% of the C-suite — think their company’s AI rollout in the last 12 months has been successful.”
Our own survey has shown that 100% of respondents feel pressure from their leadership to implement a GenAI strategy and/or build a GenAI product, but 68% don’t think their data is AI ready. You can’t skip steps or cut corners in this race. If you build your AI on a foundation of bad data you will end up with a house of cards.
As one data and AI leader put it to me, “everyone is so desperate to get these deployments to work but soon there will be thousands of agents active in their organization with no visibility into them…that’s the cliff moment.”
As I’ve said previously, it’s not the models that will differentiate and provide competitive advantage, it’s the data.
Instead of pressuring your practitioners and experts or assuming they aren’t visionary enough, listen and address their concerns. They want to win the race too and they are telling you that they don’t have the right shoes.
Be skeptical and ask for the receipts
Perhaps the biggest bubble behavior is vendor vaporware peddling. I see products that exist only as designed screenshots almost weekly on LinkedIn.
When a revolutionary technology arrives, it becomes harder for organizations to differentiate between the previously impossible and the still impossible. Not to mention that in a gold rush, you don’t necessarily ask for a receipt when buying a pickaxe.
My recommendation is to ask to see a demo or to play around with it in a POC. When we launched our observability agents, we made sure to include product tours and to confirm our monitoring agent was production ready (already deployed across hundreds of customers!) before we announced it.
This may also seem like a basic point, but also be sure to ask yourself if the feature SHOULD be agentic. Agents are great at accelerating workflows, especially when humans are in the loop. I would be very hesitant to give an agent, at least one without extensively proven reliability metrics, the ability to directly manipulate critical data or systems.
I always appreciate when teams double click on the security and permissions aspects as well. We had one team tell us, “We want to use these fancy features because it’s the future, but we must do it in a proper way.”
The right way, regardless
I hope that nothing I’ve relayed here is controversial. Whether or not we are in an AI bubble–and no one knows for sure–the path to sustainable value is the same.
AI initiatives should get extra attention, focus, and budget. Just make sure you aren’t skipping the foundational steps you will need for the long term.
Our promise: we will show you the product.