AI Observability, Data Observability | Updated Mar 02, 2026

The Missing Context Layer in Production AI

AUTHORS | Barr Moses | Yoni Leitersdorf

Barr Moses is the CEO & cofounder of Monte Carlo. Yoni Leitersdorf is the CEO & cofounder of Solid.

In our recent post, The Enterprise Architecture of the Future, we described the architectural shifts required to make AI dependable at scale. What we didn't explore in detail was the semantic (or context) layer, nor did we call out specific solutions operating there. That gap is worth addressing.

We're writing this together, Monte Carlo and Solid, because we operate at different layers of the same problem. And the problem is this: enterprise AI doesn't have a model problem. It has an architecture problem.

Connectivity Is Not Context

Most enterprises operate across multiple warehouses, often a mix of modern cloud platforms and legacy systems, layered with BI tools, dashboards, reverse ETL platforms, and embedded business logic. Even when individual systems offer semantic modeling features, definitions rarely live in just one place.

Revenue is calculated differently across teams. "Active user" varies by function. Important joins, filters, and exceptions live in dashboards or analyst logic rather than governed systems.
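To make the fragmentation concrete, here is a minimal sketch of two teams computing "revenue" from the same rows and getting different answers. The field names and business rules are hypothetical, purely for illustration:

```python
# Illustrative only: two teams, one table, two "revenue" numbers.
# Field names and rules are hypothetical assumptions.
orders = [
    {"amount": 100.0, "status": "complete", "refunded": False},
    {"amount": 40.0,  "status": "complete", "refunded": True},
    {"amount": 25.0,  "status": "pending",  "refunded": False},
]

# Finance's definition: completed orders, net of refunds.
finance_revenue = sum(
    o["amount"] for o in orders
    if o["status"] == "complete" and not o["refunded"]
)

# Sales' definition: everything booked, refunds included.
sales_revenue = sum(o["amount"] for o in orders)

print(finance_revenue, sales_revenue)  # prints: 100.0 165.0
```

Both numbers are "correct" under their own team's logic; an AI agent querying this table has no way to know which definition a given question intends.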

Humans reconcile these inconsistencies through shared context and institutional knowledge. AI systems cannot.

When models are connected directly into fragmented environments without a governing context layer, they inherit that fragmentation. Outputs may be technically valid, but they won't be consistently aligned across teams or use cases.

This is not fundamentally a data quality issue.

Itโ€™s a missing context layer issue.

Two Layers. Two Kinds of Reliability.

At Monte Carlo, we focus on reliability at the data layer. Observability ensures pipelines are fresh, complete, accurate, and schema-stable. Without that foundation, analytics and AI systems fail quickly. Broken data means broken AI.

At Solid, we focus on the layer above: ensuring that AI systems understand what the data means. We build and maintain semantic models, the business logic, metric definitions, and domain context that allow AI agents to reason about data accurately, not just retrieve it.

Technically healthy data does not guarantee semantically aligned outcomes. Business definitions drift. Metrics conflict. Logic evolves. None of that necessarily triggers a data incident.

Observability answers: Is the data healthy?

A context layer answers: Is the meaning aligned?

Enterprise AI requires both.

The Expanding Role of the Context Layer

The industry has long recognized the need for shared definitions. BI-native semantic layers like Looker's, transformation-driven approaches like the dbt Semantic Layer, and independent platforms such as AtScale and Cube have all aimed to centralize and standardize metrics for analytics.

But AI changes the bar.

Definitions must now be interpretable by models, portable across applications, continuously updated, and reinforced through feedback. Static documentation or dashboard-level logic is no longer sufficient. Business meaning must be engineered as durable infrastructure.
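What "engineered as durable infrastructure" can mean in practice is a metric definition expressed as data rather than prose, so any consumer resolves the same governed logic. The schema and field names below are our own assumptions for illustration, not Solid's actual format:

```python
# Hypothetical sketch: a metric definition as machine-readable data.
# Schema, field names, and filter rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    expression: str       # governed aggregation logic
    grain: str            # level of aggregation
    filters: tuple = ()   # the exceptions that usually hide in dashboards
    owner: str = "unassigned"

ACTIVE_USER = MetricDefinition(
    name="active_user",
    expression="COUNT(DISTINCT user_id)",
    grain="daily",
    filters=("event_type != 'internal_test'", "session_seconds >= 10"),
    owner="growth-analytics",
)

def render_sql(m: MetricDefinition, table: str) -> str:
    """Any consumer (dashboard, agent, reverse ETL job) resolves the
    metric from this one definition instead of re-deriving it."""
    where = " AND ".join(m.filters) or "TRUE"
    return f"SELECT {m.expression} FROM {table} WHERE {where}"

print(render_sql(ACTIVE_USER, "events"))
```

Because the definition is structured, it can be versioned, diffed, and handed to a model as context, which static documentation cannot.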

That's what Solid is built to do: automatically generate and maintain AI-native semantic models between enterprise data systems and the AI applications consuming them, so context isn't a one-time configuration but a continuously governed layer.

Trust Precedes Automation

At SurveyMonkey, a shared customer of Monte Carlo and Solid, scaling AI adoption first required ensuring that systems operated from a shared, governed understanding of metrics and joins.

By centralizing business logic in a semantic layer, the team can define once, validate AI-generated queries transparently, and maintain consistency as adoption scales. Trust preceded automation.
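A minimal sketch of the "define once, validate transparently" pattern: an AI-generated query is trusted only if its metric logic matches the single governed definition. The definitions and matching rule here are illustrative assumptions, not SurveyMonkey's or Solid's actual implementation:

```python
# Hypothetical sketch: validate an AI-generated metric expression
# against the one governed definition. Names are illustrative.
GOVERNED = {
    "revenue": "SUM(amount) FILTER (WHERE status = 'complete' AND NOT refunded)",
}

def validate(metric: str, generated_expression: str) -> bool:
    """Accept the AI's expression only if it matches the governed
    definition (normalized for whitespace)."""
    expected = " ".join(GOVERNED[metric].split())
    actual = " ".join(generated_expression.split())
    return expected == actual

ok = validate(
    "revenue",
    "SUM(amount) FILTER (WHERE status = 'complete' AND NOT refunded)",
)
bad = validate("revenue", "SUM(amount)")  # silently drops the refund rule
print(ok, bad)  # prints: True False
```

A real validator would compare parsed query plans rather than strings, but the sequencing is the point: the governed definition exists first, and automation is checked against it.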

That sequence matters. You can't automate your way to trust. You have to build it first, at the infrastructure level.

Better Together: Observability + Context

The emerging enterprise AI stack is becoming clearer:

  • Data infrastructure provides storage and compute
  • Observability (Monte Carlo) ensures technical reliability
  • A context/semantic layer (Solid) ensures semantic reliability
  • AI systems sit on top

Without observability, AI operates on unstable data.

Without a context layer, AI operates on unstable meaning.
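The two failure modes above can be read as a single serving gate: answer only when the data layer is healthy and the semantic layer is aligned. The thresholds and checks below are stand-ins we invented for illustration:

```python
# Minimal sketch of the two-layer gate: observability checks the data,
# the context layer checks the meaning. All thresholds are assumptions.
def data_is_healthy(freshness_hours: float, null_rate: float) -> bool:
    # Observability-style checks: freshness and completeness.
    return freshness_hours <= 6 and null_rate <= 0.01

def meaning_is_aligned(used_version: int, governed_version: int) -> bool:
    # Context-style check: the agent used the current governed definition.
    return used_version == governed_version

def can_serve(freshness_hours: float, null_rate: float,
              used_version: int, governed_version: int) -> bool:
    return (data_is_healthy(freshness_hours, null_rate)
            and meaning_is_aligned(used_version, governed_version))

print(can_serve(2.0, 0.001, 7, 7))  # both layers pass: True
print(can_serve(2.0, 0.001, 6, 7))  # healthy data, stale meaning: False
```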

The next phase of enterprise AI will not be defined by marginal improvements in model capability, but by whether organizations treat business meaning as infrastructure.

Data must be observable.

Meaning must be engineered.

Enterprise AI requires both.
