What Is an Ontology? And Why Domain Experts, Not Engineers, Should Own It
The foundational layer of every knowledge graph and why getting it wrong affects every AI answer that follows.
Most knowledge graph initiatives fail for a simple reason: the ontology doesn’t reflect how the business actually works. Not because the technology is wrong but because the wrong people own it.
If the ontology is wrong, AI answers will be wrong consistently, and at scale. The model will confidently answer questions in the language of the schema rather than the language of the business. Engineers will wonder why adoption is low. Domain experts will quietly go back to spreadsheets.
This post explains what an ontology is, why ownership determines whether a knowledge graph reflects business reality or merely resembles it, and what changes when the right people are in charge.
What an ontology is
An ontology is a formal definition of the things your business cares about and the rules that govern how they connect. It answers three questions:
- What are the entities? Engineers, assets, turbines, wind farms, contracts, work orders.
- What are the relationships? An engineer is certified for a turbine model. A turbine is located at a wind farm. A work order is assigned to an engineer.
- What rules apply? An engineer can only be assigned to a work order if they hold a current certification for the relevant asset class.
That structure transforms a set of database tables into a connected model of how the business operates. Tables store data. The ontology defines what that data means.
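The three layers above — entities, relationships, and rules — can be sketched in a few lines of code. This is an illustrative toy, not a real ontology language (production ontologies are typically expressed in OWL/RDF or a vendor modelling environment), and every name in it is invented for the example:

```python
from dataclasses import dataclass

# Entities: the things the business cares about.
@dataclass(frozen=True)
class Engineer:
    name: str
    certifications: frozenset  # asset classes the engineer is certified for

@dataclass(frozen=True)
class Turbine:
    asset_id: str
    asset_class: str
    wind_farm: str  # relationship: a turbine is located at a wind farm

@dataclass(frozen=True)
class WorkOrder:
    order_id: str
    turbine: Turbine  # relationship: a work order concerns a turbine

# Rule: an engineer may take a work order only if they hold a
# certification for the asset class of the turbine it concerns.
def can_be_assigned(engineer: Engineer, work_order: WorkOrder) -> bool:
    return work_order.turbine.asset_class in engineer.certifications

turbine = Turbine("T-4421", "offshore-6MW", "North Ridge")
order = WorkOrder("WO-1001", turbine)
alice = Engineer("Alice", frozenset({"offshore-6MW"}))
bob = Engineer("Bob", frozenset({"onshore-2MW"}))

print(can_be_assigned(alice, order))  # True
print(can_be_assigned(bob, order))    # False
```

The point is not the code but the declaration: the assignment rule lives in one named place rather than being re-derived in every query that touches work orders.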
The distinction matters for AI. A question like “Who can service Turbine T-4421 today?” requires not just the data, but the declared relationship between engineers, certifications, asset classes, and asset instances. Without the ontology, an AI system has to infer those relationships and it will get them wrong often enough to undermine trust.
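To make that concrete, here is a minimal sketch of the traversal the question implies. All data and names are hypothetical; what matters is that each hop follows a declared relationship (engineer → certification → asset class → asset instance) instead of an inferred join:

```python
# Declared relationships, expressed here as plain dicts for illustration.
certified_for = {                     # engineer -> certified asset classes
    "Alice": {"offshore-6MW"},
    "Bob": {"onshore-2MW"},
}
asset_class_of = {"T-4421": "offshore-6MW"}   # asset instance -> asset class
available_today = {"Alice", "Bob"}            # today's availability

def who_can_service(asset_id: str) -> set:
    """Traverse: asset instance -> asset class -> certified engineers,
    filtered to those available today."""
    needed = asset_class_of[asset_id]
    return {eng for eng, classes in certified_for.items()
            if needed in classes and eng in available_today}

print(who_can_service("T-4421"))  # {'Alice'}
```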
Why ownership determines everything
This is where most knowledge graph initiatives go wrong. The ontology is built by people who can use the tools, not by the people who understand the domain.
Consider what a data engineer typically knows when building an ontology for a wind energy company. They know the schema: which tables join to which, what the columns are called. What they generally do not know:
- “Critical asset” means something specifically defined by replacement cost, operational dependency, and regulatory classification, not just a flag in a column.
- A “completed” work order in the maintenance system doesn’t always mean the job was done. Sometimes it means the window is closed.
- The relationship between an asset and a regulatory inspection regime depends on jurisdiction, not just asset class.
None of this is in the schema. It lives in the heads of the reliability engineers, operations managers, and compliance officers who deal with these distinctions daily. If they are not the ones defining the ontology, the ontology will not contain it. And if the ontology does not contain it, the AI answers built on top of it will be systematically wrong.
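The first bullet above illustrates the gap well. A sketch of "critical asset" as a declared business rule, rather than a boolean column someone once set, might look like this — the thresholds, category codes, and field names are all invented for the example:

```python
# Assumed definition, owned by the domain team: an asset is critical if
# it is expensive to replace, operationally load-bearing, or regulated.
HIGH_COST_THRESHOLD = 1_000_000          # invented threshold
REGULATED_CLASSES = {"CAT-1", "CAT-2"}   # invented category codes

def is_critical_asset(replacement_cost: float,
                      operationally_dependent: bool,
                      regulatory_class: str) -> bool:
    return (replacement_cost >= HIGH_COST_THRESHOLD
            or operationally_dependent
            or regulatory_class in REGULATED_CLASSES)

# A cheap, non-load-bearing asset can still be critical by regulation:
print(is_critical_asset(250_000, False, "CAT-1"))  # True
print(is_critical_asset(250_000, False, "CAT-4"))  # False
```

When the rule changes — a jurisdiction adds a category, a cost threshold moves — the domain expert edits one declaration instead of chasing a flag column through every downstream query.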
An ontology built by engineers reflects how engineers interpreted the business. An ontology built by domain experts reflects how the business actually works. The difference is the entire value proposition of a knowledge graph.
The engineering bottleneck and how it compounds
| When engineers own the ontology | When domain experts own the ontology |
| --- | --- |
| Definitions reflect schema conventions | Definitions reflect how the business uses terms |
| Changes queue behind engineering work | Changes happen directly, when the need arises |
| Domain experts disengage from the model | Domain experts remain active owners |
| The ontology drifts from operational reality | The ontology stays current |
| AI answers reflect the schema, not the business | AI answers reflect business intent and logic |
The pattern that follows is predictable. Domain experts raise change requests. Engineering queues them. The ontology falls behind. Domain experts stop trusting the model. They stop using the knowledge graph. The investment stalls.
The adoption test: if a reliability engineer identifies a gap in the model on Monday, can they fix it by Thursday without raising an engineering ticket? If the answer is no, the ontology is owned by the wrong team.
Making the shift
Domain expert ownership is not about removing engineering from the process. Engineers build and maintain the infrastructure, the data connections, and the execution layer. Domain experts define the meaning.
This shift only works when domain experts can contribute directly without routing through a technical queue. That means:
- Visual, whiteboard-style modelling that replaces code with direct interaction
- Version control and change history built into the authoring workflow
- Governance that allows different teams to own different domains of the model
- Changes that take effect immediately, without a deployment cycle
When these conditions exist, ontology authoring becomes a regular operational activity. A compliance officer updates a regulatory relationship when a jurisdiction changes its requirements. An operations manager refines what “critical asset” means after a review. None of these require an engineering ticket and none of them should.
What changes when the right people own it
AI answers align with business logic, not schema interpretation
When an AI system queries a knowledge graph built by engineers, answers are grounded in how engineers modelled the data. When domain experts own the model, answers reflect how the business actually defines its concepts. The gap between “certification_date is not null” and “the engineer holds a valid, current certification for this specific asset class” is the gap between a technically correct lookup and a reliable operational answer.
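That gap can be shown directly. Below, the same certification record passes a schema-level check and fails the business-level one; the record shape, field names, and dates are assumptions for illustration:

```python
from datetime import date

# A certification record for a DIFFERENT asset class, and one that has lapsed.
cert = {
    "engineer": "Alice",
    "asset_class": "onshore-2MW",
    "certification_date": date(2020, 3, 1),
    "expires": date(2022, 3, 1),
}

# Schema-level check: a technically correct lookup.
naive_ok = cert["certification_date"] is not None

# Ontology-level check: valid, current, and for the right asset class.
def holds_current_cert(record: dict, asset_class: str, today: date) -> bool:
    return (record["asset_class"] == asset_class
            and record["certification_date"] is not None
            and record["expires"] > today)

semantic_ok = holds_current_cert(cert, "offshore-6MW", date(2024, 6, 1))

print(naive_ok)     # True  -- the column is populated
print(semantic_ok)  # False -- wrong asset class, and expired
```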
Semantic drift is caught before it compounds
Semantic drift where the same concept means different things in different systems is one of the primary drivers of inconsistent AI outputs. When domain experts own the ontology, they become the authority on shared definitions. Disagreements about what “customer” or “active” means get resolved in the model, not reconstructed independently in every downstream query and AI system.
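A minimal sketch of what "resolved in the model" means in practice: one authoritative definition that every consumer calls, instead of each query re-deriving its own. The 90-day window and function name are invented for the example:

```python
from datetime import date, timedelta

ACTIVE_WINDOW = timedelta(days=90)  # invented threshold, owned by the domain team

def is_active_customer(last_order_date: date, today: date) -> bool:
    """Single shared definition of "active"; downstream queries and
    AI systems call this rather than encoding their own variant."""
    return today - last_order_date <= ACTIVE_WINDOW

print(is_active_customer(date(2024, 5, 1), date(2024, 6, 1)))  # True
print(is_active_customer(date(2023, 1, 1), date(2024, 6, 1)))  # False
```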
The model stays current
Businesses change faster than data models typically do. Domain experts who can modify the ontology directly close the gap between operational reality and the model that represents it. When that gap is small, AI answers are reliable. When it widens, the answers drift and teams notice.
The ontology is not a data model. It is a shared organizational understanding, expressed in a form that machines can act on. The people who hold that understanding should be the ones who maintain it.
Three practical steps
1. Start with one domain and one owner.
Pick the part of the business where ontology gaps cause the most visible problems. Find the domain expert who is most attuned to definitional precision. Start there.
2. Give them tools they can actually use.
If authoring requires SPARQL, OWL syntax, or engineering support, domain experts will not engage. Visual, no-code modelling environments are not a compromise; they are a prerequisite. The resulting ontology will be more accurate, not less.
3. Make the change path direct.
Define who can modify which parts of the model, with version control and change history. Keep engineering involvement for structural changes, not for every update to a relationship definition.
Once the first domain is working, expand to the adjacent one. Every new domain added makes the existing model more valuable through shared entities and relationships. The compounding effect is real but only if the model stays current, which only happens when the right people own it.
How This Works on the Databricks Lakehouse
When domain experts can define and evolve meaning directly, three things change. Changes to the ontology are reflected immediately across every query and AI system that depends on it. AI answers align with business logic rather than schema interpretation. And the model stays current because the people who notice gaps can fix them without waiting for an engineering sprint.
On the Databricks Lakehouse, this is how Kobai works. Domain experts build and modify the semantic model in Kobai Studio — a no-code visual environment. Those definitions are immediately reflected in Saturn, Kobai’s graph engine, which runs directly on governed Delta Lake tables within the Databricks Lakehouse. Precursor handles the data mapping from source tables to the semantic model. Episteme’s AI answers operate over what domain experts declared, not what engineers inferred from column names, and every answer carries lineage back to the source.
If the ontology at the heart of your knowledge graph is maintained by data engineers rather than domain experts, it reflects an approximation of your business. To explore how Kobai addresses this on Databricks, visit kobai.io or contact us at contact@kobai.io.

