Ontology As Substrate
The canonical definition layer underneath every reliable AI implementation. Without a shared ontology of the domain's objects and roles, agents drift. With one, they operate on rails.
This concept was articulated and demonstrated for the AAS community by Steven Tran, CTO and co-founder of Soundcheck Live, building on a two-and-a-half-year ontology stint at Amazon's B2B business. The live-events example below is his.
What An Ontology Is
An ontology is the canonical map of an object or word in a domain. The single agreed-on meaning that every operator, role, and tool refers to when they use the term.
Concrete example. In the live events industry, the word event means radically different things depending on who is in the room.
- For a performer, an event is a performance.
- For a security team, an event is a job.
- For a catering team, an event is an order.
- For an event planner, an event is an event.
- For a finance team, an event is an invoice.
- For a venue, an event is a booking.
Same underlying object. Six role-specific framings. Every tool that tries to serve the live events industry has to pick one of those framings, ship it, and watch the other five sets of users fight the product because the words are wrong for their role.
The ontology fixes this. You define one canonical object (Event) with its core attributes (date, location, performer, payment, files, communication, review). Then for each role you map the surface vocabulary back to the canonical object. Performer's "performance" maps to Event. Security's "job" maps to Event. Catering's "order" maps to Event. The data is the same. The interface speaks each role's language.
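The canonical object plus translation layer can be sketched in a few lines. This is an illustrative assumption, not code from Soundcheck Live: the `Event` dataclass, `ROLE_VOCAB` map, and `resolve_term` helper are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """Canonical object: the one definition every role's tooling refers to."""
    date: str
    location: str
    performer: str
    payment: float
    files: list = field(default_factory=list)
    communication: list = field(default_factory=list)
    review: str = ""

# Translation layer: each role's surface word maps back to the canonical object.
ROLE_VOCAB = {
    "performer": "performance",
    "security": "job",
    "catering": "order",
    "planner": "event",
    "finance": "invoice",
    "venue": "booking",
}

def resolve_term(role, term):
    """Return the canonical object a role's surface word refers to, else None."""
    return Event if ROLE_VOCAB.get(role) == term.lower() else None

print(resolve_term("security", "job"))     # security's "job" resolves to Event
print(resolve_term("finance", "invoice"))  # same canonical object, different word
```

The point of the shape: the data model never forks per role. Only the vocabulary map grows.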
Steven's working ontology for live events identifies ten cross-role primitives that hold across every operator type: intake, CRM, people management, timeline, task, assets, communication, payments, workflow, outcomes. The surface words rotate by role. The primitives stay constant. That stability is what lets one backend serve every kind of operator the industry contains.
Why This Is The Substrate For AI
A language model returns whatever its training data taught it to return. Without a constraint, the model guesses. The guess is a hallucination, and the hallucination compounds across every step of an agent run. Steven's warning on this is sharp: the degree of error agents can introduce is exponential. Leave the agent running for a week without rails and you can return to a system that recursed on its own mistake.
A canonical ontology gives the agent rails. Instead of guessing what an "event" means in this conversation, the agent reads the ontology, finds the canonical Event object, and operates on the explicit attributes. The output is consistent across runs, across roles, and across users.
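One minimal way to sketch those rails, under the assumption that agent writes are validated before they land: any proposed field not in the ontology's canonical attribute set is rejected rather than silently stored. The `ONTOLOGY` dict and `validate_update` function are hypothetical.

```python
# Canonical attributes per object type (illustrative, mirroring the Event example).
ONTOLOGY = {
    "Event": {"date", "location", "performer", "payment",
              "files", "communication", "review"},
}

def validate_update(obj_type, proposed):
    """Accept only fields the ontology defines; reject guessed attributes."""
    allowed = ONTOLOGY.get(obj_type)
    if allowed is None:
        raise ValueError(f"unknown object type: {obj_type}")
    unknown = set(proposed) - allowed
    if unknown:
        raise ValueError(f"attributes not in ontology: {sorted(unknown)}")
    return proposed

validate_update("Event", {"date": "2025-06-01", "payment": 500.0})  # passes
try:
    # A hallucinated field is stopped at the rail instead of compounding.
    validate_update("Event", {"vibe": "great"})
except ValueError as e:
    print(e)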
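One minimal way to sketch those rails, under the assumption that agent writes are validated before they land: any proposed field not in the ontology's canonical attribute set is rejected rather than silently stored. The `ONTOLOGY` dict and `validate_update` function are hypothetical.

```python
# Canonical attributes per object type (illustrative, mirroring the Event example).
ONTOLOGY = {
    "Event": {"date", "location", "performer", "payment",
              "files", "communication", "review"},
}

def validate_update(obj_type, proposed):
    """Accept only fields the ontology defines; reject guessed attributes."""
    allowed = ONTOLOGY.get(obj_type)
    if allowed is None:
        raise ValueError(f"unknown object type: {obj_type}")
    unknown = set(proposed) - allowed
    if unknown:
        raise ValueError(f"attributes not in ontology: {sorted(unknown)}")
    return proposed

validate_update("Event", {"date": "2025-06-01", "payment": 500.0})  # passes
try:
    # A hallucinated field is stopped at the rail instead of compounding.
    validate_update("Event", {"vibe": "great"})
except ValueError as e:
    print(e)
```

This is the mechanical meaning of "consistent across runs": the same schema check applies no matter which role or conversation produced the write.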
This is the difference between a product that works for one vertical and breaks the moment you adapt it to a second, and a product whose backend stays the same as the surface adapts to each operator type.
The thesis Steven shipped on with this concept: AI cannot model the world reliably without a foundation. Ontology is the foundation.
Safe And Unsafe Actions Live On The Ontology
Once the ontology defines the canonical object and its attributes, you can also define which actions on that object are safe and which require approval. Steven's frame: as agents do more, you want to limit unsafe actions to ones that pass through an audit gate. The ontology is where that policy lives.
A canonical Event has a payments attribute. Reading it is safe. Modifying a payment that exceeds a threshold is unsafe and requires human approval. The ontology makes the policy expressible. Without the ontology, every "unsafe action" rule has to be re-written for every operator vertical the system serves. With it, the rules are domain-level and apply everywhere.
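A sketch of that policy as code, assuming a simple read/write verb model and a hypothetical `PAYMENT_THRESHOLD`; the real audit gate would live wherever the ontology is stored, but the classification logic looks roughly like this:

```python
PAYMENT_THRESHOLD = 1000.0  # illustrative threshold, not a real policy value

def classify_action(attribute, verb, value=0.0):
    """Classify an agent action on a canonical Event as safe or gated."""
    if verb == "read":
        return "safe"                 # reads never mutate state
    if attribute == "payment" and value > PAYMENT_THRESHOLD:
        return "needs_approval"       # route through the human audit gate
    return "safe"

print(classify_action("payment", "read"))           # safe
print(classify_action("payment", "write", 250.0))   # safe
print(classify_action("payment", "write", 5000.0))  # needs_approval
```

Because the rule keys on the canonical attribute, it applies to every vertical's surface vocabulary at once: the caterer's "order payment" and the venue's "booking payment" hit the same gate.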
The Operational Reality Connection
Every applied AI implementation eventually arrives at the same realization: the agent is only as good as the substrate it reads from. See Operational Reality for the broader version of this claim. Ontology is the layer that makes operational reality machine-readable in a way that survives multiple operators, multiple roles, and multiple time horizons.
A clearly documented ontology is critical: it is what lets your agents advance your life as it maps to your operational reality.
The same claim scales from one operator to a multinational platform. At the personal scale, "your life" is the operational reality, and your Personal Agentic OS is the ontology your agent reads from. At the company scale, the operational reality is the business, and the Sovereign Agentic Business OS is the ontology. The discipline is the same. Skip the documentation, and the agent guesses. Document it, and the agent operates on rails.
You can think of the layers as a stack:
- Operational Reality. The raw truth of how the business actually runs.
- Ontology. The canonical definitions of the objects and relationships in that reality.
- Context. The populated instances of those objects for a specific operator. Your context lake is the local snapshot of operational reality on top of an ontology.
- Harness and Agent. The runtime that reads the context, queries the ontology, and acts.
Skip the ontology layer, and the harness drifts. Get the ontology right, and every layer above it gets dramatically more reliable.
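The stack above can be sketched as three nested pieces: definitions, populated instances, and a runtime that checks the instances against the definitions before acting. Everything here is an illustrative assumption, compressed to show the layering rather than any real harness.

```python
# Layer 2: ontology — canonical definitions.
ONTOLOGY = {"Event": {"date", "location", "payment"}}

# Layer 3: context — populated instances for one operator (a local snapshot).
CONTEXT = [
    {"type": "Event", "date": "2025-06-01", "location": "Berlin", "payment": 800.0},
]

# Layer 4: harness — reads the context, queries the ontology, then acts.
def harness(context, ontology):
    for instance in context:
        allowed = ontology[instance["type"]]
        fields = set(instance) - {"type"}
        if not fields <= allowed:
            raise ValueError(f"instance drifts from ontology: {fields - allowed}")
        yield f"acting on {instance['type']} at {instance['location']}"

print(list(harness(CONTEXT, ONTOLOGY)))
```

Layer 1, operational reality, is the thing the context snapshot describes; it has no code representation of its own, which is exactly why the other three layers exist.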
How To Build One
The practical move is iterative.
- Pick the verticals. Which roles operate in the domain? List them.
- Find the shared object. The thing every role refers to under different surface words. In live events, it is the Event. In healthcare, it is the Encounter. In legal, it is the Matter.
- Define the canonical attributes. Which fields are present for every instance of the object, regardless of role? Those are your primitives.
- Map the surface words. For each role, map their vocabulary back to the canonical attributes. This is the translation layer.
- Lock the ontology before you build on top of it. Once business logic lives on the ontology, changing it is expensive. Get it right first.
- Bring in subject-matter experts to refine. AI can produce a 70 to 80% draft of an ontology in a vertical you do not know deeply. The remaining 20 to 30% has to come from people who live in the work.
Tools like RDFox and other graph-database systems can hold the ontology once you have one. The format matters less than the discipline of agreeing on the canonical object and its attributes before any business logic gets written.
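The shape such systems store is a graph of triples. A pure-Python sketch of the same idea, with made-up predicate names rather than any real schema, shows why the format is secondary: the discipline is just subject, predicate, object.

```python
# Tiny in-memory triple store holding the canonical object and role mappings.
triples = {
    ("Event", "is_a", "CanonicalObject"),
    ("Event", "has_attribute", "date"),
    ("Event", "has_attribute", "payment"),
    ("performance", "maps_to", "Event"),
    ("job", "maps_to", "Event"),
    ("order", "maps_to", "Event"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Every role's surface word resolves to the same canonical object.
print(sorted(t[0] for t in query(p="maps_to", o="Event")))
# ['job', 'order', 'performance']
```

A real deployment would swap this set for a graph database, but the queries the agent needs stay this simple: which canonical object does this word map to, and which attributes does it carry.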
Why This Matters For AAS
Applied AI Society is itself an example of this principle in action. The canonical vocabulary of the discipline (Personal Agentic OS, Jarvising, Activation, Hyperagent, Applied AI, the Elevator Economy, the Omni-Crisis) is an ontology for applied AI. Without it, every chapter and every practitioner would invent their own vocabulary, and no insight would compound across the network.
When you build a domain-specific applied AI tool, you are building two things: the agent that does the work, and the ontology the agent operates on. The ontology is the moat.
Further Reading
- Ontology-Driven Development: The development methodology that turns this substrate into compounding software, with the lineage back to Domain-Driven Design and the Palantir framing.
- Operational Reality: The substrate the ontology describes.
- Context Lake: The populated instance of operational reality for a specific operator, sitting on top of an ontology.
- Truth Management: The discipline of maintaining the substrate over time.
- Harness Engineering: The runtime that operates on the ontology.
- Personal Agentic OS: The individual scale of the same architecture.