Ontology Process
The Opsfolio Ontology Process is a framework that uses the Web Ontology Language (OWL) to bridge the gap between raw operational data and actionable AI reasoning.
Purpose
This design document explains the Opsfolio ontology process, why OWL is used, how the ontology is structured, how it is used in MCP-based AI systems, and how it helps define operational truth while reducing operational truth drift.
The short version is simple: the ontology gives Opsfolio a shared semantic model for evidence, assets, events, workflows, policies, incidents, metrics, and organizations. That shared model keeps humans, software, and AI aligned on what things mean and how they relate.
Why We Use an Ontology
Opsfolio operates across multiple operational domains: resource surveillance, security assurance, infrastructure monitoring, data governance, and organizational responsibility. Each domain produces data, but raw data alone does not create shared meaning.
The ontology provides that meaning by defining:
- the canonical concepts in the system
- the allowed relationships between those concepts
- the core constraints that prevent ambiguous interpretation
- the evidence model that ties observations to decisions
This turns operational data into a usable, reviewable truth model.
Why We Use OWL
We use OWL because it gives us a formal, machine-readable way to model operational semantics.
OWL is useful for Opsfolio because it supports:
- Classes for domain concepts such as Evidence, Resource, SecurityIncident, and DataWorkflow
- Subclass hierarchies so specialized concepts inherit shared meaning
- Object properties for relationships such as derivation, ownership, monitoring, and policy linkage
- Datatype properties for timestamps, values, and units
- Inverse properties for bidirectional reasoning
- Disjointness and restrictions for detecting model errors and preserving consistency
- Annotation vocabulary for human-readable definitions, examples, and scope notes
OWL is not being used here as an academic exercise. It is being used as the operational schema for meaning. The goal is not to model every possible implementation detail, but to define the stable semantics that should survive across databases, services, interfaces, and AI layers.
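To make the OWL features above concrete, here is a minimal sketch that parses a tiny, hypothetical OWL (RDF/XML) fragment with Python's standard library and recovers its subclass hierarchy. The class names echo this document; the fragment and code are illustrative, not the production ontology.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical OWL fragment: one class with a human-readable
# annotation, and one subclass that inherits its meaning.
OWL_XML = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="#Evidence">
    <rdfs:comment>Verifiable operational truth.</rdfs:comment>
  </owl:Class>
  <owl:Class rdf:about="#ComplianceEvidence">
    <rdfs:subClassOf rdf:resource="#Evidence"/>
  </owl:Class>
</rdf:RDF>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"
OWL = "{http://www.w3.org/2002/07/owl#}"

# Build a subclass map: child IRI -> parent IRI.
subclass_of = {}
for cls in ET.fromstring(OWL_XML).iter(OWL + "Class"):
    parent = cls.find(RDFS + "subClassOf")
    if parent is not None:
        subclass_of[cls.get(RDF + "about")] = parent.get(RDF + "resource")

print(subclass_of)
```

Even this small fragment shows why subclassing matters operationally: anything true of Evidence (provenance, timestamps, linkage to claims) automatically applies to ComplianceEvidence.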
Ontology Design Principles
- Evidence First: Opsfolio is modeled as an evidence-centered system. Evidence is the unifying concept across domains.
- Shared Core, Specialized Domains: The model starts with a small abstract core and then specializes into operational domains.
- Operational Meaning Over Storage Layout: The ontology is designed around business meaning first, while still preserving links to implementation concepts.
- Traceability and Provenance: The model favors relationships that answer what was observed, where it came from, and who owns it.
- AI Grounding and Governance: The ontology is intended to be a durable semantic contract for AI assistants and MCP servers.
Ontology Process
The ontology process should be treated as a design and governance workflow rather than a one-time modeling task.
Define the operational truth problem
Start by defining what decision, assurance, or operational truth the model must support. Examples include compliance status, security incident evidence, data quality findings, or infrastructure health.
Discover domain vocabulary
Collect the terms used by operators, databases, workflows, APIs, and reporting systems. Separate temporary implementation names from durable business concepts.
Normalize into canonical concepts
Map overlapping or conflicting terms into a canonical vocabulary. The ontology chooses one shared concept and records the meaning clearly.
Model classes and relationships
Represent core concepts as classes and connect them with object properties and datatype properties. Favor stable semantics over local naming conventions.
Add constraints and guardrails
Use subclassing, disjointness, inverse properties, and targeted restrictions to make incorrect modeling harder and reasoning more reliable.
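As a rough illustration of how a disjointness constraint catches modeling errors, the sketch below checks a set of individuals against a declared disjoint class pair. The class names come from this document; the data and function are hypothetical.

```python
# Hypothetical guardrail: if two classes are declared disjoint, no
# individual may be an instance of both. Class names are examples.
disjoint_pairs = {frozenset({"Resource", "SecurityIncident"})}

def find_violations(individuals):
    """Return (individual, class_pair) tuples where an individual's
    asserted types include both members of a disjoint pair."""
    violations = []
    for name, types in individuals.items():
        for pair in disjoint_pairs:
            if pair <= set(types):
                violations.append((name, pair))
    return violations

individuals = {
    "srv-01": ["Resource"],
    "bad-entry": ["Resource", "SecurityIncident"],  # a modeling error
}
print(find_violations(individuals))
```

A real OWL reasoner performs this check (and far more) automatically; the point is that disjointness turns a silent data-quality problem into a detectable inconsistency.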
Add descriptions and examples
Each important class and property should include labels, comments, definitions, examples, and scope notes for both humans and AI.
Align with implementation surfaces
Map the ontology to the implementation layer: tables, services, workflows, and MCP tool outputs. The ontology should remain stable even when implementation details evolve.
Review, version, and evolve
Ontology changes should be reviewed like interface changes. Updates should be versioned and evaluated for downstream impact on AI and operational workflows.
Current Ontology Processing Pipeline

The current implementation flow moves ontology content from source files into AI-accessible operational interfaces.
Ingestion
.owl files are ingested into the uniform_resource table. This gives the platform a canonical stored source for ontology artifacts and keeps the original OWL available for traceability.
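A sketch of this ingestion step is shown below. The uniform_resource table name comes from the design above, but the column layout here is a simplified assumption for illustration.

```python
import sqlite3

# Sketch: store a .owl file's raw content in uniform_resource so the
# original OWL source remains available for traceability.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS uniform_resource (
        uniform_resource_id TEXT PRIMARY KEY,
        uri TEXT NOT NULL,
        nature TEXT NOT NULL,
        content BLOB NOT NULL
    )""")

owl_source = b"<rdf:RDF>...</rdf:RDF>"  # placeholder for a real .owl file
conn.execute(
    "INSERT INTO uniform_resource VALUES (?, ?, ?, ?)",
    ("res-001", "ontology/opsfolio.owl", "owl", owl_source),
)

row = conn.execute(
    "SELECT uri, nature FROM uniform_resource WHERE uniform_resource_id = ?",
    ("res-001",),
).fetchone()
print(row)
```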
Automatic Transformation
The XML/RDF representation is automatically transformed into JSON and stored in uniform_resource_transform. This makes ontology content easier to process while preserving semantic structure.
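The transformation step can be sketched as follows: extract classes from an RDF/XML payload and serialize them as JSON for storage in uniform_resource_transform. The JSON shape shown here is an assumption, not the platform's actual schema.

```python
import json
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
OWL = "{http://www.w3.org/2002/07/owl#}"

def owl_classes_to_json(xml_text: str) -> str:
    """Convert the owl:Class declarations in an RDF/XML document
    into a JSON structure that downstream tools can consume."""
    root = ET.fromstring(xml_text)
    classes = [{"id": c.get(RDF + "about")} for c in root.iter(OWL + "Class")]
    return json.dumps({"classes": classes})

sample = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="#Evidence"/>
  <owl:Class rdf:about="#Asset"/>
</rdf:RDF>"""

print(owl_classes_to_json(sample))
```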
View Generation
The OWL-derived JSON is converted into SQL views. This gives downstream systems a simple way to inspect classes and relationships without having to parse RDF/XML directly.
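One way this step could work, assuming SQLite with its built-in JSON functions, is a view that unpacks the stored JSON into rows. The view and column names below are illustrative, not the platform's actual definitions.

```python
import sqlite3

# Sketch: expose OWL-derived JSON as a SQL view so consumers never
# have to parse RDF/XML directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uniform_resource_transform (id TEXT, content TEXT)")
conn.execute(
    "INSERT INTO uniform_resource_transform VALUES (?, ?)",
    ("t1", '{"classes": [{"id": "#Evidence"}, {"id": "#Asset"}]}'),
)
conn.execute("""
    CREATE VIEW ontology_class AS
    SELECT json_extract(j.value, '$.id') AS class_id
    FROM uniform_resource_transform AS t,
         json_each(t.content, '$.classes') AS j
""")

rows = [r[0] for r in conn.execute("SELECT class_id FROM ontology_class")]
print(rows)
```

An MCP server can then wrap a query over such a view as a tool, which is the step the next section describes.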
MCP Tools
The SQL views are exposed through simple MCP-friendly tools. This turns the ontology into operational infrastructure that AI systems can access directly for reasoning.
Why this pipeline matters: It preserves the source ontology, normalizes it into application-friendly structures, and publishes it through AI-usable interfaces.
Ontology Structure
The current ontology is organized in a layered way.
Core abstract concepts
- Evidence: the central concept for verifiable operational truth
- Asset: something of organizational value under management
- Event: something that happens in time
- ProcessingEvent: an event that creates or processes evidence
- Observation: a time-specific measurement or reading
Domain areas
- Resource surveillance: Models digital resources, collections, and surveillance sources.
- Security assurance: Models endpoints, compliance policies, frameworks, and incidents.
- Infrastructure monitoring: Models network assets, infrastructure metrics, and thresholds.
- Data governance: Models workflows, data quality issues, and transformations.
- Organizational structure: Models organizations, people, roles, and monitoring behavior.
Relationship layer
The ontology connects these domains through relationships such as derivation and lineage, session tracking, policy linkage, and asset ownership. This layer turns a list of terms into an operational knowledge graph.
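A small sketch of that knowledge graph, using typed edges: the relationship names echo the text above, while the identifiers are hypothetical.

```python
# Typed edges (subject, relationship, object) over ontology concepts.
edges = [
    ("evidence-42", "derivedFrom", "resource-7"),
    ("resource-7", "ownedBy", "org-acme"),
    ("evidence-42", "satisfiesPolicy", "policy-soc2"),
]

def neighbors(node, relation):
    """Follow one typed relationship outward from a node."""
    return [t for s, r, t in edges if s == node and r == relation]

# Where did this evidence come from?
print(neighbors("evidence-42", "derivedFrom"))
```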
How the Ontology Is Used in MCPs
In MCP-based systems, the ontology acts as the semantic contract between tool outputs and AI reasoning.
- MCP tools expose operational facts: MCP servers provide access to operational systems (evidence stores, policy engines, etc.).
- The ontology provides the semantic frame: The ontology defines what those facts mean (e.g., distinguishing a Resource from ComplianceEvidence).
- The AI uses ontology terms instead of ad hoc labels: Anchoring reasoning to the ontology improves consistency across prompts and sessions.
- Cross-tool joining becomes reliable: The shared model lets AI connect data from multiple MCP servers.
- AI explanations become traceable: The AI can explain conclusions by pointing to specific ontology-backed evidence and provenance.
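Cross-tool joining can be sketched as follows: two hypothetical MCP tool results connect cleanly because both are anchored to the shared ontology term Resource. The tool outputs and field names are illustrative.

```python
# Result from a hypothetical evidence tool: typed, with a link to a resource.
evidence_tool_result = [
    {"type": "ComplianceEvidence", "id": "ev-1", "aboutResource": "res-9"},
]

# Result from a hypothetical inventory tool on a different MCP server.
inventory_tool_result = [
    {"type": "Resource", "id": "res-9", "owner": "team-platform"},
]

# Because both tools use the same ontology concept and identifier scheme,
# the join is mechanical rather than guesswork.
resources = {r["id"]: r for r in inventory_tool_result}
joined = [
    {**ev, "resourceOwner": resources[ev["aboutResource"]]["owner"]}
    for ev in evidence_tool_result
    if ev["aboutResource"] in resources
]
print(joined)
```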
Ontology and Operational Truth
Operational truth requires a clear definition, evidence, provenance, time context, and relationships. The ontology helps define this by making those parts explicit and consistent.
Reducing Operational Truth Drift
The ontology reduces drift by providing:
- Canonical vocabulary: One shared language for core concepts.
- Typed relationships: Defining which connections are logically valid.
- Evidence grounding: Keeping conclusions tied to hard evidence.
- Provenance and lineage: Preserving how evidence was acquired.
- Cross-domain consistency: Connecting security, governance, and monitoring.
Guidance for AI and MCP Integration
To get the most value from the ontology, follow these rules:
- Return typed entities that map cleanly to ontology concepts.
- Include identifiers, provenance, and timestamps in tool outputs.
- Preserve links between evidence items and the claims they support.
- Use ontology labels and definitions in prompts and explanations.
- Prefer canonical ontology terms over local database jargon.
- Version ontology changes and communicate impact to AI workflows.
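A hypothetical tool output that follows these rules might look like the sketch below: a typed entity with an identifier, provenance, and a timestamp. All field names and values here are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

# Hypothetical MCP tool output: typed with a canonical ontology term,
# identified, time-stamped, and linked to the claim it supports.
output = {
    "type": "ComplianceEvidence",  # canonical ontology term, not DB jargon
    "id": "evidence-42",
    "supportsClaim": "control-access-review-complete",
    "provenance": {"source": "endpoint-agent", "collectedBy": "agent-17"},
    "collectedAt": datetime(2024, 1, 5, tzinfo=timezone.utc).isoformat(),
}
print(json.dumps(output, indent=2))
```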
Recommended Docs Site Structure
If this content is split into a docs.opsfolio.com design section:
- Ontology Overview
- Ontology Process and Governance
- OWL Modeling Conventions
- Ontology in MCP and AI Systems
- Operational Truth and Truth Drift
Summary
The Opsfolio ontology is the semantic backbone for evidence-centered operations. Together with OWL and MCP, it creates a stable definition of operational truth and reduces the drift that appears when humans and AI operate from inconsistent concepts.