.// HOW IT WORKS

From question to governed action in seconds.

Every interaction follows the same three-stage pipeline: ingest context, reason through the semantic layer, execute with governance. No shortcuts. No black boxes.

.// DATA FLOW

Three stages. Full traceability.

Every agent request follows an identical pipeline — whether it is a simple question or a multi-step autonomous workflow. Here is exactly what happens.

Stage 1: Context Assembly

The Context Engine receives the request and assembles everything the agent needs to respond: live data from 920+ tables, document embeddings from the RAG knowledge base, user permissions from RBAC policies, and relationship context from the semantic graph.

What happens at this stage

  • 1. Request parsed and intent classified
  • 2. Relevant data sources identified from 920+ tables
  • 3. RAG retrieval pulls relevant document chunks
  • 4. Entity graph resolves cross-system relationships
  • 5. User permissions loaded from 1,607 RLS policies
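The five steps above can be sketched as a single assembly function. This is an illustrative stub only: `ContextBundle`, `assemble_context`, and every value inside are placeholders, not Vera's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    intent: str                                   # classified request intent
    tables: list[str] = field(default_factory=list)   # candidate data sources
    doc_chunks: list[str] = field(default_factory=list)  # RAG retrieval results
    permissions: set[str] = field(default_factory=set)   # RLS-derived grants

def assemble_context(request: str) -> ContextBundle:
    """Mirror the pipeline: parse intent, pick sources, retrieve docs,
    load permissions. All lookups are stubbed for illustration."""
    intent = "lookup" if request.rstrip().endswith("?") else "action"  # 1. classify
    tables = ["crm.accounts"]                     # 2. relevant tables (stubbed)
    chunks = ["Refund policy: full refund within 30 days."]  # 3. RAG (stubbed)
    perms = {"crm.accounts:read"}                 # 5. permissions (stubbed)
    return ContextBundle(intent, tables, chunks, perms)
```

In a real deployment each stubbed line would fan out to the Context Engine's live services; the point here is only the shape of the bundle the agent receives.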

Stage 2: Semantic Reasoning

The Semantic Layer takes the assembled context and reasons through it. The LLM generates an execution plan — a structured sequence of actions, each checked against permission boundaries before it can proceed. If the request involves sensitive operations, human-in-the-loop approval gates activate.

What happens at this stage

  • 1. LLM generates structured execution plan
  • 2. Each action verified against RBAC policies
  • 3. Confidence scored per action step
  • 4. Sensitive operations flagged for human approval
  • 5. Rollback strategies defined for each step
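The permission check and approval gating described above can be sketched in a few lines. `PlannedAction`, `verify_plan`, and the permission strings are assumptions for illustration, not Vera's real schema.

```python
from dataclasses import dataclass

@dataclass
class PlannedAction:
    tool: str           # tool the LLM plans to invoke
    permission: str     # permission the action requires
    sensitive: bool = False  # flagged for human-in-the-loop approval

def verify_plan(plan: list[PlannedAction], granted: set[str]):
    """Reject any action outside the caller's permissions; route
    sensitive actions to a human approval queue instead of executing."""
    approved, needs_human = [], []
    for action in plan:
        if action.permission not in granted:
            raise PermissionError(f"{action.tool}: {action.permission} not granted")
        (needs_human if action.sensitive else approved).append(action)
    return approved, needs_human
```

The key design point: verification happens on the whole plan before anything executes, so a single unauthorized step vetoes the entire workflow.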

Stage 3: Governed Execution

The Action Engine executes the approved plan step by step. Each action is logged with timestamps, user context, and reasoning chains. Results are verified, and the complete audit trail is written to the governance layer. Failures trigger automatic rollback.

What happens at this stage

  • 1. Actions execute in dependency order
  • 2. Cross-system writes via governed connectors
  • 3. Results verified against expected outcomes
  • 4. Full audit trail with decision rationale
  • 5. Context graph updated with new state
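Ordered execution with audit logging and automatic rollback, as described above, can be sketched like this. The step dictionaries and `execute_plan` helper are hypothetical, not Vera's actual engine.

```python
import datetime

def execute_plan(steps: list[dict], audit_log: list[dict]) -> None:
    """Run steps in order; on failure, roll back completed steps in
    reverse and re-raise. Every outcome lands in the audit log."""
    done = []
    for step in steps:
        try:
            step["run"]()
            audit_log.append({
                "step": step["name"], "ok": True,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            done.append(step)
        except Exception as exc:
            audit_log.append({"step": step["name"], "ok": False, "error": str(exc)})
            for completed in reversed(done):  # automatic rollback, newest first
                completed["rollback"]()
            raise
```

Rolling back in reverse order matters: later steps may depend on earlier ones, so undoing newest-first keeps intermediate state consistent.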

.// MCP PROTOCOL

Built on MCP. Open by design.

Vera implements the Model Context Protocol for tool and data integration. Every connector, every data source, and every action surface is exposed as an MCP-compatible server — making Vera extensible by default.

MCP Tool Servers

Each integration exposes its capabilities as MCP tools. Agents discover available actions dynamically and compose multi-tool workflows without hardcoded logic.

MCP Data Sources

Enterprise data surfaces as MCP resources. Agents query structured data, retrieve documents, and traverse the entity graph through a unified protocol.

Custom MCP Servers

Build your own MCP servers to expose proprietary systems. Vera treats custom integrations identically to built-in connectors — same governance, same audit trails.

.// MODEL ARCHITECTURE

Self-hosted models. Zero data leakage.

Vera runs a self-hosted inference stack with Qwen 3.5-9B as the primary model, Claude as an API fallback for complex reasoning tasks, and a future Vera-Engine-9B fine-tuned for enterprise operations.

Primary

Qwen 3.5-9B

Self-hosted on your infrastructure or Vera Cloud. Handles 90%+ of inference requests with zero external API calls. Your data never leaves your environment.

Fallback

Claude API

For complex reasoning tasks that exceed the primary model's capabilities. Zero-retention API with no data used for training. Opt-in per workspace.
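The primary-with-fallback routing described here reduces to a small decision function. The model identifiers and the complexity heuristic below are placeholders, not Vera's actual routing logic.

```python
def route(prompt: str, fallback_enabled: bool = False) -> str:
    """Route to the self-hosted primary unless the task looks complex
    AND the workspace has opted in to the API fallback."""
    complex_task = len(prompt.split()) > 200 or "multi-step" in prompt
    if complex_task and fallback_enabled:
        return "claude-api"           # zero-retention API fallback
    return "qwen-selfhosted"          # handles 90%+ of requests locally
```

The opt-in flag is the governance hook: unless a workspace explicitly enables the fallback, no prompt ever leaves the environment.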

Roadmap

Vera-Engine-9B

A custom model fine-tuned for enterprise operations — optimized for structured data reasoning, workflow planning, and multi-step execution at 9B parameter efficiency.

.// KNOWLEDGE BASE

RAG-powered knowledge. Always grounded.

Every agent response is grounded in your actual enterprise data through Retrieval-Augmented Generation. Documents, policies, contracts, and runbooks are embedded and indexed for semantic retrieval, so answers come from your sources rather than being hallucinated from training data.

Document Ingestion

Upload PDFs, Word documents, Confluence pages, Notion databases, and Google Docs. Vera chunks, embeds, and indexes them for semantic retrieval.
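Chunking is the first ingestion step. A minimal fixed-size sketch with overlap, assuming nothing about Vera's actual chunking strategy (real pipelines often split on semantic boundaries instead):

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size windows; overlapping edges keep
    sentences that straddle a boundary retrievable from both chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and written to the index alongside its source document and offset, so retrieval can cite where a passage came from.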

Semantic Retrieval

When an agent needs context, it retrieves the most relevant document chunks based on semantic similarity — not keyword matching. Results include source citations.
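Similarity-based ranking with source citations can be shown end to end with a toy embedder. Real systems embed text with a model; the bag-of-words vector here is a stand-in so the ranking logic stays visible.

```python
import math

def embed(text: str) -> dict[str, int]:
    """Toy embedding: word counts. A real system uses a neural encoder."""
    vec: dict[str, int] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[tuple[str, int]]:
    """Rank chunks by similarity; return each with its source index so
    the final answer can carry a citation."""
    q = embed(query)
    scored = sorted(((cosine(q, embed(c)), i, c) for i, c in enumerate(chunks)),
                    reverse=True)
    return [(c, i) for _, i, c in scored[:k]]
```

Swapping `embed` for a real encoder turns keyword overlap into genuine semantic similarity; the retrieval and citation plumbing stays identical.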

Structured + Unstructured Fusion

Agents combine RAG results with live structured data from 920+ tables. A support agent references both the knowledge base article and the customer's actual account data.

Tenant-Isolated Indexes

Each organization's knowledge base is completely isolated. Embeddings, indexes, and retrieval boundaries are enforced at the tenant level. No cross-contamination.
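Tenant isolation at the index level means every read and write is keyed by tenant, so retrieval cannot cross organizations. A minimal sketch (the `TenantIndex` class is illustrative, not Vera's storage layer):

```python
class TenantIndex:
    """Per-tenant partitions: a search only ever scans its own tenant's
    chunks, enforcing the retrieval boundary structurally."""

    def __init__(self) -> None:
        self._indexes: dict[str, list[str]] = {}

    def add(self, tenant_id: str, chunk: str) -> None:
        self._indexes.setdefault(tenant_id, []).append(chunk)

    def search(self, tenant_id: str, term: str) -> list[str]:
        # Only this tenant's partition is visible to the query.
        return [c for c in self._indexes.get(tenant_id, []) if term in c]
```

Because the tenant key is required on every call, there is no code path that can return another organization's embeddings.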

.// GOVERNANCE

Every action governed. Every decision traceable.

Vera's governance layer is not a feature — it is the foundation. 1,607 row-level security policies, complete audit trails, human-in-the-loop approval gates, and per-agent cost tracking ensure every AI action meets enterprise compliance standards.

Row-Level Security

1,607 RLS policies enforce data access at the row level. Agents only see what the user is permitted to see.

Audit Trails

Every action, query, and decision is logged with timestamps, user context, reasoning chains, and outcome verification.

Approval Gates

Sensitive operations require human approval before execution. Configurable per action type, department, and risk level.
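Configurable gates per action type, department, and risk level can be expressed as an ordered rule table. The action prefixes, departments, and thresholds below are invented for illustration.

```python
RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

GATES = [
    # (action prefix, department, risk level at/above which approval is required)
    ("crm.delete", "*", "low"),     # deletes always need a human
    ("finance.",   "*", "medium"),  # finance actions gated at medium risk
    ("*",          "*", "high"),    # anything high-risk needs approval
]

def needs_approval(action: str, department: str, risk: str) -> bool:
    """Return True if any gate matching this action/department fires
    at or below the action's risk level."""
    for prefix, dept, threshold in GATES:
        scope_ok = prefix == "*" or action.startswith(prefix)
        dept_ok = dept in ("*", department)
        if scope_ok and dept_ok and RISK_ORDER[risk] >= RISK_ORDER[threshold]:
            return True
    return False
```

Checking gates before execution (rather than after) is what makes this a hard stop instead of an audit finding.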

Cost Tracking

Per-agent, per-action cost monitoring. Know exactly what each AI operation costs and optimize usage in real time.
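Per-agent cost attribution reduces to a ledger keyed by agent. A toy sketch; the token price is a made-up placeholder, not a real rate.

```python
from collections import defaultdict

PRICE_PER_1K_TOKENS = 0.002  # placeholder rate for illustration

class CostTracker:
    """Accumulate inference cost per agent so spend is attributable
    to a specific agent and action in real time."""

    def __init__(self) -> None:
        self.by_agent: dict[str, float] = defaultdict(float)

    def record(self, agent: str, tokens: int) -> float:
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS
        self.by_agent[agent] += cost
        return cost
```

In practice each `record` call would also land in the audit trail, tying cost to the same decision rationale that governance already captures.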