API Reference
Every public TypeScript export in SOMA, grouped by category. All functions are importable from the main package entry point (import { ... } from 'soma').
Ops-intel functions and types are also available via the subpath export (import { ... } from 'soma/ops-intel').
Vault
Core vault operations for the Markdown+YAML entity store.
createVault
function createVault(dir?: string): Vault
| Parameter | Type | Default | Description |
|---|---|---|---|
| dir | string | '.soma/vault' | Path to the vault directory on disk |
Returns: Vault — Object with create, read, update, remove, list, query, search methods.
Creates and initializes a vault instance. The vault stores entities as Markdown files with YAML frontmatter, organized into type subdirectories. On initialization, loads or rebuilds the _index.json fast-lookup index.
const vault = createVault('.soma/vault');
const entity = await vault.create({ type: 'insight', name: 'retry-pattern', body: '...' });
vaultEntityCount
function vaultEntityCount(baseDir: string): number
| Parameter | Type | Description |
|---|---|---|
| baseDir | string | Path to the vault directory |
Returns: number — Total number of entity entries in the vault index.
Workers use this to detect vault restructuring (deletions, migrations) without being invalidated by other workers' writes. State resets only when the entity count decreases — new entities being added is normal and does not trigger a reset.
const count = vaultEntityCount('.soma/vault');
if (count < worker.state.savedEntityCount) {
worker.resetState(); // vault restructured
}
vaultFingerprint (deprecated)
function vaultFingerprint(baseDir: string): string
| Parameter | Type | Description |
|---|---|---|
| baseDir | string | Path to the vault directory |
Returns: string — SHA-256 hash of the _index.json file contents.
Deprecated: Use vaultEntityCount() instead. The hash-based fingerprint changes whenever any worker writes to the vault (adding entities mutates _index.json), causing spurious state resets in all other pipeline workers on every cycle.
Four-Layer System
Layer-aware writes, queries, and permission enforcement across the four knowledge layers (L1 Archive, L2 Working Memory, L3 Emerging Knowledge, L4 Canon).
writeToLayer
function writeToLayer(
vault: Vault,
layer: KnowledgeLayer,
worker: WorkerName,
entity: Partial<Entity>
): Promise<Entity>
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Target vault instance |
| layer | KnowledgeLayer | One of 'archive', 'working', 'emerging', 'canon' |
| worker | WorkerName | The worker performing the write (e.g. 'harvester', 'synthesizer') |
| entity | Partial<Entity> | Entity fields to write |
Returns: Promise<Entity> — The created entity with all layer fields populated.
Creates a new entity in the specified layer after enforcing worker permissions and validating layer-specific required fields. Throws LayerPermissionError if the worker lacks write access to that layer.
const decision = await writeToLayer(vault, 'archive', 'harvester', {
type: 'decision',
name: 'tool_choice: fetch-data',
decision_type: 'tool_choice',
});
queryByLayer
function queryByLayer(vault: Vault, layer: KnowledgeLayer): Promise<Entity[]>
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Vault instance to query |
| layer | KnowledgeLayer | Layer to filter on |
Returns: Promise<Entity[]> — All entities in the specified layer.
Uses index-level filtering for performance. On a vault with 100K entities and 1K in L3, an emerging query reads only the ~1K matching files, not all 100K.
const proposals = await queryByLayer(vault, 'emerging');
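Index-level filtering means the layer match happens against the _index.json entries before any file is read from disk. A simplified sketch of that idea (the index entry shape and the `pathsForLayer` helper are assumptions for illustration, not the actual internals):

```typescript
// Hypothetical shape of a fast-lookup index entry (illustrative only).
interface IndexEntry {
  id: string;
  layer: string;
  path: string; // file path relative to the vault root
}

// Filter the in-memory index first, so only matching files are ever read.
function pathsForLayer(index: IndexEntry[], layer: string): string[] {
  return index.filter((e) => e.layer === layer).map((e) => e.path);
}
```

With 100K index entries and 1K in L3, `pathsForLayer(index, 'emerging')` yields ~1K paths, and only those files are opened.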
enforceWritePermission
function enforceWritePermission(worker: WorkerName, layer: KnowledgeLayer): void
| Parameter | Type | Description |
|---|---|---|
| worker | WorkerName | Worker attempting the write |
| layer | KnowledgeLayer | Target layer |
Returns: void — Throws LayerPermissionError if the worker cannot write to the layer.
Checks the static permission matrix. Harvester and Reconciler may write to L1, Team-Context to L2, Synthesizer and Cartographer to L3, Governance to L4. Policy Bridge has read-only access to all layers.
enforceWritePermission('harvester', 'archive'); // OK
enforceWritePermission('harvester', 'canon'); // throws LayerPermissionError
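The static matrix described above can be sketched as a plain lookup table. This is an illustrative stand-in, not the real implementation; worker identifiers like 'team-context', 'governance', and 'policy-bridge' are assumed spellings (only 'harvester' and 'synthesizer' appear verbatim on this page):

```typescript
// Assumed spellings of worker names; layer names are from this page.
type WorkerName =
  | 'harvester' | 'reconciler' | 'team-context'
  | 'synthesizer' | 'cartographer' | 'governance' | 'policy-bridge';
type KnowledgeLayer = 'archive' | 'working' | 'emerging' | 'canon';

// Which workers may write to each layer; 'policy-bridge' appears in no
// list, making it read-only everywhere.
const WRITE_MATRIX: Record<KnowledgeLayer, WorkerName[]> = {
  archive: ['harvester', 'reconciler'],
  working: ['team-context'],
  emerging: ['synthesizer', 'cartographer'],
  canon: ['governance'],
};

class LayerPermissionError extends Error {
  constructor(public worker: WorkerName, public layer: KnowledgeLayer) {
    super(`${worker} cannot write to ${layer}`);
  }
}

function canWrite(worker: WorkerName, layer: KnowledgeLayer): boolean {
  return WRITE_MATRIX[layer].includes(worker);
}

function enforceWritePermission(worker: WorkerName, layer: KnowledgeLayer): void {
  if (!canWrite(worker, layer)) throw new LayerPermissionError(worker, layer);
}
```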
canWrite
function canWrite(worker: WorkerName, layer: KnowledgeLayer): boolean
| Parameter | Type | Description |
|---|---|---|
| worker | WorkerName | Worker to check |
| layer | KnowledgeLayer | Target layer |
Returns: boolean — true if the worker has write permission to the layer.
Non-throwing variant of enforceWritePermission. Useful for conditional logic where you want to check permission without catching exceptions.
if (canWrite('synthesizer', 'emerging')) {
await writeToLayer(vault, 'emerging', 'synthesizer', entity);
}
validateLayerFields
function validateLayerFields(layer: KnowledgeLayer, entity: Partial<Entity>): void
| Parameter | Type | Description |
|---|---|---|
| layer | KnowledgeLayer | Target layer for validation |
| entity | Partial<Entity> | Entity fields to validate |
Returns: void — Throws a validation error if required fields are missing or invalid.
Validates layer-specific required fields: L1 needs layer + source_worker, L2 additionally needs team_id + decay_at, L3 needs confidence_score + evidence_links + decay_at, L4 needs ratified_by + ratified_at + origin_l3_id. Also validates that confidence_score is between 0 and 1, and that L1/L4 entries do not have decay_at.
validateLayerFields('emerging', {
layer: 'emerging',
source_worker: 'synthesizer',
confidence_score: 0.85,
evidence_links: ['trace-1', 'trace-2'],
decay_at: '2026-06-19T00:00:00Z',
}); // OK
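The rules above can be sketched as a required-fields map plus two extra checks. This is a simplified illustration under the assumption that each layer's list builds on L1's base fields; the actual validator and its error types may differ:

```typescript
type KnowledgeLayer = 'archive' | 'working' | 'emerging' | 'canon';

// Required fields per layer, per the rules documented above.
const REQUIRED: Record<KnowledgeLayer, string[]> = {
  archive: ['layer', 'source_worker'],
  working: ['layer', 'source_worker', 'team_id', 'decay_at'],
  emerging: ['layer', 'source_worker', 'confidence_score', 'evidence_links', 'decay_at'],
  canon: ['layer', 'ratified_by', 'ratified_at', 'origin_l3_id'],
};

function validateLayerFields(layer: KnowledgeLayer, entity: Record<string, unknown>): void {
  for (const field of REQUIRED[layer]) {
    if (entity[field] === undefined) {
      throw new Error(`${layer} entry missing required field: ${field}`);
    }
  }
  // confidence_score, when present, must lie in [0, 1].
  const score = entity['confidence_score'];
  if (score !== undefined && (typeof score !== 'number' || score < 0 || score > 1)) {
    throw new Error('confidence_score must be between 0 and 1');
  }
  // L1 and L4 entries are permanent: they must not carry decay_at.
  if ((layer === 'archive' || layer === 'canon') && entity['decay_at'] !== undefined) {
    throw new Error(`${layer} entries must not have decay_at`);
  }
}
```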
LayerPermissionError
class LayerPermissionError extends Error {
worker: WorkerName;
layer: KnowledgeLayer;
}
Thrown when a worker attempts to write to a layer it does not have access to. Contains the worker and layer that triggered the rejection.
try {
await writeToLayer(vault, 'canon', 'harvester', entity);
} catch (e) {
if (e instanceof LayerPermissionError) {
console.log(`${e.worker} cannot write to ${e.layer}`);
}
}
Governance
Human-in-the-loop review pipeline for promoting L3 proposals to L4 canon.
createGovernanceAPI
function createGovernanceAPI(vault: Vault): GovernanceAPI
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Vault instance containing L3 proposals |
Returns: GovernanceAPI — Object with list_pending, promote, reject, get_evidence methods.
Creates the governance review interface. Only pending L3 entries are eligible for promotion. L2 entries cannot be promoted. Already-promoted or rejected entries cannot be re-promoted.
const gov = createGovernanceAPI(vault);
const pending = await gov.list_pending();
await gov.promote(pending[0].id, 'reviewer-alice');
list_pending
gov.list_pending(): Promise<Entity[]>
Returns L3 entries with status pending, sorted by confidence_score descending so reviewers see the most confident proposals first.
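The ordering reduces to a filter-and-sort over L3 entries. A minimal sketch (the entry shape is assumed for illustration):

```typescript
// Assumed minimal shape of an L3 entry for this sketch.
interface L3Entry {
  id: string;
  status: 'pending' | 'promoted' | 'rejected';
  confidence_score: number;
}

// Keep only pending proposals, most confident first.
function listPending(entries: L3Entry[]): L3Entry[] {
  return entries
    .filter((e) => e.status === 'pending')
    .sort((a, b) => b.confidence_score - a.confidence_score);
}
```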
promote
gov.promote(entryId: string, reviewerId: string): Promise<Entity>
Creates a new L4 canon entry from the L3 proposal. Sets ratified_by, ratified_at, and origin_l3_id. Marks the original L3 entry as promoted (exempting it from decay).
reject
gov.reject(entryId: string, reviewerId: string, reason: string): Promise<void>
Marks the L3 entry as rejected with the provided reason. Rejected entries are exempted from decay but remain in L3.
get_evidence
gov.get_evidence(entryId: string): Promise<{ entry: Entity; evidence: Entity[] }>
Returns the L3 entry alongside all its linked L1 evidence traces, resolved from evidence_links. Provides the full evidence chain for informed review decisions.
GovernanceError
class GovernanceError extends Error
Thrown when governance operations violate rules: attempting to promote L2 entries, re-promoting already-promoted entries, or operating on non-existent entries.
Policy Bridge
Read-only query interface between the vault knowledge layers and agents.
createPolicyBridge
function createPolicyBridge(vault: Vault): PolicyBridge
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Vault to query across layers |
Returns: PolicyBridge — Object with query method.
Creates the intent-based routing interface. Every result includes source_layer and semantic_weight metadata so agents know how authoritative each piece of information is.
const bridge = createPolicyBridge(vault);
const mandatory = await bridge.query({ intent: 'enforce' });
query
bridge.query(opts: PolicyQueryOptions): Promise<PolicyResult[]>
| Intent | Layer | Semantic Weight | Description |
|---|---|---|---|
| enforce | L4 | mandatory | Hard constraints agents must follow |
| advise | L3 | advisory | Soft guidance with confidence scores |
| brief | L2 | contextual | Team context (requires team_id in options) |
| route | L1 | historical | Historical reference data |
| all | L1-L4 | stratified | Results from all four layers, each tagged with its layer's weight |
const teamContext = await bridge.query({ intent: 'brief', team_id: 'backend-team' });
const everything = await bridge.query({ intent: 'all' });
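The single-layer intents in the table above amount to a static routing map. A sketch of that mapping (the `INTENT_ROUTES` name and shape are assumptions; the 'all' intent, which fans out across every layer, is omitted for brevity):

```typescript
type Intent = 'enforce' | 'advise' | 'brief' | 'route';
type KnowledgeLayer = 'archive' | 'working' | 'emerging' | 'canon';
type SemanticWeight = 'mandatory' | 'advisory' | 'contextual' | 'historical';

// Each single-layer intent maps to exactly one layer and one weight,
// mirroring the routing table documented above.
const INTENT_ROUTES: Record<Intent, { layer: KnowledgeLayer; weight: SemanticWeight }> = {
  enforce: { layer: 'canon', weight: 'mandatory' },
  advise: { layer: 'emerging', weight: 'advisory' },
  brief: { layer: 'working', weight: 'contextual' },
  route: { layer: 'archive', weight: 'historical' },
};
```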
createSomaPolicySource
function createSomaPolicySource(vault: Vault): PolicySource
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Vault to back the policy source |
Returns: PolicySource — AgentFlow-compatible policy source interface.
Creates an adapter implementing AgentFlow's PolicySource interface, enabling SOMA policies to be consumed as guard constraints by the AgentFlow execution engine.
const source = createSomaPolicySource(vault);
// Pass to AgentFlow guard configuration
Decay
Time-based lifecycle management for ephemeral layers (L2 and L3).
createDecayProcessor
function createDecayProcessor(vault: Vault, config?: DecayConfig): DecayProcessor
| Parameter | Type | Default | Description |
|---|---|---|---|
| vault | Vault | — | Vault instance to process |
| config | DecayConfig | { l2DefaultDays: 14, l3DefaultDays: 90 } | Decay timing configuration |
Returns: DecayProcessor — Object with processDecay, extendDecayOnAccess, computeL2DecayAt, computeL3DecayAt methods.
const decay = createDecayProcessor(vault, { l2DefaultDays: 14, l3DefaultDays: 90 });
const results = await decay.processDecay();
processDecay
decay.processDecay(): Promise<DecayResult[]>
Moves expired L2 entries to L1 with decayed_from: 'working' and expired L3 entries to L1 with decayed_from: 'emerging'. Skips promoted/rejected L3 entries. Updates evidence references in L3/L4 entries pointing to decayed entries so no broken links remain.
extendDecayOnAccess
decay.extendDecayOnAccess(entityId: string): Promise<void>
Resets the decay timer for an entity when it is read. Implements activity-based extension so frequently-accessed entries survive longer.
computeL2DecayAt
decay.computeL2DecayAt(fromDate?: Date): string
Returns an ISO timestamp for when an L2 entry should expire (default: 14 days from now).
computeL3DecayAt
decay.computeL3DecayAt(fromDate?: Date): string
Returns an ISO timestamp for when an L3 entry should expire (default: 90 days from now).
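Both helpers reduce to adding a fixed number of days to a start date and formatting the result as ISO 8601. A minimal sketch of that arithmetic (function names match this page; the implementation is assumed):

```typescript
// Add `days` to `fromDate` and return an ISO 8601 timestamp.
function computeDecayAt(fromDate: Date, days: number): string {
  const expiry = new Date(fromDate.getTime() + days * 24 * 60 * 60 * 1000);
  return expiry.toISOString();
}

// Layer-specific defaults documented above: 14 days for L2, 90 for L3.
const computeL2DecayAt = (from: Date = new Date()) => computeDecayAt(from, 14);
const computeL3DecayAt = (from: Date = new Date()) => computeDecayAt(from, 90);
```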
checkDanglingReferences
function checkDanglingReferences(vault: Vault): Promise<DanglingRef[]>
| Parameter | Type | Description |
|---|---|---|
| vault | Vault | Vault to audit |
Returns: Promise<DanglingRef[]> — Array of evidence links that point to non-existent entities.
Audits all evidence_links across L3 and L4 entries to find broken references. Can be run independently as a health check.
const broken = await checkDanglingReferences(vault);
if (broken.length > 0) console.warn('Broken evidence links found:', broken);
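The audit reduces to a set-membership check over the vault's known IDs. A self-contained sketch of the core logic (the entity and `DanglingRef` shapes are assumed for illustration):

```typescript
// Assumed minimal shapes for this sketch.
interface IndexedEntity {
  id: string;
  layer: string;
  evidence_links?: string[];
}
interface DanglingRef {
  entryId: string;
  missingTarget: string;
}

function findDanglingRefs(entities: IndexedEntity[]): DanglingRef[] {
  const known = new Set(entities.map((e) => e.id));
  const broken: DanglingRef[] = [];
  for (const e of entities) {
    // Only L3 and L4 entries carry evidence_links.
    if (e.layer !== 'emerging' && e.layer !== 'canon') continue;
    for (const target of e.evidence_links ?? []) {
      if (!known.has(target)) broken.push({ entryId: e.id, missingTarget: target });
    }
  }
  return broken;
}
```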
Decision Extraction
Infers agent decisions from ExecutionGraph structure without requiring adapter changes.
isExecutionGraph
function isExecutionGraph(obj: unknown): obj is ExecutionGraph
| Parameter | Type | Description |
|---|---|---|
| obj | unknown | Object to type-check |
Returns: boolean — true if the object conforms to the ExecutionGraph shape.
Type guard for distinguishing execution graphs from other event types during ingestion.
if (isExecutionGraph(event)) {
const decisions = extractDecisionsFromGraph(event);
}
extractDecisionsFromGraph
function extractDecisionsFromGraph(graph: ExecutionGraph): RawDecision[]
| Parameter | Type | Description |
|---|---|---|
| graph | ExecutionGraph | Full execution graph with nodes, edges, and trace events |
Returns: RawDecision[] — Array of inferred decisions.
Extracts decisions by analyzing graph structure: tool nodes become tool_choice decisions, branched edges become branch decisions, retried edges become retry decisions, subagent nodes become delegation decisions, failed nodes become failure decisions, and explicit decision trace events are passed through.
const decisions = extractDecisionsFromGraph(graph);
// [{ type: 'tool_choice', choice: 'fetch-data', outcome: 'completed', ... }]
decisionsToEntities
function decisionsToEntities(
decisions: RawDecision[],
graphId: string,
agentId: string
): Partial<Entity>[]
| Parameter | Type | Description |
|---|---|---|
| decisions | RawDecision[] | Decisions from extractDecisionsFromGraph |
| graphId | string | Source execution graph ID |
| agentId | string | Agent that produced the graph |
Returns: Partial<Entity>[] — Entity objects ready for writeToLayer.
Converts raw decisions into entity format with stable IDs derived from graph_id-node_id for idempotent re-ingestion.
const entities = decisionsToEntities(decisions, 'exec-123', 'agent-alpha');
for (const e of entities) {
await writeToLayer(vault, 'archive', 'harvester', e);
}
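The stable-ID scheme can be illustrated with a simplified stand-in: because the ID is a pure function of graph_id and node_id, re-ingesting the same graph produces the same IDs and the write is a no-op. Field names beyond graph_id and node_id are assumptions for this sketch:

```typescript
// Simplified stand-in for the real RawDecision shape.
interface RawDecision {
  nodeId: string;
  type: string;
  choice: string;
}

function decisionsToEntities(decisions: RawDecision[], graphId: string, agentId: string) {
  return decisions.map((d) => ({
    // Stable ID: deterministic per (graph, node), so re-ingestion is idempotent.
    id: `${graphId}-${d.nodeId}`,
    type: 'decision',
    decision_type: d.type,
    choice: d.choice,
    agent_id: agentId,
    source_graph: graphId,
  }));
}
```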
extractDecisionsFromSession
function extractDecisionsFromSession(events: SessionEvent[]): NormalizedDecision[]
| Parameter | Type | Description |
|---|---|---|
| events | SessionEvent[] | Raw session events (JSONL traces or OpenClaw events) |
Returns: NormalizedDecision[] — Normalized decision sequence extracted from the session.
Parses tool calls, results, and agent actions from raw session events into a standardized format. Available from both 'soma' and 'soma/ops-intel'.
computePatternSignature
function computePatternSignature(decisions: NormalizedDecision[]): string
| Parameter | Type | Description |
|---|---|---|
| decisions | NormalizedDecision[] | Decisions from extractDecisionsFromSession |
Returns: string — A deterministic signature representing the decision pattern (e.g., fetch→parse→write).
Useful for comparing execution patterns across runs and agents.
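A minimal sketch of such a signature, joining step names into the arrow form shown above (the real algorithm may normalize on more than tool names, and the `tool` field is an assumption):

```typescript
// Simplified stand-in for the real NormalizedDecision shape.
interface NormalizedDecision {
  tool: string;
}

// Deterministic: the same decision sequence always yields the same string,
// so signatures can be compared across runs and agents.
function computePatternSignature(decisions: NormalizedDecision[]): string {
  return decisions.map((d) => d.tool).join('→');
}
```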
Migration
Utilities for migrating flat (pre-layer) vaults to the four-layer model.
migrateToLayers
function migrateToLayers(vaultDir?: string): Promise<MigrationResult>
| Parameter | Type | Default | Description |
|---|---|---|---|
| vaultDir | string | '.soma/vault' | Path to the vault directory |
Returns: Promise<MigrationResult> — Count of migrated and skipped entities.
Scans all entities and adds layer: 'archive' and source_worker: 'migration' to those without a layer field. Non-destructive (only adds fields, never modifies existing data) and idempotent (re-running skips already-migrated entities). All entities start in L1; governance reviewers can later promote valuable ones to L4.
const result = await migrateToLayers('.soma/vault');
console.log(`Migrated ${result.migrated}, skipped ${result.skipped}`);
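The migration pass reduces to a guarded field write, which is what makes it both non-destructive and idempotent. A sketch over an in-memory entity list (the real function walks files on disk):

```typescript
interface FlatEntity {
  id: string;
  layer?: string;
  source_worker?: string;
}
interface MigrationResult {
  migrated: number;
  skipped: number;
}

function migrateEntities(entities: FlatEntity[]): MigrationResult {
  let migrated = 0;
  let skipped = 0;
  for (const e of entities) {
    // Idempotent: anything already layered is left untouched.
    if (e.layer !== undefined) { skipped++; continue; }
    // Non-destructive: only adds fields, never overwrites existing data.
    e.layer = 'archive';
    e.source_worker = 'migration';
    migrated++;
  }
  return { migrated, skipped };
}
```

Running it a second time migrates nothing and skips everything, matching the documented re-run behavior.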
Orchestrator
Top-level entry point that wires together the vault, workers, and pipeline.
createSoma
function createSoma(config: SomaConfig): Soma
| Parameter | Type | Description |
|---|---|---|
| config | SomaConfig | Full configuration including vault dir, LLM provider, embedding function, worker configs, and decay settings |
Returns: Soma — Object with run and watch methods.
Creates the orchestrator that coordinates all workers in sequence: Harvester → Reconciler → Synthesizer → Cartographer → Decay Processor.
const soma = createSoma({
vaultDir: '.soma/vault',
analysisFn: myLlmFunction,
embedFn: myEmbedFunction,
decay: { l2DefaultDays: 14, l3DefaultDays: 90 },
});
run
soma.run(): Promise<PipelineResult>
Executes a single pipeline cycle: ingest traces, reconcile, synthesize, map relationships, process decay. Returns a summary of entities created, updated, and decayed.
watch
soma.watch(tracesDir: string): void
Starts a continuous file watcher on the specified traces directory, triggering a pipeline run when new trace files appear. The Harvester runs on a 60-second cycle, the Reconciler every 5 minutes, the Synthesizer every hour, and the Cartographer on change detection.
soma.watch('.soma/inbox');
// Runs continuously, processing new traces as they arrive