API Reference
Complete documentation for every Vektor Slipstream method. Memory-engine methods are async and return Promises. Create the memory instance once at startup and reuse it across your agent session.
createMemory(options)
Initialises the Slipstream memory engine. Detects hardware, loads the ONNX embedding model, pre-warms the inference pipeline, and opens the SQLite database. Call once at startup.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| agentId | string | required | Unique identifier for this agent. Scopes all memories — different agents on the same DB cannot see each other's data. |
| licenceKey | string | required | Your Polar licence key (VEKTOR-XXXX-...) or set via VEKTOR_LICENCE_KEY env var. |
| dbPath | string | optional | Path to the SQLite database file. Default: ./slipstream-memory.db |
| silent | boolean | optional | Suppress the boot banner. Default: false |
```js
const { createMemory } = require('vektor-slipstream');

const memory = await createMemory({
  agentId: 'my-agent',
  licenceKey: process.env.VEKTOR_LICENCE_KEY,
  dbPath: './memory.db', // optional
  silent: false,         // optional
});
```
memory.remember(text, opts)
Stores a memory with its vector embedding. The text is embedded locally using all-MiniLM-L6-v2 and stored in SQLite. Returns the new memory's database ID.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | required | The text content to store as a memory. |
| opts.importance | number | optional | Importance score 1–5. Higher importance surfaces first in recall. Default: 1 |
```js
// Basic store
const { id } = await memory.remember('User prefers TypeScript over JavaScript');

// With importance score (renamed binding to avoid redeclaring `id`)
const { id: criticalId } = await memory.remember(
  'Critical: never deploy on Fridays — team decision 2026-01-15',
  { importance: 5 }
);
```
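If you want to set opts.importance programmatically rather than hard-coding it, a simple keyword heuristic can work. The helper below is hypothetical and not part of the Slipstream API; the keyword lists are illustrative only:

```js
// Hypothetical helper (not part of Slipstream): derive an importance
// score (1-5) for remember() from simple keyword heuristics.
function scoreImportance(text) {
  const t = text.toLowerCase();
  if (/\bcritical\b|\bnever\b|\bsecurity\b/.test(t)) return 5;
  if (/\bdeadline\b|\bdecision\b/.test(t)) return 4;
  if (/\bprefers?\b|\blikes?\b/.test(t)) return 2;
  return 1; // matches the documented default for opts.importance
}
```

Usage would then be `await memory.remember(text, { importance: scoreImportance(text) })`.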
memory.recall(query, topK)
Semantic recall using cosine similarity over stored vectors. Returns the top-k most relevant memories, ranked by a combination of similarity score and importance. Average latency: ~8 ms locally.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| query | string | required | The query text to search for semantically similar memories. |
| topK | number | optional | Number of results to return. Default: 5 |
Returns
```js
MemoryResult[] = [{
  id: number,         // database row ID
  content: string,    // stored text
  summary: string,    // REM-compressed summary (if dreamed)
  importance: number, // importance score 1-5
  score: number,      // cosine similarity 0-1
}]
```
```js
const results = await memory.recall('coding preferences', 5);
// → [{ id: 1, content: 'User prefers TypeScript...', score: 0.97, importance: 1 }]

// Inject into LLM system prompt
const context = results.map(r => r.content).join('\n');
const systemPrompt = `You are a helpful assistant.\n\nContext:\n${context}`;
```
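The documented MemoryResult fields are enough to build the prompt context a little more carefully than a plain join. A sketch, assuming only those fields (buildContext and its character budget are illustrative, not part of the API):

```js
// Illustrative helper: build a context block from MemoryResult objects,
// highest-score first, capped at a rough character budget.
function buildContext(results, maxChars = 2000) {
  const sorted = [...results].sort((a, b) => b.score - a.score);
  const lines = [];
  let used = 0;
  for (const r of sorted) {
    const line = `- ${r.summary || r.content}`; // prefer the REM summary when present
    if (used + line.length > maxChars) break;
    lines.push(line);
    used += line.length + 1; // +1 for the newline
  }
  return lines.join('\n');
}
```

The character cap is a crude proxy for a token budget; swap in a real tokenizer if you need precise limits.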
memory.graph(concept, opts)
Breadth-first traversal from a seed concept. Finds the most relevant memories via recall, then traverses edges to find connected memories up to N hops out. Returns a graph fragment with nodes and typed edges.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| concept | string | required | The concept to start graph traversal from. |
| opts.hops | number | optional | Traversal depth. Default: 2 |
| opts.topK | number | optional | Number of seed nodes from initial recall. Default: 3 |
```js
const { nodes, edges } = await memory.graph('TypeScript', { hops: 2 });
console.log(`Found ${nodes.length} connected memories`);
console.log(`Linked by ${edges.length} edges`);
```
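To walk the returned fragment you typically want an adjacency view of the edges. The exact edge shape is not specified in this reference, so the sketch below assumes edges of the form { from, to, type } and should be treated as illustrative:

```js
// Sketch: group graph() edges into an adjacency list keyed by source node.
// Assumes each edge looks like { from, to, type }; adjust to the actual shape.
function adjacency(edges) {
  const byNode = new Map();
  for (const { from, to, type } of edges) {
    if (!byNode.has(from)) byNode.set(from, []);
    byNode.get(from).push({ to, type });
  }
  return byNode;
}
```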
memory.delta(topic, days)
Returns memories added or updated on a topic in the last N days. Useful for understanding how the agent's knowledge about a topic has changed over time.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| topic | string | required | Topic to check for recent changes. |
| days | number | optional | Lookback window in days. Default: 7 |
```js
// What changed about the project in the last 7 days?
const changes = await memory.delta('project architecture', 7);
changes.forEach(c => console.log(c.content, c.updated_at));
```
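For a day-by-day view of a topic's evolution, the delta rows can be bucketed by calendar date. This helper is illustrative and assumes each row's updated_at parses with new Date(); adjust if the column is a raw epoch value:

```js
// Illustrative: bucket delta() results by calendar day (UTC).
function groupByDay(changes) {
  const days = {};
  for (const c of changes) {
    const day = new Date(c.updated_at).toISOString().slice(0, 10); // YYYY-MM-DD
    (days[day] = days[day] || []).push(c);
  }
  return days;
}
```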
memory.briefing()
Generates a human-readable summary of everything the agent learned in the last 24 hours, ranked by importance. Designed to be injected into the system prompt at session start.
```js
const brief = await memory.briefing();
// → "[SLIPSTREAM BRIEFING — last 24h — 12 memories]
//    1. User prefers TypeScript over JavaScript
//    2. Project deadline moved to March 15
//    ..."

// Inject into system prompt
const system = `You are a helpful assistant.\n\n${brief}`;
```
memory.dream()
Triggers the 7-phase REM compression cycle manually. Compresses memory fragments into high-density insight nodes — up to 50:1 compression ratio. Reduces context noise and improves recall precision. Run during idle periods or on a nightly schedule.
```js
// Trigger manually
await memory.dream();

// Schedule nightly via cron (node-cron)
const cron = require('node-cron');
cron.schedule('0 3 * * *', async () => {
  await memory.dream();
  console.log('REM cycle complete');
});
```
Cloak is the sovereign identity layer — stealth browser, AES-256 credential vault, layout sensor, and token ROI auditing. Import from vektor-slipstream/cloak.
```js
const {
  cloak_fetch,
  cloak_render,
  cloak_diff,
  cloak_passport,
  tokens_saved,
} = require('vektor-slipstream/cloak');
```
cloak_fetch(url, opts)
Fetches a URL using a stealth headless browser. Returns clean, compressed text with scripts, styles, nav, and footer noise stripped. Results are cached locally for one hour.
```js
const result = await cloak_fetch('https://example.com');
// → { text: '...', tokensSaved: 38000, fromCache: false }

// Force bypass cache
const fresh = await cloak_fetch('https://example.com', { force: true });
```
Run `npx playwright install chromium` once before using cloak_fetch or cloak_render.
cloak_render(url, selectors)
High-fidelity layout sensor. Renders a page in a headless browser, executes all JS, and resolves the CSS cascade. Returns computed styles, a font audit, gap analysis, and asset errors.
const result = await cloak_render('https://vektormemory.com', ['.hero', '.nav']); // → { status: 'SUCCESS', audit: { layout, fonts, gapSuspects, assetErrors } }
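A render result can be reduced to a simple pass/fail for CI-style checks. The audit shape is not fully documented here, so the sketch below assumes gapSuspects and assetErrors are arrays and is illustrative only:

```js
// Illustrative triage of a cloak_render result. Assumes audit.gapSuspects
// and audit.assetErrors are arrays; adjust to the actual audit shape.
function layoutHealthy(result) {
  if (result.status !== 'SUCCESS') return false;
  const { gapSuspects = [], assetErrors = [] } = result.audit || {};
  return gapSuspects.length === 0 && assetErrors.length === 0;
}
```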
cloak_passport(key, value)
Reads and writes the AES-256-GCM encrypted credential vault at ~/.vektor/vault.enc. The vault is machine-bound and unreadable on any other machine. Pass a value to write; omit it to read.
```js
// Write
cloak_passport('GITHUB_TOKEN', 'ghp_xxxx');

// Read
const token = cloak_passport('GITHUB_TOKEN'); // → 'ghp_xxxx'
```
cloak_diff(url)
Returns what semantically changed on a URL since the last fetch. Requires cloak_fetch to have been called at least once for that URL.
```js
const diff = await cloak_diff('https://example.com');
// → { unchanged: false, added: [...], removed: [...], summary: '+12 new terms' }
```
tokens_saved(opts)
Calculates token efficiency and ROI for a session and logs it to the audit trail in the vault. Use it to prove the memory layer is paying for itself.
```js
const roi = tokens_saved({
  raw_tokens: 10000,    // estimated without VEKTOR
  actual_tokens: 3000,  // actual tokens used
  agent_id: 'my-agent',
  provider: 'openai',
  cost_per_1m: 2.50,
});
// → { saved: 7000, reduction_pct: 70, cost_saved_usd: 0.0175, roi_multiple: 2.3 }
```
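Most of the returned fields follow directly from the inputs, which makes the output easy to sanity-check. The sketch below reproduces that arithmetic (roiMath is illustrative; the formula behind roi_multiple is not documented here, so it is left out):

```js
// Sanity-check the documented arithmetic: saved tokens, percentage
// reduction, and dollar savings follow directly from the inputs.
function roiMath({ raw_tokens, actual_tokens, cost_per_1m }) {
  const saved = raw_tokens - actual_tokens;
  return {
    saved,
    reduction_pct: Math.round((saved / raw_tokens) * 100),
    cost_saved_usd: (saved / 1_000_000) * cost_per_1m,
  };
}
```

With the inputs from the example above this yields saved: 7000, reduction_pct: 70, and cost_saved_usd: 0.0175, matching the documented output.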