VEKTOR implements concepts from three peer-reviewed papers and one open research project. The implementation is entirely original. No code was copied. These papers shaped how we think about memory.
All concepts explained here are derived from publicly available research. VEKTOR's source code is original and proprietary. Links go directly to the original papers.
MAGMA defines a typed graph structure for agent memory — each node carries semantic, temporal, and relational attributes. Rather than flat vector stores, it proposes a layered graph where memories exist at different levels of abstraction and are connected by typed edges that encode causal, temporal, and associative relationships.
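MAGMA's layered, typed graph can be sketched as plain data: nodes carry a type and attributes, edges carry a type and weight, and multi-hop retrieval walks the edges outward from a starting node. The node and edge type lists below are illustrative, not VEKTOR's actual schema.

```javascript
// Illustrative MAGMA-style typed memory graph. Types are examples only.
const NODE_TYPES = ["semantic", "temporal", "causal", "entity"];
const EDGE_TYPES = ["causal", "temporal", "associative"];

function makeNode(id, type, content, attrs = {}) {
  if (!NODE_TYPES.includes(type)) throw new Error(`unknown node type: ${type}`);
  return { id, type, content, createdAt: Date.now(), ...attrs };
}

function makeEdge(from, to, type, weight = 1.0) {
  if (!EDGE_TYPES.includes(type)) throw new Error(`unknown edge type: ${type}`);
  return { from, to, type, weight };
}

// Multi-hop retrieval: collect all nodes reachable within `maxHops` edges.
function neighborhood(nodes, edges, startId, maxHops) {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const seen = new Set([startId]);
  let frontier = [startId];
  for (let hop = 0; hop < maxHops; hop++) {
    const next = [];
    for (const id of frontier) {
      for (const e of edges) {
        if (e.from === id && !seen.has(e.to)) {
          seen.add(e.to);
          next.push(e.to);
        }
      }
    }
    frontier = next;
  }
  return [...seen].map((id) => byId.get(id));
}
```

The point of the typed edges is that a second hop retrieves *why* and *when*, not just textually similar neighbors.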
In VEKTOR, this appears as memory.graph() and multi-hop context retrieval: each memory node in VEKTOR's SQLite graph carries the attribute schema MAGMA describes.

EverMemOS frames agent memory as an operating system — with creation, consolidation, and retirement phases for individual memories. It introduces lifecycle management that goes beyond simple retrieval: memories are promoted, demoted, merged, and eventually archived based on access frequency and recency. The paper proposes background consolidation as a core mechanism.
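A toy version of that lifecycle logic, scoring a memory by access frequency discounted by recency; the half-life and the promotion/archival thresholds are invented for illustration and are not EverMemOS's or VEKTOR's actual parameters.

```javascript
// Hypothetical lifecycle scoring: frequency x exponential recency decay.
// halfLifeMs and the thresholds below are made-up illustrative values.
function lifecycleState(mem, now, { halfLifeMs = 7 * 24 * 3600e3 } = {}) {
  const age = now - mem.lastAccessedAt;
  const recency = Math.pow(0.5, age / halfLifeMs); // 1.0 fresh, decays toward 0
  const score = mem.accessCount * recency;
  if (score >= 5) return "active";   // promoted: kept in fast retrieval
  if (score >= 1) return "warm";     // candidate for consolidation or merge
  return "archived";                 // retired from active context
}
```

A frequently accessed memory stays active; an untouched one slides through warm into the archive as its recency term decays.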
In VEKTOR, the memory.dream() trigger exposes this consolidation manually.
Mem0 addresses the core challenge of memory bloat in long-running agents: as memories accumulate, retrieval degrades and context windows overflow. The paper introduces compression and deduplication strategies that identify when two memories are functionally identical, when a newer memory supersedes an older one, and when a set of specifics should be abstracted into a general principle.
In VEKTOR, memory.remember() runs AUDN: the agent decides in real time whether to add a new memory, update an existing one, delete a contradiction, or take no action. This keeps the graph clean without manual management.
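The control flow of an AUDN pass can be sketched as below. In a real system the similarity and contradiction judgments come from the model; here a toy word-overlap measure and a caller-supplied predicate stand in so the branching is visible.

```javascript
// Toy stand-in for model-judged similarity: word-set overlap ratio.
function similarity(a, b) {
  const A = new Set(a.toLowerCase().split(/\s+/));
  const B = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...A].filter((w) => B.has(w)).length;
  return inter / Math.max(A.size, B.size);
}

// AUDN sketch: Add / Update / Delete / None. Thresholds are illustrative.
function audn(existing, candidate, contradicts) {
  for (const mem of existing) {
    const sim = similarity(mem.text, candidate);
    if (sim > 0.9) return { action: "NONE" };          // effectively a duplicate
    if (contradicts(mem.text, candidate))
      return { action: "DELETE", target: mem.id };     // stale belief superseded
    if (sim > 0.5) return { action: "UPDATE", target: mem.id };
  }
  return { action: "ADD" };                            // genuinely new
}
```

Swapping the toy `similarity` and `contradicts` for model calls gives the agent-driven version the paper describes.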
MemGPT (now Letta) proposed treating the LLM as an operating system kernel: the model manages its own context window as paged virtual memory, moving information in and out of active context as needed. This reframe — from LLM as stateless function to LLM as stateful OS — is foundational to building agents that persist across sessions and accumulate knowledge over time.
In VEKTOR, memory.briefing() — the morning summary that reconstructs agent context from overnight storage — is a direct application of this model. The agent wakes up, loads its OS, and resumes where it left off.
Every piece of information is stored as a node in a typed, attributed graph. Semantic nodes capture facts. Causal nodes capture why. Temporal nodes track when. Entity nodes track who. Edges between them encode relationships, not just proximity.
Before writing any memory, VEKTOR asks the model: does this already exist? Does it contradict something? Should it replace a previous belief? The AUDN decision (Add / Update / Delete / None) fires on every remember() call. The graph never accumulates stale data.
Inspired by EverMemOS lifecycle management. While the agent idles, a 7-phase background process scans recent memories, identifies clusters, and compresses fragments into higher-level insights. The result: the graph grows wiser, not just larger. Available in VEKTOR Studio.
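One consolidation pass of that kind can be sketched as follows. The grouping key and summarizer are placeholder callbacks; the actual 7-phase sequence is part of VEKTOR's proprietary implementation and is not reproduced here.

```javascript
// Sketch of a single consolidation pass: cluster related fragments and
// replace large clusters with one higher-level summary node.
// `keyOf` and `summarize` are caller-supplied placeholders (in practice,
// clustering and summarization would be embedding- and model-driven).
function consolidate(memories, keyOf, summarize) {
  const clusters = new Map();
  for (const m of memories) {
    const k = keyOf(m);
    if (!clusters.has(k)) clusters.set(k, []);
    clusters.get(k).push(m);
  }
  const out = [];
  for (const [topic, group] of clusters) {
    if (group.length >= 3) {
      // Enough fragments to abstract into one insight node.
      out.push({ topic, text: summarize(group), sources: group.map((m) => m.id) });
    } else {
      out.push(...group); // too few to abstract; keep fragments as-is
    }
  }
  return out;
}
```

The graph shrinks in node count while the surviving nodes carry more general content, which is the "wiser, not just larger" effect described above.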
Unlike keyword RAG, memory.recall() traverses the graph using vector similarity and edge weights together. A query doesn't just find matching text — it finds connected context: the reason something was remembered, the time it changed, and the entities involved.
Pure SQLite. No cloud sync, no external API calls for memory operations. The graph lives in a single .db file on your server. Backup is a file copy. Migration is a file move. Memory never touches a third-party service — your agent data is yours.
The MemGPT OS paradigm made practical. Each session begins with memory.briefing() — a structured summary of what the agent knew when it last ran. Topics reviewed, decisions made, open threads. The agent doesn't start from zero; it wakes up and continues.
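One plausible shape for such a briefing, assembled here from plain session records. The field names are illustrative; memory.briefing()'s actual return type is documented with the product.

```javascript
// Hypothetical briefing builder: condense prior sessions into the
// topics / decisions / open-threads summary described above.
// All field names here are illustrative assumptions.
function buildBriefing(sessions) {
  const last = sessions[sessions.length - 1];
  return {
    lastRun: last.endedAt,
    topics: [...new Set(sessions.flatMap((s) => s.topics))],
    decisions: sessions.flatMap((s) => s.decisions),
    openThreads: sessions.flatMap((s) => s.threads.filter((t) => !t.closed)),
  };
}
```

Only open threads survive into the new session, so the agent resumes unfinished work instead of replaying everything it has ever done.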
Every line in VEKTOR was written from scratch. The papers above shaped how we think about memory architecture; they are the intellectual foundation. But the implementation decisions are original VEKTOR engineering: the SQLite schema, the AUDN prompt design, the REM phase sequence, the embedding pipeline.
Publishing the research foundations lets you verify the concepts behind VEKTOR. If you're evaluating whether VEKTOR's approach is sound, read the papers. If you want to understand the implementation, full documentation is included with your purchase.
One-time purchase. Local-first. Drop into any Node.js agent in minutes.