The published research behind persistent agent memory.

VEKTOR implements concepts from three peer-reviewed papers and one open research project. The implementation is entirely original. No code was copied. These papers shaped how we think about memory.

NOTE

All concepts explained here are derived from publicly available research. VEKTOR's source code is original and proprietary. Links go directly to the original papers.

Four Foundations
Papers that shaped VEKTOR.
01 ARXIV // 2601.03236 Graph Architecture

MAGMA: Multi-level Attributed Graph Memory Architecture

MAGMA defines a typed graph structure for agent memory — each node carries semantic, temporal, and relational attributes. Rather than flat vector stores, it proposes a layered graph where memories exist at different levels of abstraction and are connected by typed edges that encode causal, temporal, and associative relationships.

Typed edge graphs · Memory abstraction layers · Attribute-rich nodes · Multi-hop traversal

VEKTOR
MAGMA directly informs VEKTOR's four memory layers — semantic, causal, temporal, and entity. The typed edge structure is the foundation of memory.graph() and multi-hop context retrieval. Each memory node in VEKTOR's SQLite graph carries the attribute schema MAGMA describes.
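The traversal idea can be sketched in a few lines of TypeScript. The types, node names, and edges below are illustrative only, not VEKTOR's actual memory.graph() API or schema:

```typescript
// Minimal sketch of multi-hop traversal over a typed-edge graph.
// Types, names, and edges are illustrative, not VEKTOR's schema.
type EdgeType = "causal" | "temporal" | "associative";

interface Edge { from: string; to: string; type: EdgeType }

// Collect every node reachable from `start` within `maxHops` edges.
function multiHop(edges: Edge[], start: string, maxHops: number): Set<string> {
  const seen = new Set<string>([start]);
  let frontier = [start];
  for (let hop = 0; hop < maxHops; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        if (e.from === node && !seen.has(e.to)) {
          seen.add(e.to);
          next.push(e.to);
        }
      }
    }
    frontier = next;
  }
  return seen;
}

const edges: Edge[] = [
  { from: "fact:deadline", to: "cause:client-request", type: "causal" },
  { from: "cause:client-request", to: "entity:acme", type: "associative" },
];
const context = multiHop(edges, "fact:deadline", 2);
// context holds the fact, its cause, and the entity behind it
```

Two hops is what turns "find matching text" into "find connected context": the query lands on a fact, and the typed edges pull in its cause and the entity involved.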
02 ARXIV // 2601.02163 Memory Lifecycle

EverMemOS: Persistent Memory Operating System for LLM Agents

EverMemOS frames agent memory as an operating system — with creation, consolidation, and retirement phases for individual memories. It introduces lifecycle management that goes beyond simple retrieval: memories are promoted, demoted, merged, and eventually archived based on access frequency and recency. The paper proposes background consolidation as a core mechanism.

Memory lifecycle states · Background consolidation · Promotion & demotion · Archive & retrieval tiers

VEKTOR
EverMemOS is the conceptual backbone of VEKTOR Studio's REM Cycle. The 7-phase dream engine that runs while your agent is idle — compressing 50 fragments into 3 core insights — directly mirrors the background consolidation lifecycle EverMemOS describes. The memory.dream() trigger exposes this manually.
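A toy sketch of the consolidation step. Grouping by a `topic` string and the one-line "insight" are placeholders standing in for the real 7-phase pipeline, which is not shown here:

```typescript
// Toy consolidation pass: many low-level fragments are compressed
// into fewer high-level insights. Grouping by a `topic` string and
// the one-line "insight" are placeholders, not VEKTOR's REM phases.
interface Fragment { topic: string; text: string }

function consolidate(fragments: Fragment[]): string[] {
  const byTopic = new Map<string, string[]>();
  for (const f of fragments) {
    const bucket = byTopic.get(f.topic) ?? [];
    bucket.push(f.text);
    byTopic.set(f.topic, bucket);
  }
  // One insight per topic: stand-in for the real summarisation step.
  return [...byTopic.entries()].map(
    ([topic, texts]) => `${topic}: consolidated from ${texts.length} fragment(s)`
  );
}

const insights = consolidate([
  { topic: "deploy", text: "build failed on node 18" },
  { topic: "deploy", text: "build passed after pinning node 20" },
  { topic: "billing", text: "invoice sent" },
]);
// three fragments become two insights
```

The point is the shape of the operation, not the heuristic: the graph ends each idle period smaller and more abstract than it started.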
03 ARXIV // 2504.19413 Memory Compression

Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory

Mem0 addresses the core challenge of memory bloat in long-running agents: as memories accumulate, retrieval degrades and context windows overflow. The paper introduces compression and deduplication strategies that identify when two memories are functionally identical, when a newer memory supersedes an older one, and when a set of specifics should be abstracted into a general principle.

Semantic deduplication · Memory supersession · Abstraction promotion · Conflict resolution

VEKTOR
Mem0's approach directly shapes VEKTOR's AUDN loop (Add / Update / Delete / None). Every call to memory.remember() runs AUDN — the agent decides in real-time whether to add a new memory, update an existing one, delete a contradiction, or take no action. This keeps the graph clean without manual management.
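The decision itself can be sketched as a function. In VEKTOR the model makes this judgement; the string checks below are toy stand-ins for it:

```typescript
// Sketch of the AUDN decision. In VEKTOR the model makes this
// judgement; the string checks below are toy stand-ins for it.
type AudnAction = "add" | "update" | "delete" | "none";

function audn(existing: string | undefined, incoming: string): AudnAction {
  if (existing === undefined) return "add";         // nothing stored yet
  if (existing === incoming) return "none";         // exact duplicate
  if (incoming.startsWith("NOT ")) return "delete"; // toy contradiction marker
  return "update";                                  // newer info supersedes
}

const a = audn(undefined, "user prefers dark mode");                 // "add"
const b = audn("user prefers dark mode", "user prefers dark mode");  // "none"
const c = audn("user prefers dark mode", "user prefers light mode"); // "update"
```

Whatever replaces the heuristics, the contract is the same: every write passes through one of four verdicts before it touches the graph.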
04 LETTA.COM Agent OS

Letta / MemGPT: LLM as Operating System

MemGPT (now Letta) proposed treating the LLM as an operating system kernel: the model manages its own context window as paged virtual memory, moving information in and out of active context as needed. This reframe — from LLM as stateless function to LLM as stateful OS — is foundational to building agents that persist across sessions and accumulate knowledge over time.

Memory-as-OS paradigm · Paged context management · Stateful agent design · Persistent session state

VEKTOR
The memory-as-OS paradigm is the philosophical foundation for why VEKTOR exists. VEKTOR's briefing system (memory.briefing()) — the morning summary that reconstructs agent context from overnight storage — is a direct application of this model. The agent wakes up, loads its OS, and resumes from where it left off.
Core Concepts
How the research maps to VEKTOR.
01 — GRAPH STRUCTURE

MAGMA 4-Layer Memory

Every piece of information is stored as a node in a typed, attributed graph. Semantic nodes capture facts. Causal nodes capture why. Temporal nodes track when. Entity nodes track who. Edges between them encode relationships, not just proximity.
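A rough TypeScript shape for such a node. The field names are assumptions for illustration, not VEKTOR's actual SQLite columns:

```typescript
// Illustrative node shape for the four memory layers. Field names
// are assumptions, not VEKTOR's actual SQLite columns.
type Layer = "semantic" | "causal" | "temporal" | "entity";

interface MemoryNode {
  id: string;
  layer: Layer;                        // which abstraction layer
  content: string;                     // the fact, cause, time note, or entity
  createdAt: number;                   // unix ms
  attributes: Record<string, string>;  // MAGMA-style attribute bag
}

const node: MemoryNode = {
  id: "n1",
  layer: "causal",
  content: "migration delayed because staging DB was locked",
  createdAt: Date.now(),
  attributes: { project: "migration" },
};
```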

02 — CURATION LOOP

AUDN — Autonomous Update

Before writing any memory, VEKTOR asks the model: does this already exist? Does it contradict something? Should it replace a previous belief? The AUDN decision (Add / Update / Delete / None) fires on every remember() call, keeping stale data out of the graph.

03 — CONSOLIDATION

REM Cycle — Dream Engine

Inspired by EverMemOS lifecycle management. While the agent idles, a 7-phase background process scans recent memories, identifies clusters, and compresses fragments into higher-level insights. The result: the graph grows wiser, not just larger. Available in VEKTOR Studio.

04 — RETRIEVAL

Associative Recall

Unlike keyword RAG, memory.recall() traverses the graph using vector similarity and edge weights together. A query doesn't just find matching text — it finds connected context: the reason something was remembered, the time it changed, and the entities involved.
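One way to picture the blended scoring. The 70/30 split and field names are invented for illustration and are not VEKTOR's actual ranking formula:

```typescript
// Blended retrieval score: text similarity plus graph connectivity.
// The 70/30 split and field names are invented for illustration.
interface Candidate { id: string; similarity: number; edgeWeight: number }

function score(c: Candidate, blend = 0.7): number {
  return blend * c.similarity + (1 - blend) * c.edgeWeight;
}

const ranked = [
  { id: "a", similarity: 0.9, edgeWeight: 0.1 }, // strong text match, weak links
  { id: "b", similarity: 0.6, edgeWeight: 0.9 }, // weaker text, well connected
].sort((x, y) => score(y) - score(x));
// "a" scores 0.66, "b" scores 0.69, so "b" ranks first
```

This is the difference from keyword RAG in miniature: a well-connected memory can outrank a better verbatim match because its edges carry context.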

05 — PERSISTENCE

Local-First SQLite

Pure SQLite. No cloud sync, no external API calls for memory operations. The graph lives in a single .db file on your server. Backup is a file copy. Migration is a file move. Memory never touches a third-party service — your agent data is yours.

06 — CONTEXT

Briefing — OS Resume

The MemGPT OS paradigm made practical. Each session begins with memory.briefing() — a structured summary of what the agent knew when it last ran. Topics reviewed, decisions made, open threads. The agent doesn't start from zero; it wakes up and continues.
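A minimal sketch of what a briefing assembles. The record shape and wording are illustrative, not the actual output of memory.briefing():

```typescript
// Sketch of a session briefing assembled from stored records.
// Record shape and wording are illustrative, not memory.briefing() output.
interface SessionRecord { topic: string; decision?: string; open?: boolean }

function briefing(records: SessionRecord[]): string {
  const decisions = records
    .filter(r => r.decision)
    .map(r => `${r.topic}: ${r.decision}`);
  const openThreads = records.filter(r => r.open).map(r => r.topic);
  return [
    `Decisions: ${decisions.join("; ") || "none"}`,
    `Open threads: ${openThreads.join(", ") || "none"}`,
  ].join("\n");
}

const summary = briefing([
  { topic: "auth", decision: "use JWT" },
  { topic: "billing", open: true },
]);
// "Decisions: auth: use JWT\nOpen threads: billing"
```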

Originality
Concepts are public. Implementation is ours.
No code was borrowed.

Every line in VEKTOR was written from scratch. The papers above shaped how we think about memory architecture — they are the intellectual foundation. But the implementation decisions are original VEKTOR engineering: the SQLite schema, the AUDN prompt design, the REM phase sequence, the embedding pipeline.

  • Open research — all four papers are publicly available and linked directly
  • Original schema — VEKTOR's graph structure is not a port of any reference implementation
  • No SDK reuse — we did not fork or extend Mem0, Letta, or any OSS memory library
  • Proprietary curation logic — the AUDN loop prompt chain is VEKTOR-specific
  • Independent REM design — the 7-phase dream engine is not based on EverMemOS code
Why it matters.

Publishing the research foundations lets you verify the concepts behind VEKTOR. If you're evaluating whether VEKTOR's approach is sound, read the papers. If you want to understand the implementation, full documentation is included with your purchase.

  • Peer-reviewed concepts — the architectural ideas have been validated outside VEKTOR
  • Auditable decisions — you can trace every design choice back to a research principle
  • Future-proof design — built on ideas that were peer-reviewed in 2025, not trend-chasing
  • Commercial licence — VEKTOR Pro and Studio include commercial use rights

Ready to build with VEKTOR?

One-time purchase. Local-first. Drop into any Node.js agent in minutes.