MCP · DXT · Windows · AI Tooling · Complete Series

Into the labyrinth: building a 34-tool MCP server

The complete story. Windows path hell, fixing Groq Desktop before they did, and shipping 34 tools across 6 AI apps in 60 seconds. All three parts.

25 MIN READ · COMPLETE SERIES · MCP · DXT · WINDOWS · NODE.JS · GROQ · VEKTOR MEMORY · 2026
PART 01
Into the Labyrinth
MCP, DXT, and the Windows dragon
THE_BEGINNING

It started with
a simple idea.
Nothing is ever simple.

The idea was straightforward: build persistent memory for AI agents using local SQLite. No cloud. No subscriptions. No vendor lock-in. A graph database that lives on your machine and gives every AI app a brain that actually remembers things between sessions.

What we didn't anticipate was the labyrinth between "working Node.js package" and "seamlessly installed across six different AI apps on Windows." That journey involved Windows path hell, a Node ABI mismatch, a PowerShell popup dragon, and ultimately fixing a bug in Groq's own desktop app before their engineers did.

"Anyone who tells you cross-platform AI tooling is easy has never tried to ship a native Node binary across nvm versions on Windows."

This is that story. Three parts. Real code, real errors, real blood on the floor. And at the end — a 34-tool MCP server that configures itself across Claude Desktop, Cursor, Windsurf, VS Code, Continue, and Groq Desktop in 60 seconds.

UNDERSTANDING_THE_TERRAIN

What MCP actually is
(and why DXT changes everything)

The Model Context Protocol is Anthropic's open standard for connecting AI models to external tools. Think of it as a universal adapter: instead of every AI app needing a custom integration for every tool, MCP defines one protocol that any app can speak and any tool can implement.

An MCP server is a process that runs locally on your machine and exposes a set of tools via JSON-RPC over stdio. When Claude Desktop starts, it reads a config file, spawns your MCP server as a child process, discovers its tools, and makes them available in every conversation. The AI can then call those tools — and your server executes real code in response.

claude_desktop_config.json
// The config that wires everything together
{
  "mcpServers": {
    "vektor-slipstream": {
      "command": "node",
      "args": ["/path/to/vektor.mjs", "mcp"],
      "env": { "VEKTOR_LICENCE_KEY": "your-key-here" }
    }
  }
}

That's the basic pattern. Claude Desktop reads this, spawns node /path/to/vektor.mjs mcp, and your server sits waiting on stdin for tool calls. Clean, simple, powerful.
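To make the transport concrete, here is a toy sketch of the server side of that exchange: a dispatcher for JSON-RPC requests arriving over stdin. Real servers use the official MCP SDK; the `tools/list` method name matches the protocol, but the dispatcher and the tool list here are illustrative only.

```javascript
// Toy JSON-RPC dispatcher — illustrative, not the MCP SDK.
// A real server wires this to stdin/stdout (e.g. via readline).
function handleRequest(req) {
  if (req.method === 'tools/list') {
    // Advertise the tools this server exposes
    return { jsonrpc: '2.0', id: req.id, result: { tools: [{ name: 'vektor_recall' }] } };
  }
  // Unknown method — standard JSON-RPC "method not found" code
  return { jsonrpc: '2.0', id: req.id, error: { code: -32601, message: 'Method not found' } };
}
```

The host app sends `tools/list` once at startup, caches the result, and from then on forwards tool calls from the model to the same stdin pipe.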

Enter DXT

DXT — Desktop Extension — is Anthropic's packaging format for MCP servers. A .dxt file is a small manifest that tells Claude Desktop everything it needs to know: where to find the server, what environment variables to ask the user for, what the tool descriptions are. Drag it onto Claude Desktop's Extensions page and it configures everything automatically.

Before DXT, every user had to manually edit a JSON config file. On Windows, this was a minefield — one wrong character, one BOM (Byte Order Mark) injected by Notepad, and the entire MCP connection silently fails. DXT eliminates that entirely. One drag-and-drop, done.

WITHOUT DXT
Manual JSON editing per app
BOM corruption risk on Windows
Hardcoded paths that break on update
Different format for each app
No guided licence key entry
WITH DXT
Drag-and-drop installation
No file editing required
Dynamic path resolution
Guided UI for configuration
Auto-restarts on update
THE_FIRST_DRAGON

Windows path hell
and the .ps1 popup dragon

The first major obstacle wasn't the code — it was Windows itself. Node Version Manager on Windows (nvm4w) installs Node binaries in a non-standard location. When npm installs global packages, they land somewhere like C:\nvm4w\nodejs2\nodejs\node_modules\. When your DXT config says "command": "node", Windows has to find that binary — and depending on PATH configuration, it sometimes doesn't.

The fix was obvious in hindsight: use process.execPath instead of the string "node". This gives you the absolute path to the exact Node binary that's currently running — no PATH lookup, no ambiguity.

vektor-setup-wizard.js
// WRONG — relies on PATH resolution
const serverEntry = {
  command: 'node',
  args: [vektorMjs, 'mcp']
}

// RIGHT — absolute path, always works
const serverEntry = {
  command: process.execPath, // e.g. C:\nvm4w\nodejs2\nodejs\node.exe
  args: [vektorMjs, 'mcp']
}

The popup dragon

Then came the popup dragon. Every time any npm-installed binary ran on Windows, a PowerShell console window would flash open and close. Open an app, popup. Run a command, popup. It was everywhere, suddenly, on every tool.

The cause: npm on Windows creates three shim files for every package binary — a .cmd file for cmd.exe, an extensionless shell script for Unix-style shells like Git Bash, and a .ps1 file for PowerShell. Something had changed in our environment that made Windows prefer the .ps1 shim — and PowerShell shims always spawn a visible console window.

The nuclear option: delete every .ps1 shim in the Node bin directory. The .cmd shims do exactly the same job. Nothing breaks. The popups vanish permanently.

PowerShell — nuclear option
# Delete all .ps1 shims — .cmd takes over, no more popups
Get-ChildItem "C:\nvm4w\nodejs2\nodejs" -Filter "*.ps1" | Remove-Item -Force
Get-ChildItem "C:\Users\minimaxa\AppData\Local\nvm\v24.1.0\node_modules\.bin" `
  -Filter "*.ps1" -Recurse | Remove-Item -Force

Two commands. Dragon slain. Zero popups from that point forward — across every app, every tool, every install.

THE_SECOND_DRAGON

The better-sqlite3
ABI nightmare

VEKTOR's memory engine uses better-sqlite3 — a native Node addon that compiles to a .node binary. Native addons are compiled against a specific Node ABI version. If your MCP server runs on a different Node version than the one that compiled the binary, you get a cryptic error and a dead server.

The deeper problem: better-sqlite3 inside our obfuscated core module uses the bindings package to locate its compiled binary. bindings resolves the binary relative to the current working directory — not the package directory. So when Claude Desktop spawned the MCP server from C:\Users\minimaxa\, it looked for the binary relative to the home folder and found nothing.

3 Node ABI versions involved · v24 — the Node version that finally worked · 1-line fix: process.chdir()

The fix: a process.chdir() call before loading the core module, setting the working directory to the package folder where better-sqlite3 lives. One line. Hours of debugging to find it.

This is what nobody writes about. The real work of shipping developer tools isn't the algorithms or the architecture — it's the 47 environmental assumptions that silently fail on someone else's machine. Every one of those assumptions is a dragon in the labyrinth.

THE_SOLUTION_EMERGES

The setup wizard:
one command,
six apps, 60 seconds

After solving the path problems, the ABI issues, and the popup dragon, we had a working MCP server. But customers still had to manually configure each app. Different apps use different config file locations, different JSON formats, different root keys (mcpServers vs servers). It was a documentation nightmare.

So we built a setup wizard. Run vektor setup and it scans for installed AI apps, shows you what it found, asks for confirmation per app, and writes the correct config for each one automatically. It backs up existing configs before touching them. It validates JSON before and after writing. It uses process.execPath so the paths are always correct.

Terminal output — vektor setup
// Step 6 output
✓ Claude Desktop — found
✓ Cursor — found
✓ Windsurf — found
✓ VS Code — found
✓ Continue — found
✓ Groq Desktop — found

Configure Claude Desktop? [y/N]: y
✓ Claude Desktop configured
✓ Profile: full (34 tools)

Configure Cursor? [y/N]: y
✓ Cursor configured
✓ Profile: dev (15 tools — optimised for 40-tool limit)

Each app gets the right config format. Cursor gets a warning about its 40-tool limit and a dev-optimised profile. VS Code gets servers as the root key instead of mcpServers. Continue gets a YAML-format drop file. All handled automatically, transparently, without the user needing to know any of it.
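A sketch of that per-app dispatch — the function names and return shape are illustrative, not the wizard's real internals:

```javascript
// Each target app gets its own root key and file format: VS Code
// roots at `servers`, Continue takes a YAML drop file, everything
// else uses `mcpServers` in JSON.
function configShape(appName) {
  if (appName === 'VS Code') return { rootKey: 'servers', format: 'json' };
  if (appName === 'Continue') return { rootKey: 'mcpServers', format: 'yaml' };
  return { rootKey: 'mcpServers', format: 'json' };
}

function buildConfig(appName, serverEntry) {
  const { rootKey } = configShape(appName);
  return { [rootKey]: { 'vektor-slipstream': serverEntry } };
}
```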

"The best developer experience is the one where the developer never has to think about the infrastructure at all."

PART 02
The Boss Battle
We fixed Groq Desktop before Groq did
THE_MOMENT_IT_BREAKS

Six apps working.
Then Groq.

Claude Desktop: working. Cursor: working. Windsurf: working. VS Code: working. Continue: working. Groq Desktop — the brand-new AI desktop app from the company behind the fastest LLM inference on the planet — failed every single tool call with the same error, every time.

// ERROR — Groq Desktop tool call
400 {"error":{"message":"invalid JSON schema for tool vektor_recall, tools[0].function.parameters: `additionalProperties:false` must be set on every object","type":"invalid_request_error"}}

The error is clear enough: Groq's API requires additionalProperties: false on every object in a tool schema. Fair enough — some APIs are stricter than others. So we added it to every one of our 34 tool schemas. Repacked, reinstalled, retested.

Same error. Different wording this time:

// ERROR — after our fix
400 {"error":{"message":"invalid JSON schema for tool vektor_recall, tools[0].function.parameters: /required: `required` is required to be supplied and to be an array including every key in properties. The following properties must be listed in `required`: top_k"}}

Now Groq is complaining that optional properties aren't in the required array. top_k is optional — that's the point. It has a default value. But Groq's strict mode demands every property be listed as required.

We could fix our schemas. But something felt wrong. These errors shouldn't be coming from Groq's API directly — they should be coming from Groq Desktop's MCP-to-OpenAI converter. We looked at our tool schemas in Claude Desktop. They worked fine. Same schemas. Why would Groq's API see them differently?

INTO_THE_SOURCE_CODE

We opened
their source code.
And found the comment.

Groq Desktop is open source on GitHub. We cloned it, built it from source, and started searching. The MCP integration has to convert tool schemas from MCP format (inputSchema) to OpenAI function-calling format (parameters) before sending them to the API. That conversion lives in electron/chatHandler.js.

Line 80. Right there in the source:

electron/chatHandler.js — the smoking gun
// Sanitize schema: Reconstruction Strategy
// Instead of copying, we rebuild the schema from scratch
let safeSchema = {
  type: "object",
  properties: {}
  // Removed 'required' init here to add it only if needed
  // Removed 'additionalProperties' to be less strict/prone to validation errors
};

"Removed 'additionalProperties' to be less strict/prone to validation errors." — Their own comment. The thing they removed to avoid validation errors was causing validation errors.

The irony is perfect: the converter strips additionalProperties: false to dodge validation errors, yet Groq's own API now rejects any schema without it. Their fix for one problem created a worse problem downstream.

And the required array bug was the same pattern: they only copied required from the original schema if it existed and had items. Optional properties with defaults — like our top_k — never made it into required. Groq's strict mode then rejected the schema because a property existed that wasn't listed as required.
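Since Groq's strict mode requires the constraint on every object, not just the root, a fully general sanitizer would walk the schema recursively. A hedged sketch of such a pass — not Groq Desktop's actual code:

```javascript
// Recursively enforce Groq strict mode: every object schema lists all
// of its properties in `required` and sets additionalProperties: false.
function makeStrict(schema) {
  if (!schema || typeof schema !== 'object') return schema;
  if (schema.type !== 'object') {
    // Recurse into array item schemas as well
    return schema.type === 'array' && schema.items
      ? { ...schema, items: makeStrict(schema.items) }
      : schema;
  }
  const props = schema.properties ?? {};
  return {
    ...schema,
    properties: Object.fromEntries(
      Object.entries(props).map(([key, sub]) => [key, makeStrict(sub)])
    ),
    required: Object.keys(props), // every key, optional or not
    additionalProperties: false,  // on every object
  };
}
```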

THE_FIX

Two bugs.
Two fixes.
One rebuild.

The fix was surgical. In chatHandler.js, after the schema reconstruction block, we added three lines:

electron/chatHandler.js — the fix
// Rebuild required only if it has items (existing code)
if (Array.isArray(schema.required) && schema.required.length > 0) {
  safeSchema.required = [...schema.required];
}

// FIX 1 + 2: Groq API requires ALL properties in required[]
// and additionalProperties: false on every object
const allKeys = Object.keys(safeSchema.properties);
if (allKeys.length > 0) safeSchema.required = allKeys;
safeSchema.additionalProperties = false;

Three lines. That's it. Force all property keys into required. Add additionalProperties: false. Rebuild Groq Desktop. Install. Test.

// SUCCESS — after the fix
Tool: vektor_recall
The memory has been successfully stored and recalled.
"Groq Desktop fixed 13 April 2026"
id: 16, score: 1.0

Score of 1.0. Perfect recall. All 34 tools working in Groq Desktop.

The timeline

ERROR #1
additionalProperties missing
Added to all 34 tool schemas in our MCP server
ERROR #2
Still failing — different error
Optional properties not in required[] — but our schemas were fine
INSIGHT
The error is coming from their converter
Groq Desktop strips and rebuilds schemas before sending to API
FOUND IT
chatHandler.js line 80
"Removed additionalProperties to be less strict" — the comment that said everything
FIXED
3 lines added, app rebuilt
score: 1.0 — all 34 tools working in Groq Desktop
REPORTED
Bug report sent to Groq
Full technical writeup with exact file, line numbers, and fix included
THE_CUSTOMER_PROBLEM

Every customer
would hit this.
So we automated the fix.

The moment the fix worked, the next question was obvious: what about our customers? Anyone who installs VEKTOR and tries to use it with Groq Desktop will hit the exact same two errors. They'll think VEKTOR is broken. They'll give up. They'll leave a one-star review.

We couldn't wait for Groq to push a fix. So we built the patch into our setup wizard. When the wizard detects Groq Desktop is installed, it automatically applies the chatHandler.js fix before writing the MCP config. The customer never sees an error. They never know the bug existed.

vektor-setup-wizard.js — auto-patch
// When Groq Desktop is detected, patch it silently
if (app.name === 'Groq Desktop') {
  const patched = patchGroqDesktop(app.detect);
  if (patched) {
    tick('Groq Desktop schema fix applied automatically');
  }
  writeMcpConfig(app.configPath, serverEntry, rootKey);
}
// Wizard output — seamless customer experience
✓ Groq Desktop schema fix applied automatically
✓ Groq Desktop configured → settings.json
✓ Profile: dev (15 tools — dev optimised)
· Restart Groq Desktop to apply changes

This is what good developer tooling looks like. You don't document the bug and hope users read it. You find it, fix it, automate the fix, and ship it. The user experience should be perfect even when the underlying ecosystem isn't.

PART 03
What We Built
34 tools, 6 apps, one wizard, 60 seconds
THE_RESULT

One command.
Six apps configured.
Zero manual edits.

After everything in Parts 1 and 2 — the path hell, the ABI nightmare, the popup dragon, the Groq schema bug — what you get is this: run vektor activate <key> and a wizard walks you through everything. By the end, every AI app on your machine has persistent memory.

Terminal — the final result
✓ Claude Desktop — found
✓ Cursor — found
✓ Windsurf — found
✓ VS Code — found
✓ Continue — found
✓ Groq Desktop — found
✓ Groq Desktop schema fix applied automatically

✓ 6 apps configured with VEKTOR memory
· To reconfigure any app, run: vektor setup
34 MCP tools available · 6 AI apps configured · 60s setup time, start to finish
App              Profile   Tools   Auto-fix
Claude Desktop   Full      34
Cursor           Dev       15
Windsurf         Dev       15
VS Code          Dev       15
Continue         Dev       15
Groq Desktop     Dev       15      ✓ + schema patch
THE_ARCHITECTURE

MAGMA: the memory
graph that never forgets

VEKTOR's memory engine is built on MAGMA — a four-layer associative graph stored entirely in local SQLite with vector extensions. No cloud. No subscriptions. No data leaving your machine. Everything lives in a single .db file.

The four layers each capture a different kind of knowledge:

SEMANTIC LAYER
Vector similarity between memories. "TypeScript preferences" connects to "project setup" connects to "build tooling."
384-dim embeddings · cosine similarity
CAUSAL LAYER
Cause-and-effect chains. "Decided to use Postgres" links to "because Redis hit memory limits" links to "because traffic grew 10x."
directed edges · reasoning chains
TEMPORAL LAYER
Time-ordered sequences. What happened before, what happened after, how decisions evolved over weeks and months.
before → after · recency decay
ENTITY LAYER
Co-occurrence of named things. People, projects, tools, concepts that appear together frequently build stronger connections.
co-occurrence · entity resolution
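The semantic layer's core operation is plain cosine similarity over embedding vectors (384-dimensional in VEKTOR's case). A minimal reference implementation:

```javascript
// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. 1 = same direction, 0 = orthogonal.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

In practice the vector index (SQLite with vector extensions) computes this in native code over the whole store; the JavaScript version just shows what "similarity" means here.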

When you call vektor_recall, it searches across all four layers simultaneously. You don't just get semantically similar text — you get memories that are causally related, temporally connected, and entity-linked to your query. It's the difference between a search engine and actual memory.

"The goal isn't to store everything. The goal is to surface the right thing at the right moment — across every AI app you use, seamlessly."

THE_TOOLS

34 tools.
Four categories.
One MCP server.

Most MCP servers expose 3-5 tools. VEKTOR exposes 34 — because persistent memory for AI agents is actually a deep problem that touches many different capabilities. Here's what's included:

Memory tools (5)

The core: vektor_store, vektor_recall, vektor_graph, vektor_delta, vektor_briefing. Store facts, recall by semantic similarity, traverse the graph, see what changed, generate morning briefings. These are the tools that make every AI app actually remember you.

CLOAK stealth browser tools (14)

A full stealth browser layer built on Playwright: cloak_fetch for compressed page content, cloak_render for full CSS/DOM layout sensing, cloak_diff for semantic change detection, cloak_detect_captcha, cloak_solve_captcha, identity management, and a self-improving behaviour pattern system that learns which patterns get past bot detection.

SSH tools (5)

cloak_ssh_exec for running commands on remote servers, with automatic read/write classification — read operations auto-execute, write operations require explicit approval. Plus cloak_ssh_plan for multi-step transactions, cloak_ssh_backup, cloak_ssh_rollback, and cloak_ssh_session_store for end-of-session handover notes.
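A toy version of that classification gate — the command list and function name are hypothetical, and the real classifier is more thorough:

```javascript
// Commands whose first word is known read-only auto-execute; anything
// else is treated as a write and routed through explicit approval.
const READ_ONLY = new Set([
  'ls', 'cat', 'head', 'tail', 'grep', 'df', 'du', 'ps', 'uptime', 'whoami',
]);

function classifySshCommand(command) {
  const first = command.trim().split(/\s+/)[0];
  return READ_ONLY.has(first) ? 'read' : 'write';
}
```

Defaulting unknown commands to 'write' is the important design choice: the gate fails closed, so a command the classifier has never seen still requires approval.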

Utility tools (10)

cloak_passport for the AES-256 machine-bound credential vault, tokens_saved for ROI tracking, turbo_quant_compress for 87% embedding compression, cloak_cortex for project directory scanning, and the full pattern store management suite.

THE_PHILOSOPHY

Why local-first
memory always wins

Every cloud memory service for AI agents has the same fundamental problem: your data lives on someone else's server. Your preferences, your decisions, your project context, your confidential work — all of it flowing through a third party's infrastructure on every query.

VEKTOR's memory never leaves your machine. The embedding model runs locally (ONNX WASM). The vector index is SQLite. The vault is AES-256 encrypted and machine-bound — physically unreadable on any other hardware. You pay once. There's no usage meter, no rate limits, no monthly bill that scales with how much your agents think.

"One-time purchase. Local SQLite. Zero cloud. Your memory, on your machine, forever."

This isn't just a privacy argument — it's a performance argument. Local reads are microseconds, not milliseconds. No network latency on every recall query. No cold starts. No API timeouts. The memory is always there, always fast, regardless of your internet connection.

THE_LESSON

DXT is the right path
for AI app distribution

The biggest lesson from this entire journey isn't technical — it's about distribution. The old way of shipping MCP servers was a documentation problem: write a long README explaining how to edit a JSON config file, hope users follow it exactly, then handle a flood of support tickets from people who got it wrong.

DXT changes the distribution model entirely. Ship a 4KB manifest file alongside your npm package. Users drag it onto Claude Desktop. Done. The config is written correctly, the paths are resolved dynamically, the licence key is entered in a proper UI. There's nothing to get wrong.

The setup wizard extends this philosophy to every other AI app. One command, auto-detection, per-app configuration, zero manual editing. The developer experience matches what users expect from mature software — not what they expect from beta tooling held together with documentation.

"The era of 'edit this JSON file manually' for AI tooling is over. DXT, wizards, and auto-configuration are the new baseline."

TRY_IT

Install VEKTOR.
60 seconds to first memory.

Everything described in this series is in VEKTOR Slipstream v1.5.0. One-time purchase, local-first, 34 tools, auto-configures across Claude Desktop, Cursor, Windsurf, VS Code, Continue, and Groq Desktop.

VEKTOR Slipstream

34 MCP tools · Local SQLite · One-time purchase · No cloud

Get VEKTOR →
