$ npx mcp-remote https://mcp.persistmemory.com/mcp
MCP Protocol Ready
PERSISTENT
MEMORY FOR
AI AGENTS.
Add long-term memory to AI agents, Claude MCP servers, and LLM applications. Persist conversations, retrieve knowledge, and reduce token costs across ChatGPT, Claude, Cursor, Copilot, Windsurf, Cline, and Gemini.

BUILT FOR DEVELOPERS.

Everything you need to give your AI persistent, searchable memory.

Vector Memory

Every conversation and document is embedded and stored. Semantic search finds exactly what your AI needs.
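Under the hood, semantic search of this kind ranks stored items by embedding similarity rather than keyword overlap. A conceptual sketch with toy 4-dimensional vectors (real embedding models use hundreds or thousands of dimensions, and the texts and values here are purely illustrative):

```python
import math

# Toy "embeddings" keyed by memory text; values are made up for illustration.
memories = {
    "Deploy runs on Cloudflare Workers": [0.9, 0.1, 0.0, 0.2],
    "Client prefers weekly status emails": [0.1, 0.8, 0.3, 0.0],
    "API keys rotate every 90 days": [0.2, 0.1, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=1):
    """Return the k stored memories most similar to the query embedding."""
    ranked = sorted(memories.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query vector close to the deployment memory retrieves it first:
print(search([0.8, 0.0, 0.1, 0.3]))  # → ['Deploy runs on Cloudflare Workers']
```

The same idea scales up: embed the query with the same model used at storage time, then rank by similarity instead of exact keywords.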

Isolated Spaces

Organize memories into separate spaces per project, client, or context. Full access control.

MCP Native

Works with ChatGPT, Claude, Cursor, Copilot, Windsurf, Cline, Gemini, and any MCP-compatible client.

Real-time Chat

Chat with an AI that has access to all your stored memories and context in real time.

Cloud Native

Built on Cloudflare Workers for edge-fast responses globally. No cold starts.

API First

Full REST API for programmatic access. Store memories, search, and manage spaces from anywhere.

THREE STEPS.

Get started in under a minute.

01

Create an Account

Sign up and get your API key instantly. No credit card required.

02

Add to Your AI Tool

Add PersistMemory to Claude, Cursor, VS Code, or any MCP-compatible AI assistant.

{
  "mcpServers": {
    "persist-memory": {
      "command": "npx",
      "args": ["-y", "mcp-remote",
        "https://mcp.persistmemory.com/mcp"]
    }
  }
}

03

Start Remembering

Your AI now has persistent memory. Store notes, search knowledge, and build context.

WHY PERSISTMEMORY?

The AI memory problem is real. Every LLM — ChatGPT, Claude, Gemini, Copilot — forgets everything after each session. Your AI assistant has no long-term memory, no persistent context, and no way to remember what you discussed yesterday.

PersistMemory is the AI memory layer that fixes this. It gives any AI tool — from Claude Desktop to Cursor to custom LangChain agents — persistent, searchable, vector-powered memory. Store conversations, documents, code snippets, and knowledge. Search with semantic understanding. Retrieve context instantly via MCP protocol or REST API.

Built for developers who need AI memory that actually works. Unlike raw vector databases (Pinecone, Weaviate, Qdrant), PersistMemory handles embeddings, chunking, search ranking, and access control out of the box. Unlike framework-locked solutions (LangChain Memory), PersistMemory works across every AI tool and IDE.

Free to start. No credit card. Set up in 60 seconds. Create an account, add one line to your MCP config, and your AI has persistent memory forever. Works with VS Code, JetBrains, Neovim, Zed — every IDE, every AI assistant, one memory layer.

READY TO
REMEMBER?

Free to start. No credit card required.

We value your feedback

Share Your Feedback

Help us improve PersistMemory. Bug reports, feature requests, or just say hi.