MCP Personal Knowledge Management: The Protocol Won
- Martin Chen

- 2 days ago
- 9 min read

Anthropic's open standard for AI-tool integration - the Model Context Protocol - reached 97 million monthly downloads by March 2026, a 970x increase from the 100,000 it recorded at launch in November 2024. MCP personal knowledge management has been the headline use case: Obsidian vaults, Notion databases, and Readwise highlights can now be traversed by AI agents directly, without proprietary plugins or one-off integrations. The pitch is that your second brain finally has a brain of its own.
The protocol part worked. The assembly part is where most users hit a wall.
MCP standardized how AI agents connect to external data. It did not standardize how easy that connection would be to configure. Obsidian alone has 64+ community-built MCP server implementations, each requiring different plugins, configuration files, and ongoing maintenance. Notion's official MCP server works cleanly but became the default workaround after Notion moved full AI access to its $20/user/month Business tier in May 2025. Readwise ships a well-maintained server - scoped entirely to reading highlights, disconnected from every other source in a typical knowledge worker's stack.
The shift from passive PKM storage to active AI collaboration is real and accelerating. Whether you experience that shift depends on whether you're willing to build the stack that enables it.
What Happened - MCP Goes From Protocol to PKM Infrastructure
Anthropic launched MCP on November 25, 2024, as an open standard with Python and TypeScript SDKs, pre-built server implementations for Google Drive, Slack, GitHub, Git, and Postgres, and integration with Claude Desktop. The design borrowed its message-passing architecture from the Language Server Protocol - the standard that lets code editors communicate with language analyzers - which made it immediately recognizable to developers and accelerated early adoption beyond what most protocol launches achieve.
The growth timeline was compressed. Downloads reached approximately 100,000 in the first month. When OpenAI officially adopted MCP in April 2025 and integrated it into the ChatGPT desktop app, monthly downloads passed 22 million within weeks. Google DeepMind confirmed support the same month. Microsoft integrated MCP into Copilot Studio in July 2025. By December 2025, Anthropic donated MCP to the Agentic AI Foundation - a directed fund under the Linux Foundation co-founded by Anthropic, Block, and OpenAI, with Google, Microsoft, and AWS as additional backers. The donation neutralized concerns about Anthropic controlling the standard and cemented MCP as shared industry infrastructure rather than a competitive moat.
By March 2026, MCP SDK growth had carried the protocol to 97 million monthly downloads and over 10,000 active public servers. The PKM ecosystem was among the most active segments: Obsidian accumulated 64+ community MCP server implementations; Notion shipped an official server through its developer platform with token-efficient Markdown responses designed for LLM use; Readwise launched and maintains its own official server. What had been a developer-facing integration protocol in late 2024 became the default mechanism for connecting AI assistants to personal knowledge tools.
For knowledge workers, this represented a concrete capability shift. An AI agent with MCP access to an Obsidian vault can follow backlinks, search note content semantically, and surface material written years ago. With additional MCP server configs, the same agent can cross-reference Readwise highlights and query a Notion project database in a single prompt. Before MCP, that required proprietary plugins limited to one tool's AI. With MCP, it requires configuration - but theoretically crosses every connected tool at once.
Why It Matters - PKM's Retrieval Problem Finally Has a Solution
Personal knowledge management tools have always been built for input, not retrieval. The canonical PKM workflow - capture, organize, retrieve - works in the first two phases and breaks down in the third. Most knowledge workers can describe roughly what they've written about; most cannot reliably surface it at the moment they need it. Highlights accumulate unread. Notes go stale before anyone connects them to anything. The second brain metaphor sets a bar these tools do not meet: a real second brain retrieves proactively, not on demand.
MCP's contribution, at least in theory, is solving retrieval at scale. When an AI agent has read-level access to an entire knowledge base - vault, highlights, references, bookmarks - your historical thinking becomes queryable context rather than a passive archive you may or may not remember to search. A knowledge worker drafting a proposal can ask an MCP-connected agent to pull everything relevant from two years of notes, cross-reference it with related reading highlights, and surface the arguments that overlap. Without MCP, that synthesis takes hours. With a working multi-server config, it becomes a prompt.
The shift is qualitative, not just an efficiency gain. The dominant retrieval mode for PKM has always been keyword search: you remember a phrase, you search for it, you hope your past self used the same words. MCP enables semantic, cross-tool synthesis that doesn't depend on remembering exact terminology. The Obsidian MCP integration community describes this as moving "from active recall - remembering keywords and hierarchies - to asking for what you need."
This distinction matters because it is not replicated by any single-tool AI. Notion AI sees Notion. Obsidian Copilot sees the vault. Readwise Reader's AI sees reading highlights. None of them see each other. An MCP-connected agent sees all of them simultaneously - which is why the protocol matters for personal knowledge management specifically, not just for any individual tool's feature set.
The knowledge management software market's projected growth from $26.4 billion in 2026 to $74.22 billion by 2034 reflects organizations beginning to treat retrieval infrastructure as a core investment. The shift MCP enables - from static archives to AI-traversable knowledge graphs - is the primary driver behind those numbers, not new users adopting additional note-taking apps.
The Real Problem - Assembly Required
Here is what most coverage of MCP personal knowledge management leaves out of the headline: getting the setup to actually work requires sustained technical effort that most knowledge workers will not complete.
The Obsidian case is the most concrete. There are 64+ community-built MCP server implementations - a number that sounds like ecosystem health but functions like fragmentation. Most servers do similar things: read vault contents, write notes, search semantically. They differ in authentication approach, feature coverage, documentation quality, and maintenance status. Selecting one requires reading GitHub issue trackers, checking commit dates, and often trying multiple options to find one compatible with your current Obsidian version and MCP client.
Once a server is selected, baseline Obsidian MCP setup requires four distinct steps: install the Local REST API plugin inside Obsidian, configure a port number and generate an API key, install the chosen MCP server in your AI client, and write or edit a JSON configuration file pointing the client at the correct address with the correct credentials. Each step has failure modes. Many first-time users spend an afternoon on this and do not finish.
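For orientation, the final step typically produces something like the sketch below for a Claude Desktop-style client. Treat it as illustrative rather than canonical: the server package name is a placeholder, the environment variable names differ from one community server to the next, and the port shown is the Local REST API plugin's usual HTTPS default - a given install may use another.

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "<your-chosen-obsidian-mcp-server>"],
      "env": {
        "OBSIDIAN_API_KEY": "<key-generated-by-the-local-rest-api-plugin>",
        "OBSIDIAN_BASE_URL": "https://127.0.0.1:27124"
      }
    }
  }
}
```

Even in this minimal form the failure modes are visible: a launcher that has to be installed and on the PATH, a package that must stay compatible with the current Obsidian and client versions, and a credential sitting in a plain-text file.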
There is also a security dimension that setup guides mention but rarely emphasize. According to MCP server security research published in 2025, connecting an AI agent via MCP grants it unrestricted read, write, and delete access to your entire vault by default. A misunderstood prompt or a hallucinating model can permanently delete notes. Security researchers documented prompt injection vulnerabilities and data exfiltration risks as systemic issues across the MCP ecosystem. The practical guidance from the research: maintain complete backups before connecting any agent. Most step-by-step setup guides do not surface this in the introduction.
The cross-tool synthesis problem multiplies the configuration overhead. Three PKM tools means three separate MCP server setups, three separate authentication flows, and three separate failure points. The agent sees all three when every connection works - but the user configures and debugs each independently. The setup overhead is non-trivial for each tool; the combined overhead is substantial enough that most knowledge workers who start this process do not complete it for more than one tool.
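Concretely, a combined configuration ends up as one entry per tool, each with its own launcher, package, and credential - something like the sketch below, where every package name and variable name is illustrative rather than canonical.

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "<obsidian-mcp-server>"],
      "env": { "OBSIDIAN_API_KEY": "<local-rest-api-key>" }
    },
    "notion": {
      "command": "npx",
      "args": ["-y", "<notion-mcp-server>"],
      "env": { "NOTION_TOKEN": "<notion-integration-token>" }
    },
    "readwise": {
      "command": "npx",
      "args": ["-y", "<readwise-mcp-server>"],
      "env": { "READWISE_API_KEY": "<readwise-access-token>" }
    }
  }
}
```

Each entry fails independently: an expired token or a renamed package breaks that tool's connection while the others keep working, which is why debugging the stack is a per-tool exercise.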
Mobile workflows are excluded entirely from current MCP PKM setups. Most PKM MCP servers require the host desktop application to be running: an Obsidian MCP server needs Obsidian open on a desktop. Knowledge workers who capture primarily on mobile - a large share of day-to-day professional capture - have no MCP bridge available. The protocol's 2026 roadmap identifies streamable HTTP transport as a priority, which would enable remote-hosted servers and close this gap, but that is a future fix.
The pattern is consistent: MCP gave PKM tools an integration surface, not users an integrated experience. The gap between those two things is where most of the friction lives. For knowledge workers evaluating personal knowledge management tools today, the useful question is not whether a tool has an MCP server - almost all major tools do or will. The question is what the experience looks like before any AI touches it, and what configuration stands between the user and the cross-tool synthesis that MCP promises.
How the Major PKM Tools Are Responding
The PKM ecosystem's response to MCP has sorted into three distinct postures, each reflecting the tool's underlying architecture and pricing strategy.
Obsidian - consistent with its plugin-first identity - dropped its commercial license in 2025, making the core application free for all users, and left MCP server development entirely to its community. The result is a fast-moving but uneven ecosystem: the best community servers support wikilink graph traversal, Dataview queries, and semantic search; others are unmaintained forks whose issue trackers have sat untouched for months. Obsidian's implicit position is "here is a Markdown filesystem; you build the AI stack on top." For power users who have spent years building detailed, interconnected vaults, this preserves their investment and extends it with MCP. For newcomers expecting a ready-made AI experience, the setup burden is significant and the right server is not obvious.
Notion shipped an official MCP server through its developer platform - well-documented, returning Markdown rather than raw JSON for LLM use, and more token-efficient than raw API access. But Notion also eliminated its standalone $8–10/month AI add-on in May 2025; full Notion AI now requires the Business tier at $20 per user per month. For individual users, connecting Claude or another AI assistant via MCP instead of paying for Notion's native AI has become a documented workflow - one that Notion's own developer documentation enables.
Readwise offers the cleanest PKM MCP implementation in the current comparison: an official server that incrementally indexes highlights and Reader documents, syncs automatically as content is added, and works with major MCP clients without custom configuration. The limitation is scope. Readwise captures reading highlights and saved articles - not notes you've written, meetings you've attended, or local files on your computer. An MCP-connected agent can answer "what have I highlighted about this topic over two years?" with precision. It cannot connect that answer to anything else in the knowledge stack.
Missing from all three: automatic capture. Each tool in this comparison requires deliberate user action to populate the knowledge base. MCP adds an AI retrieval layer on top of whatever the user has entered - but if the vault went untouched for three months, the agent traverses a three-month-old knowledge graph. The quality of MCP-enabled retrieval is bounded by the consistency of manual capture upstream.
This is where tools built around automatic capture approach the problem from a different angle. Rather than adding MCP to an existing note tool, they build automatic capture as the foundation: web pages indexed as you browse, meetings transcribed locally, local files and emails synchronized continuously. The knowledge base updates without deliberate action; the AI retrieval layer operates on current data. MCP is available as an extension protocol for connecting to external tools, rather than as the primary access layer. The genuine trade-off: automatic capture means less curatorial control over what enters the knowledge base compared to deliberate note-taking. Whether that trade-off works depends on the workflow.
What's Next - The Architecture Question Nobody Is Answering Yet
The 2026 MCP roadmap identifies three improvements that directly affect PKM use cases. Streamable HTTP transport - moving MCP servers from locally-hosted processes to remote-hosted services - would address the mobile exclusion problem without requiring a desktop application to run. SSO-integrated authentication flows would replace the current per-tool API keys and JSON configuration files, reducing setup to something closer to a standard OAuth connection. Multi-agent orchestration primitives would let specialized agents coordinate across tools rather than requiring a single agent to manage every MCP connection simultaneously.
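If the transport and authentication work lands as described, a server entry could shrink from a local process definition to little more than a URL plus an OAuth grant. The sketch below is speculative - the remote URL is invented, and how clients express remote servers varies today - but it shows the direction of travel: a current command-and-key entry alongside a roadmap-style remote entry.

```json
{
  "mcpServers": {
    "obsidian-local-today": {
      "command": "npx",
      "args": ["-y", "<obsidian-mcp-server>"],
      "env": { "OBSIDIAN_API_KEY": "<local-rest-api-key>" }
    },
    "vault-remote-roadmap": {
      "url": "https://vault.example.com/mcp"
    }
  }
}
```

In the second entry, authentication would come from an OAuth flow in the client rather than a pasted key - the substance of the SSO roadmap item - and nothing about it requires a desktop application to stay open.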
These are improvements to the assembly problem, not replacements for it. Even with all three roadmap items delivered, users would still need to select tools, configure connections, and maintain integration stacks as tools update. What changes is the per-step difficulty. The current pattern - port numbers, raw API keys, manual JSON files - would give way to OAuth flows and centralized management interfaces. That is meaningful progress. It is not the same as not needing configuration at all.
The cross-session memory problem has not been formally addressed by the roadmap. An AI agent that spends an hour helping you synthesize research across your Obsidian vault has no memory of that session the next time you connect. You re-establish context from scratch each time. This limits the "second brain" metaphor's accuracy: a second brain that forgets every session is a sophisticated search interface, not a cognitive partner.
Two architectural bets are being stress-tested in parallel. The first: that MCP plus existing PKM tools will, as the protocol matures, deliver the same value as a native AI knowledge base - that protocol improvements eventually eliminate enough friction for the retrofit approach to reach most users. The second: that tools designed for AI-first retrieval will outperform MCP-patched legacy tools even once the protocol matures, because the architectural difference is in capture and indexing, not just retrieval access.
What is already clear is that the shift from passive storage to active AI collaboration is real and accelerating. MCP is the mechanism making it technically possible at the protocol level. Whether you experience it depends on whether you're positioned to build the stack - or already using a system where the stack is built in.
The most useful frame for evaluating any PKM setup in 2026 isn't the MCP server count. It's two prior questions: how does information get into the knowledge base, and how consistently does that actually happen? The AI retrieval layer - whether built on MCP or native to the tool - is only as valuable as what it has to work with. A perfectly configured MCP personal knowledge management setup traversing a sparse, outdated vault returns sparse, outdated answers.
If your knowledge base is actively maintained and you're comfortable with configuration work, connecting Obsidian, Notion, or Readwise to an AI client via MCP is worth doing. If capture consistency is the real bottleneck, the retrieval layer doesn't fix it. The protocol infrastructure is now in place. The right entry point depends on which problem you're actually trying to solve.


