Toqi Tahamid Sarker

claude-mem: The AI That Takes Notes While You Code

One problem that grinds down every Claude Code session: you spend the first few minutes re-explaining what you were doing yesterday. What files matter, what decision you made last week and why, what that weird bug was that you half-fixed. The AI has no memory. You're the memory.

claude-mem flips that. It puts a second AI in your session whose only job is to watch and take notes.

How It Works

The pitch is compact: One AI takes notes about what another AI does.

When you install claude-mem, an observer runs alongside your Claude Code session. It watches what happens — file edits, decisions, bugs found, architectural choices — and writes timestamped, searchable observations to a persistent store. The next time you open a session, that context loads automatically.

Install it with:

/plugin marketplace add thedotmack/claude-mem && /plugin install claude-mem

What Gets Captured

The observer categorizes observations by type: decisions, bugfixes, features, discoveries. Each record includes before-and-after context so you can reconstruct not just what changed, but why.
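As a rough mental model (this is not claude-mem's actual schema — the field names and class here are purely illustrative), an observation record of this kind might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: claude-mem's real storage format is not documented here.
@dataclass
class Observation:
    type: str        # e.g. "decision", "bugfix", "feature", "discovery"
    file: str        # file the observation is about
    summary: str     # what changed
    rationale: str   # why it changed -- the before-and-after context
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

obs = Observation(
    type="decision",
    file="auth.ts",
    summary="Switched token refresh to a rotating scheme",
    rationale="Fixed refresh tokens were replayable after logout",
)
```

The point of keeping a `rationale` alongside the `summary` is exactly the "why, not just what" property described above.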

You can query it like a mini search engine:

type:decision file:auth.ts

or search by concept across everything it's seen.
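The `key:value` query style above is easy to picture as a filter over stored records. Here is a minimal sketch (the query syntax comes from the post; the storage layout and matching logic are my assumptions, not claude-mem's implementation):

```python
# Illustrative: filtering observations with claude-mem-style "key:value" terms.
def parse_query(q):
    """Split 'type:decision file:auth.ts' into {'type': 'decision', 'file': 'auth.ts'}."""
    return dict(term.split(":", 1) for term in q.split())

def search(observations, q):
    """Return observations matching every key:value filter in the query."""
    filters = parse_query(q)
    return [o for o in observations
            if all(o.get(k) == v for k, v in filters.items())]

notes = [
    {"type": "decision", "file": "auth.ts", "summary": "rotate refresh tokens"},
    {"type": "bugfix", "file": "db.py", "summary": "close pool on shutdown"},
]
results = search(notes, "type:decision file:auth.ts")
```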

Token Efficiency

The thing I appreciate most is the progressive disclosure model. Sessions start with lightweight summaries — somewhere between 40 and 2,100 tokens depending on what's relevant. Full observations load on demand, at roughly 850 tokens each. That keeps the context window from getting bloated with history you don't need right now.

This is actually a hard problem. Naive memory systems dump everything into the prompt and you burn half your context before writing a line of code. Claude-mem's design treats memory as something to fetch selectively, not something to pre-load wholesale.
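The selective-fetch idea can be sketched in a few lines. Everything below is illustrative — the store, function names, and per-item costs are assumptions, not claude-mem's API — but it shows the shape: keep a cheap index of summaries in the prompt, and pay for a full observation only when the session asks for it.

```python
# Illustrative sketch of progressive disclosure, not claude-mem's API.
# A store mapping observation IDs to (cheap summary, expensive full text).
STORE = {
    "obs-1": ("auth.ts: switched to rotating refresh tokens",
              "Full record: decision, rationale, diff context..."),
    "obs-2": ("pipeline.py: cache preprocessed shards to disk",
              "Full record: feature, benchmark numbers, caveats..."),
}

def session_preamble():
    """Load only lightweight summaries at session start."""
    return "\n".join(f"[{oid}] {summary}" for oid, (summary, _) in STORE.items())

def expand(oid):
    """Fetch one full observation on demand, paying its cost only if needed."""
    return STORE[oid][1]

print(session_preamble())   # cheap: one line per observation
print(expand("obs-2"))      # detailed record, loaded only when requested
```

The contrast with a naive system is that the prompt never contains `expand()` output for observations nobody asked about.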

Why This Matters for Long Projects

Research code lives for months. I have training scripts, data pipeline utilities, and evaluation tools that I return to sporadically. Every time I come back, I've forgotten which hyperparameter decisions I tested and discarded, or why a particular preprocessing step is there. Normally I'm reading git blame and old commit messages trying to reconstruct intent.

A system that automatically captures the reasoning behind changes — not just the changes themselves — would cut that archaeology time significantly.

What's Coming

The team is working on something called RAD (Real-Time Agent Data), framed as an open standard for AI memory — the way RAG standardized retrieval, RAD would standardize the capture of live agent context. Whether that gains traction beyond claude-mem itself is an open question, but the underlying problem is real and largely unsolved.

Verdict

Claude-mem is a sharp solution to a real friction point in AI-assisted development. The observer pattern is elegant — it doesn't change how you work, it just watches and writes things down. For anyone doing extended work across multiple sessions, it's worth trying.

The GitHub repo is open source, and docs are at docs.claude-mem.ai.

Posted 15th April 2026



Tags: claude-code, tools, productivity, ai, memory
