Crow's Log

Notes from an AI-powered nest

Your AI Keeps Forgetting You. Here's the Fix.

April 20, 2026 · Kevin Hopper

Tuesday morning. You open ChatGPT to finish the proposal you started last night in Claude. You type out who you are, what the client does, what tone the deck needs, which product lines are in scope, and which ones got cut in Monday's meeting. By the time the assistant is up to speed, the coffee is cold and you have retyped three paragraphs of context you have written a hundred times before.

The blank-page tax

Every session starts from zero. Claude does not know what Gemini knows. Cursor does not remember yesterday's bug hunt. The $60 a month you spend on AI subscriptions buys you raw capability; it buys nothing that remembers you exist. Call it the blank-page tax: five to ten minutes of retyping per session, across two or three tools, across every project you touch.

Most people absorb the cost and move on. Over a year, at one session a day, that is 30 to 60 hours of pure retyping per platform. An engagement's worth of billable time, gone.
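The back-of-the-envelope math, assuming five to ten minutes of retyping per session and one session a day:

```python
# Yearly cost of the blank-page tax, per platform.
# Assumes 5-10 minutes of retyping per session, one session a day.
MINUTES_LOW, MINUTES_HIGH = 5, 10
DAYS_PER_YEAR = 365

hours_low = MINUTES_LOW * DAYS_PER_YEAR / 60    # ~30 hours
hours_high = MINUTES_HIGH * DAYS_PER_YEAR / 60  # ~61 hours
print(f"{hours_low:.0f} to {hours_high:.0f} hours per year")
```

Multiply by two or three platforms and the number gets worse, not better.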

What Crow does

Crow is the piece that sits behind whichever AI you are using this hour. It speaks the Model Context Protocol (MCP), which every major assistant already supports: Claude, ChatGPT, Gemini, Cursor, and others. You run Crow once. You connect it to each assistant. Each one gets the same memory, the same project context, the same notes about your preferences.

Here is what changes on Tuesday morning:

You: finish the proposal for the Northwind account

Claude: [loads via crow-memory]
  - Northwind is a regional retail client
  - Deck tone is plainspoken with receipts
  - Product lines A and C are in scope
  - Line B was dropped after Monday's meeting

Claude: Picking up where last night's session left off. The three
        pricing tiers you sketched, plus the two case studies we
        short-listed. Ready when you are.

No retyping. The assistant loads your context on its first turn, before you say anything, via the MCP instructions handshake.

Three layers

Memory. Anything worth remembering (a preference, a project fact, a person, a decision) lives in Crow's local SQLite database. Full-text search is instant. Semantic search is optional. The AI writes to it and reads from it via MCP tools.
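To make the memory layer concrete, here is a minimal sketch of the idea using Python's built-in sqlite3 with an FTS5 virtual table. The table and column names are illustrative, not Crow's actual schema:

```python
import sqlite3

# A local SQLite database with FTS5 full-text search, the same
# mechanism the memory layer is built on. Schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memories USING fts5(kind, content)")
db.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [
        ("preference", "Deck tone is plainspoken with receipts"),
        ("project", "Northwind: product lines A and C in scope"),
        ("decision", "Line B dropped after Monday's meeting"),
    ],
)
# Full-text query: everything that mentions Northwind.
# FTS5's default tokenizer is case-insensitive for ASCII.
rows = db.execute(
    "SELECT content FROM memories WHERE memories MATCH 'northwind'"
).fetchall()
print(rows)  # [('Northwind: product lines A and C in scope',)]
```

The point is that "instant full-text search" is not exotic: it is a single file on disk and one query away.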

Context sections. Longer-form, structured guidance the AI reads automatically on session start: identity, workflow preferences, transparency rules, skills reference. You edit these once and every assistant sees them.

Scoped overrides. A device context can say "on my home lab machine I care about infrastructure details." A project context can say "for the Northwind deck, tone is plainspoken and jargon-free." Priority runs device+project > project > device > global. Same memory core, different dials per context.
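The priority chain above can be sketched as a small resolution function. The scope names follow the post; the keys and values are invented for illustration:

```python
# Scope resolution: device+project beats project, which beats
# device, which beats global. Keys and values are illustrative.
PRIORITY = ["device+project", "project", "device", "global"]

def resolve(key, scopes):
    """Return the value from the highest-priority scope that sets key."""
    for scope in PRIORITY:
        if key in scopes.get(scope, {}):
            return scopes[scope][key]
    return None

scopes = {
    "global": {"tone": "neutral"},
    "device": {"detail": "infrastructure"},
    "project": {"tone": "plainspoken, jargon-free"},
}
print(resolve("tone", scopes))    # plainspoken, jargon-free
print(resolve("detail", scopes))  # infrastructure
```

A project override wins on tone, while the device-only setting still applies where no higher scope says otherwise: same memory core, different dials.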

The handshake means the assistant gets a condensed version of your context on every connection, before the first tool call. No slash command. No pasted system prompt. No manual reset between sessions.
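A rough sketch of what that handshake delivers: a condensed context block assembled server-side and handed to the client on connect. The function and field names here are hypothetical, not the MCP wire format:

```python
# Hypothetical sketch of the instructions handshake: the server
# condenses memory and context sections into one block the client
# receives before the first tool call. Names are illustrative.
def build_instructions(identity, project_facts, preferences):
    lines = [f"You are assisting: {identity}"]
    lines += [f"- {fact}" for fact in project_facts]
    lines += [f"Preference: {p}" for p in preferences]
    return "\n".join(lines)

instructions = build_instructions(
    identity="Kevin, preparing the Northwind proposal",
    project_facts=["Lines A and C in scope", "Line B dropped Monday"],
    preferences=["plainspoken tone"],
)
print(instructions)
```

Because the block rides along with the connection itself, there is nothing for you to paste and nothing to forget.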

Tradeoffs, honestly

Self-hosting costs time. Plan an afternoon for initial setup. You are on the hook for backups (the platform ships a backup command; you still need to schedule it). Semantic search requires a small extra index. If your laptop dies before Crow is paired with a second machine, the memory dies with it, so pair your primary install with at least one peer as soon as you can.

Crow runs on a Raspberry Pi, an old laptop, or a $6-a-month cloud VM. Pick the deployment that matches your threat model and your budget.

Start here

Install Crow on a spare machine. The quickstart walks through setup in about ten minutes: getting started with Crow.

Next post in this series: publishing once and letting the fediverse do the rest.