You Don’t Need SaaS — The $0.10 System That Replaced My AI Workflow

📺 Original Video: You Don’t Need SaaS. The $0.10 System That Replaced My AI Workflow by Nate B Jones (AI News & Strategy Daily)

📅 Duration: 30:16 · Published: March 2, 2026


TL;DR

  • Your AI tools have siloed memory — Claude doesn’t know what you told ChatGPT, and none of them talk to each other. You’re stuck re-explaining yourself constantly.
  • Nate built an Open Brain: a Postgres database + vector embeddings + MCP server that any AI can plug into. Type a thought in Slack, 5 seconds later it’s searchable by meaning from Claude, ChatGPT, Cursor, anywhere.
  • Cost: $0.10-0.30/month. Setup: 45 minutes, no coding required. You own the infrastructure, no SaaS middlemen.
  • The real win isn’t the tool — it’s context infrastructure that compounds over time while everyone else starts from zero every chat.

The Memory Problem Nobody’s Talking About

▶00:00 Your AI agent probably doesn’t have a brain. Not a real one, anyway. It can’t reliably read and think through context you’ve built over months. Nate’s been evangelizing second brain systems (Notion, Obsidian, Zapier, N8N), but there’s a missing piece: the agent-readable part.

▶01:46 Autonomous agents just went mainstream. OpenClaw hit 190K GitHub stars and spawned 1.5 million agents in weeks. OpenAI hired Peter Steinberger (OpenClaw’s inventor). Anthropic’s building one. We need memory systems that agents can actually use.

▶02:07 Here’s the bottleneck: AI output quality depends entirely on your ability to specify. Nate’s framework goes from prompt craft → context engineering → intent engineering → specification engineering. The 10x people? They built context infrastructure that does the heavy lifting before they type a single prompt.

▶03:02 The best prompt in the world can’t fix an AI that doesn’t know what you’ve been working on, what you’ve tried, your constraints, your key people, or what you decided last Tuesday. And agents need that context too.

The Context Transfer Tax

▶03:44 Every new chat starts from zero. Every time you switch from Claude to ChatGPT to Cursor, you lose things. Think about how much of your prompting is just asking AI to catch up on what you already know. You’re burning your best thinking on context transfer instead of real work.

▶04:12 A Harvard Business Review study found digital workers toggle between apps nearly 1,200 times a day. Each switch seems small, but collectively it’s destroying attention. Here’s what people miss: memory architecture determines agent capabilities way more than model selection.

Why Platform Memory Is a Trap

▶05:07 Sure, Claude has memory now. ChatGPT has memory. Grok has memory. Google has memory. But think about what they don’t give you.

▶05:34 Claude’s memory doesn’t know what you told ChatGPT. ChatGPT’s memory doesn’t follow you into Cursor. Your phone app doesn’t share context with your coding agent. Every platform built a walled garden and none of them talk to each other.

▶06:04 What you’ve really got is five separate piles of sticky notes on five separate desks. That’s not memory.

The Vendor Lock-In Game

▶07:00 These systems are designed to create lock-in. You spend months building up history with ChatGPT, then you want to try the latest Claude model — boom, you lose all that context. Not because the new model is worse, but because your context is trapped in the old one.

▶07:41 Big corporations are betting that if they can trap you with memory, you’ll only use their agents. They get to keep you, your attention, and your dollars forever. But your knowledge shouldn’t be a hostage to any single platform.

▶08:04 Memory is engaging. Feeling known is engaging. It’s smart product strategy. But you don’t have to go along with it.

The Human Web vs. The Agent Web

▶08:46 You might think, “I’ll just connect my Notion to OpenClaw, problem solved.” Nope. There’s a structural mismatch most people haven’t noticed.

▶09:20 The internet is forking. There’s the human web (fonts, layouts, pretty pages) and the agent web (APIs, structured data, machine-readable). Your Notion workspace is built for human eyes — pages, databases, toggles, cover images. Beautiful for you, useless for an AI agent that needs to search by meaning, not folder structure.

▶10:01 Apple Notes is locked inside Apple’s ecosystem. Evernote has a decade of clutter with no semantic structure. Your bookmarks are a graveyard. These tools were built in the 2010s for humans to browse. AI features got bolted on later.

▶10:44 If you build infrastructure for the agent web, you control it. Your agents can plug in, your chatbots can plug in, but you manage it. You’re not dependent on ChatGPT memory or a SaaS company not changing a setting.

The Open Brain Architecture

▶12:03 Here’s what Nate’s proposing: instead of storing thoughts in an app designed for humans, store them in infrastructure designed for anything. A real database, vector embeddings that capture meaning (not just keywords), and a standard protocol any AI can speak.

▶12:30 This works because of MCP (Model Context Protocol). Started as Anthropic’s open-source experiment in November 2024, now it’s the HTTP of the AI age. The USB-C of AI. One protocol, every AI, your data stays yours.
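To make "one protocol, every AI" concrete: MCP-compatible clients such as Claude Desktop discover servers through a small JSON config. The server name, package, and env vars below are hypothetical placeholders, not the actual names from Nate's guide:

```json
{
  "mcpServers": {
    "open-brain": {
      "command": "npx",
      "args": ["-y", "open-brain-mcp"],
      "env": {
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "<service-key>"
      }
    }
  }
}
```

Point Claude, Cursor, or any other MCP client at the same server entry and they all read from the same brain.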

Why Postgres?

▶12:57 Your thoughts live in a Postgres database you control. Not some proprietary format. Postgres is the most boring, battle-tested tech you can imagine. It’s not exciting. It’s not deprecating. It’s not VC-backed chasing unicorn status. It’s just a standard way of storing data. You want that boringness because everything else needs to plug into it.

▶13:28 When you vectorize it properly, every thought gets converted into a vector embedding — a mathematical representation of what it means, immediately AI-readable. Ask “what was I thinking about career changes last month?” and it finds your note about considering consulting or product work, even if you never used the word “career.” That’s semantic search, a whole different universe from Control-F.
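A minimal sketch of what the underlying storage could look like with pgvector. Table and column names are illustrative guesses, not the guide's actual schema; 1536 dimensions matches common embedding models:

```sql
-- One-time: enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE thoughts (
  id         bigserial PRIMARY KEY,
  content    text NOT NULL,
  metadata   jsonb,            -- people, topics, type, action items
  embedding  vector(1536),     -- semantic representation of content
  created_at timestamptz DEFAULT now()
);

-- "Search by meaning": rank by cosine distance to the query embedding.
-- $1 would be the embedding of "career changes last month".
SELECT content, created_at
FROM thoughts
ORDER BY embedding <=> $1
LIMIT 5;
```

The `<=>` operator is pgvector's cosine-distance comparator, which is what lets the note about consulting surface even though "career" never appears in it.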

How It Works

▶14:02 The workflow: you type into a Slack channel, “I was talking with Sarah. She’s thinking about leaving her job to start a consulting business.” Five seconds later, the system has:

  • Stored the raw text
  • Generated a vector embedding of the meaning
  • Extracted metadata (people, topics, type, action items)
  • Filed it all in a real database

Now any AI you’re working with can see it.
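For the Sarah example, the stored record might look roughly like this. Field names and category values are illustrative, not the guide's actual schema:

```json
{
  "content": "I was talking with Sarah. She's thinking about leaving her job to start a consulting business.",
  "embedding": [0.012, -0.038, "… ~1500 more floats …"],
  "metadata": {
    "people": ["Sarah"],
    "topics": ["career transition", "consulting"],
    "type": "person_note",
    "action_items": []
  }
}
```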

▶14:30 In Claude working on a coaching framework? “Search my brain for notes about people considering career transition.” Found it. In ChatGPT drafting an email? Same search, same result. In Cursor building a tool? Hit the MCP server, it’s right there. One brain, every AI, persistent memory that never starts from zero.

The Two Parts

▶15:02 Capture: You type a thought, it hits a Supabase edge function that generates an embedding and extracts metadata in parallel, stores both in Postgres with pgvector, replies in thread with confirmation. Whole round trip: under 10 seconds.
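The capture step can be sketched language-agnostically. The real build uses a Supabase edge function, but the core idea is the same: run embedding generation and metadata extraction concurrently so the round trip stays under ~10 seconds. The function names here are illustrative stubs, not the guide's code:

```python
import asyncio

# Stubs standing in for real services: in the actual build these would call
# an embedding API and an LLM metadata extractor.
async def generate_embedding(text: str) -> list[float]:
    return [0.0] * 4  # a real embedding has hundreds of dimensions

async def extract_metadata(text: str) -> dict:
    return {"people": [], "topics": [], "type": "note", "action_items": []}

DB: list[dict] = []  # stands in for the Postgres thoughts table

async def capture(text: str) -> dict:
    # Run both steps in parallel rather than sequentially.
    embedding, metadata = await asyncio.gather(
        generate_embedding(text),
        extract_metadata(text),
    )
    row = {"content": text, "embedding": embedding, "metadata": metadata}
    DB.append(row)  # the edge function would INSERT into Postgres here
    return row

row = asyncio.run(capture("Sarah is considering a consulting business."))
print(row["metadata"]["type"])  # note
```

The parallelism is the design choice worth copying: embedding and metadata extraction are independent, so there's no reason to wait on one before starting the other.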

▶15:19 Retrieval: An MCP server connects to any compatible AI client. You get three tools:

  • Semantic search (find thoughts by meaning)
  • List recent (browse what you captured this week)
  • Stats (see your patterns)

Hit it from Claude, Claude Code, ChatGPT, Cursor, VS Code, anywhere.
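The logic behind the three tools can be sketched with an in-memory toy. The real version serves these over MCP against Postgres/pgvector; the tiny 2-dimensional embeddings and function names here are illustrative only:

```python
import math
from datetime import datetime, timedelta

# Toy in-memory brain; the real tools query Postgres via the MCP server.
BRAIN = [
    {"content": "Sarah may leave her job for consulting",
     "embedding": [1.0, 0.0], "ts": datetime(2026, 3, 1)},
    {"content": "Ship the onboarding dashboard",
     "embedding": [0.0, 1.0], "ts": datetime(2026, 2, 10)},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_embedding, k=5):
    """Tool 1: find thoughts by meaning, not keywords."""
    ranked = sorted(BRAIN,
                    key=lambda t: cosine(query_embedding, t["embedding"]),
                    reverse=True)
    return [t["content"] for t in ranked[:k]]

def list_recent(days=7, now=datetime(2026, 3, 2)):
    """Tool 2: browse what you captured this week."""
    cutoff = now - timedelta(days=days)
    return [t["content"] for t in BRAIN if t["ts"] >= cutoff]

def stats():
    """Tool 3: surface simple patterns."""
    return {"total_thoughts": len(BRAIN)}

print(semantic_search([0.9, 0.1], k=1))
# ['Sarah may leave her job for consulting']
```

A query vector near `[1.0, 0.0]` retrieves the Sarah note even though the query shares no keywords with it, which is the whole point of semantic search over Ctrl-F.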

▶15:45 The companion guide walks through the setup. Copy-paste, no coding, about 45 minutes. Nate tested it with someone who has zero coding experience. She built it in 45 minutes. Total running cost on free tiers of Slack and Supabase: $0.10 to $0.30/month for ~20 thoughts a day.

Why This Matters Strategically

▶16:22 We’re in the middle of a massive shift in how AI integrates into daily work. Models keep getting better at a terrifying pace. Opus 4.6 shipped weeks ago. The agent market is probably growing triple digits this year. Three-person engineering teams routinely outproduce teams 10x their size.

▶17:05 This is showing up in economy-wide metrics. Erik Brynjolfsson wrote in the Financial Times that US productivity grew ~2.7% in 2025 — double the decade average. He attributed a lot of that to AI agents.

▶17:27 But AI adoption isn’t the same everywhere. If you’re just talking with a chatbot, you’re not really adopting AI workflows. The people getting outsized results aren’t depending on better models — they’re restructuring how they work with AI as a primary collaborator. But you can’t collaborate with something that has no memory of you.

The Compounding Advantage

▶17:55 Think about the difference:

Person A opens Claude, spends 4 minutes explaining their role, project, constraints, decision. Gets a good answer.

Person B opens Claude. It already knows her role, active projects, constraints, team members, and last week’s decisions because it all lives in Open Brain via MCP. She asks a question, gets an answer informed by six months of context.

▶18:27 Want to switch to ChatGPT for a different perspective? Person B gets a different model but the same brain, same context, same answer quality. Every tool has the full picture. That advantage keeps compounding. Every thought captured makes the next iteration better. Every decision, every person noted, every insight saved becomes another node in a growing knowledge graph.

▶19:00 Person A starts from zero every time. The gap between “I use AI sometimes” and “AI is embedded in how I think and work” is the career gap of this decade. It comes down to memory and context infrastructure.

Building on Top

▶20:01 MCP servers aren’t just for retrieval. You can write directly into the brain from anywhere — Claude on the phone, ChatGPT on desktop, Claude Code in terminal, a messaging app. Any MCP-compatible client becomes both a capture point and a search tool.

▶20:42 Think about what you can build on top: a dashboard visualizing your thinking patterns over time, a daily digest surfacing forgotten ideas based on current work. You don’t need to code it — just ask your AI to retrieve from the MCP server and build something.

Limitations and Habits

▶21:17 Honesty time: the metadata extraction isn’t always perfect. The LLM guesses with limited context and will sometimes misclassify a thought or miss a name. Doesn’t matter much because semantic embeddings handle the heavy lifting. Semantic search works even when metadata is off.

▶21:40 The one real requirement: you actually have to use it. The system compounds. Every thought captured makes the next search smarter and the next connection more likely to surface. But it needs input. You need to build the habit.

The Four Prompts

▶22:02 Nate includes four prompts to cover the full lifecycle:

  1. Memory Migration: Extracts everything your AI already knows about you (Claude’s memory, ChatGPT’s memory, wherever) and saves it into Open Brain. Every other AI you connect starts with that foundation instead of zero.

▶22:44 2. Open Brain Spark: An interview prompt that discovers how the system fits your specific work. Asks about your tools, decisions, re-explanation patterns, key people. Generates a personalized list of what you should capture regularly.

▶23:16 3. Quick Capture Templates: Five sentence starters optimized for clean metadata extraction. Decision capture, person note, insight capture, meeting debrief — each designed to trigger the right classification in your processing pipeline.

▶23:49 4. Weekly Review: End-of-week synthesis across everything you captured. Clusters by topic, scans for unresolved action items, detects patterns across days, finds connections you missed. Five minutes on a Friday becomes more valuable every week as your Open Brain grows.

What It Feels Like When It Works

▶24:12 When you get the Postgres database set up and start using it, something happens that’s hard to describe until you experience it. Your AI in every part of the system starts to know you. Not in the creepy corporate surveillance way — in the “hey we were thinking about this last week and it’s relevant to what you’re asking me now” way. The way a great colleague remembers what matters.

▶25:00 Every AI you use gets better. You’re less afraid of trying a new AI because you can just plug it into MCP and it has the context.

The Bigger Picture

▶25:15 Nate built his original second brain guide before the agent revolution went mainstream, only ~6 weeks ago. It was useful for humans and solved a cognitive problem. But once agents exploded, what we need is a second brain system that’s more foundational: something both we and our agents can reliably read from, a system that isn’t SaaS-controlled, isn’t proprietary, and is friendly to open-source LLMs.

▶26:13 Two benefits: the agent can read it, and the human-readable part gets cleaner. You get downstream benefits you didn’t get when thinking about the system from only a human perspective.

▶27:01 If you’re willing to get slightly technical and follow a clean step-by-step tutorial, you get a future-proofed system that unlocks the human benefit of touching any AI system in the future without additional effort.

Context Engineering for Humans

▶27:27 One of the larger lessons Nate’s been meditating on: AI is forcing a clarity of thought in our work and lives that has tremendous human benefit. Tobi Lütke (Shopify CEO) said a lot of corporate politics amount to bad human context engineering.

▶28:05 We need extraordinary clarity to work with AI agents. When we develop that clarity through foundational memory architectures — good databases, clean MCP servers — we get the benefit of cleanly working with that memory system anywhere. We do good context engineering for our human brains when we build the right context engineering for AI.

Getting Started

▶29:06 Open Brain adds that foundational layer, not by replacing what you built but by giving it infrastructure underneath. A database, a protocol, your thoughts, every AI you’ll ever use. You can build it in a morning over coffee this weekend.

▶29:32 If you already built a second brain, Nate’s including a special migration guide so you don’t lose the thoughts you’ve been capturing and can get them into a more agent-readable system.

▶29:49 Don’t be afraid of the slightly technical parts. You should be able to show this video to an AI and say “help me build this” and it should be able to do it.
