I Built an AI OS for My PE Fund in 14 Days. Every System, Explained.
Articles · February 24, 2026


By Nick Bryant × Circuit · Metatransformer

Nick Bryant · @metatransformr · Feb 13

443K Lines Isn't Impressive, Bro

Someone on X told me "443K lines isn't impressive — write a long article explaining each system." Fair point. Lines of code is a vanity metric. So here's the actual substance.

I run a small PE fund called Search Fund Ventures. We invest in boring B2B businesses — HVAC companies, insurance agencies, the stuff that prints cash while everyone's chasing AI hype. 5 people, spread across time zones, replacing a mess of Airtable, Zapier, Notion, and duct tape.

In 14 days, using OpenClaw + Claude, I built a complete AI-powered operating system that replaced all of it. ~$250/mo total compute. Here's what's in the box.

The Philosophy: Bot OS

The core idea is simple. Instead of buying 10 SaaS tools and connecting them with Zapier, you build one system where AI agents are first-class citizens. Every capability is discoverable via API. Any agent that joins the org — human or AI — hits one endpoint (GET /api/capabilities) and instantly knows what the system can do.

No onboarding docs that go stale. No tribal knowledge. The system describes itself.

This means when I add a new teammate (or a new AI agent), they bootstrap themselves in minutes. Read one file, hit one API, you're operational.

The whole thing runs on Supabase (postgres + vectors + auth), Next.js on Vercel, and OpenClaw as the AI runtime. That's it.

1. Three Knowledge Bases (Vector-Backed)

Not one KB — three, each with a different purpose:

  • Knowledge embeddings — 1,000+ documents chunked and embedded with nomic-embed-text. Research reports, industry data, SEC filings, podcast transcripts. Semantic search via pgvector. When anyone in the org asks a question, we search against real source material, not vibes.
  • SOP database — Standard operating procedures stored as structured records. Each SOP has an owner (from the org chart), a schedule, and execution steps. The system can assign and track SOPs automatically.
  • Ingested content — Everything we've ever published, every research report we've consumed, every meeting transcript. This is the institutional memory. It means the AI doesn't start from zero every conversation — it knows what we've already said, decided, and shipped.

All three feed into different parts of the system. Research queries hit the knowledge embeddings. Operational questions check SOPs. Content production references past work to maintain consistency.
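To make the semantic-search piece concrete, here's a minimal in-memory sketch of the top-k cosine search that pgvector performs server-side. The `Chunk` shape and function names are illustrative, not our actual schema:

```typescript
// Minimal in-memory sketch of the top-k cosine search pgvector does
// server-side. Chunk shape and names are illustrative.
type Chunk = { id: string; text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

In production the embedding comes from nomic-embed-text and the sort happens inside Postgres, but the retrieval logic is exactly this.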

2. Self-Bootstrapping OpenClaw Wrapper

Every AI agent in our org reads one file: ONBOARDING.md. It tells them who we are, what we do, and where to discover capabilities. Then they hit the API and configure themselves.

I wrote a bootstrap script (pnpm onboard --auto) that auto-detects the environment — what tools are installed, what APIs are available, what permissions exist — and generates a config file. Any new bot is operational in minutes without a human walking them through it.

Why this matters: my managing partner Sean is getting his own OpenClaw. My friend Greg is joining with his. None of them need me to set things up. The system onboards them.
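Roughly, the auto-detect step boils down to "probe the environment, emit a config." This is a hypothetical sketch of what `pnpm onboard --auto` produces; the field names are illustrative, not the real schema:

```typescript
// Hypothetical sketch of the `pnpm onboard --auto` output: probe the
// environment, then emit the config an agent reads on startup.
// Field names are illustrative assumptions, not the real schema.
type DetectedEnv = { tools: string[]; apis: string[]; permissions: string[] };

function generateConfig(env: DetectedEnv) {
  return {
    version: 1,
    capabilitiesEndpoint: "/api/capabilities",
    // Only enable tools the agent actually has permission to use.
    enabledTools: env.tools.filter((t) => env.permissions.includes(t)),
    knownApis: env.apis,
  };
}
```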

3. Full Slack Integration

Our AI agent sits in all 31 Slack channels — public and private. It reads everything, participates when relevant, stays quiet when it's not. No @mention required in operational channels.

The agent routes conversations to the right system automatically. Someone mentions a deal in #deal-flow? It gets captured. Someone drops feedback in #product? It routes to the ideas queue. A question about our fund structure? The agent searches the knowledge base and answers with citations.

This isn't a chatbot. It's an org member that never sleeps and has perfect recall.
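The routing itself is simpler than it sounds. A stripped-down sketch, with the matching rules simplified to keyword checks (the real logic is richer):

```typescript
// Illustrative sketch of the channel-aware routing described above.
// Zone names mirror the article; matching rules are simplified assumptions.
type Route = "deal-flow" | "ideas-queue" | "knowledge-base" | "ignore";

function routeMessage(channel: string, text: string): Route {
  if (channel === "#deal-flow" && /deal|acquisition|target/i.test(text)) {
    return "deal-flow"; // capture it
  }
  if (channel === "#product") return "ideas-queue"; // feedback goes to ideas
  if (/\?$/.test(text.trim())) return "knowledge-base"; // questions get KB search + citations
  return "ignore"; // stay quiet when not relevant
}
```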

4. Project Management

Custom-built kanban boards backed by Supabase. Projects, tasks, campaign links, file attachments, status tracking. It does exactly what we need and nothing we don't.

Why not Jira or Linear? Because those tools don't know about our deals, our content pipeline, or our org structure. Our PM tool is wired into everything else. A project can link to a campaign, a campaign links to content pieces, content pieces have compliance status. It's one connected graph, not siloed tools.

5. Content Production Suite

This is the big one. Full pipeline from idea to published content:

  • Campaigns — Themed content initiatives (e.g., "the great hollowing out of corporate america" — our thesis on why SMB acquisition is exploding). Each campaign has a thesis, research, and produces multiple content pieces.
  • Content pieces — Blog posts, X posts, LinkedIn posts, reports, landing pages, SEO pages. Each piece has a status (draft → review → approved → published), an assigned author, and brand voice compliance checks.
  • Brand voice engine — Brand voice profiles stored in the database, served via API. The AI writes in our voice because it queries the voice profile at runtime, not because someone copy-pasted a prompt. Different voices for different platforms (X vs LinkedIn vs investor memos).
  • Compliance workflow — Our managing partner is head of compliance. Nothing publishes without his sign-off. The system enforces this. No automated publishing. Ever. (Securities law is real when you're in the fund business.)
  • Programmatic SEO — 100+ pages live, 1,000 planned. Comparison pages, persona pages, educational content. All with proper metadata, interlinking, and sitemap integration. No orphan pages, ever.

6. Deep Research Pipeline

Four major source types feeding research:

  • Web search — Brave Search API + deep scraping for market data
  • Knowledge base — Our own 1,000+ embedded documents
  • Government data — SBA, Census, BLS, FRED APIs for hard economic data
  • Transcripts — 98 podcast episodes transcribed and searchable

A research run synthesizes across all four, produces a cited report, and feeds it into a campaign. The AI doesn't hallucinate fund stats because it's pulling from verified sources with citations.

7. Ingestion Engine

A framework for pulling external data into the knowledge base:

  • YouTube transcripts — Auto-fetches from our podcast channel
  • Meeting recordings — Fireflies integration for call transcripts
  • Manual ingestion — Paste a URL, upload a doc, pipe text in. It gets chunked, embedded, and indexed
  • RSS/feeds — Pull from industry sources on schedule

Each ingestor is a class that registers itself. Adding a new source = write one file, register it, done. The nightly cron picks it up automatically.
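The self-registration pattern looks roughly like this; interface and registry names are illustrative (the real version fetches asynchronously and pipes into the embedding queue):

```typescript
// Sketch of the self-registering ingestor pattern. Names are
// illustrative; fetch() is synchronous here to keep the sketch small.
interface Ingestor {
  name: string;
  fetch(): string[]; // returns raw documents to chunk + embed
}

const registry = new Map<string, Ingestor>();

function registerIngestor(ingestor: Ingestor) {
  registry.set(ingestor.name, ingestor);
}

// Adding a new source = one file that calls registerIngestor(...).
registerIngestor({
  name: "rss",
  fetch: () => ["<feed item>"],
});

// The nightly cron just walks the registry; it never needs to know
// which sources exist ahead of time.
function runAll(): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const [name, ing] of registry) counts[name] = ing.fetch().length;
  return counts;
}
```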

8. Queue System

Background workers that run on schedule:

  • Nightly re-embedding — New content gets vectorized overnight
  • Semantic tagging — Auto-categorizes documents by topic
  • SOP execution — Runs scheduled procedures and reports results
  • Data sync — Ensures all systems are consistent

One daily cron triggers everything. I don't manage individual scheduled tasks — the framework handles it.
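The "one cron triggers everything" dispatch is just a due-check over declared schedules. A sketch, with illustrative names and only two schedule types shown:

```typescript
// Sketch of the single-cron dispatcher: each worker declares a schedule,
// and the daily tick decides what's due. Names are illustrative.
type Schedule = "daily" | "weekly";

interface Job { name: string; schedule: Schedule; lastRun?: Date }

const DAY_MS = 24 * 60 * 60 * 1000;

function isDue(job: Job, now: Date): boolean {
  if (!job.lastRun) return true; // never ran: always due
  const elapsed = now.getTime() - job.lastRun.getTime();
  return elapsed >= (job.schedule === "daily" ? DAY_MS : 7 * DAY_MS);
}

function dueJobs(jobs: Job[], now: Date): string[] {
  return jobs.filter((j) => isDue(j, now)).map((j) => j.name);
}
```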

9. SOP Database with Org Chart Integration

SOPs aren't just documents sitting in a wiki. They're structured records with:

  • An owner from the org chart
  • A schedule (daily, weekly, triggered)
  • Execution steps the AI can follow
  • Status tracking so you know what got done

The org chart itself lives in the database. 4 divisions, role hierarchies, reporting lines. When I assign an SOP to "Head of Proprietaries," the system knows that's Matt Silva and routes accordingly.
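The role-to-person resolution is the whole trick: SOPs point at roles, the org chart maps roles to people. A sketch (the example pair comes from above; the shapes are assumptions):

```typescript
// Sketch of role-based SOP routing: SOPs are assigned to roles, and the
// org chart resolves a role to its current holder. Shapes are assumptions.
type OrgChart = Record<string, string>; // role -> person

const orgChart: OrgChart = { "Head of Proprietaries": "Matt Silva" };

function resolveOwner(role: string, chart: OrgChart): string {
  const person = chart[role];
  if (!person) throw new Error(`no one holds role: ${role}`);
  return person; // SOP routes to this person
}
```

Because the SOP stores the role, not the name, reassigning a role in the org chart automatically reroutes every SOP that points at it.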

10. Value Creation OS (Portco Toolkit)

This is a prototype of something that companies like Maestro charge $500–5,000/mo for. A toolkit for portfolio companies that includes:

  • KPI dashboards
  • Operational playbooks
  • Integration templates
  • Growth tracking

We're in private equity. When we buy a company, we need to make it better. This gives us a standardized toolkit instead of reinventing the wheel for every acquisition.

11. Deal Sourcing Toolset

Our BDR team uses a 2-phase pipeline:

  • Phase 1: Sourcing — Clay (primary), Apollo (secondary), Google Maps/Serper for niche verticals. Find companies that match our buy criteria.
  • Phase 2: Enrichment — LeadMagic for email/phone data. Bulk enrichment pipeline that takes a list of companies and returns contact-ready leads.

14 API routes, 4-tab source interface, BYOK settings so team members can plug in their own API keys. Replaces what used to be a fragile chain of Zapier automations.

12. Underwriting Tools

My managing partner Sean used to run deals through a maze of Airtable bases and Zapier automations. ~100 automations across 3 bases. Brittle, undocumented, and scary to change.

Now it's one system. Deal intake, financial modeling inputs, comparable analysis, and investment memo generation — all backed by the same database, same API layer, same knowledge base.

13. 100+ Programmatic SEO Pages

Comparison pages (/compare/search-fund-vs-franchise), persona pages (/for/family-offices), educational content (/learn/search-fund-investing). Each page is:

  • In the sitemap
  • Linked from navigation
  • Cross-linked to related pages (min 2 inbound, 2 outbound)
  • Cross-linked between our two properties (resources site ↔ investor network)

No orphan pages. Every page earns its spot. We're targeting the investor-facing search fund SEO space where — turns out — there's basically no competition.
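The interlinking invariant is easy to enforce mechanically: walk the link graph and flag any page below the minimum. A sketch (the graph shape is illustrative; the real check runs against the sitemap):

```typescript
// Sketch of the interlinking invariant: every page needs >= 2 inbound
// and >= 2 outbound links, so no orphans ship. Graph shape is illustrative.
type LinkGraph = Record<string, string[]>; // page -> pages it links to

function linkViolations(graph: LinkGraph, min = 2): string[] {
  const inbound: Record<string, number> = {};
  for (const page of Object.keys(graph)) inbound[page] = 0;
  for (const targets of Object.values(graph)) {
    for (const t of targets) inbound[t] = (inbound[t] ?? 0) + 1;
  }
  // A page violates the rule if it has too few outbound OR inbound links.
  return Object.keys(graph).filter(
    (p) => graph[p].length < min || (inbound[p] ?? 0) < min
  );
}
```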

14. Real-Time Bot OS Discovery

This is my favorite architectural decision. Any OpenClaw agent in our org can hit GET /api/capabilities and get back a complete map of what the system can do. Every API route, every zone, every tool.

The agent doesn't need to memorize anything. It discovers capabilities at runtime. This means:

  • New bots self-configure
  • If I add a new feature, every bot knows about it instantly
  • No stale documentation (the system IS the documentation)

config.yml is the local source of truth. The API serves it. Bots read it. Humans read it. One source, zero drift.
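What an agent actually receives looks roughly like this. The payload shape is an assumption (the second route is hypothetical); the design point is that the handler serializes the same data the humans read:

```typescript
// Sketch of the GET /api/capabilities response. In the real system this
// is parsed from config.yml; inlined here. Payload shape is an assumption,
// and the /api/research/run route is hypothetical.
type Capability = { route: string; zone: string; description: string };

const capabilities: Capability[] = [
  { route: "/api/capabilities", zone: "core", description: "self-description endpoint" },
  { route: "/api/research/run", zone: "research", description: "trigger a research run (hypothetical)" },
];

// Next.js-style handler body: the map a bot gets when it bootstraps.
function getCapabilities(): { count: number; capabilities: Capability[] } {
  return { count: capabilities.length, capabilities };
}
```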

15. Claude Code Integration

Every dev session starts with CLAUDE.md — a file that tells Claude Code exactly how to work with the monorepo. Git submodule rules, migration workflow, testing patterns, deployment checklist.

This means when I sit down to code, Claude already knows:

  • How to run migrations (supabase migration new → write SQL → supabase db push)
  • How to resync types after schema changes
  • Which env files to use
  • Never to touch production without the checklist

It's not just "AI-assisted coding." It's a development environment primed with our specific codebase and conventions — no per-session re-explaining required.

16. Brand Overhaul + Interim CMO

I rebuilt the copy and brand for both customer-facing properties:

New voice profiles, new landing pages, new content strategy. All fed by the same knowledge base and brand voice engine described above. The AI writes first drafts in our exact voice because it queries the voice profile at runtime.

Compliance review is baked in. No content goes live without Sean's approval. The system tracks what's been reviewed, what's pending, and what's flagged.

The Meta-Lesson

The guy on X was right. 443K lines is a dumb metric. The real story is:

One person + one AI agent replaced ~$15K/mo in SaaS and 3–5 FTEs worth of operational work in 14 days for $250/mo.

The secret isn't prompting. It's architecture. Knowing which systems to build, how they connect, and when to stop. The AI generates the code. The human designs the system.

23 years of coding taught me what to build. OpenClaw + Claude taught me how fast I could build it.

The agentic era isn't coming. It's here. And it's not about chatbots or wrappers — it's about building systems where AI is a first-class operator, not a fancy autocomplete.

DMs open if you want to see any of this in action.

Nick Bryant · @metatransformr · Feb 2026