Two Founders, 19 AI Agents, Zero Employees: How We Actually Built NUVC
340K lines of code. 88 backend services. 212 database migrations. Built by two non-technical founders from Melbourne and an AI agent team with names, roles, and a wellness coach. Here's exactly how it works.
People keep asking me how two non-technical founders built a 340,000-line venture intelligence platform. The honest answer is: we didn't build it alone. We built a team.
Not a team of developers. A team of AI agents.
19 of them, to be precise. Each has a name, a role, a personality, and real responsibilities. They write our research, score pitch decks, enrich investor data, screen deals, publish blog posts, and monitor security. Four of them exist solely to protect the people we serve.
This isn't a metaphor. This is how NUVC actually operates. Here's the full story.
What "AI-First" Actually Means
Most companies that call themselves "AI-first" mean they bolted a ChatGPT wrapper onto an existing product. That's AI-enhanced. It's not AI-first.
AI-first means your team IS AI. The agents don't assist human workers — they are the workers. Humans set direction, make judgment calls, and hold relationships. AI handles everything that can be systematised.
At NUVC, that looks like this:
- 2 human founders — Tick (CEO/CTO, CQF-certified, learnt AI while breastfeeding in 2019, vibe coding since January 2025) and Duan (Creative Director, top-tier brand expert across China and Australia). We set strategy, design the experience, and make the calls that require human judgment
- 19 AI agent team members — each with a defined role, tools, skills, and persistent context. They do the work
- 0 employees — not because we're cheap, but because the work doesn't require human hands. It requires human minds directing AI systems
The Real Numbers (Pulled from the Codebase Today)
Most "AI startup" stories are vague about what was actually built. Here's what exists in production right now:
- 340,000 lines of code — 240K Python (backend), 70K TypeScript (frontend), 29K SQL (database)
- 88 backend services — scoring engines, LLM providers, entitlement systems, cryptography, analytics, engagement tracking
- 212 database migrations — that's how many times the schema evolved. Real software, real iteration
- 85 frontend pages — founder dashboard, investor portal, family office cockpit, reports, matching, events, blog
- 12 AI agent files — the core pipeline: extraction, enrichment, integrity, scoring, matching, benchmarking, intelligence, feedback
- 18 intelligence layer modules (12,297 lines) — deal lens, thesis matching, portfolio fit, macro context, fund document extraction, competitive intelligence, score explainability
- 15 middleware layers — security, auth, rate limiting, CORS, CSP headers, logging
- Dual LLM providers — OpenAI GPT-4 + Anthropic Claude with automatic failover
- 3,500+ verified investors with vector embeddings for semantic matching
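The dual-provider setup above comes down to a failover wrapper around two LLM clients. A minimal sketch, assuming nothing about NUVC's actual code — the `call_with_failover` helper and the stub providers are illustrative, and a real version would wrap the OpenAI and Anthropic SDK clients:

```python
from typing import Callable

def call_with_failover(
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
    prompt: str,
) -> str:
    """Try the primary LLM provider; on any error, retry on the fallback."""
    try:
        return primary(prompt)
    except Exception:
        # Primary (e.g. GPT-4) errored or timed out — fall back (e.g. Claude).
        return fallback(prompt)

# Stub providers stand in for real OpenAI / Anthropic clients.
def flaky_primary(prompt: str) -> str:
    raise TimeoutError("provider unavailable")

def backup(prompt: str) -> str:
    return f"backup answered: {prompt}"

print(call_with_failover(flaky_primary, backup, "score this deck"))
# → backup answered: score this deck
```

Production versions usually add retries, per-provider timeouts, and logging of which provider served each request, but the control flow is this simple.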
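Semantic matching over investor embeddings, as listed above, is at heart a nearest-neighbour search by cosine similarity. A toy sketch with hand-made 3-dimensional vectors — real embeddings would be high-dimensional, model-generated, and queried from a vector store rather than a dict:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy investor embeddings (illustrative names, not real funds).
investors = {
    "DeepTech Fund": [0.9, 0.1, 0.0],
    "Consumer Fund": [0.1, 0.9, 0.0],
}
deck = [0.8, 0.2, 0.1]  # embedding of a founder's deck

best_match = max(investors, key=lambda name: cosine(deck, investors[name]))
print(best_match)  # → DeepTech Fund
```

At 3,500+ investors this brute-force loop is still fast; at much larger scale you'd reach for an approximate-nearest-neighbour index.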
Every line was written with AI coding tools. But "written with AI" doesn't mean "generated and shipped." It means I described what I needed, reviewed what was produced, tested it, and iterated. The AI is the implementation team. I'm the architect.
And before we wrote a single line of code, we did the homework: 70+ meetings with VC GPs to understand mandates and thesis. 50+ meetings with LPs to understand allocation decisions. Piloted with 47 VC users to learn what investors actually need versus what we assumed. Screened 600+ Startmate applicants in 4 hours in 2024 — which proved the AI scoring model works at volume.

Our scoring lenses are calibrated against 180+ real VC deal memos and trained on 500+ pitch decks — successful raises, failed raises, and unicorn decks — to understand what actually separates fundable from unfundable. Not textbook criteria. The actual reasoning investors use when they write cheques. The code is AI-generated. The product decisions aren't.
Meet the Team
Every agent has a name. Not for marketing — for accountability. When something breaks in the scoring pipeline, I need to know which agent's logic failed. When a blog post contains an unverified claim, I need to trace it to the agent responsible.
Names also change how you think about quality. "The extraction module produced bad output" feels like a bug. "Max produced bad output" feels like something that needs to be fixed at the source.
Engineering
- Sam (Backend Engineer) — FastAPI, Supabase, task queue. The infrastructure agent
- Elle (Frontend Engineer) — Next.js, React, Tailwind. Every page you see
- Max (AI Engineer) — LLM pipelines, embeddings, scoring models. The brain
- Thomas (Data Scientist) — Schema analysis, feature engineering, ML models
- Atlas (API Platform) — REST API, webhooks, developer experience
Product
- Sergio (Product Manager) — Strategy, user journeys, feature prioritisation
- Aria (Founder PM) — Scoring experience, matching, the lens founders see
- Kai (Emerging VC PM) — Deal screening, pipeline, IC review for emerging VCs
- Ro (Family Office PM) — Fund allocation, GP scoring, wealth intelligence
Growth
- Sara (Growth Marketer) — Email campaigns, SEO, content strategy
- Quinn (Content Lead) — 25 published articles and counting
- Jin (Events & Community) — Ecosystem event curation, partner network
- Scout (Data Enrichment) — Profile enrichment, web research, data quality
- Reid (AI Academy Coach) — 10-class curriculum, voice coaching, founder growth
Safeguards — The Part I'm Most Proud Of
Four agents whose only job is protecting the humans we serve:
- Joy (Founder Wellness Coach) — Mental health check-ins, resilience frameworks, milestone celebrations. A founder who just scored 4.2 doesn't need another metric. They need someone to say: "Your score doesn't define you. Your next move does."
- Themis (Compliance & Legal) — Privacy Act, data governance, terms of service. Named after the Greek titan of justice. Every data decision runs through Themis
- Morgan (Investor Relations) — LP reports, fund performance, compliance docs. Trust is built through transparent reporting
- Zuri (Security Manager) — OWASP compliance, auth, middleware stack. 15 security layers deep
We didn't add safeguard agents because it makes good PR. We added them because AI that serves humans must also protect them. If you're scoring someone's life's work and telling them it's a 4.2, you have a responsibility to handle that with care.
What It Actually Costs
A traditional startup with equivalent capabilities would need roughly:
- 2 senior backend engineers — $300K+/yr
- 1 senior frontend engineer — $150K+/yr
- 1 AI/ML engineer — $200K+/yr
- 1 data scientist — $150K+/yr
- 1 product manager — $150K+/yr
- 1 content marketer — $100K+/yr
- 1 security specialist (fractional) — $50K+/yr
That's $1.1M+ per year in salaries alone.
This isn't theoretical for me. In 2024, I spent $300K on a 7-person team to build the first MVP. It didn't ship. The coordination overhead, communication gaps, and distance between vision and implementation burned through capital without producing what I needed.
Today, NUVC's entire AI infrastructure runs for $2K-5K per month. The 340K lines of production code that the $300K team couldn't deliver — two founders and 19 AI agents built it in months.
This isn't an argument against hiring. It's proof that the minimum viable team has fundamentally changed. Two founders with the right AI tools can build what used to require a 10-person engineering team.
What AI Agents Can't Do
I'm not naive about the limitations:
- Conviction calls. An agent can score a deck 7.2. It cannot tell you whether this founder has the fire to survive 10 years. That's a human read
- Relationships. Investors back people, not decks. No agent replaces the trust built in a 45-minute call
- Ethical grey areas. When a founder's score drops and they're clearly struggling, Joy can flag it. A human decides how to respond
- Design intuition. Duan's creative direction — the feel of the product, the emotional weight of how we present a score — can't be prompted into existence
- Strategic pivots. Agents optimise within the system. Humans decide when to change the system entirely
The pattern: AI handles breadth. Humans handle depth. AI scores 1,000 decks. A human decides what that score means for this founder at this moment.
For Non-Technical Founders Considering This Path
If you're reading this and thinking "I could never do that" — that's exactly what I thought 18 months ago.
- You don't need to code. You need to describe what you want precisely. That's a product skill, not a technical one
- Start with Cursor. AI-native code editor. Write what you want in English. 80% of the early build happens here
- Graduate to Claude Code when complexity hits. Multi-file reasoning, architecture decisions, deep debugging
- Design first. Figma before code. A product that looks professional gets taken seriously
- Find mentors. The questions you can't Google are the ones that matter most
- Ship in 8 weeks. Not perfect. Just real. NUVC v1 scored one deck with GPT-4 and returned a number. Everything else grew from that
The stack: Figma (design) + Cursor (first build) + Claude Code (complexity) + Supabase (database) + Vercel (frontend) + Fly.io (backend). Under $100/month to start.
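That v1 — one deck in, one number out — is smaller than it sounds. A hypothetical sketch of the shape, not NUVC's actual code: the LLM call is injected so it can be stubbed, and `extract_score` is an illustrative helper for pulling a number out of a free-text model reply:

```python
import re
from typing import Callable

def extract_score(llm_reply: str) -> float:
    """Pull the first number out of a model's free-text reply."""
    match = re.search(r"\d+(?:\.\d+)?", llm_reply)
    if match is None:
        raise ValueError(f"no score found in: {llm_reply!r}")
    return float(match.group())

def score_deck(deck_text: str, ask_llm: Callable[[str], str]) -> float:
    # ask_llm would wrap a real GPT-4 call; here it's injected for testing.
    prompt = f"Score this pitch deck from 0 to 10. Reply with a number.\n\n{deck_text}"
    return extract_score(ask_llm(prompt))

# Stubbed model reply — a real build would pass a function that calls the API.
print(score_deck("We make robot baristas.", lambda p: "I'd score this 7.2 out of 10."))
# → 7.2
```

Everything after that — multiple lenses, benchmarking, explainability — is iteration on this loop: build a prompt, call a model, parse the answer.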
What's Next
We're adding more agents, not fewer. Every new vertical (family offices, fund-of-funds), every new feature (fund document extraction, macro context), becomes a new agent or intelligence module.
The endgame: every founder who uploads a deck gets the same quality of analysis a $50M fund gets from their in-house team. And every emerging investor gets the same depth that a Tier 1 VC has.
Two humans setting direction. A growing team of AI agents doing the work. And four of them making sure nobody gets hurt along the way.
The People Behind the AI
AI-first doesn't mean humans don't matter. It means the right humans matter even more.
Jenks Guo (Advisor) — frontend architecture and AI agentic orchestration. Jenks taught us how to structure multi-agent systems that actually work in production. The difference between an AI demo and an AI product is the kind of guidance he brought.
184 Wade alumni — the Waders, from the Wade Institute of Entrepreneurship at the University of Melbourne. They shared everything: VC process, deal systems, conviction frameworks, the pattern recognition that only comes from sitting across the table from hundreds of founders. That collective intelligence is baked into every scoring lens on this platform. Special thanks to the VCCatalyst programme alumni, teachers, and mentors.
The ecosystem that believed early — AirTree Ventures Explorer Program, Startmate First Believers, Melbourne Connect, The Startup Network, LaunchVic. These programs backed us before we had traction, before we had revenue, before anyone knew if two non-technical founders could build something real with AI tools. They were right.
Meet the full team — every agent, their role, their skills, and what they actually do.
Every number in this article was pulled directly from the NUVC codebase on 20 March 2026. The code is real, the agents are real, and the zero-employee count is real. Questions? Find me on LinkedIn.