Most VCs Using AI Are Doing It Wrong
Everyone now has access to ChatGPT, Claude, and Gemini. But the gap in outcomes between investors using these tools is widening — not closing. Here's what separates the ones getting real leverage from the ones still copy-pasting decks.
I'm going to say something that will annoy a lot of people in this industry.
The AI tools you're using — Claude, ChatGPT, Gemini — are not the problem.
You are.
Not because you're unsophisticated. Not because you don't understand the technology. But because almost everyone in venture is making the same fundamental mistake with these tools — and it's costing them hours they'll never get back, and signal they'll never see.
I've spent the last two years building AI systems specifically for deal flow — extraction pipelines, thesis-aware scoring engines, IC memo generation, investor matching at scale. I've watched hundreds of investors interact with AI in every way imaginable.
The pattern is consistent.
The investors getting real leverage aren't using better models. They're not running more sophisticated prompts. They're not spending more time with the tools.
They've built a different relationship with them entirely.
This article is about that relationship. What it looks like. Why most people don't have it. And how to build it — starting today, with tools you already have access to.
The Most Common Mistake: Asking AI to Decide
Walk into any emerging VC fund today and ask how they're using AI.
Most will tell you some version of this: "I paste the deck into ChatGPT and ask if it's a good investment."
It sounds reasonable. It feels productive. And it produces — at best — a confident-sounding summary that tells you roughly what you already knew after skimming the executive summary yourself.
That's not intelligence. That's a Magic 8-Ball with a subscription fee.
The fundamental error is asking AI to do the one thing it genuinely can't do: make a judgment call about something it has no real context for.
AI doesn't know your thesis. It doesn't know what you've backed before. It doesn't know what patterns you've seen work. It doesn't know the market nuance you've built over a decade of investing.
When you ask it to decide without that foundation, you get generic output. And generic output is worse than useless — it creates the illusion of diligence without the substance.
What AI Is Actually Extraordinary At
Before you can use these tools well, you need to understand where they genuinely add value.
AI is extraordinary at structured observation.
Give it a pitch deck and ask it to extract every factual claim the founders make — team backgrounds, traction metrics, market sizing methodology, revenue model assumptions, use of funds breakdown. Ask it to flag anything that seems inconsistent, unsupported, or missing.
That's a task that takes a good analyst 90 minutes. AI does it in seconds — and it doesn't get tired, distracted, or biased by a slick design.
AI is extraordinary at pattern matching at scale.
When you have a library of past investments — your passes, your wins, your misses — AI can find the patterns you've stopped consciously seeing. What signals were present in every deal you backed? What was almost always true about the ones you passed on and later regretted?
Your own history is your best training data. Most investors never use it.
AI is extraordinary at research synthesis.
Comparable rounds. Market sizing benchmarks. Competitive landscape. Team background verification. The 3-hour research block that used to precede every IC memo — AI does the first 80% of that in minutes.
The final 20% — the conviction, the relationship read, the pattern recognition that only comes from experience — that's still yours. It should be.
The Architecture That Actually Works
The investors getting real leverage from AI aren't just using better prompts. They've built an architecture around the tools. And that architecture has three components:
1. Persistent Context
Your thesis isn't a website paragraph. It's a decision framework.
Write it as one. Your stage focus. Your sector convictions. What founder signals matter most to you. What your minimum traction bar looks like at each stage. What a pass looks like versus a "not yet."
That document becomes the foundation of every AI interaction. Every evaluation is anchored to your actual criteria — not generic VC wisdom scraped from the internet.
Without persistent context, AI resets every conversation. With it, every output builds on the last.
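One lightweight way to make that context persistent is to keep the thesis as a structured file and render it into the start of every conversation. A minimal sketch in Python; the field names and example values are illustrative, not a prescribed schema:

```python
# A thesis written as a decision framework, kept as structured data so it
# can be prepended to every AI conversation. All field names and example
# values here are illustrative placeholders.
THESIS = {
    "stage_focus": ["pre-seed", "seed"],
    "sectors": ["b2b saas", "fintech infrastructure"],
    "founder_signals": ["domain depth", "prior operator background"],
    "traction_bar": {"pre-seed": "working prototype", "seed": "$10k+ MRR"},
    "pass_vs_not_yet": "pass = thesis mismatch; not yet = right fit, too early",
}

def build_system_context(thesis: dict) -> str:
    """Render the thesis as a context block that leads every evaluation prompt."""
    lines = ["You are evaluating deals against THIS fund's criteria:"]
    for key, value in thesis.items():
        lines.append(f"- {key.replace('_', ' ')}: {value}")
    return "\n".join(lines)

context = build_system_context(THESIS)
```

Whether this lives in a file, a custom instruction field, or a system prompt matters less than the habit: the same document, loaded every time.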
2. Separation of Perception and Judgment
The best workflow has two distinct phases.
Phase one: AI sees. It extracts, structures, surfaces inconsistencies, flags gaps. It produces a complete, clean picture of what the deck actually says — not what you want it to say.
Phase two: you decide. Armed with structured information, you apply your judgment, your pattern recognition, your read of the team. You make the call.
The investors who conflate these two phases — who ask AI to both see and decide in one prompt — get mediocre results at both.
3. Your Own History as Training Data
Pull together 10 deals you've backed. Write down what you saw in each one — not the post-rationalised thesis, but what you actually believed when you wired the money. Do the same for 10 passes.
Now you have signal. Not generic signal — your signal. Feed it to the model alongside every new evaluation. The output quality changes completely.
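One way to operationalise this is to turn each past decision into a labelled example and prepend a handful to every new evaluation. The record format below is illustrative; the substance comes from your own history:

```python
# Turn your own decision history into few-shot examples for the model.
# Field names and the sample records are illustrative placeholders.
HISTORY = [
    {"deal": "Deal A", "decision": "backed",
     "belief_at_the_time": "founder had a unique distribution insight"},
    {"deal": "Deal B", "decision": "passed",
     "belief_at_the_time": "strong team, but market timing felt early"},
]

def history_block(history: list[dict], limit: int = 10) -> str:
    """Format past decisions as a calibration block to prepend to a prompt."""
    lines = ["Past decisions from this fund (use these as calibration):"]
    for record in history[:limit]:
        lines.append(
            f"- {record['deal']}: {record['decision']} "
            f"(believed at the time: {record['belief_at_the_time']})"
        )
    return "\n".join(lines)
```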
The Workflow That Actually Works
Deck arrives
↓ Extract: team, traction, market, financials (structured)
↓ Screen: thesis alignment check against your criteria
↓ Flag: pass / shortlist / needs more information
↓ Shortlisted → generate first-pass research memo
↓ Human reviews memo → adds conviction layer
↓ Partner meeting or pass
That's it. Every shortlisted deal gets a research foundation in minutes. Every pass is documented with reasoning. Every IC memo starts from a structured base rather than a blank page.
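The steps above can be sketched in code. This is a shape sketch, not a product: the extraction and screening calls are stubbed where an LLM call would go, and the thresholds are placeholders you would tune to your own bar:

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    """Structured facts pulled from a deck (stubbed here)."""
    team: str
    traction: str
    market: str
    financials: str

def extract(deck_text: str) -> Extraction:
    # In practice: an LLM call with a fixed extraction schema.
    return Extraction(team="...", traction="...", market="...", financials="...")

def thesis_alignment(extraction: Extraction, thesis: dict) -> float:
    # In practice: an LLM scores each criterion against your thesis document.
    return 0.72  # placeholder score in [0, 1]

def flag(score: float, shortlist_bar: float = 0.7, info_bar: float = 0.5) -> str:
    """Route a screened deal; shortlisted deals go on to a research memo."""
    if score >= shortlist_bar:
        return "shortlist"
    if score >= info_bar:
        return "needs more information"
    return "pass"  # documented with reasoning, not discarded silently

decision = flag(thesis_alignment(extract("deck text"), thesis={}))
```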
Working With Claude and ChatGPT: What Actually Works for Deal Flow
The two tools most VCs reach for. Both capable. Neither plug-and-play. Here's how to get real output from each.
Claude (Anthropic)
Claude is the stronger analytical mind. It holds more context, reasons through complexity more carefully, and is less likely to confidently fabricate. For VC work, it excels at long document analysis — feed it an entire pitch deck, DDQ, or fund document. It reads the whole thing, not just the highlights.
Tips for Claude specifically:
- Load your full thesis at the start of every conversation. Claude uses context window space well. Give it everything — fund strategy, decision criteria, past patterns. It won't forget halfway through.
- Ask for inconsistencies explicitly. "What claims in this deck are unsupported by the data presented?" Claude will surface things a rushed read misses.
- Use it for long-form memo drafts. Give it structured extraction output + your thesis + comparable rounds. Ask for a first-pass IC memo. The output is genuinely usable as a starting point.
- Tell it your confidence bar. "Flag anything where you're inferring rather than observing." Claude distinguishes between what the deck says and what it's extrapolating. That distinction matters in diligence.
ChatGPT / OpenAI
GPT-4 is faster, more conversational, and better at broad research synthesis. Where Claude goes deep, GPT-4 goes wide. For VC work, it excels at market research synthesis, comparable round research, and founder background research.
Tips for ChatGPT specifically:
- Use it with Browse enabled for live research. Static knowledge has a cutoff. For funding rounds, market data, and founder news — turn on web browsing. The output quality jumps.
- Break complex requests into steps. Extract first, then analyse, then synthesise. Three prompts beats one megaprompt.
- Don't trust specific numbers without verification. GPT-4 will cite figures confidently that are wrong. For anything factual — verify against Crunchbase or LinkedIn directly.
- Use it for outreach drafts. After you've decided to engage a founder, GPT-4 writes strong first-contact emails when given the context. Saves 20 minutes per reach-out.
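The "three prompts beats one megaprompt" advice can be made concrete. The templates below are illustrative, and `run` stands in for whichever model call you use; the point is that each step's output becomes the next step's input:

```python
# Three chained prompts instead of one megaprompt. Templates are
# illustrative; each step's output feeds the next step.
EXTRACT = (
    "Extract every factual claim in this deck as a bulleted list: "
    "team, traction, market sizing, revenue model, use of funds.\n\n{deck}"
)
ANALYSE = (
    "For each claim below, mark it SUPPORTED, UNSUPPORTED, or MISSING "
    "based only on the data presented. Do not assume outside the deck.\n\n{claims}"
)
SYNTHESISE = (
    "Summarise the analysis below into a one-page screening note: "
    "strongest signals, open questions, and what to verify first.\n\n{analysis}"
)

def chain(deck: str, run) -> str:
    """run() is a stand-in for whichever model call you use."""
    claims = run(EXTRACT.format(deck=deck))
    analysis = run(ANALYSE.format(claims=claims))
    return run(SYNTHESISE.format(analysis=analysis))
```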
Using Both Together: The Split That Works
Inbound deck arrives
↓ ChatGPT → quick market + founder research (10 min)
↓ Claude → deep deck analysis + extraction + inconsistency check (5 min)
↓ You → apply judgment to structured output
↓ Claude → first-pass IC memo from combined inputs
↓ Human review → conviction layer added
Two tools. Different strengths. Combined workflow. Neither replaces the other. Neither replaces you.
The one rule that applies to both: Negative constraints are more powerful than positive ones.
"Don't be generic." "Don't assume outside the data presented." "Don't hedge — give me a direct read."
Most people never use them. The ones who do get output that's actually usable.
Why Consistency Matters More Than Sophistication
A simple, consistent scoring framework applied to every deal — same criteria, same weights, same format — is worth more than a brilliant one-off analysis.
Why? Because you can't improve what you can't compare.
When every evaluation looks different, you can't see your own patterns. You can't track which signals actually predicted your best outcomes. You can't build the feedback loop that makes you a better investor over time.
Score everything the same way. Review the outliers. Adjust your criteria as you learn. That's how the system improves — and how you improve.
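A fixed rubric is easy to sketch: same criteria, same weights, same output format for every deal, so scores stay comparable across your whole pipeline. The criteria and weights below are placeholders, not a recommendation:

```python
# Same criteria, same weights, same format for every deal, so every
# evaluation is comparable. Criteria and weights are placeholders.
WEIGHTS = {"team": 0.35, "traction": 0.25, "market": 0.20, "thesis_fit": 0.20}

def score_deal(ratings: dict) -> float:
    """Weighted score from per-criterion ratings in [0, 10]."""
    missing = set(WEIGHTS) - set(ratings)
    if missing:
        # Never score partially; a gap in the rubric breaks comparability.
        raise ValueError(f"unrated criteria: {missing}")
    return round(sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS), 2)

deal_a = score_deal({"team": 8, "traction": 5, "market": 7, "thesis_fit": 9})
deal_b = score_deal({"team": 6, "traction": 8, "market": 6, "thesis_fit": 4})
# Because the rubric is fixed, deal_a and deal_b are directly comparable.
```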
What's Actually Shifting in Venture
The funds pulling ahead right now aren't the ones with the biggest models or the most sophisticated tech stacks.
They're the ones who have answered a simple question clearly: what is AI for — and what is it not for?
AI is for extraction, research, pattern matching, structuring, first-pass synthesis. It's for eliminating the groundwork that consumed your best hours.
It is not for relationships. It is not for conviction. It is not for the phone call at 11pm when a portfolio company is in trouble.
Protect both. That's the edge.
The Principle
AI handles the numbers.
Humans build the relationships.
That's how the best investments get made.
The investors who internalise that division — and build their systems around it — are the ones who will look back in five years and wonder how they ever worked any other way.
The rest will still be copy-pasting decks into ChatGPT and wondering why it doesn't feel like leverage.
Try NUVC for investors — thesis-aware scoring, persistent context, first-pass memo generation, and investor matching. Pre-built for emerging VCs and solo GPs who want the system without the build.