When Your AI Prospector Runs Out of Prospects
I built an autonomous AI system that finds and queues sales prospects. Two weeks in, it broke in the best way possible: it ran out of new people to find.
The Setup
I have an AI agent that runs multiple campaigns throughout the day — morning, afternoon, evening — each one searching Google Maps for businesses in specific categories across South Florida. Doctors, dentists, law firms, CPAs. It scrapes contact info, deduplicates against the existing queue, and adds fresh prospects automatically.
No human in the loop. It just runs.
The system started from zero and built up to 260+ qualified prospects in about two weeks. Each run would find 15-20 candidates, filter out duplicates, and add 4-6 new ones. The campaigns are themed — one focuses on businesses likely to care about reviews, another on AI receptionist pain points, another on general outreach.
The Saturation Signal
Around day 10, something interesting happened. The duplicate rate started climbing. Runs that used to yield 6 new prospects were returning 1 or 2. Some runs came back with zero new additions — every candidate was already in the queue.
This is what saturation looks like when you are prospecting programmatically. In a specific geography (South Florida) across a finite set of business categories, there is a ceiling. You hit it faster than you think.
Here is what the numbers looked like:
- Days 1-3: 0 → 60 prospects. Almost everything was new.
- Days 4-7: 60 → 150. Duplicate rate around 30%.
- Days 8-12: 150 → 230. Duplicate rate hit 60%.
- Days 13-14: 230 → 260. Duplicate rate over 75%.
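That climbing duplicate rate can be watched mechanically rather than by eyeballing logs. A minimal sketch of the idea — the 75% threshold and three-run window are my assumptions, not the system's actual values:

```python
def duplicate_rate(candidates_found: int, new_added: int) -> float:
    """Fraction of candidates that were already in the queue."""
    if candidates_found == 0:
        return 0.0
    return 1 - new_added / candidates_found

def is_saturated(recent_rates: list[float], threshold: float = 0.75,
                 window: int = 3) -> bool:
    """Flag saturation when the duplicate rate stays above the
    threshold for `window` consecutive runs."""
    if len(recent_rates) < window:
        return False
    return all(r >= threshold for r in recent_rates[-window:])

# Illustrative late-stage run: 20 candidates found, 2 new → 90% duplicates.
rate = duplicate_rate(20, 2)
print(is_saturated([0.80, 0.85, rate]))  # True
```

Requiring several consecutive high-duplicate runs avoids flagging saturation off a single unlucky search query.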
The system was telling me something: time to expand geography or categories.
What I Learned
1. Deduplication is the real feature.
The prospect finder itself is straightforward — search, scrape, store. But the dedup logic is what makes it trustworthy. Matching on phone numbers catches most duplicates. Name matching catches the rest. Without solid dedup, you waste time calling the same office twice.
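A sketch of dedup along those lines. The phone normalization and the name canonicalization (lowercasing, stripping punctuation and suffixes like "LLC" or "PA") are my assumptions about how such matching typically works, not the system's exact logic:

```python
import re

def normalize_phone(phone: str) -> str:
    """Strip formatting and any leading US country code."""
    digits = re.sub(r"\D", "", phone)
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]
    return digits

def normalize_name(name: str) -> str:
    """Lowercase, drop punctuation, and strip common business suffixes."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    for suffix in ("llc", "pllc", "inc", "pa", "md", "dds"):
        name = re.sub(rf"\b{suffix}\b", "", name)
    return " ".join(name.split())

def is_duplicate(candidate: dict, queue: list[dict]) -> bool:
    """Phone match first (strongest signal), then name match as fallback."""
    phone = normalize_phone(candidate["phone"])
    name = normalize_name(candidate["name"])
    for existing in queue:
        if phone and phone == normalize_phone(existing["phone"]):
            return True
        if name and name == normalize_name(existing["name"]):
            return True
    return False
```

With this, "Boca Dental, PA" at "(561) 555-0100" matches an existing entry stored as "Boca Dental" with phone "15615550100" on either signal alone.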
2. Saturation is a signal, not a failure.
When your system runs out of prospects in a region, that means it worked. You have exhausted the addressable market in that geography for those categories. That is useful data. It tells you exactly when to expand.
3. Multiple campaigns find different things.
Running parallel campaigns with different search queries matters. The "receptionist" campaign finds medical practices. The "reviews" campaign skews toward dentists. The "AI" campaign catches CPAs and law firms. Same geography, different slices.
4. Autonomous does not mean unsupervised.
I check the logs daily. Not to approve each prospect — that defeats the purpose — but to watch the meta-signals. Duplicate rates, category distribution, new geographies needed. The system runs itself, but the strategy still needs a human.
The Architecture
The whole thing runs on cron jobs triggering an AI agent. Each campaign is a scheduled task that:
- Picks search queries based on the campaign theme
- Searches Google Maps for businesses
- Extracts name, phone, address, category
- Checks against the existing Convex database for duplicates
- Inserts new prospects with campaign tags
- Logs everything to daily memory files
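The steps above fit in one task function per scheduled run. This is an illustrative sketch with stand-ins for the real pieces — the search stub, the in-memory list in place of Convex, and the phone-only dedup are all simplifications, not the actual implementation:

```python
import json
from datetime import date

# In-memory stand-in for the Convex database (illustrative only).
DB: list[dict] = []

def search_google_maps(query: str) -> list[dict]:
    """Stub: the real agent scrapes Maps results for the query."""
    return [{"name": "Boca Dental", "phone": "(561) 555-0100",
             "address": "1 Example Ave, Boca Raton, FL", "category": "dentist"}]

def is_duplicate(prospect: dict) -> bool:
    digits = lambda p: "".join(c for c in p if c.isdigit())
    return any(digits(p["phone"]) == digits(prospect["phone"]) for p in DB)

def run_campaign(campaign: dict) -> dict:
    """One scheduled run: search, extract, dedup, insert, log."""
    found, added = 0, 0
    for query in campaign["queries"]:
        for biz in search_google_maps(query):
            found += 1
            prospect = {**biz, "campaign": campaign["tag"]}
            if not is_duplicate(prospect):
                DB.append(prospect)
                added += 1
    summary = {"date": str(date.today()), "campaign": campaign["tag"],
               "found": found, "added": added}
    # The real system appends this to a daily memory file.
    print(json.dumps(summary))
    return summary

run_campaign({"tag": "reviews", "queries": ["dentist Boca Raton"]})
```

Running the same campaign twice makes the saturation signal visible immediately: the second run finds the same businesses and adds zero.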
Total infrastructure cost: basically zero beyond the AI API calls. It runs on the same server that hosts everything else.
What is Next
The obvious move is geographic expansion — Orlando, Tampa, Jacksonville. But there is a more interesting play: using the saturation data to prioritize. If a category saturates fast (dentists in Boca Raton), that means there is high density. High density means more competition, which means those businesses are more likely to need help standing out.
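One way to turn that into a call order is to rank each category/geography slice by how quickly it saturated: fewer days to a high duplicate rate implies higher density. The scoring below is my own sketch of the idea, with made-up numbers, not a described implementation:

```python
def days_to_saturation(daily_dup_rates: list[float],
                       threshold: float = 0.75) -> int:
    """Days until the duplicate rate first crosses the threshold.
    Returns len(rates) + 1 if it never does (slow saturation, low density)."""
    for day, rate in enumerate(daily_dup_rates, start=1):
        if rate >= threshold:
            return day
    return len(daily_dup_rates) + 1

# Illustrative per-day duplicate rates per (category, region) slice.
segments = {
    ("dentist", "Boca Raton"): [0.2, 0.5, 0.8, 0.9],
    ("law firm", "Fort Lauderdale"): [0.1, 0.2, 0.3, 0.4],
}

# Call the fastest-saturating (highest-density) segments first.
ranked = sorted(segments, key=lambda s: days_to_saturation(segments[s]))
print(ranked[0])  # → ('dentist', 'Boca Raton')
```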
The AI found the prospects. Now the saturation pattern tells me which ones to call first.
Building AI automation for small business outreach. Posts about what works, what breaks, and what I learn along the way.