The advice was sound. Narrow your ICP. Stop trying to sell to everyone in the vertical and focus on the specific company profile where you win. Get the firmographic criteria right — stage, size, tech stack, buying signals — and the motion gets more efficient. The list gets smaller, the targeting gets sharper, and the pipeline quality improves.

Most SaaS GTM teams followed that advice. The ICP is tighter than it was three years ago. The filters are more precise. The enrichment data is better. And the reply rates are roughly what they were before the narrowing — 1 to 3 percent, depending on the channel, regardless of how specific the criteria became.

Something the narrowing was supposed to fix didn’t get fixed. This piece is about what that something is, why narrowing makes it harder to ignore rather than easier to solve, and what the actual architectural fix requires.

The narrowing was the right response — it exposed something the broad ICP was hiding

Buyer sophistication changed the conversion math on generic outreach. A SaaS pitch describing category-level value to a category-sized list stopped producing meaningful reply rates as buyers got better at recognizing and ignoring it. The right diagnosis from teams paying attention was: we are not specific enough. The right response was to narrow the ICP. Both were correct.

What narrowing did not do was solve the underlying problem that generic pitches were producing. It made the problem visible in a new way. A generic pitch to a precise list still arrives at random moments in each prospect’s situation. With a broad list, random timing averaged out — some fraction of 5,000 accounts happened to be in the right phase at the right moment, and that fraction produced conversations. With a precise list of 200 accounts, random timing doesn’t average out. The margin of error disappeared.

The narrow ICP didn’t create the timing gap. It eliminated the margin of error that was hiding it.

Against a broad list, timing works through sheer probability. Enough accounts means some percentage are always in evaluation mode, regardless of any specific knowledge about which ones. The outreach volume compensated for the absence of timing intelligence. Reply rates were low, but the absolute pipeline numbers worked because the list was large.

Narrow the list to 200 accounts and the probability math changes entirely. If 10 percent of those accounts are in active evaluation at any given time — a reasonable assumption for a well-defined ICP — that is 20 accounts. The outreach timing needs to hit those 20 at the right moment. Without timing intelligence, the motion is randomly distributed across all 200 accounts, only occasionally intersecting with the 20 that are actually in position.

A 1 to 3 percent reply rate on 200 accounts is not a response rate problem. It is a timing problem wearing a response rate costume. The ICP is right. The message is right. The timing is random.
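The averaging math above can be sketched in a few lines. The 10 percent in-evaluation rate comes from the scenario in this piece; the per-touch reply rates (15 percent when well timed, 1 percent otherwise) are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope model of why random timing "works" on a broad list
# and fails on a narrow one. All rates are illustrative assumptions.

def expected_replies(accounts, in_window_rate, reply_in_window, reply_out_of_window):
    """Expected replies when outreach timing is random across the list."""
    in_window = accounts * in_window_rate          # accounts in active evaluation
    out_of_window = accounts - in_window           # everyone else
    return in_window * reply_in_window + out_of_window * reply_out_of_window

# Assumed: 10% of accounts are in evaluation at any moment; a well-timed
# message converts at 15%, a mistimed one at 1%.
broad = expected_replies(5000, 0.10, 0.15, 0.01)   # ~120 replies
narrow = expected_replies(200, 0.10, 0.15, 0.01)   # ~4.8 replies

# The blended reply rate is ~2.4% in both cases.
print(broad / 5000, narrow / 200)
```

The blended rate is the same roughly 2.4 percent either way — the 1-to-3-percent plateau the narrowing failed to move — but the broad list’s absolute reply count is what made the motion look like it worked.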

The compounding mistake is to narrow further when the real fix is different infrastructure

When the narrowed ICP produces disappointing results, the instinct is to narrow again. The analysis is: our ICP must not be tight enough. Add a funding stage qualifier. Require a specific headcount range. Layer in a tech stack filter. Each additional criterion makes the list smaller and more defensible — and makes the timing problem more extreme.

A VP of Sales at a SaaS company building RevOps infrastructure has spent the better part of two years tightening the ICP with her team. The list went from several thousand to a few hundred accounts. Each account is genuinely a strong fit. The enrichment data is current. The firmographic criteria are validated against closed-won data. And each account on that list represents a significant share of the total addressable list — which means every generic outreach moment, every message that arrives at the wrong phase, is a credibility cost that takes real time to recover from. The narrow ICP didn’t make bad timing cheaper. It made it more expensive.

Adding more filters compounds that expense. A list of 80 accounts where timing is random produces fewer conversations than a list of 200 where timing is random. Both lists have the same problem. The smaller one just shows it faster.

Research surface widening isn’t new information. It’s work the narrow ICP was always going to require.

The widening of the research surface that GTM teams feel in 2026 is not a product of more information existing. It is a product of what timing-sensitive targeting was always going to require once the averaging effect disappeared. Per-account context — the trigger, the phase, the current situation — was never needed against a broad list because volume compensated for its absence. Against a narrow list, it is the work that determines whether outreach arrives when it can convert.

Three SaaS GTM motion types narrowed in different ways, and each feels the same underlying gap.

In a founder-led motion, the founder who narrowed from “any growth-stage SaaS company” to “Series A fintech companies with a specific compliance requirement” has 200 accounts. Each account is correctly identified. The problem is that among those 200, some just signed a multi-year contract with a competing solution, some are in evaluation right now, some will be in evaluation in two quarters when a new regulation takes effect. The ICP can’t distinguish between them. The founder is doing the research by hand, account by account, trying to identify which 20 are in position to have a conversation. That is not a sustainable motion at 200 accounts. It’s a research job wearing a sales job’s clothing.

In a VP-Sales-led motion, the team built the ICP with RevOps support — validated against closed-won data, scored with enrichment, maintained regularly. The accounts are right. The enrichment data describes each account accurately. A company that just closed a Series B and is hiring a VP of Revenue is building a tool stack. A company whose incumbent vendor announced an acquisition is evaluating replacements. A company that just launched a new product line has a capability gap the SaaS product fills. The enrichment data doesn’t surface any of this. The SDR team is manually checking for these signals, which at narrow ICP scale means that the coverage is patchy and the research quality is inconsistent.

In a PLG-led motion with a sales overlay, the ICP is narrow by definition — product-qualified accounts that meet usage and expansion criteria. The fit is tighter than any filter can produce because it is grounded in real product behavior. And the timing is still wrong. A PQL scoring 90 out of 100 on product fit who just promoted an internal champion to a new role with a budget for an infrastructure upgrade is a fundamentally different outreach moment than the same PQL at 90 out of 100 in a company that just hired a new CFO implementing a cost reduction program. Usage data shows fit. It doesn’t show the organizational situation that makes the upgrade conversation relevant right now.

Infrastructure, not effort, is the architectural fix

The motion narrow ICPs require is not more SDR hours per account. It is a research layer that produces per-account context systematically — the trigger, the phase, the current situation — at the scale narrow ICPs generate.

Manual research at narrow ICP scale is not a sustainable answer. The SDR team doing per-account research on 200 accounts cannot also run the outreach motion at the cadence the ICP requires. The work compounds in the wrong direction: more research time per account means less outreach time per account, which means the motion produces fewer conversations at higher operational cost.

What changes with research infrastructure is what the sales team receives before they reach out. Not an enriched record describing what the account is — enrichment already provides that. A research basis for why this account, at this moment, has a situation where the offering is specifically relevant. That context is what turns a narrow ICP into a timing-intelligent motion. Without it, narrowing remains a correct first move missing the second move that makes it work.

The counterargument

“The answer is better enrichment. If my ICP is right and my filters are tight, I just need richer data per account.”

Enrichment describes the account: industry, headcount, funding stage, tech stack, executive tenure, growth indicators. All of this sharpens fit filtering. None of it produces per-account context. A funding stage tells you the account is the right size. It doesn’t tell you whether the account just hired a VP of Revenue who is building a new tool stack, or whether the last VP of Revenue departed and the role is frozen pending a search. Two accounts with identical enrichment profiles are in fundamentally different outreach moments. Research infrastructure distinguishes them. Enrichment doesn’t.

Better data is not the fix. It is the input to the fix. The fix is the research layer that converts data about what the account is into context about what the account is doing right now.
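One way to see the enrichment-versus-research distinction is as two different record shapes. This is a hypothetical sketch — the field names are illustrative, not any vendor’s schema:

```python
# Hypothetical schema sketch: two accounts with identical enrichment
# records can be in fundamentally different outreach moments.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnrichmentRecord:
    # Describes what the account *is* — useful for fit filtering.
    industry: str
    headcount: int
    funding_stage: str
    tech_stack: tuple

@dataclass(frozen=True)
class ResearchContext:
    # Describes what the account is *doing right now* — useful for timing.
    trigger: str    # e.g. "hired VP of Revenue", "incumbent vendor acquired"
    phase: str      # e.g. "active evaluation", "decision frozen"
    situation: str  # one-line summary the rep can act on

a = EnrichmentRecord("fintech", 120, "Series B", ("Salesforce", "Snowflake"))
b = EnrichmentRecord("fintech", 120, "Series B", ("Salesforce", "Snowflake"))

ctx_a = ResearchContext("hired VP of Revenue", "active evaluation",
                        "building a new tool stack this quarter")
ctx_b = ResearchContext("VP of Revenue departed", "decision frozen",
                        "tooling decisions paused pending a search")

print(a == b)        # True — enrichment cannot tell the accounts apart
print(ctx_a == ctx_b)  # False — research context can
```

The enrichment records compare equal; only the research context separates the account that should be contacted this week from the one that should not be contacted at all.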


The narrow ICP created the motion. Research infrastructure makes it work.

The narrowing was right. The ICP tightening that happened across SaaS GTM over the last several years was the correct response to a real market shift. What most teams haven’t built is the infrastructure that narrow ICPs expose as necessary: a systematic way to produce per-account context at the scale narrow targeting creates.

A GTM Intelligence Layer does that work upstream of outreach — connecting the offering to specific account situations that the research layer identifies, so the outreach arrives when it can convert rather than when it happens to land. Wil, Wyra’s AI agent for the SaaS ecosystem, surfaces timing-relevant situations for accounts in the ICP. Human judgment stays in the loop; the sales team acts on what the research surfaces, not on what the enrichment record describes.

Research infrastructure isn’t more work on the same motion. It’s the motion narrow ICP was always going to require.