The First 90 Days with a GTM Intelligence Layer: A Practical Onboarding Playbook
Most teams approach onboarding a GTM Intelligence Layer like onboarding a CRM — configure it over a weekend and expect results by Week 2. That’s not how this works. This playbook is the order of operations for your first 90 days: the three-phase sequence that builds the intelligence foundation campaigns actually run on.
Key Findings
01
Campaigns start during Phase 1 — not after. As soon as the AI SDR completes warmup (~Day 14–15) and your first Offering is Finalized, launch. Build the Offering while the AI SDR warms up; don’t wait for one to finish before starting the other.
02
Artifacts before Offerings: upload case studies and capability documents before building Offerings. The AI builds Offering sections using Artifact content as source material. First-pass confidence scores are significantly higher with Artifacts-first sequencing.
03
Create your AI SDR on Day 1, not Day 14. The warmup period is approximately two weeks and can only start when the SDR is created. Delay it and you lose campaign capacity you can’t recover.
04
The reliable calibration indicator is trajectory across campaigns: are later campaigns outperforming earlier ones after Offering improvements? Calibrating too early (before patterns are readable across multiple campaigns) produces misleading signals.
05
The most common plateau causes: Offering finalized at the minimum 65% threshold without pushing for higher specificity, a generic Why Now, not rating Outreach Ideas consistently, and calibrating before enough data has accumulated.
Most teams approach onboarding a GTM Intelligence Layer the way they approach onboarding a CRM — configure it over a weekend, connect the accounts, point it at a list, and expect results by Week 2. That is not how this works. Teams who try it that way spend the next 60 days wondering why their results are flat, only to discover that the intelligence layer was half-built.
A GTM Intelligence Layer generates research-grounded outreach in the context of what you sell, who you sell to, and why your offering is relevant to a specific buyer right now. That context comes from the intelligence foundation you build in the first 30 days — your Offerings, your Artifacts, your company profile — and from the research the platform runs against every prospect through that lens. Teams that build the foundation correctly in Phase 1 compound through Phases 2 and 3. Teams that skip to execution first plateau at the first campaign and attribute the plateau to the tool.
This playbook is the order of operations. Follow it once and you won’t need to restart.
What this playbook is not: a replacement for in-product help documentation (use the help center for step-by-step UI guidance); a promise of specific outcomes at specific timelines; or a calibration guide for teams with 90+ days of active campaigns.
The 90-Day Intelligence Rollout
Three phases. Each has a distinct job. Each builds on the one before.
The framework
Phase 1
Days 1–30
Intelligence Foundation
Build the layer that every campaign runs on: Artifacts, Offerings, AI SDR setup. Your first campaign launches during this phase — as soon as the AI SDR completes warmup and your first Offering is Finalized.
Phase 2
Days 31–60
First Intelligence Cycle
Engage with what your Phase 1 campaigns generated. Review replies, respond, make calls. Launch additional campaigns against new Offerings or segments.
Phase 3
Days 61–90
Calibrate and Compound
Review patterns across campaigns. Improve your Offerings from what the data showed. Launch calibrated campaigns from that foundation. Establish the weekly review rhythm.
The phases compound — Phase 2 depends on Phase 1 being properly complete, and Phase 3 depends on Phase 2 running for long enough to produce readable data. Teams who skip Phase 1 and start with Phase 2 produce weak baseline data. Teams who try to evaluate the platform too early are measuring a cold start against a benchmark that reflects a calibrated system. The two are not comparable.
Four pillars of campaign quality
Every campaign produces output from four components working together:
- Offering — what you sell and why it matters. The substance layer. Wyra draws on your pain points, solutions, outcomes, and social proof to ground every message in what you actually deliver.
- Persona — who you’re targeting and how precisely. The direction layer. Persona connects your Offering to specific roles, industries, and company contexts.
- Why Now — the specific angle that makes your outreach timely rather than generic. The relevance layer. A market shift, a program deadline, a seasonal pressure point. This turns your CTA from “want to chat?” into something specific and timely.
- Messaging Framework — how you instruct Wyra to assemble the first three into channel-native messages. The output layer. The framework controls slot structure (what to include and in what order), length scale, and tone. Getting the first three right and then using a weak framework still produces generic output — and a strong framework on weak inputs produces well-structured irrelevance.
All four are required. Phase 1 builds the Offering. Phase 2 adds Persona precision, a sharp Why Now, and your first Messaging Framework. Phase 3 calibrates all four based on real data.
Phase 1 — Intelligence Foundation (Days 1–30)
Phase 1 is straightforward: create your Offering, set up your AI SDR, and launch. Wyra generates the Offering from your profile and content — you review and refine it. Once the AI SDR completes warmup (around Day 14–15) and your first Offering is Finalized, your first campaign is ready to go.
Your profile is the foundation’s foundation
Before you upload anything, audit your company profile. The intelligence layer analyzes your company continuously — your website, case studies, public content, and any documents you upload as Artifacts. That analysis is only as good as the source material it has access to.
If your website says “we help companies with cloud solutions” and your actual offering is AWS data platform modernization for mid-size healthcare organizations, the intelligence layer generates outreach angles anchored to the generic version of your positioning. Correct the profile first. Be specific: which ecosystem, which practice areas, which customer types, which outcomes you’ve delivered. Specificity in the profile produces specificity in the intelligence layer.
Upload Artifacts before building Offerings — this order matters
Artifacts are the documents Wyra extracts intelligence from: case studies, capability decks, testimonials, ROI studies, implementation guides, certifications. Uploading them before you build your Offerings is not a preference — it is the correct sequencing. The common mistake: teams build the Offering first, see low AI confidence scores, then upload Artifacts to improve them section by section. The right order is Artifacts first, then Offerings.
What to upload in Week 1
- Your strongest case study — ideally with quantified outcomes. If you have multiple, start with the one closest to your primary offering.
- Your capabilities deck or solution brief — the document that explains what you do and how you deliver it
- Customer testimonials or reference quotes you have explicit permission to use
- If you’re in an OEM ecosystem, any co-sell materials that describe your specific practice area or partnership credentials
Start with 2–3 Artifacts. Quality of source material determines quality of Offering output — a case study with specific numbers, named business outcomes, and a described implementation scope is worth five generic capability overview decks.
Build your first two Offerings — and actually Finalize them
An Offering is a structured pitch playbook. It is the intelligence layer’s brief for every campaign and every list enrichment run. It is not a deck, not a product description, and not a summary of your capabilities. It is a structured argument for why a specific type of buyer should engage with you.
Your primary motion: the thing you sell to the broadest relevant audience you have. If you’re an AWS consulting partner, this is probably your migration or modernization practice. If you’re a SaaS ISV, this is your core product against its primary buyer vertical. Build this one first. Work through every section.
Before you Finalize, review the AI confidence scores. The finalization gate is 65%. High (65%+): sufficient specificity to generate relevant outreach. Medium (50–64%): a foundation exists, but the Offering lacks the depth needed for outreach to feel specific — you cannot finalize here. Low (below 50%): too thin to produce outreach worth sending.
If sections like Pain Points, Business Outcomes, or Social Proof are scoring below 65%, the fastest fixes are: adding specificity (“reduce claims processing time by eliminating manual document review” rather than “reduce costs”) and uploading Artifacts with real metrics and named outcomes.
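The banding and the finalization gate described above can be expressed as a simple check. This is an illustrative sketch, not a Wyra API — the function names are invented, and it assumes the 65% gate applies per core section, as the section-score guidance above suggests:

```python
def confidence_band(score: float) -> str:
    """Map a section confidence score (0-100) to the playbook's bands."""
    if score >= 65:
        return "High"    # gate cleared; 72%+ is the real target
    if score >= 50:
        return "Medium"  # foundation exists, but cannot finalize yet
    return "Low"         # too thin to produce outreach worth sending

def can_finalize(section_scores: dict[str, float]) -> bool:
    """Assumes every core section must clear the 65% floor."""
    return all(score >= 65 for score in section_scores.values())
```

The point of the sketch: a single section sitting below the floor blocks finalization, which is why the fastest fixes target individual weak sections rather than the Offering as a whole.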
Pick one use case, vertical, or customer type where you have strong evidence — a case study, a reference outcome, specific domain expertise. Build the second Offering around that specificity. The specificity of the source material will push the confidence scores higher, because the evidence maps directly to the offering’s claims.
Goal by Day 30: Two Finalized Offerings, both with confidence pushed well above the minimum threshold — aim for 72%+ on core sections, not just enough to clear the gate. An Offering in Draft status is not available for campaigns. Finalize before Phase 2 begins.
Create your AI SDR on Day 1 — not Day 14
Most teams create their AI SDR in Week 3. They don’t realize they’ve burned two weeks of warmup time they’ll never get back.
The AI SDR requires a warmup period of approximately two weeks after creation. During warmup, the agent operates at reduced capacity to build a sending reputation — this protects deliverability long-term. The warmup runs automatically once the AI SDR is created. But it can only start when you create it. If you want your first campaign live by Day 35, create the AI SDR on Day 1.
AI SDR setup checklist
- Complete the agent’s profile — name, title, bio
- Connect LinkedIn (required for LinkedIn outreach — requires your 2FA secret key; have it ready before you start this step)
- Add your calendar link — required for meeting booking
- Confirm status reads “Ready to engage!”
Email outreach runs without LinkedIn connected. But if LinkedIn is not connected, you lose the LinkedIn touchpoints in the sequence, which are often where early engagement starts. Connect in Phase 1 to run the full multi-channel sequence from your first campaigns.
Phase 1 checkpoint — end of Day 30
- Company profile accurately describes what you actually sell — specific practice areas, specific outcomes, specific customer types
- 2–3 Artifacts uploaded with strong source material (at minimum: one case study with numbers, one capability document)
- Outreach Ideas reviewed — like or dislike angles to train the intelligence layer
- Two Finalized Offerings — both with High confidence on core sections, confidence pushed above the minimum
- AI SDR created, LinkedIn connected, calendar link added, warmup underway or complete
- First campaign live — launched once AI SDR warmup completes and first Offering is Finalized (warmup completes around Day 14–15)
If all six are checked: Phase 2 is ready. A partial intelligence foundation produces partial results, and partial results are misread as platform performance when they’re actually setup performance.
Phase 2 — First Intelligence Cycle (Days 31–60)
Your Phase 1 campaigns are running. The job of Phase 2 is to engage with what they generate — the replies, the auto-replies, the first real conversations — and to extract the patterns that will improve everything that follows. If you’re launching additional campaigns against new Offerings or segments, this phase is where that happens too. The first intelligence cycle is calibration data. Read it as such.
Launch additional campaigns and upload new lists as you go
A list in Wyra is a CSV of LinkedIn profile URLs — a deliberate set of prospects you’ve identified because they match a specific targeting hypothesis: the right industry, the right role, with an observable trigger (a cloud migration announcement, a new partnership, a funding round, a leadership hire, a job posting for cloud architects).
For your first list, start narrower than feels comfortable. 200–500 contacts from a single vertical or a single role-type at companies with a specific trigger tells you something readable. A 2,000-contact broad-sweep list tells you noise. This first list size is distinct from total AI SDR outreach volume — as the campaign sequences run, the AI SDR touches each contact multiple times across channels. Total leads reached accumulates beyond the list size over the campaign’s runtime.
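The list-size versus activity-volume distinction above is back-of-envelope arithmetic. The six-touch sequence length here is an illustrative assumption, not a platform constant:

```python
def total_touches(list_size: int, touches_per_contact: int) -> int:
    """Total outreach touches across a campaign's runtime (simple model)."""
    return list_size * touches_per_contact

# A 300-contact list with an assumed 6-touch email + LinkedIn + calling
# sequence produces 1,800 touches -- activity well beyond the list size.
print(total_touches(300, 6))
```

Which is why a narrow 200–500 contact list still generates substantial campaign activity over its runtime.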
When you upload your list, the platform will prompt you to select a context Offering. This selection determines the quality of what enrichment produces. With a Finalized Offering selected, enrichment generates offering-contextual research for each lead: a relevance score, an outreach strategy, inferred pain points, a tailored CTA, discovery questions, and objections specific to what you sell — grounded in the Offering’s content. Select the Offering you built in Phase 1.
After enrichment completes, review 20–30 profiles in the Interactions view before launching. Check the relevance scores. Read the AI-generated outreach strategies. If the strategies look generic — if they could describe any company in the vertical without modification — your Offering’s Pain Points section may not be specific enough. Go back to the Offering, add specificity or an additional Artifact, and re-enrich a sample before launching.
Launch your first campaign — get these settings right the first time
The Campaign Wizard runs five steps: Offering selection, Persona, Idea, Messages, and AI SDR assignment.
Two campaign paths: if you’re using an uploaded list, the Persona step is skipped — your audience is defined by who’s in the list. If you’re running from Wyra’s Intelligence database, the Persona step appears. For Phase 2, start with your own uploaded list — tighter control, more readable results.
Running multiple campaigns in Phase 2 is fine — in fact expected if you have two Finalized Offerings. Run one campaign per Offering so results stay readable. What to avoid is running two campaigns against the same Offering with different persona or message settings simultaneously — you won’t be able to read which variable drove the difference.
Campaign setup: the Why Now
The Why Now is Step 3 in the Campaign Wizard and the most under-worked input. It is the specific reason you are reaching out to this type of buyer, with this offering, right now — not a restatement of your offering, not a persona description.
Weak Why Now (don’t use):
- “We help companies improve their cloud infrastructure.”
- “We offer a free consultation to discuss your cloud strategy.”
Strong Why Now (the target):
- “We’re running a complimentary cloud architecture review for companies that have moved from one cloud provider to AWS in the last 18 months — specifically to map the gaps that first-gen migrations typically leave behind.”
- “We’re reaching out to IT leaders at regional banks now because the DORA compliance window is creating a 6-month evaluation cycle that firms in your tier are actively in right now.”
The Why Now needs a situation (what’s happening), a consequence (why it matters), and a timing hook (why now, not six months from now). When all three are present, the messaging framework converts them into a CTA that reads as timely — not as a template.
Messaging framework settings
Two dimensions control message output: slot count (how many elements appear) and length scale (how much Wyra expands each slot, 1–5).
- LinkedIn: 2–3 slots, length scale 1–2. Target 35–80 words. Never lead with the product on LinkedIn.
- Email: 4–6 slots, length scale 3–4. Target 150–350 words. Email builds the case — this is where pain points, social proof, and outcomes earn their place.
Tone: use Wyra (default) or Direct for Campaign 1. Challenger and Creative require a genuinely sharp Why Now and strong Offering — use them in Phase 3 once you have real data.
Channels: don’t suppress any channel on Campaign 1. Let the agent run the full email + LinkedIn + calling sequence.
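The per-channel settings above can be encoded as a quick sanity check against your draft framework. The dictionary structure and function are illustrative — this is not a Wyra config format, just the playbook's recommendations as data:

```python
# The playbook's recommended starting ranges per channel (min, max).
CHANNEL_PRESETS = {
    "linkedin": {"slots": (2, 3), "length_scale": (1, 2), "target_words": (35, 80)},
    "email":    {"slots": (4, 6), "length_scale": (3, 4), "target_words": (150, 350)},
}

def within_preset(channel: str, slots: int, length_scale: int) -> bool:
    """Check a draft framework setting against the recommended range."""
    p = CHANNEL_PRESETS[channel]
    return (p["slots"][0] <= slots <= p["slots"][1]
            and p["length_scale"][0] <= length_scale <= p["length_scale"][1])
```

A LinkedIn framework at 5 slots and length scale 4 would fail this check — it is an email-shaped message pushed into a channel where brevity wins.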
Run the campaign for 3–4 weeks before drawing conclusions
The most common Phase 2 mistake is adjusting the campaign in Week 2 because early results are slower than expected. Two weeks is not a reliable signal window. LinkedIn connection acceptance rates take time to develop. Email deliverability improves as the sending infrastructure warms. Measuring email performance at Day 10 produces a number that tells you almost nothing about the campaign’s actual trajectory.
Let the campaign run. Check daily for connection acceptance rates, replies (respond promptly — use the Interactions view’s AI reply suggestions as starting points), and patterns in who’s engaging. Don’t adjust after a few days — look for a consistent pattern across at least two weeks before changing anything.
Phase 2 checkpoint — end of Day 60
- At least one campaign has been running for 3+ weeks (started in Phase 1 or early Phase 2)
- First replies responded to — using the Interactions view AI reply suggestions as starting points
- Calls made where appropriate — don’t rely only on async replies
- Additional lists uploaded and additional campaigns live for new Offerings or segments
- Interactions export pulled — basic pattern review complete (which industries, which roles engaged, what replies say)
Phase 3 — Calibrate and Compound (Days 61–90)
The first 90 days are not about learning a tool. They’re about building the intelligence foundation that the tool runs on.
Phase 3 is where the intelligence layer starts paying compound returns — because you now have real data from a real campaign to improve from. Phase 3 without the prior work is iteration without a real baseline.
By Day 61, you can have multiple campaigns running or completed. Phase 3 is not about launching a second campaign for the first time — it is about reading the patterns across the campaigns you have run, improving your Offerings based on what you learned, and launching new, calibrated campaigns from that foundation.
Read the Interactions layer with intent
Export the full Interactions CSV from your Phase 2 campaigns. Look for three patterns:
- Which vertical or company type had the highest engagement rate? If healthcare buyers replied at twice the rate of manufacturing buyers, your Offering maps more directly to the healthcare buyer’s situation. Build a healthcare-specific Offering (or sharpen the healthcare sections) before your next campaign in that vertical.
- Which role replied most? If VP of IT replied at higher rates than CTO, adjust persona settings on future campaigns — or examine whether the messaging should shift to speak more directly to executive outcomes.
- What did the early replies actually say? Read them. Fifteen replies with a consistent theme is enough to act on. “We already have a partner for this” is a competitive positioning signal — add a competitive differentiation angle to your Offering’s objections section. “Can you give me more detail on how this works in a regulated environment?” is a signal that compliance knowledge isn’t prominent enough in your messaging.
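The first two pattern checks above are straightforward to script once you have the exported CSV in hand. The field names (`vertical`, `role`, `replied`) are assumptions about the export schema, not documented columns — adjust them to match your actual Interactions export:

```python
from collections import defaultdict

# Hypothetical rows parsed from an Interactions CSV export.
rows = [
    {"vertical": "healthcare",    "role": "VP of IT", "replied": 1},
    {"vertical": "healthcare",    "role": "CTO",      "replied": 1},
    {"vertical": "manufacturing", "role": "VP of IT", "replied": 0},
    {"vertical": "manufacturing", "role": "CTO",      "replied": 1},
]

def reply_rate_by(rows, key):
    """Reply rate grouped by one dimension (vertical, role, ...)."""
    totals, replies = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r[key]] += 1
        replies[r[key]] += r["replied"]
    return {k: replies[k] / totals[k] for k in totals}

print(reply_rate_by(rows, "vertical"))
print(reply_rate_by(rows, "role"))
```

The third check — reading what the replies actually say — does not script; do it by hand.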
Improve your Offerings based on what you learned
Make targeted improvements before your next set of campaigns:
- If pain points aren’t landing in specific industries, upload an Artifact that speaks directly to that industry’s specific context
- If objections are appearing consistently in replies, enrich the Objections section with the actual language buyers are using
- If social proof is thin or non-specific, add a case study Artifact with verifiable outcomes and re-run AI enrichment on the Social Proof section
Re-Finalize the Offering after improvements. The version connected to active campaigns is frozen — the improved version becomes the basis for new campaigns. Do not modify an Offering that has active campaigns running against it.
Launch calibrated campaigns from what you’ve learned
Each new campaign from this point should be a deliberate improvement, not a restart. Improve one or two variables based on what the data showed: an updated Offering, adjusted persona settings, or a new targeted list built around the trigger types that produced the best engagement in earlier campaigns.
If patterns showed that one specific vertical dramatically outperformed others, build a dedicated Offering for that vertical. A vertical-specific Offering will consistently outperform a general Offering applied to a narrow list.
Establish your weekly intelligence review rhythm
By Day 90, the platform should be a weekly habit. The rhythm is 20–30 minutes:
- Review new Outreach Ideas generated since the prior week — like or dislike each one
- Check Interactions for replies that need responses
- Review AI SDR activity
- Queue any new lists that are ready for upload
The intelligence layer improves continuously with feedback. Liking and disliking Outreach Ideas trains the model toward the angles that match your actual GTM motion. A team that provides this feedback consistently at the weekly review has a materially better-tuned intelligence layer at Day 180 than a team that provides no feedback at all.
Phase 3 checkpoint — end of Day 90
- Interactions from Phase 2 campaigns reviewed — vertical, role, and reply patterns identified
- Primary Offering improved with at least one targeted refinement (new Artifact, improved section, or additional social proof)
- New calibrated campaigns launched or queued — built on Phase 2 patterns, using improved Offerings
- Weekly review rhythm established — Outreach Ideas feedback, Interactions check, SDR activity review
Four mistakes that plateau teams
Most teams who see disappointing results in the first 60 days have made one of four errors. These are fixable — but finding them early is far cheaper than discovering them after weeks of flat results.
Mistake 1 — Finalizing at the 65% floor
The platform requires 65% confidence to Finalize an Offering. But 65% is a floor, not a target. An Offering that barely clears the gate has passed the minimum bar — it has not necessarily passed the quality bar. Pain Points, Business Outcomes, and Social Proof sections at 65% produce noticeably weaker outreach than those at 75–80%+. The gap is visible in the message preview.
The diagnostic: after Finalizing, review the per-section scores. If any core section is near the floor, the fastest fixes are adding specificity (“reduce claims processing time by 40% by eliminating manual document review” rather than “reduce costs”) and uploading Artifacts with real outcome data. Push the Offering to 72%+ before running volume campaigns against it.
Mistake 2 — A generic Why Now
The Why Now is the single most under-worked input in campaign setup. The diagnostic: read your Why Now in isolation, without looking at your Offering. Does it describe a specific, observable situation in the buyer’s world and explain why that situation creates urgency right now? Or does it describe your capabilities in slightly different language from the Offering?
The Why Now needs a situation, a consequence, and a timing hook. When all three are present, the messaging framework converts them into a CTA that reads as timely. When any is missing, the output reads as a template regardless of how strong the Offering is.
Mistake 3 — Not rating Outreach Ideas
Liking and disliking Outreach Ideas is the primary mechanism by which the intelligence layer learns what you actually sell and to whom. It is not optional feedback — it is how the system calibrates to your GTM motion. Teams that skip the weekly Outreach Ideas review are running a system that cannot improve, because it has no feedback to improve from.
The weekly review takes 5–10 minutes. Like angles that match your current targeting; dislike angles that miss the mark. At Day 180, a team that has done this consistently has a materially better-tuned system than one that hasn’t. The gap compounds in both directions.
Mistake 4 — Calibrating before the data is readable
A campaign that has been live for two weeks does not have readable patterns. LinkedIn connection acceptance develops over time. Email deliverability improves as the sending infrastructure warms. First replies appear well before enough interaction volume has accumulated to reveal real patterns across verticals, roles, and Why Now angles.
The reliable calibration signal is trajectory across campaigns: are later campaigns outperforming earlier ones, after the Offering improvements from Phase 2 data have been applied? If yes, the intelligence loop is working. If performance is flat across multiple campaigns on an unchanged Offering, the diagnosis is usually upstream — an Offering that hasn’t been refined, or a Why Now that’s still generic. Fix the input before expanding the volume.
Measurement — what good looks like at each phase
The warmup curve
Campaigns launched in Phase 1 start at reduced daily send volume while the AI SDR’s sending reputation builds. Volume is lowest in the first week, ramps through days 7–14, and reaches full capacity (~1,000 leads/month per AI SDR) once warmup completes around Day 14–15. “Total leads reached” in Phase 1 will reflect this warmup pace — not full AI SDR capacity. This is by design. The ramp protects long-term deliverability.
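The ramp can be sketched as a simple step model. The weekly capacity fractions (25%, 60%) are illustrative assumptions — only the ~two-week warmup window and the ~1,000 leads/month full capacity figure come from this playbook:

```python
def daily_capacity(day: int, full_monthly: int = 1000) -> float:
    """Approximate daily send capacity during AI SDR warmup (step model)."""
    full_daily = full_monthly / 30
    if day <= 7:
        return full_daily * 0.25  # week 1: lowest volume (assumed 25%)
    if day <= 14:
        return full_daily * 0.60  # days 7-14: ramping (assumed 60%)
    return full_daily             # warmup complete around Day 14-15

# Leads reached in the first 30 days under this model -- noticeably
# below a naive full-capacity estimate of 1,000.
print(round(sum(daily_capacity(d) for d in range(1, 31))))
```

Under these assumed fractions, a Phase 1 campaign reaches roughly 70–75% of naive full-capacity volume in its first month — which is why “total leads reached” looks low early and why that is not a performance signal.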
On reply rate and what to expect
Industry averages for B2B cold outreach run 1–3% across most verticals. Research-led outreach — with a well-structured Offering, targeted list, and calibrated persona — consistently outperforms this range. But early campaign results are not the reliable indicator of where your outreach performance will settle.
The reliable indicator is trajectory: are later campaigns outperforming earlier ones, after the Offering improvements from Phase 2 data have been applied? If yes, the intelligence loop is working and the platform is compounding. If campaigns are flat on an unchanged Offering, the diagnosis is usually upstream — an Offering that hasn’t been refined, or a Why Now that’s still generic. Fix upstream before expanding volume.
Summary and your next 7 days
The 90-Day Intelligence Rollout
- Phase 1 (Days 1–30): Intelligence Foundation. Profile accurate, Artifacts uploaded, two Offerings Finalized, AI SDR created and warming. First campaign launched once warmup completes and the first Offering is Finalized.
- Phase 2 (Days 31–60): First Intelligence Cycle. First list enriched against a Finalized Offering. First campaign live. Interactions read, patterns noted, first replies responded to.
- Phase 3 (Days 61–90): Calibrate and Compound. Patterns from Phase 2 campaigns reviewed and translated into Offering improvements. New calibrated campaigns launched with sharper targeting. Weekly review rhythm established.
Your next 7 days
- Audit your company profile. Does it accurately describe what you actually sell, at the specificity that generates relevant outreach?
- Upload your first 2–3 Artifacts. Your strongest case study, your capabilities document, your best proof point material. Quality over quantity.
- Create your AI SDR. The warmup clock starts when you create it, not when you remember to. Two weeks of warmup you delay is two weeks of campaign capacity you lose.
- Build your first Offering. Wyra builds the sections from your profile and content. Review, sharpen anything too generic, Finalize. Done in minutes.
Apply this framework in your organization
See how Wyra’s GTM Intelligence Layer puts this into practice for ecosystem partners.
Book a Demo