Two days into the trial. Account access is in, the data is pulling cleanly, and I've spent today getting oriented. Headline read: the account is healthy at handover — mature pixel, clean exclusions, full-funnel events firing, and a real creative production pipeline behind it. You're also mid-restructure: the JR-prefixed campaigns launched 1-9 days ago and are ramping while the older Persist | Testing winds down. Good time for me to show up.
This document is observations + questions, not recommendations. Before I push any tactical move on a $1.5K/day account I want your read on a handful of things — especially what we're actually optimizing for (first-purchase ROAS doesn't tell the real story on a subscription with retention + ascension behind it). My read is below, grouped into what I'm seeing, what I want to discuss, and what I need from you to land hard recommendations.
This is Part 1 of 2. Part 1 (this doc) is the operations side — account architecture, performance, audiences, pixel, campaigns, hygiene. Part 2 is a separate pass on creative, landing pages, and messaging — watching every active video, walking the funnel as a buyer, reviewing copy and brand voice across touchpoints. That's a different kind of work and warrants its own document.
Update: walked the LP, checkout, and pixel events myself. Confirmed /pages/persist as the destination for the active 1002 ads. Section 4 is compressed from 17 questions to 5, and the Section 5 plan is updated accordingly.
Front-end ROAS at 0.52 is consistent with what you said about Persist running at or near break-even on first purchase. $110 cost per purchase against a $39/mo subscription works out to ~2.8 months to recoup, provided average retention runs at least that long. The real evaluation lives in the retention curve and Platinum ascension — which I don't have visibility into yet (see Section 4, question 2).
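Spelled out, since this ratio drives everything downstream (a deliberate simplification: flat $39/mo revenue, no churn inside the window, no Platinum upside):

```python
# Payback arithmetic from the numbers above. Simplifying assumptions:
# flat $39/mo revenue, no churn inside the window, no Platinum ascension.
cpa = 110.0          # cost per first purchase (30-day pull)
monthly_rev = 39.0   # subscription price
payback_months = cpa / monthly_rev
print(round(payback_months, 1))  # 2.8 -> retention must clear ~month 3 to go positive
```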
FBB Pixel has been running since Nov 2017 and last fired 2026-04-28. Standard events fire across the funnel (PageView, ViewContent, AddToCart, InitiateCheckout, AddPaymentInfo, Purchase). Custom events NCPurchase and Renewal are firing and pullable via the Ad Account API — that's the closest thing to LTV-adjusted reporting we can get without backend access. Real conversion data flowing to the algorithm, not synthetic.
Quick aside: I'm reading NCPurchase as "new customer purchase" — the trial-to-paid event — but I want to confirm with you that's what it is.
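For reference, here's the shape of the pull I mean, using the plain Graph API via requests. The API version, token, and account ID are placeholders, and the exact action_type strings Meta assigns to custom events vary, so the loose filter below is deliberate; worth printing the full actions list once before trusting it:

```python
import requests

# Placeholders: swap in the real token and account ID.
ACCESS_TOKEN = "<ACCESS_TOKEN>"
ACCOUNT = "act_<AD_ACCOUNT_ID>"

resp = requests.get(
    f"https://graph.facebook.com/v19.0/{ACCOUNT}/insights",
    params={
        "fields": "actions,action_values",  # event counts + dollar values
        "date_preset": "last_30d",
        "level": "account",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for row in resp.json()["data"]:
    for kind in ("actions", "action_values"):
        for action in row.get(kind, []):
            # Custom events surface alongside standard pixel events.
            if "NCPurchase" in action["action_type"] or "Renewal" in action["action_type"]:
                print(kind, action["action_type"], action["value"])
```

The action_values side of this same call is what I'll use for the NCPurchase value-distribution check in Section 5.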
Every active sales adset excludes Klaviyo Purchasers, Purchase 180d, NCPurchase Conversions 180d, Purchase 30, and Website 180d Purchase, plus a recent file-based suppression list. That's the cleanest exclusion stack I've seen in a while — cold traffic stays cold, and we're not paying to re-reach people who already bought.
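If we want this audited programmatically rather than by eyeballing Ads Manager, a sketch (same placeholder token/account assumptions as above; excluded_custom_audiences is the standard key in the adset targeting spec):

```python
import requests

resp = requests.get(
    "https://graph.facebook.com/v19.0/act_<AD_ACCOUNT_ID>/adsets",
    params={
        "fields": "name,targeting",
        "effective_status": '["ACTIVE"]',
        "access_token": "<ACCESS_TOKEN>",
        "limit": 100,
    },
)
resp.raise_for_status()
for adset in resp.json()["data"]:
    # An empty list here = an active adset missing the exclusion stack.
    excluded = adset.get("targeting", {}).get("excluded_custom_audiences", [])
    print(adset["name"], "->", [aud.get("name", aud.get("id")) for aud in excluded])
```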
1001 as the main scaling line ($350/day, COST_CAP, 2 broad adsets). 1002 ABO as a per-adset creative testing layer ($25-$135/adset, each named for its concept). BOF retargeting on Website 30d + FB/IG 30d engagers. That's a reasonable cold + warm split, and the ABO setup gives you per-creative spend control without ASC over-pooling. Clean architecture.
Persist | Testing cycled ~24 distinct angles in the last 30 days — counter-position hooks (Don't Want It, Hate Being Strong, Worked Not Crushed, Signs Train Too Much), listicles (5 Signs, 1 of 10 for fat loss, Best Longevity), methodology demos (Weekly Schedule BnB, 3-60-10 Grid), outcome promises (Summer is Coming, You Look Happier, Then v Now), and physiology angles (HRV). Naming convention has batches in the 380s-400s — that's a real production pipeline behind this account, not a recycled creative set.
Saved audiences on file: strength interest stacks (Muscle&Strength, Physical strength, Weight training, Strength training, Physical exercise), functional fitness interests, a 1% LAL on NCPurchase 180d combined with Marcus Filly Followers, MOF retargeting on the Big 4 (US/CA/GB/AU), and broad males Big 4. A solid audience library to run from — nothing missing on the targeting side.
What I see: Persist | Testing burned $19,989 over the last 30 days for 188 purchases at 0.36 ROAS (~$106 cost per purchase). That's by far the largest active sales spend bucket and the lowest-ROAS one. It's also roughly 6x the JR scaling spend over the same window ($2,629 + $381 + $261 = ~$3,271 across the JR campaigns).
What I'm thinking: Looks like a creative testing engine more than a scale campaign — the 25-adset structure and rotating angles fit that pattern. But I want to understand how it actually feeds the JR campaigns before assuming.
Want to understand: Is this intentional testing budget? How do testing winners graduate into the JR scaling campaigns? At what testing-to-scaling ratio should this stabilize once JR is at full ramp? Spend trajectory shows it's already dropping (from ~$666/day average to ~$210/day in active adsets) — just want to confirm the wind-down is the plan.
What I see: At least 5 Persist LP variants are publicly live: /persist, /persist-by-marcus-filly, /persist-pump-lift-by-marcus-filly, /persist-training-program (which introduces a $99/3-month tier), /persist-weo (Welcome Offer with bonus ebooks), plus the Yard partnership at $230-$280/mo via goyard.fit.
What I'm thinking: Different price anchors and offer structures across LPs. Returning visitors could hit pricing dissonance depending on which they land on.
Update April 30: Verified via creative pull that 1002 JR ABO ads point to /pages/persist (canonical). Other variants likely test-history. Will verify same for 1001 and BOF over next 1-2 days.
What I see: The Marcus Filly Page (172K fans) is connected, but querying Instagram accounts on the ad account returns empty. Marcus has @marcusfilly with ~1M IG followers, but it's not associated with the ad account directly.
What I'm thinking: Could be three things: (a) Page-level association is handling the IG presence (some setups work this way), (b) intentional separation between the personal IG and the brand ads account, or (c) an oversight from a past migration.
Want to understand: Is this intentional? It affects how cleanly IG-native creative tests will run and how IG placements attribute. Want to confirm before doing anything IG-specific on creative.
What I see: 1001, 1002 ABO, and BOF retargeting all use COST_CAP. Persist | Testing uses LOWEST_COST_WITHOUT_CAP.
What I'm thinking: At ~$1.5K/day, Cost Caps tend to spend hand-to-mouth — the auction-tier benefit (where progressively more competitive impressions fill in) really kicks in at higher daily volume. If you're prepping for a step-up in spend, that's a different conversation. If we're holding here, LOWEST_COST_WITHOUT_CAP on the proven adsets might give the algorithm more room.
Want to understand: What's the thinking behind the COST_CAP choice on the JR campaigns? Is it pre-installed for a planned scale-up, or a default carried over? Either is fine — affects what I recommend on bid strategy.
What I see: Every active sales adset targets US. The saved audiences include Big 4 geos (US/CA/GB/AU and US/CA/AU/NZ) but nothing's currently running outside the US. Marcus's brand has international pull (1M IG followers, podcast distribution beyond US, content not geo-locked).
What I'm thinking: Could be based on prior testing where non-US underperformed, could be a $39 USD subscription perceived-price issue in CA/AU/GB, could be backend fulfillment friction, or could just not have been tested recently.
Want to understand: Any historical non-US testing results I should know about before considering geo expansion as a volume lever?
1. Goal structure — already asked via Slack. Volume target (X subs/mo)? CPA threshold (push gas while sub-CPA stays at $Y)? Or stage-gated (hit the profitability bar first, then scale)? Affects every scaling decision I'd make.
2. Triple Whale viewer login — can I get one? 17 adsets optimize on NCPurchase, which fires day-14 when monthly trials convert. That's outside Meta's 7-day click attribution window, so Meta's columns understate per-ad ROAS for those 17. I want to reconcile against TW before recommending anything on scale or cut. Also helpful to know your selected attribution model and whether Sonar is running.
3. Creative production — who's making it (you, in-house, outside producer) and at what cadence? The 380s/400s batch naming suggests a real pipeline. Want to sequence testing without stepping on what's already in flight.
4. Dead angles — anything already proven dead in the library that I shouldn't retest? Just the obvious ones, if any come to mind.
5. Non-Marcus presenters — are the Persist Platinum coaches (Dallas, Rachel, Adam, Dominic, Erin, Alex, John) in or out of brand bounds for ads? 5signs_Org uses one already, so I'm assuming yes, but I want explicit confirmation before scaling that pattern.
Held back: profit-vs-growth mode, spend trajectory, end-of-trial outcome, COST_CAP intent, IG architecture, geo testing history. Most flow from your answer to #1; the rest I can pull from the data without you.
NCPurchase value distribution — pull insights with action breakdowns to confirm what each pixel event represents at the dollar level. Answers the last open measurement question without needing you.
Verify destination URLs across all active campaigns — 1001 JR Scaling, BOF Retargeting, Content Cycle. Confirmed /pages/persist for 1002 JR ABO; want to make sure no adset is pointing to a stale LP variant.
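A sketch of that pull (placeholders as before). Link URLs usually sit in the creative's object_story_spec; dynamic/flexible ads keep them in asset_feed_spec instead, so the fallback below is an assumption worth checking against one known ad first:

```python
import requests

resp = requests.get(
    "https://graph.facebook.com/v19.0/act_<AD_ACCOUNT_ID>/ads",
    params={
        "fields": "name,creative{object_story_spec,asset_feed_spec}",
        "effective_status": '["ACTIVE"]',
        "access_token": "<ACCESS_TOKEN>",
        "limit": 200,
    },
)
resp.raise_for_status()
for ad in resp.json()["data"]:
    creative = ad.get("creative", {})
    link = creative.get("object_story_spec", {}).get("link_data", {}).get("link")
    if not link:  # dynamic/flexible creatives store URLs in asset_feed_spec
        urls = creative.get("asset_feed_spec", {}).get("link_urls", [])
        link = urls[0].get("website_url") if urls else None
    print(ad["name"], "->", link)
```

Anything printing a stale LP variant gets flagged.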
90-day historical performance on past Persist | Testing creatives — surface more redeployment candidates beyond the two already identified (BestWorkoutFor-Retest, SummerisComing).
Placement + demo + frequency breakdowns — verify the male-skew thesis is playing out in the data and spot any saturated adsets buried at 4+ frequency.
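Sketch for those pulls. Meta limits which breakdowns combine in one call, so it's one request per dimension; frequency comes back cleanly at the adset level without breakdowns:

```python
import requests

BASE = "https://graph.facebook.com/v19.0/act_<AD_ACCOUNT_ID>/insights"
COMMON = {"date_preset": "last_30d", "access_token": "<ACCESS_TOKEN>"}

# Demo split: does the male-skew thesis hold in delivered spend?
demo = requests.get(BASE, params={**COMMON, "level": "adset",
                                  "fields": "adset_name,spend",
                                  "breakdowns": "age,gender"}).json()

# Placement split.
placement = requests.get(BASE, params={**COMMON, "level": "adset",
                                       "fields": "adset_name,spend",
                                       "breakdowns": "publisher_platform"}).json()

# Frequency: flag anything saturated at 4+ over the window.
freq = requests.get(BASE, params={**COMMON, "level": "adset",
                                  "fields": "adset_name,frequency"}).json()
for row in freq.get("data", []):
    if float(row["frequency"]) >= 4:
        print("saturated:", row["adset_name"], row["frequency"])
```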
Reconcile against Triple Whale — once viewer access lands. Compare Meta's reported per-ad ROAS to TW's cross-window attribution before any scale/cut recommendations.
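The comparison itself is mechanical once both exports exist; a minimal sketch assuming per-ad CSV exports (file and column names are hypothetical, adjust to the real exports):

```python
import pandas as pd

meta = pd.read_csv("meta_per_ad.csv")        # ad_id, spend, meta_revenue
tw = pd.read_csv("triple_whale_per_ad.csv")  # ad_id, tw_revenue

df = meta.merge(tw, on="ad_id", how="left")
df["meta_roas"] = df["meta_revenue"] / df["spend"]
df["tw_roas"] = df["tw_revenue"] / df["spend"]
# Positive gap = conversions Meta's 7-day window missed (e.g. day-14 NCPurchase).
df["roas_gap"] = df["tw_roas"] - df["meta_roas"]
print(df.sort_values("roas_gap", ascending=False).head(10))
```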
Part 2 is the creative + landing page + messaging audit. LP walk is mostly done as part of the update above; remaining is the per-ad creative review (every active video, visual quality, hook strength, message fit), the email follow-ups, and brand voice across touchpoints. Will scope as a separate doc.
Scaling: depends on goal structure (Q1) + the Triple Whale reconciliation (Q2). Once both land I can propose a specific scaling cadence.
Creative retest/retire: depends on production cadence (Q3) + the dead-angle list (Q4). I'll have the historical performance numbers within 1-2 days; the retest/retire call needs your context.
Presenter expansion: depends on Q5 (non-Marcus presenters in or out of brand bounds). 5signs already validates the pattern; I want an explicit go-ahead before scaling it.
Everything else: lower priority. For most of these I can pull the historical data first, then bring you a recommendation rather than a question.