PROMPT This AI Challenge
"Agentic AI Outcomes: Are You Measuring What Matters?" featuring Martin Schneider
Every episode of the PROMPT This podcast includes an AI Challenge for the audience. Follow the instructions below to complete this episode's challenge.
Every leader says they want discipline.
Very few actually want objective scrutiny.
The 5-Deal DQ Check started in sales. But the underlying principle applies to marketers managing campaigns and project managers running delivery portfolios.
It is a 20–30 minute operating discipline that exposes optimism bias in revenue, pipeline, campaigns, and execution plans.
The premise is simple: AI has no career risk, no sunk cost bias, and no attachment to “work already done.”
If your team cares about predictable outcomes, this becomes a weekly ritual.
The AI Challenge: The 5-Initiative Reality Check
Time Required: 20–30 minutes
Inputs Needed: An export of 5 active deals, campaigns, or projects, plus your AI tool of choice
Select five active initiatives that:
- Carry material revenue, budget, or strategic weight
- Are expected to close, launch, or complete this quarter
- Represent different segments, channels, or teams if possible
Then paste the following into your AI tool:
For Sales:
- Deal summary
- Discovery notes
- Close date
- Deal size
- ICP criteria
For Marketing:
- Campaign objective
- Target persona
- Budget and channel mix
- Performance to date
- Conversion assumptions
For Project Management:
- Project charter or scope summary
- Stakeholders and sponsor
- Timeline and milestones
- Budget status
- Defined success criteria
Then run this prompt:
Should we continue pursuing these initiatives? Identify missing qualification or planning elements, risk signals, and recommend continue, fix, or stop.
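If your initiative data lives in a spreadsheet or CRM export, you can assemble the paste programmatically rather than by hand. A minimal sketch — the field names and sample record are illustrative, not a required schema:

```python
# Sketch: render five initiative records into the reality-check prompt.
# Field names and sample data are illustrative placeholders.

PROMPT = (
    "Should we continue pursuing these initiatives? Identify missing "
    "qualification or planning elements, risk signals, and recommend "
    "continue, fix, or stop.\n\n"
)

def build_prompt(initiatives):
    """Render each initiative's fields as labeled lines, then prepend the ask."""
    blocks = []
    for i, item in enumerate(initiatives, start=1):
        lines = [f"Initiative {i}: {item['name']}"]
        lines += [f"  {field}: {value}" for field, value in item["fields"].items()]
        blocks.append("\n".join(lines))
    return PROMPT + "\n\n".join(blocks)

deals = [
    {"name": "Acme renewal", "fields": {
        "Deal summary": "Expansion of existing seat count",
        "Discovery notes": "Champion enthusiastic; economic buyer not met",
        "Close date": "2025-09-30",
        "Deal size": "$180k",
        "ICP criteria": "Mid-market SaaS, 200-1000 employees",
    }},
]

print(build_prompt(deals))
```

Paste the output into whatever AI tool you already use; the point is consistent, complete inputs, not automation.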
That is the entire challenge.
No new framework. No software implementation. Just exposure.
What AI Will Immediately Surface
1. Hope-Driven Initiatives
In Sales:
- Vague problem statements
- No quantified impact
- Close dates slipping
- Notes heavy on enthusiasm
In Marketing:
- “Awareness” goals without defined conversion math
- Assumed lift with no baseline
- Spend continuing despite flat performance
In Project Management:
- Scope creep justified as “minor”
- Timeline risk ignored
- Success criteria loosely defined
In manufacturing, this might look like a plant automation project justified by “efficiency gains” with no cost model.
In a SaaS company, it might be a demand-gen program that “feels strong” but produces low-quality leads.
In a services firm, it might be a transformation project where the sponsor is excited but frontline adoption is unclear.
AI flags narrative without evidence.
2. Decision Authority and Sponsorship Gaps
In Sales, this shows up as:
- No confirmed economic buyer
- No documented decision process
- No budget authority clarity
In Marketing:
- No executive sponsor for the campaign
- No alignment with revenue leadership
- No clarity on who owns pipeline impact
In Projects:
- Weak executive sponsorship
- Stakeholders misaligned on scope
- No escalation path for tradeoffs
Across products, manufacturing, and services companies, the pattern is consistent: initiative energy without decision authority.
AI surfaces this structural fragility, and it is hard to ignore once you see it summarized cleanly.
3. ICP or Strategy Drift
You provide your ICP or strategic criteria. AI cross-checks reality.
Common patterns:
- Target segment outside your sweet spot
- Use case outside core value proposition
- Project misaligned with annual strategic priorities
- Campaign chasing volume instead of ideal personas
In a capital equipment manufacturer, that might mean chasing sub-scale facilities outside your profitable service model.
In a software company, it might mean launching features for edge cases.
In a consulting firm, it might mean taking on clients without internal capacity to implement recommendations.
You see where execution violates strategy.
Why This Works
Humans rationalize.
- Sales teams rationalize quota pressure.
- Marketers rationalize sunk media spend.
- Project managers rationalize timelines to avoid escalation.
Leaders rationalize most of all because they need the forecast to hold.
AI does not care about:
- End-of-quarter optics
- Campaign pride
- Political capital
- Status meeting narratives
It evaluates patterns.
When you line up five initiatives side by side, structural weaknesses repeat:
- Missing qualification
- Inflated timelines
- Undefined success criteria
- Strategy misalignment
This does not replace judgment. It removes first-pass bias.
How to Operationalize It
If you want this to become part of your operating system:
- Run it weekly on the same five initiatives
- Track how often AI recommends stop, fix, or continue
- Compare recommendations to outcomes 90 days later
- Review AI stop calls openly in pipeline or portfolio reviews
If AI consistently flags dead deals, broken campaigns, or failing projects early, you have a discipline gap.
If AI is frequently wrong, your input data is weak.
Either outcome is instructive.
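The tracking step above can be kept as a simple scorecard. A minimal sketch, assuming you log each weekly AI recommendation alongside the initiative's eventual outcome (the labels and sample history are illustrative):

```python
# Sketch: compare AI stop/fix/continue calls against 90-day outcomes.
# Labels and records are illustrative; adapt to your CRM or PM tool export.

from collections import Counter

def scorecard(records):
    """records: list of (ai_recommendation, actual_outcome) pairs.
    Tallies how often AI 'stop' calls preceded failures, and how often
    'continue'/'fix' calls preceded successes."""
    tally = Counter()
    for rec, outcome in records:
        if rec == "stop":
            tally["early_stop_hit" if outcome == "failed" else "early_stop_miss"] += 1
        elif rec in ("continue", "fix"):
            tally["go_hit" if outcome == "succeeded" else "go_miss"] += 1
    return dict(tally)

history = [
    ("stop", "failed"),
    ("stop", "succeeded"),
    ("continue", "succeeded"),
    ("fix", "failed"),
    ("continue", "succeeded"),
]
print(scorecard(history))
```

A high rate of early stop hits points to the discipline gap described above; a high miss rate points to weak input data. Either way, the number gives the 90-day review something concrete to discuss.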
AI Prompts
Use these to deepen the exercise:
- Evaluate these five initiatives for structural risk and recommend continue, fix, or stop
- Analyze decision authority gaps across these deals or projects
- Score these campaigns against our ICP and strategic priorities
- Identify forecast or timeline risk patterns across these five initiatives
- Compare AI stop recommendations to our current forecast or delivery status
Final Thought
Outcome quality matters more than activity volume.
The 5-Initiative Reality Check forces leaders in sales, marketing, and project management to confront whether their pipeline, campaigns, and roadmaps are built on evidence or optimism.
AI has no commission bias. It also has no ego.
If your team cannot tolerate this level of scrutiny, your forecast, your funnel, or your roadmap is already fiction.