AI Announcer (00:10)
Alright, listen up folks. Welcome to Prompt This, yeah? This ain't no fluff, no hype, just straight talk for business folks sick of all that AI mumbo jumbo. I'm your voice this week, getting you warmed up, capisce? Now Greg, he's the guy you call when you wanna scale your sales teams and crack that Silicon Valley code. And Clint? Clint's the startup wizard, taking big ideas and turning them into real money makers. These two, they cut through all the noise,
bringing you real deal strategies on how to use AI to launch new ideas, grow your business and stop getting left in the dust. So strap in folks, because here come your hosts.
Clint (00:50)
Hi everyone, this is Clint.
Greg (00:52)
This is Greg.
Clint (00:54)
So everyone's deploying AI agents these days, but almost no one is measuring whether they actually work. Our guest today is an AI industry analyst from one of the big analyst research firms, and he shares what he's seeing in the marketplace with us.
Greg (01:10)
So this was really interesting. I've been trying to pin down everybody that's come on this podcast and ask, what is agentic AI really giving you in productivity gains? He's the first person that's really measured it across a ton of customers and come back with an answer. It blew me away.
Clint (01:27)
He's got the numbers to back it up. Let's listen to him.
Welcome to Prompt This. Excited to have everybody here today. We have an outstanding guest. He is Martin Schneider, Vice President and Principal Analyst at Constellation Research. He's joining us to unpack five KPIs that actually measure agentic AI's go-to-market impact. We're going to dig into agentic AI today with an expert. Welcome to the show, Martin.
Martin Schneider (01:59)
Thanks for having me, guys.
Greg (02:01)
All right. Wow. This is a big moment for us here at Prompt This. We have a real AI analyst. Clint usually plays our analyst, our almost-analyst, wannabe analyst, whatever you want to call it, but sorry, Clint, you've got to move over. We've got a real industry analyst here.
Clint (02:19)
That's right. You've got to make room for the big brains coming into the room here today. We typically talk to operators, kind of focused in on their specific experience and their specific journey. And Martin, I'm just looking forward to hearing your point of view, since you're talking to lots of people, right?
Martin Schneider (02:36)
Yeah, sure. As someone who was an operator, I've been on both sides of the fence: the analyst side, but also the go-to-market side. It's been interesting going into this analyst role as all this is changing, because I get to sit back and see how all these go-to-market leaders are freaking out trying to put all this stuff into production. Putting this last report together, this big idea report, was great, because I was at a lot of vendor events in the agentic world, CRM world, CX world, go-to-market tech world, talking to their customers, as well as talking to our own, what we call our user org customers, to put this together. It was pretty enlightening in a lot of ways. And it was a lot of fun to put together because, as a go-to-market practitioner as well as an analyst in my life, I could commiserate as well as learn.
Greg (03:26)
Oh, definitely. We've had so many people on, and agentic AI comes up in almost every conversation: the workflows, how they're doing it. And we've been hearing two sides of the story over and over and over: either it's not ready for prime time yet, or it's amazing and it's life changing.
Martin Schneider (03:48)
Yeah. So you have the timid and the liars.
Greg (03:51)
See, that's what I thought, Clint.
Martin Schneider (03:55)
Yeah. I mean, if you're that far on the enthusiasm scale, I'm waiting for a shoe to drop, right? Because either you're about to hit some expenses and costs that might not cover the outcomes, or you just have blind enthusiasm at this point. But if you're too timid, you run the risk of falling behind your peers and competitors in the space. So you've got to be somewhere in the middle, I think, is the answer. And that's what I've learned from talking to all these practitioners.
Clint (04:24)
There you go. We're all about busting through the hype, and there's definitely a lot of hype out there around agentic AI. Just before we jump in, let's bring it home for our audience right out of the gate, Martin. What is the one thing that they're going to learn today that you want to make sure they walk away with?
Greg (04:31)
Sure.
Martin Schneider (04:43)
I think it's more than one thing, it's a few things, but they all come together, right? With agentic AI, less is more. You can leverage multiple agents to cover a lot of use cases, and focusing on jobs to be done with a less-is-more approach will actually prove way more successful than a "there's an agent for that" approach. When you think of every little micro-process, every little micro-job, as a one-to-one mapping, you're going to get back into the old path of point-to-point integrations that go crazy, versus thinking of it in an iPaaS kind of world.
Clint (05:19)
Well, let's dig into that topic of measurement, one that I actually hear a lot of people not put much thought into at the front end of their AI journey. So tell me about this research report that you recently put out. Real high level, what's the point of it? Let's dig in.
Martin Schneider (05:38)
Yeah, so it's really about what are the KPIs that really matter, right? Because what we heard from a lot of our end users is: we're getting mandates, or we're trying to prove our value one way or the other, to show we know what we're doing when it comes to AI, but how do we prove it? Because budget is still kind of amorphous and a question mark, like, how much is this going to be? Is it worth it? How do you actually set the right baselines and then measure the right KPIs against your first batch of agents, let's call it, to prove value and then move to your next phases?

So I really tried to take a highly focused approach to five key areas around go-to-market, and what they should be. And what I learned from talking to the customers who were having the most success was that the areas weren't as obvious. It wasn't just sales, marketing, support, customer success, things like that. They were much more around outcomes and outputs than around what I would call departments, or the major phases of the life cycle. So when you look at some of them, you see they become amorphous. Yes, conversion rates, but that's conversion rates across the entire funnel. You're looking at throughput and efficiency.
Greg (06:58)
I ran a big revenue engine, and we had a couple of these that we used to call levers. We'd pull the levers and see which ones we could juice at the end without killing the other areas. But before we do that, you said something that was very interesting. You said replacing humans with AI is a big topic. What's your actual point of view here?
Martin Schneider (07:19)
I don't think, at this phase we're at, any agent will replace a human 100%. For a lot of different reasons, right?
Greg (07:28)
Sounds like Martin's on the same team as we are: AI is not going to replace a human 100% in any of the seats.
Martin Schneider (07:36)
And the people who have done it have paid the price already, right? I'm not going to name names, but I will say that users of popular customer support products who have tried to go from like 30% deflection rates to like 90% have seen higher churn, and higher churn means you've got to go back.
Greg (07:59)
Yeah, those are the levers we were talking about. Don't destroy the other pieces, right?
Martin Schneider (08:07)
Exactly. So you're affecting other levers in a negative way without even thinking about it. These are knee-jerk reactions to reducing staff. And then the people who have instead armed CSMs and customer service reps, CSRs, with the right tools, and have augmented the handoff from lead development, still humans, to AEs, account executives, also humans: they're the ones who are succeeding, right? Because they're the ones who are nurturing the process at scale. They're leveraging AI BDRs to find, of those 12,000 leads sitting there in our nurture bucket, which might be the four good ones.
Greg (08:51)
Right. Yeah. We always talked about that. Let's take it over to revenue, Martin. When you were talking about efficiency and throughput, that KPI got my attention really fast. Because before AI, these were the things I was tasked with: how to get the same number of humans on my team to execute at SaaS growth rates. That was my job. And the only way to do that... we got to a point where we were at one hundred percent production possibility for the humans, so we needed systems at that point. So when I was reading through your report and you were talking about the multiplier effect, the automation multiplier effect, those were things we were just talking about. We could only see marginal gains with the systems we had. What kind of gains are you seeing?
Martin Schneider (09:53)
Yeah, it's about 40%, right? And that's significant when you're looking at time to qualify a lead now. 40%, and some are even higher, because it all depends on how you're classifying the lead. Like I said, finding that needle in the haystack: what we thought might've been a dead lead which was actually a good lead associated with maybe another division, things like that. There are all these verbatims, which I won't get into, that helped me put the report together.

Or if it's simpler, we've got a decent lead and we're doing bad discovery, that 40% is kind of critical. Because again, one of the hardest things to source is a good human BDR, right? Let's be honest about stuff here: who typically gets those jobs? People right out of school. They don't have a lot of sales experience. They're eager, they want to learn, and they're willing to make a hundred phone calls a day,

but they don't know everything about your company. They don't know everything about the product. When you augment that with an AI BDR that can know everything about your product, works 24/7, is multilingual, is all those things, 40% is a conservative estimate. And the idea of being able to shave a ten-hour-to-two-day lead discovery process down

to six hours or only one day, that's huge, right? Because then you get to that next phase of conversion, which I talk about in the next KPI on the list. You're optimizing your conversion, because not only are you doing more volume, more inside the throughput, but you're more effective. You're actually DQing faster.

Yeah, and I've got a theory about DQing, not a theory exactly, but I'll talk a bit about that with some of the other tools I think people should be thinking about, around how important it is to DQ. Because how many times have you been in a QBR and someone's talking about a deal, and once you do inspection, you're like...

There are two levels of which deals you should DQ, right? Either literally no discovery has been done, or discovery's been done and we don't even do what these guys are asking for. So how much of that is just trying to fill pipe? What are you doing? Why are you wasting everybody's time?
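The time savings Martin cites work out in simple arithmetic; here is a quick sketch, where the before-and-after hours come straight from his examples and the two-day case assumes eight working hours per day:

```python
def pct_reduction(before_hours: float, after_hours: float) -> float:
    """Percentage reduction in lead-discovery cycle time."""
    return round((before_hours - after_hours) / before_hours * 100, 1)

# Martin's examples: a 10-hour discovery process cut to 6 hours,
# and a 2-day process (16 working hours) cut to 1 day (8 hours).
print(pct_reduction(10, 6))   # 40.0, matching the ~40% gain he measured
print(pct_reduction(16, 8))   # 50.0, the two-day-to-one-day case is even larger
```

So his "40% is a conservative estimate" holds: the longer the manual process, the bigger the relative gain.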
Greg (12:24)
That's one of the biggest skills we teach: you've got to get the junk out of the pipeline, and out of the lead bucket.
Martin Schneider (12:30)
There's subtlety to that, where you get to that next level of differentiation on product and what's a winnable deal, where a human BDR, even after looking at all this other stuff and doing some BANT discovery back and forth over chat or email, might push that lead forward, and it might look good.
Greg (12:49)
Here's a thought: an AI BDR doesn't have to deal with hope. An AI BDR doesn't get commission if the deal moves.
Martin Schneider (12:58)
It's the idea of driving a car: all you have to do is give it gas. Versus if you have a horse, the horse might get distracted and throw you off, or it wants to stop for a carrot, all these things. It's the same idea. You can much more predictably, or at least you can, scale these and understand these and look at these. Now, again, you have to learn from them and tweak, and they will be self-learning. But at the same time, you're augmenting the human element, and even then it's not

perfect. But if you can, at every phase of the process, eliminate unwinnable deals and optimize winnable deals, that gives you... it all goes back to what I call the question of what the AI-powered sales org looks like. What is the goal? It's fattening your operating margins, by both discovering more deals through AI and

lowering your cost through efficiency and throughput, right? Because if you're doing both of those, and that's how you have to think about it, you have more deals coming in because you're understanding and optimizing how you get to that SQL, and then you start attacking those. And at every phase of the life cycle, if it's more efficient and cost-effective, you're lowering your cost to sell, and all your COGS go down.
Greg (14:18)
Right, yeah.
Martin Schneider (14:21)
of, cause again, you want fat pipeline, but you also want fat operating margins, right? I didn't even get into the revenue intelligence side of things. I've got another report on that, but, know, but that's the whole thing, right? It's, really an exciting time because we can actually start to do that. And that's where, you know, just taking the early approach of like putting that BDR in place, monitoring it closely, right? Making sure that you're not putting it there instead of humans. It's the augment the human activities that you're doing. You can do some really cool things and learn a lot.
about your actual go-to market because again, what's the goal here, right? Feed it back, you know? ⁓
Clint (14:56)
40% productivity gains are pretty impressive. I've got to tell you, in the world of customer relationship management software, the core business case around CRM, frankly, regardless of the vendor, has always been that you'll get a 20% efficiency gain: give your reps a day a week back so they can spend more time with prospects. That's kind of the basic story around CRM. So what you're saying is, people are seeing twice the efficiency gains with agentic AI that they would get just deploying CRM by itself.
Martin Schneider (15:29)
Yeah. Now, again, this is in my sample group, so let me describe who they mostly were. They were mostly kind of techie companies, from a few thousand employees up to some of the bigger tech companies in the world. What was their bandwidth issue? It was the human element: shifts, language, and knowledge. So what were they able to unlock, especially with the AI BDRs? It's all around that: it's 24/7, global, and knows everything that you feed into it on the product, total access and recall. Whereas even, you know...
Clint (16:03)
Language, every detail about the product. Kind of like we're removing the inherent human friction around these high-velocity use cases.
Martin Schneider (16:15)
Exactly, when you only need to hire maybe three humans to do it.
Greg (16:18)
Yeah. You know, you're missing one piece that I heard a prior guest say, and maybe you're inferring it with global and 24/7, but the one piece that really got me, as someone who had to operate with human headcount, was when they talked about simultaneous conversations. One AI agent was working maybe a hundred phone calls at the same time, or hundreds of leads at the same time. And that was a scale piece that was just so eye-opening.
Martin Schneider (16:51)
No, exactly. And that's the point. That's why it's, hey, point this thing at your 10,000 nurture leads and see who responds. You can go out on fishing expeditions in really low-cost ways that resuscitate dormant, dead, whatever you want to call them, leads. Or go out and analyze what's happened across interactions and say, oh, this was a lead. It's a really cool scenario that you really didn't see very much before. CMOs and CROs pretend that they work together, right? But how many times have they really said, oh yeah, we're actually sharing this data, rather than, hey, I did 115% of the warm handoffs I was supposed to do, my marketing team had a happy hour because of it. And then you're like, yeah, but we did 72% of quota, so it wasn't chill. I lived that world, right?

We didn't delve into our past history together, Clint, you and I, but we've been there. Look, I've been on that side of it, where I'm like, well, we did our job. We killed it on the leads. We gave you MQLs out the...
Greg (18:00)
But you handed it over. I would be sitting there like, okay, I don't even want to be in this meeting now. Now I'm getting fired up.
Martin Schneider (18:08)
No, but it's the truth, right? So now you can finally be collaborative and actually be strategic together, because you're learning from real data rather than "we said it was a good lead."
Greg (18:18)
Right. Right. Yeah. So, talking about all these things you're discovering, what are some of the other cool things you've learned from actually talking to some of these companies? What's real?
Martin Schneider (18:30)
The biggest thing stymieing agentic deployments is unpredictability of costs when there's no understanding of outcomes. And what I mean by that is, we just went through AI BDRs: lead conversions, opportunity conversions, revenue. Easy dotted and straight lines to go back and justify costs.
Clint (18:47)
Hm?
Martin Schneider (18:58)
But when you start to get into some of these agentic flows that sit kind of in the middle, around certain customer success operations, certain customer support, things like that, there really isn't a revenue outcome. In customer success, you might have a renewals agent that goes out and does reminders, pulls information, asks, is there anything leading into our QBR you want to talk about, and gets all that. And you can tie that back to potentially an outcome, because you can look at churn rates, you can look at NDR, all that kind of stuff. But there are a bunch of other ones where people are like, yeah, I don't know how to justify an unpredictable cost.

And I've been at the big annual events of a lot of vendors that have agentic go-to-market, CX, whatever you call them, offerings. They're putting customers in front of me and I'm talking to those customers. Some of them are like, yeah, there are some clear-cut ones, deploy these first. And there are others where it's, we don't really know what this is going to cost, because I'm not really sure what an action is, what's going to pull a credit and what isn't, and how I'm going to plan for this. I'm not the CTO, I'm not the CIO, I'm not the CFO, so I don't have the budget to just throw this into production and let anybody touch it. And I don't really know how to throttle it.
Clint (20:20)
That's a fascinating point. So you're saying this lack of clarity, being able to find out exactly how much money this is saving us, is slowing down companies from deploying AI.
Martin Schneider (20:33)
Yeah. It's more like, I don't know what this is going to cost.
Greg (20:36)
I hear you, yeah.
Clint (20:41)
I can't even tell how much this is going to save me, because I can't even tell how much it's going to cost me.
Martin Schneider (20:45)
Exactly. There's just so much fuzziness. Again, it's not like anyone's saying this doesn't work or doesn't do what you said it does. It's just, you're not letting me know, looking at my organization, how much of this I'm going to use, how much that is going to cost, and then how to apply that to KPIs to justify it. Because everybody moved so fast and jumped in. Like I said earlier, less is more, but what happened was this race to the most agents: we've built the most agents, I think we can go home. So what's been the outcome of that?
Clint (21:19)
So what's the answer on the cost factor? If I were a buyer today for an agentic AI system, what should I be trying to make happen to manage my costs?
Martin Schneider (21:32)
Well, obviously, real forethought into where this gets applied, who uses the agent, and what use cases trigger these agentic flows. That's one thing. Because you don't want to proliferate them; too many cooks spoil the broth. If you have all these agents running around your go-to-market, it starts to get confusing, everything becomes over-managed, there are too many calls, and you're pulling credits that you shouldn't have to.

So you really do want to look at these from a mapping perspective. That's going to help you control your costs. And the other thing, again, is tying to outcomes. Will this actually affect an outcome? Because it's not just, can we do it? Is it going to reduce some time, make someone's life a little easier? Yeah, probably, maybe. But is it going to affect outcomes? That's where the BDR one becomes such a no-brainer: I can talk to 20,000 more leads a month than I

could with the humans I've hired. That's a no-brainer in terms of, is it going to affect outcomes? It does by nature. And I can measure it, because what happens in my lead flow is constantly monitored and measured against KPIs, and I've got benchmarks: how was my business performing before I put this in? Everybody's tracking that. So it's one of the easiest ones to look at and justify costs. Now you can start to look at, well, are the fishing expeditions working?

Should we stop that? Because sometimes that's high volume with low response. But if your average deal is 200K ARR, maybe it's worth it. You know what I mean? So you can make those decisions, but you've got to really scrutinize it. Just because you can doesn't mean you should. And take that to a truly strategic method of mapping and looking, because again, less is more. A lot of times agents can work

in multiple use cases, and across those things. So again, don't be pulling credits that are going to cost money. And that's the other thing no one's really talking about: when the AI subsidies go away, this stuff is going to get even more expensive, right? When OpenAI and the other model vendors are public companies.
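The cost-planning exercise Martin says buyers are skipping can be sketched with back-of-the-envelope numbers. Every flow name, trigger volume, and credit price below is a made-up placeholder, not real vendor pricing; the point is simply to estimate credit burn per agentic flow before anything goes into production:

```python
# Hypothetical monthly credit-burn estimate for a handful of agentic flows.
# All figures are illustrative assumptions, not actual vendor rates.
FLOWS = {
    # flow name: (triggers per month, credits consumed per trigger)
    "ai_bdr_outreach":   (20_000, 1),
    "renewals_reminder": (1_500, 2),
    "rfp_screening":     (40, 25),
}
PRICE_PER_CREDIT = 0.05  # assumed dollars per credit

def monthly_cost(flows, price_per_credit):
    """Total estimated monthly spend across all agentic flows."""
    return sum(triggers * credits for triggers, credits in flows.values()) * price_per_credit

print(f"${monthly_cost(FLOWS, PRICE_PER_CREDIT):,.2f}")  # $1,200.00
```

Even a rough table like this forces the mapping Martin describes: which flows fire, how often, and what each one pulls in credits, so the number can be throttled before the bill arrives.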
Clint (23:45)
When they need to start charging what the real costs are. Let me take it to a different space in the discussion right now. I'd love to hear a little bit more about how you use AI. Tell us a bit about your AI journey.
Martin Schneider (23:56)
My use of AI is kind of surface level for the most part, right? There are some tools that I like. I do use Gemini to aggregate some ideas and throw things together. I'll use Nano Banana and Firefly to generate some images. I use the AI stuff inside Canva, because again, I'm trying to make things look good, I'm giving presentations. That stuff's huge, right?
Martin Schneider (24:26)
Again, it's not about human replacement, but in some ways, man, I can't see go-to-market teams being as big as they used to be, when I don't need the person making... If I'm a product marketer, yeah, I don't have the PowerPoint guy anymore. It's me just talking to Canva and it's coming out.
Clint (24:48)
Marketing teams don't need to staff somebody whose job it is to respond to ad hoc requests for creative or deck cleanup, things like that. People are now able to do it themselves.
Martin Schneider (25:00)
Yeah, and that kind of worries me, because I wonder how it affects career paths inside go-to-market, right?
Clint (25:09)
That's the one we're right in the middle of right now. No one really knows where that lands. But my personal point of view is that we've got a future of more, smaller teams, and so more, smaller companies than we have today.
Martin Schneider (25:22)
That's exciting. I mean, one of the things I think of when I look at some of the really cool go-to-market tools, the ones I don't necessarily use but analyze and talk to people about, is this concept we have of AI exponentials. We believe, and Ray Wang will talk about this, that there's going to be a five-person company with a billion in ARR. That's going to happen.
Clint (25:51)
It's already happening out there: companies with fewer than 100 people and greater than a billion dollars.
Greg (25:56)
You have to stay relevant and you have to know the skills. I mean, I just asked you, and you said, I'm not really a power user, kind of a surface user, and then you just listed off 17 tools, right? Well, I've got a big question for you here. Among our listeners, we do have people who run groups and lead organizations that have zero AI in them. So for a business leader who knows they need to get started with AI but
hasn't started yet. Where should they start?
Martin Schneider (26:30)
Well, I'm going to give two examples, because these are again no-brainers that can be tied to outcomes, low risk in terms of data security and all that kind of stuff, the guardrails are kind of there, and they can give you exponential outcomes if you put them in right. One we talked about: the AI BDR. Because it doesn't replace humans, it doesn't have to. It helps you

discover insights inside the leads you've already paid for. And that's the thing, try not to waste: it resuscitates lead spend in really interesting ways. If you leverage either the ones inside your CRM, some have it, some don't, or a third-party one, they're easy to get started with. They go and they scan; you've just got to feed them all your sales material, throw in some emails, that kind of thing. Pretty easy to get started. And just look at your conversion rates before and after; the baselines are easy.

No-brainer on that one. The other one a lot of people don't think about, again, another short list I've put out, is a concept I've championed called strategic response management. That's a really fancy way of saying RFIs and RFPs.
Greg (27:39)
Oh yes. Okay, yes, yes.
Martin Schneider (27:41)
How do we do that today? We put someone in sales enablement who's really a project manager, who doesn't know everything about the company, doesn't know everything about the product, right? And doesn't necessarily know when things get updated and not updated. We need to turn that into a machine and figure that out. So how do you create the centralized repository of the winning answers that are the best? There are a bunch of tools, like Responsive and Tribble and all these things, that do this really cool thing where, in a couple of days,

you've aggregated all your product data and all this type of stuff, and you start to create these centralized repositories that then read the RFP or RFI and do two things. One, the important thing we talked about: DQ early, right? A human doesn't want to look at a 400-question RFP. But the tool can do that in 30 seconds, versus even an hour of reading a 200-question RFP that you cannot meet the needs of. So how do you do that, right? And then you're starting to go, because again,
Clint (28:24)
Walk away from it.
Greg (28:24)
Bad deal.
Martin Schneider (28:36)
You might've gotten that lead from the AI BDR; it went to the human who looked at it quickly and said, oh, cool. But then you get to the RFP phase and it's like, well, we can't win this deal, so we'll get rid of that. Get it out of the pipeline fast, right? So using these systems is really a no-brainer for go-to-market teams, because it's going to increase win rates: you're going to DQ early and often, and you're going to optimize your RFPs and RFIs to be differentiated and winning. And the specialized packaged products are so much more effective than trying to DIY this with...
Clint (29:05)
That was my next question. Can I build that in ChatGPT myself, or do I go to one of those tools?
Martin Schneider (29:10)
You'll just lose as many deals as you were losing doing it manually, just faster, right? With less effort. You're losing the same amount, but with less effort, I guess; you want to lower your effort. Because a generic tool isn't going to be specialized and optimized, and it's not going to learn around what you think matters. Because it's like, what are we trying to do? Are we an early-stage SaaS company just looking to grow, where any deal is a good deal? Or are we looking for good-money deals that are actually profitable and have
Greg (29:16)
Lose faster, I love it.
Martin Schneider (29:39)
opportunity for expansion, right? So how do you look at that, right? So I think that's and a generic approach isn't going to do that for you. I mean, unless you put way too much work in it, then if you just paid a little SaaS bill for
Greg (29:55)
Okay, it's time for this week's AI Challenge. Now the AI Challenge is a takeaway assignment for our listeners to get their hands on some AI tools and do some exercises.
Clint (30:06)
Here's this week's AI challenge. We're calling it the Five Deal DQ Check. Export five open opportunities from your CRM. Paste in the deal summary, discovery notes, close date, deal size, and your ICP criteria. Then ask AI one simple question: should we continue pursuing these deals, or DQ them? That's it. What you'll see immediately is which deals are driven by hope, and which deals are actually driven by real economic-buyer clarity and are truly viable. And keep in mind, AI doesn't care which deals are in or out in the end. It doesn't need the quarter. Let's see if your pipeline can survive that level of honesty.
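If you want a head start on the challenge, here is a minimal sketch of assembling the prompt from your exported opportunities. The field names and the sample deal are invented for illustration; paste the resulting text into whichever AI assistant you use:

```python
# Sketch of the "Five Deal DQ Check": build one prompt from exported CRM fields.
# The sample deal and field names below are hypothetical placeholders.
deals = [
    {"name": "Acme expansion", "summary": "Expansion into the EU team",
     "discovery_notes": "No economic buyer identified yet",
     "close_date": "2025-09-30", "deal_size": "$120k"},
    # ...add your other four exported opportunities here
]

icp = "B2B SaaS, 500+ employees, named economic buyer, active evaluation"

def build_dq_prompt(deals, icp):
    """Combine deal fields and ICP criteria into the one-question DQ prompt."""
    lines = [f"Our ICP criteria: {icp}", "",
             "For each deal below, answer one simple question:",
             "should we continue pursuing it, or DQ it? Explain briefly.", ""]
    for i, d in enumerate(deals, 1):
        lines.append(f"Deal {i}: {d['name']} | size {d['deal_size']} | close {d['close_date']}")
        lines.append(f"  Summary: {d['summary']}")
        lines.append(f"  Discovery notes: {d['discovery_notes']}")
    return "\n".join(lines)

print(build_dq_prompt(deals, icp))
```

Keeping all five deals in a single prompt, with the ICP stated up front, lets the model compare them against the same bar instead of judging each in isolation.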
Greg (30:49)
Yeah, look down in the show notes and you'll find the link that goes right to the blog; it's got all the instructions you're going to need. Now, if you've just finished an AI deployment within your business, we'd love to hear about your experience, the good, the bad, all of it. So go to www.promptthis.ai, go to the Contact Us page, fill out the form, and we'll be in touch to talk about your story.
Clint (31:17)
Well, Martin, I've got to thank you for a bunch of great insights here today. Appreciate your time. This was a fantastic discussion, as always. Tell me, where can people find you?
Martin Schneider (31:29)
I'm easy to find at the Constellation website, ConstellationR.com. I'm all over LinkedIn, usually with videos every couple of days around the vendor events and the other things we're doing. And if you look at our events page, it shows which events we'll be at and which ones we're putting on. That's what I've been doing, so come find me.
Greg (31:52)
That's great. Thanks a lot, Martin. Really sharp go-to-market insights. This was really time well spent today.
Clint (32:00)
I've always been a fan of Constellation Research. I think they do probably the best job out there of all the big research firms. So it's great to have your voice join the podcast and educate our audience. Appreciate it.
Martin Schneider (32:13)
Thank you guys. Happy to be here, and happy to share some of the things I've learned from talking with way more customers than I thought I would. A lot of people are dipping their toes into agentic AI. But like I said, the ones who are taking a less-is-more and much more strategic approach, they're the ones who are succeeding. It was an eye-opening bit of research.
Clint (32:33)
And that's another episode of Prompt This.
AI Announcer (32:39)
Alright, listen up folks. Thanks for tuning in with Clint and Greg today, alright? You wanna get all the Prompt This episodes plus some real deep-dive articles? Yeah, go to www.promptthis.ai. Capisce?
And hey, don't forget, hit that follow button right down there below. Don't be shy, all right?