I was in a roadmap planning session last month when a VP pulled up a slide: “75% of C-suite leaders expect AI to deliver ROI within six months.” The room nodded. Budgets got approved. Timelines got drawn. And I sat there thinking about the AI ROI expectations that were about to derail this team — because almost none of it matched what the data actually shows.
Most AI ROI expectations are built on hype-cycle optimism, not evidence. According to Deloitte’s 2025 AI survey, only 6% of organizations see AI payback in under a year. The real timeline is two to four years. And five specific wrong expectations cause the most damage to software teams. BAs and PMs are the ones who end up managing the fallout.
Why Are AI ROI Expectations So Dangerous Right Now?
The pressure is real and growing. According to Deloitte, 91% of organizations increased their AI spending last year. And 91% plan to spend even more this year. Yet PwC’s 2026 CEO Survey found that most CEOs still can’t show real ROI from AI.
So executives keep spending, but the returns aren’t there yet. And that pressure flows downhill. First, executives set bold targets. Then product managers build roadmaps around those targets. Next, business analysts write requirements based on assumptions no one checked. Finally, dev teams ship features under timelines that ignore how AI actually works.
The problem isn’t that AI can’t deliver value. It’s that the timeline and conditions executives expect don’t match what it takes to get there. Here are the five AI ROI expectations I see causing the most damage.
What Are the 5 AI ROI Expectations That Hurt Teams Most?
1. “We’ll See Returns Within Six Months”
This is the most common and most dangerous expectation. A survey cited by Deloitte found that 53% of investors expect positive ROI in six months or less. But only 6% of organizations actually see payback in under a year. And even among top projects, just 13% see returns within 12 months.
AI isn’t a feature you deploy in a sprint. It needs data pipelines, model training, testing in production, user adoption, and constant iteration. Each stage has its own timeline, and skipping any of them creates problems later.
What to do: Set stakeholder expectations to 12–18 months for first results. Then build a phased plan that shows progress at 90-day intervals. This works better than a single “ROI or bust” deadline.
2. “Most AI Projects Succeed”
Executive confidence in AI is high. But the data tells a very different story. According to MIT’s GenAI Divide report, 95% of enterprise generative AI projects fail to show measurable returns. Also, S&P Global’s 2025 survey found that 42% of companies dropped most AI work during the year — up from just 17% in 2024. And PwC’s 2026 data shows only 12% of organizations hit both revenue growth and cost cuts from AI.
So why does the perception gap exist? Because of survivorship bias. Conference talks and vendor case studies only feature the winners. The failures stay quiet.
What to do: Build kill criteria into every AI project before it starts. Define what failure looks like in clear terms. Then agree on it with stakeholders before writing the first requirement. If you’ve read my post on when AI is the wrong solution, this is the same idea applied to ROI.
3. “We Can Measure AI ROI Like Any Other Tech Investment”
Traditional software gives you clear outcomes. You ship a feature, then you measure adoption, and then you calculate value. But AI doesn’t work that way. According to Deloitte’s survey, 58% of executives said that traditional ROI measures don’t work for AI. One executive in the report put it bluntly: “We only managed to get a ballpark estimate… it was hard to separate the gains from AI initiatives from those of other initiatives.”
AI is probabilistic. So its outputs vary. And its value is often indirect — better decisions, fewer errors, faster analysis. But those gains don’t always show up neatly in a P&L statement. I wrote about this exact problem in why traditional KPIs fail for AI features. The measurement gap is real.
What to do: Define how you’ll measure results before development begins. Use leading indicators like time saved and error rates alongside financial metrics. And accept that some AI value will be hard to isolate — but that doesn’t mean it isn’t real.
4. “A Successful Pilot Means We’re Ready to Scale”
This is where I’ve seen the most wasted effort. A pilot works in a controlled setting with clean data, a motivated team, and limited scope. But production is a different world. MIT found that only 5% of generative AI pilots deliver sustained value at scale. And BCG reported that 74% of companies struggle to capture value when moving from pilot to production.
Why does this happen? Because pilots skip the hard parts. They don’t deal with messy real-world data, edge cases, or integration with older systems. And they avoid the change management work that production requires. So a pilot proves the technology can work. But it doesn’t prove it can work in your environment, at your scale, with your constraints.
What to do: Treat pilot-to-production as a separate phase. Give it its own requirements document, timeline, and budget. And don’t let a successful demo turn into a promise to ship. If you need a framework for production-stage requirements, my post on AI feature requirements covers the approach I use.
5. “AI Will Pay for Itself Through Efficiency Gains”
Efficiency gains are real. But they rarely cover the full cost. The hidden costs of AI add up fast: model maintenance, retraining, compliance work, data governance, and the ongoing effort to keep systems accurate as conditions change.
And there’s also a perception gap. A Section survey of 5,000 white-collar workers found that almost 80% of C-suite leaders said AI saves them at least four hours per week. But two-thirds of workers said it saves them two hours or less. When the people who approve budgets report twice the savings that the people doing the work actually see, your ROI math is built on sand.
What to do: Build full cost of ownership into every business case. Include not just licensing and compute, but also ongoing costs: maintenance, monitoring, retraining, and team time to keep AI features working. The efficiency gains may still justify the spend — but only if you’re honest about the real costs.
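To make the full-cost-of-ownership point concrete, here is a minimal sketch of the math. All figures are hypothetical placeholders for illustration, not benchmarks from any of the surveys cited above; the "conservative" estimate simply discounts the optimistic one to reflect the kind of leadership-versus-worker perception gap described earlier.

```python
# Hedged sketch: a simple full-cost-of-ownership check for an AI business case.
# Every number below is a made-up placeholder, not real benchmark data.

def annual_roi(efficiency_gain: float, costs: dict[str, float]) -> float:
    """Return net annual ROI as a ratio: (gain - total cost) / total cost."""
    total_cost = sum(costs.values())
    return (efficiency_gain - total_cost) / total_cost

# Licensing and compute are the visible line items;
# the rest are the hidden costs this section describes.
costs = {
    "licensing": 120_000,
    "compute": 60_000,
    "maintenance_and_monitoring": 45_000,
    "retraining": 30_000,
    "compliance_and_governance": 25_000,
    "team_time": 80_000,
}

optimistic_gain = 400_000    # leadership's estimate of efficiency savings
conservative_gain = 200_000  # same estimate, halved per the perception gap

print(f"Optimistic ROI:   {annual_roi(optimistic_gain, costs):.0%}")
print(f"Conservative ROI: {annual_roi(conservative_gain, costs):.0%}")
```

Run with these placeholder numbers, the optimistic case looks marginally positive while the conservative case is deeply negative. That swing is the whole argument: the business case only survives if the cost side is honest and the gain side is validated with the people doing the work.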

How Should Your Team Handle Unrealistic AI ROI Expectations?
If you’re a BA or PM, you sit between executive ambition and delivery reality. You can’t control what leadership believes about AI. But you can control how projects get scoped and what assumptions go into requirements.
Here are three moves that work:
- Reframe the conversation around phased value. Instead of one ROI target, propose a staged plan. Show quick wins at 90 days, then measurable gains at 6 months, then full value at 12–18 months. Executives respond better to visible progress than to promises.
- Bring the data into planning meetings early. The Deloitte, PwC, and MIT numbers in this post aren’t controversial. They’re mainstream research. So presenting them early sets realistic foundations before commitments get locked in.
- Define success criteria before writing a single requirement. If the team can’t agree on what success looks like, then they can’t measure ROI. Get alignment on metrics, timelines, and kill criteria before any development begins.
Your job isn’t to kill AI projects. It’s to make sure the ones that move forward have realistic foundations.
What Comes Next
These expectations aren’t going away. Executive confidence in AI is rising, not falling. But the teams that set realistic foundations now will be the ones who actually deliver value in 12–24 months. And the ones who don’t will be explaining why the demo worked but the feature didn’t.
If this kind of thinking is useful to you, I write about it every week. Subscribe below and I’ll send new posts straight to your inbox — no spam, no fluff.
Frequently Asked Questions
What Is a Realistic Timeline for AI ROI?
Most organizations see satisfactory AI ROI within two to four years, according to Deloitte’s 2025 survey. Only 6% see payback in under a year. So for software teams, setting expectations to 12–18 months for first measurable outcomes is a realistic starting point. But full value takes longer.
Why Do Most AI Projects Fail to Deliver Expected ROI?
The main reasons are unrealistic timelines, poor measurement, and the gap between pilot success and production reality. MIT’s 2025 research found that 95% of enterprise GenAI projects fail to show measurable returns. Most fail not because the technology is broken, but because organizations don’t account for what it takes to deliver value at scale.
How Should Teams Measure AI ROI?
Traditional ROI measures often don’t work for AI because value is indirect and hard to isolate. So use a mix of leading indicators — like time saved, error rates, and decision speed — alongside financial metrics. And define your measurement approach before development begins, not after launch.
What Percentage of AI Initiatives Actually Meet AI ROI Expectations?
The numbers vary by study, but the range is consistent. Only 5–25% of AI initiatives meet expected ROI. PwC’s 2026 CEO survey found just 12% achieve both revenue growth and cost cuts. So the gap between executive expectations and actual results remains wide.
How Can BAs and PMs Push Back on Unrealistic AI ROI Expectations?
Bring data to planning meetings early. Research from Deloitte, PwC, and MIT provides credible evidence for realistic timelines. Then propose phased value delivery instead of single ROI deadlines. And define concrete success criteria and kill criteria before any development work begins. Your role is to ground the conversation in evidence, not to block AI adoption.