
November 10, 2025

The real ROI of AI

By Aaron Seelye

You may have seen the headlines lately: one study claims 95% of AI projects fail, while another says three-quarters of businesses are already seeing positive ROI from AI. Both are from top-tier schools, MIT and Wharton, respectively, and both are technically correct. The trick is understanding what each of them meant by “success.”

MIT’s Project NANDA looked at hundreds of AI initiatives across industries and concluded that only about 5% reached production and showed a measurable change in profit and loss. That sounds brutal, and it is. But when you dig into the details, the picture becomes less grim and more about definitions and timeframes. Their measure of “success” was incredibly narrow: they looked at whether, within months of implementation, the AI deployment had visibly shifted the numbers on the company’s profit-and-loss statement. Think about that for a second. That’s barely enough time for teams to get through the onboarding stage, much less achieve measurable financial transformation. It assumes that freed-up labor hours immediately translate into profits, which is rarely the case. Sometimes that extra time is used for training, for experimentation, for taking on new tasks, or simply catching up on work that’s been neglected for years. Those are good outcomes, but none of them would register as “success” under MIT’s definition.

Wharton’s AI Adoption Report, on the other hand, took a broader and frankly more realistic view. Their survey of around 800 C-level and senior executives found that roughly three in four companies are seeing positive ROI from their AI programs. When executives say a program is successful, they aren’t just patting themselves on the back for fun. These are people whose careers depend on accurate reporting and sound judgment. They see success not only in financials, but in things like improved morale, better decision-making, and enhanced customer experience. Those changes often lead to revenue improvements later, but they start as qualitative shifts: the kind that don’t show up in the next quarterly report but absolutely matter for the long-term health of the organization.

This difference in measurement explains the gap between “95% fail” and “75% succeed.” MIT was evaluating how quickly AI translated into line-item profit. Wharton was evaluating whether leaders saw real, tangible benefit. The two questions aren’t contradictory; they’re just aimed at different horizons. In fact, the MIT study was almost set up to fail: it expected a pilot to reach full production, achieve buy-in from frontline employees, and start moving the financial needle in just eight weeks. That’s not how transformation works in the real world. It’s like judging a construction project by how much rent it generates before the walls are up.

The truth is, success in AI doesn’t always look like a new revenue stream or a reduced headcount. Often it looks like incremental improvement: eliminating repetitive work, freeing up creative bandwidth, or enabling faster response times. Those things add up. They make employees happier, which improves retention, which improves customer satisfaction, which eventually affects revenue. But none of that fits neatly into a short-term financial metric. A business can absolutely be improving without it being visible yet on paper. In short, success in AI doesn’t always show up on the P&L report; it often looks like fewer headaches.

That’s why it’s critical to define success before implementation begins. Every AI project should have clear KPIs, and those KPIs should make sense for the type of work being automated or enhanced. If your goal is to speed up customer response, track that. If it’s to reduce burnout, measure employee satisfaction and turnover. If you only look at dollars, you’ll miss the other kinds of value you’re creating, and those often lead to the dollars later. There’s nothing wrong with starting small. A year is often a more reasonable window to evaluate ROI, especially if your business runs on seasonal cycles. If you implement AI in the off-season, you won’t see the true impact until your busy period rolls around. That said, will you have to wait a full year to see any impact? Not at all. But if you’re expecting overnight transformation, you’re looking for a lottery ticket, not a plan.
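Making “define success before implementation” concrete can be as simple as recording a baseline and a target for each KPI at kickoff, then checking how much of that gap has closed at review time. Here is a minimal Python sketch of that idea; the metric names and numbers are purely hypothetical illustrations of the goals mentioned above (response time, employee satisfaction), not figures from either study.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """One success metric, pinned down before the AI project starts."""
    name: str
    baseline: float   # measured before rollout
    target: float     # agreed goal for the review window
    current: float    # most recent measurement

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far, capped to [0, 1].

        The sign of (target - baseline) handles both directions: metrics
        you want to push up and metrics you want to push down.
        """
        gap = self.target - self.baseline
        if gap == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.baseline) / gap))

# Hypothetical KPIs for an AI rollout, reviewed mid-cycle.
kpis = [
    KPI("avg_response_time_min", baseline=45.0, target=15.0, current=25.0),
    KPI("employee_satisfaction", baseline=6.2, target=8.0, current=7.1),
]

for k in kpis:
    print(f"{k.name}: {k.progress():.0%} of the way to target")
```

The point of the sketch isn’t the code; it’s the discipline. Writing the baseline and target down before rollout forces the “what does success mean here?” conversation that both studies show matters so much.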

Equally important is change management. Most implementations don’t fail because the technology doesn’t work; they fail because the people around it never come along. If you don’t have buy-in from the people actually using the tools, you’re sunk before you start. Employees need to see that AI isn’t a threat to their jobs, but a tool that makes their workday smoother. That requires communication, training, and trust. Once the front line sees that the changes help them, adoption becomes organic, and ROI follows naturally.

Ultimately, both studies tell the same story from different angles. MIT highlights how hard it is to achieve instantaneous, P&L-visible success, while Wharton reflects how companies are already seeing meaningful improvement in productivity, morale, and operational clarity. The takeaway isn’t that one is right and the other is wrong. It’s that ROI is not a single metric, it’s a timeline of outcomes. Quick wins matter, but structural change matters more, and one usually leads to the other.

The smartest strategy is to build for both. Start with manageable, well-defined AI projects that deliver visible value to your team, and then layer in the deeper process changes once you’ve earned that trust. That’s how you move from improving the workday to transforming the business, not overnight, not in a super-narrow timeframe, but step by step, in a way that lasts.

Aaron Seelye

Founder, MDO

Aaron Seelye has spent 25+ years helping businesses make technology actually work for them. As founder of MDO, he helps organizations apply automation and AI that deliver measurable results without unnecessary complexity.