
82% of Companies Are Seeing Positive AI ROI

By The AI Daily Brief: Artificial Intelligence News

Summary

Key Takeaways

  • **82% of Companies Report Positive AI ROI**: 82% of companies reported positive ROI from AI, with 37% reporting high ROI, meaning significant or transformational. A full 96% anticipate positive ROI within 12 months. [04:49], [04:57]
  • **Small Firms Outperform on AI ROI**: The smaller the organization, the higher the reported ROI, with small organizations of 1-50 people showing higher averages across impact categories, such as revenue increases of nearly 25%. Those small organizations also project higher ROI in the future. [05:34], [06:15]
  • **Time Savings Average 8 Hours Weekly**: AI saves on average just under 8 hours per week per use case, equivalent to a day per week or nearly 2 months of work each year, though 10% save 20-40 hours. [10:04], [10:15]
  • **Cost Cuts Nearly 50% in AI Use Cases**: Across use cases where cost savings was the primary benefit, AI cut costs nearly in half, with 27.3% reporting 75-100% savings. [10:18], [10:28]
  • **Strategic Benefits Beat Time Savings**: While time savings was the most common benefit, respondents who focused on strategic benefits like improved decision-making, new capabilities, and increased revenue had higher overall ROI scores. [11:24], [11:45]
  • **Diverse Benefits Boost ROI**: Organizations whose use cases spanned more benefit types had higher mean ROI, from 3.13 for one benefit type to 3.65 for eight benefit types. [12:11], [12:49]

Topics Covered

  • Small Firms Capture Higher AI ROI
  • Senior Roles Report Bigger AI Impact
  • Strategic Benefits Trump Time Savings
  • Portfolio Approach Boosts AI Returns
  • Agentic AI Excels in Risk Reduction

Full Transcript

Today we are doing a first readout of some of the results of our AI ROI benchmarking study, and it turns out that AI, nascent though it may be, is already driving quite a bit of value.

Now today we are finally doing a first readout of the AI ROI benchmarking study. This is the thing that I asked you folks to contribute to back in November, and the reason it's taken a little longer than we thought to process it all is that you guys way over-delivered, for which I am incredibly appreciative. So, what we're going to do is talk a little bit about how we set this up, what the composition of the respondents was, and then we're going to get into what we actually found out.

First of all, let's talk about the setup. My big thesis heading into 2026 is that there's going to be much more emphasis on understanding the real impact of AI rather than just doing things in the dark. Now, I do not believe that in short order we're going to have any sort of super common or very clear standards when it comes to AI ROI.

I think a lot of people are going to experiment with a lot of things, and that's definitely the spirit of this. In no way are we contending that this is the only way to measure ROI. In fact, one of our key acknowledgements is that this is all self-reported. However, the way that we broke down different types of impact is that we put together eight impact or primary benefit categories that captured, in our estimation, a pretty big chunk of the value that people were getting out of AI deployments and initiatives. That includes things like time savings, cost savings, increased output, improvement in quality, increased revenue, new capabilities, reduced risk, and improved decision-making.

Now, as you can tell, some of these have a quantification that goes with them. So, for time savings, it was hours saved per week. For cost savings, it was an estimation of cost reduction in percentage terms. Increased revenue, increased output, and improved decision-making were all again estimates in percentage. And then new capabilities and risk reduction were both qualitative fields where people could describe what the new capabilities were or how risk had been reduced. We also used a numerical scoring system, a 1-through-5 scale, where one is negative ROI (below break even), two is break even, three is modestly positive, four is significantly positive, and five is transformational. Now, throughout this you will sometimes hear me refer to high ROI, which is our shorthand for significant plus transformational: basically anything above modest.

I do also want to point out, as this was a nuance that was lost in some reports this year, that negative ROI does not mean program failure. It can mean that, but at this stage, as early as we are, it more often means an AI initiative that hasn't paid back yet. And of course, there's no guarantee that it does. But when you dig into even the very small percentage of people who were sharing use cases with negative ROI, it tended to be about high setup costs and the nascency of the programs rather than the AI just not working for its intended purpose.

Now, as I mentioned, you guys really showed up. We had over 1,200 unique respondents and over 5,000 total use cases. And so, where does that leave us in terms of the rigor of this study? By no stretch of the imagination is this some super scientific and highly controlled survey. We put it out to you guys as the listening audience, asked anyone who wanted to show up, and gave you the chance to self-report. We had, as you'll see, a pretty wide diversity of contributors across different industries, org sizes, and roles, but there are certainly some concentrations. You're going to see a concentration in the technology industry as well as professional services. You're also going to see a concentration around small enterprises and solopreneurs. It's clearly a big chunk of this audience.

And while some of those things impact the results, our argument is not that our results are a definitive look at what AI ROI looks like right now. Instead, what I believe is that the signal is so clear that, as part of the emerging body of AI ROI exploration, this is a powerful signal that gives us a strong sense of the trajectory of where things are going. So, let's talk a little bit more about the sample size.

You can see we had heavy concentration among small organizations of 1 to 50. That represented about 44% of the total contributors. The rest were fairly evenly split. The next largest category was from organizations with 5,000 plus at around 18%, and then 51 to 200, 201 to 1,000, and 1,000 to 5,000 all had between about 11 and 14%. We had a similar diversity of role, although again that C-level/founder share at 35.1% reflects the heavy concentration of small organizations and solopreneurs. We also had 19% at director level, 15% at manager level, 14% who considered themselves an individual contributor, 8.5% at VP, and 7.5% who said other. As I mentioned, technology and professional services dominated, with a lot of folks also coming from education, healthcare, and manufacturing.

So what did we learn? The big banner highlight for sure is that people are right now realizing value from AI, and they expect it to grow. 82% of companies reported positive ROI from AI. 37% reported high ROI, which again means significant or transformational. A full 96% anticipate positive ROI within 12 months. When you look across the use cases, about 45% reported modest ROI right now, compared to 28.1% who reported significant ROI, 8.8% who reported transformational, 12.5% who were at break even, and just 5.6% who were negative. In the anticipated ROI, the big expectation was a shift from modest to significant, with almost exactly half anticipating significant increases in the next year.
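
As a quick sanity check on how those buckets combine, here is a minimal sketch, not the study's actual analysis code, that rolls the quoted per-use-case shares up into "positive" (modest or better, a score of 3 to 5) and "high" (significant or transformational, a score of 4 or 5) ROI figures on the 1-to-5 scale described earlier. The 82% and 37% headline figures are quoted at the company level rather than per use case, but the per-use-case shares land in essentially the same place.

```python
# Share of use cases reported at each level of the 1-5 ROI scale,
# as quoted in this readout (percentages).
shares = {
    1: 5.6,   # negative ROI (below break even)
    2: 12.5,  # break even
    3: 45.0,  # modest ("about 45%")
    4: 28.1,  # significant
    5: 8.8,   # transformational
}

positive = sum(pct for score, pct in shares.items() if score >= 3)  # modest or better
high = sum(pct for score, pct in shares.items() if score >= 4)      # the "high ROI" shorthand

print(f"Positive ROI share of use cases: {positive:.1f}%")  # ~81.9%, in line with the 82% headline
print(f"High ROI share of use cases: {high:.1f}%")          # ~36.9%, in line with the 37% headline
```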

You can see that across organization size, the average ROI reported per use case was right around that modest level of three. However, there is a subtle but clear pattern where the smaller the organization, the higher the reported ROI. Now, I think in some ways this makes intuitive sense, and I think it's about nimbleness having advantages. A lot of what AI is good at, saving time, increasing output, is especially relevant inside small organizations, who are the most resource constrained. Now, those small organizations also project higher ROI in the future. Although again, on an organization level, every single organization size projects a move from the modest ROI level currently to a significant ROI level in the future.

When you go category by category, and we're going to do a little bit of that in this readout, small companies tend to overperform on each particular quantification of impact category as well. For example, among use cases whose primary benefit was increasing revenue, respondents from the smallest organizations reported an average of nearly a 25% revenue increase, whereas all the use cases from all the other size organizations were between 10 and 15%. Which, by the way, is still nothing to laugh at. You also see some interesting differences of perceived impact by role. As I mentioned before, we definitely think that there is a solopreneur effect where the C-levels and founders were seeing a much higher rate of significant and transformational ROI. In fact, among the C-levels and founders, over half of use cases were perceived as having high ROI right now. Interestingly though, this slant towards more senior roles reporting higher ROI does seem to hold a little bit more generally. For example, VPs reported 28% of their use cases having significant ROI, as opposed to 18.1% for directors and 19.8% for managers. My speculation on that is that the more senior you are, the more the use cases that you're involved in are big, org-wide, and systemic, and so have a better chance to have significant or transformational impact. But that's certainly something that we want to dig into more in future surveys.

When you look at high ROI by industry, which again means the use cases that report significant and/or transformational ROI, it ranges from a low end of energy reporting around 23.5% high ROI use cases all the way up to nearly half, with education reporting 47.1%. The biggest cluster of organizations is between 33 and 38%, including healthcare, professional services, media, government and public sector, and retail and e-commerce, with financial services being slightly lower at 25% and technology slightly higher at 42.2%.

By the way, a big part of the reason for technology having a higher reported ROI on average is the concentration of coding use cases, which often outperformed other categories of use cases when it came to high reported ROI.

Now, looking across the distribution of primary benefits, this I think probably won't surprise you. Time savings was the most common impact area across all use cases. Over a third of use cases said that their primary benefit was time savings. That was followed next by quality improvement, which represented 15% of use cases, and increased output, which represented around 14%. New capabilities represented over 12%, and then improved decision-making, cost savings, increased revenue, and risk reduction were all between around 4 and 9%. When you aggregate use cases per organization, the average organization had use cases representing around 2.7 different primary benefit types. So basically, most organizations who were sharing their use cases shared use cases that had between two and three different primary benefits. And so on an organization level, time savings once again was the dominant category of use case, with around 80% of organizations having some use case whose primary benefit was time savings. On an organization level, the next three were increased output at around 40%, and new capabilities and quality improvement both at around 35%.

Now, I'm giving you some of the highlights, but one of the things that's really interesting is to start to dig into the detailed results, where you can actually see some pretty interesting differences in how different types of organizations prioritize different types of use cases. For example, while time savings as the primary use case was ubiquitous across all organization sizes, smaller organizations definitely had much more emphasis on both increased output and new capabilities than did large organizations. Again, I think that there is something intuitive about this, where those organizations that are resource constrained are taking advantage of AI to do things they couldn't do before and do more of the things that they previously were resource constrained to do less of.

Now, let's talk about some of the quantification of these different benefits. Within the context of time savings, there's a fairly wide distribution of how much time people were saving. Some folks were saving just a couple of hours. A big chunk was saving between 2 and 5 hours. But 10% of people were saving between 20 and 40 hours. Another 17% were saving between 10 and 20 hours, and even 3% were saving 40 hours or more. The average across all of this was just under 8 hours, meaning that on average AI was saving a day per week, or nearly 2 months of work each year.
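
For those who want to see the arithmetic behind that equivalence, here is a back-of-the-envelope conversion; the working assumptions (an 8-hour workday, roughly 48 working weeks per year, and about 21 working days per month) are illustrative and not part of the study's methodology, and the exact month figure shifts with those assumptions.

```python
# Back-of-the-envelope conversion of weekly time savings into yearly terms.
# Assumptions are illustrative, not from the study: 8-hour workday,
# ~48 working weeks per year, ~21 working days per month.
hours_saved_per_week = 8    # "just under 8 hours" on average per use case
workday_hours = 8
working_weeks_per_year = 48
working_days_per_month = 21

workdays_per_week = hours_saved_per_week / workday_hours        # ~1 workday per week
workdays_per_year = workdays_per_week * working_weeks_per_year  # ~48 workdays per year
months_per_year = workdays_per_year / working_days_per_month    # on the order of 2 months

print(f"~{workdays_per_week:.1f} workday/week, ~{workdays_per_year:.0f} workdays/year, "
      f"~{months_per_year:.1f} months of work/year")
```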

When it came to cutting costs, across all of the use cases where cost savings were the primary benefit, AI cut costs nearly in half. 27.3% reported cost savings between 75 and 100%, and 18.3% had it between 15 and 75%. The smallest category was 10% savings or less, representing just 7.8% of the cost-saving use cases. When it came to increases in output, it was significant.

Across every org size, the average use case increased output by greater than 50%. Once again, we see a distribution where each org size smaller than the last has a higher increase in output, all the way up to 81.7% for the average use case increasing output among the use cases of companies that were between 1 and 50 people. Now, in terms of some of the qualitative responses, when people were discussing the new capabilities it gave them, around 15% referenced speed and scale, around 14% of those use cases referenced new insights, and around 12% referenced personalization. On risk reduction, about 20% of the use cases referenced early warnings, 19% referenced error catching, and 10% referenced compliance.

Now, one of the big learnings is that while time savings was the most common benefit, it wasn't the most valuable. Respondents who focused mostly on time savings reported lower overall ROI. The respondents who instead focused on strategic benefits like improved decision-making, new capabilities, and increased revenue had higher ROI scores overall. And since new capabilities seem to be correlated with high ROI, what were the types of new capabilities that were appearing? For this, we did some textual analysis, so a single new capability use case could represent a number of these different areas. But around 53% of these referenced creative generation in some way, almost a third, 30%, were related to coding or technical capabilities, and 27% had to do with new insights and analysis.

One really interesting finding is that systems-level thinking and taking a portfolio approach to ROI seems to lead to higher ROI. So if you look at the mean ROI by the number of benefit types within an organization, in other words, if you look at organizations who had use cases that represented just one benefit category versus organizations that had use cases that represented all eight different types of benefit categories, at each level, the more different types of benefit you had, the higher the reported ROI overall. So, for example, organizations that had use cases with just one benefit type had a mean ROI of 3.13, compared to those that had four benefit types, who had a mean ROI of 3.35, compared to those who had eight benefit types, who had a mean ROI of 3.65, more than halfway between modest and significant ROI overall. I did an episode recently about how AI value compounds, and I think this is another example of that type of phenomenon.
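
For those curious how a benefit-diversity cut like this could be produced from the raw submissions, here is a minimal sketch under assumed, hypothetical field names; this is not the study's actual pipeline, and the readout doesn't specify whether the averaging was done per organization or per use case, so it is just one plausible reading. It groups use cases by organization, counts distinct primary-benefit types, and averages the 1-to-5 ROI scores within each diversity level.

```python
from collections import defaultdict

# Hypothetical records: (organization id, primary benefit type, ROI score on the 1-5 scale).
# Field names and values are illustrative only.
use_cases = [
    ("org_a", "time_savings", 3), ("org_a", "new_capabilities", 4),
    ("org_b", "time_savings", 3),
    ("org_c", "cost_savings", 4), ("org_c", "increased_revenue", 5), ("org_c", "improved_decisions", 4),
]

# Group the submitted use cases by organization.
by_org = defaultdict(list)
for org, benefit, score in use_cases:
    by_org[org].append((benefit, score))

# Bucket each organization's ROI scores by how many distinct benefit types it reported.
scores_by_diversity = defaultdict(list)
for org, cases in by_org.items():
    n_benefit_types = len({benefit for benefit, _ in cases})
    scores_by_diversity[n_benefit_types].extend(score for _, score in cases)

# Mean ROI per diversity level (the readout cites 3.13 at one benefit type up to 3.65 at eight).
for n in sorted(scores_by_diversity):
    scores = scores_by_diversity[n]
    print(f"{n} benefit type(s): mean ROI {sum(scores) / len(scores):.2f}")
```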

We also clustered the different use cases by the category of work they were, as opposed to just what their primary benefit was. Content and communications related use cases were the highest at 25.4%. Code and software development was next at 19.6%. Customer, sales, and marketing was next at 10.5%. And then data and analytics, document, legal, and compliance, HR, recruiting, and learning, operations and supply chain, and finance and accounting were all between 2.5 and 10% each. Despite the different categories, the top benefit on average was still time savings, with the average time saved ranging between 6.7 and 11.9 hours per week. Across the different clusters, the category that had the highest cost savings was code and software development, which saw on average 60% cost savings for use cases where that was the primary benefit. And in terms of quality improvement, the highest was in data and analytics, where use cases that had quality improvement as their primary benefit saw a 45% improvement on average.

But what about agents? Obviously, one of the biggest topics of conversation in 2025 was agents and agentic AI. But how many of these use cases were assisted AI versus agentic AI? We actually broke it up a little further into three categories: assisted AI, where the human initiates every single interaction; automation, which includes things like workflows, pipelines, scripts, and processes; and then agentic AI, which is really autonomous work execution. Honestly, the distribution kind of strikes me as probably about right from what we might expect. Assisted AI represented 56.6% of use cases. Automation, AI managing entire workflows, represented just under 30%. And agentic AI represented around 13.8%.

Now, if anything, we think that there may be some hype inflation around agents, where things that are a little bit closer to a more simple workflow automation are considered agentic AI, but still, these numbers resonate with, for example, what we saw in Menlo's Enterprise AI report as well. It was also interesting to see where agents are showing up. The number one area by percentage for agentic AI was risk reduction, followed by new capabilities and cost savings. Time savings actually had the least agentic AI, suggesting that use cases that are for people primarily about time savings are often just about doing the core work that they have to manage a little bit faster on a daily basis.

So let's sum things up. Overall, we are seeing positive ROI. More than four in five AI users are reporting value above break even. One in three, meanwhile, are seeing major business impact of significant or transformational ROI. It's actually even more than one in three; it's about 37%. Approximately 1 in 11 are reporting game-changing results with transformational ROI. And importantly, there is near universal optimism about future gains. A full 95.7% anticipate seeing increased ROI in the future. Time saving is the most common benefit and is currently on average equivalent to reclaiming one workday per week. Where AI is applied to cost, it cuts it nearly in half. Where AI is applied to increasing output, people are doing 40% more with the same resources.

And while it's still nascent, we are seeing some real revenue impact, with the median revenue-increasing use case increasing revenue by 12%. This of course all stands in stark contrast to some of the reports that we've seen this year of AI underperforming. So what is that attributable to? First of all, there is of course the methodology of those reports, which in many cases were using some pretty wonky methodology to report value. And on the more skeptical side, we do have to acknowledge, of course, that this is self-reporting, which is notoriously difficult. And it's self-reporting from an unbelievably enfranchised, hyper-engaged audience. If you are listening to a daily AI podcast, you better believe that you're going to be in the top 10% of AI users overall.

And so, I do think that that is a relevant caveat. Then again, though, the results aren't all that much higher than some other credible surveys that we've recently gotten, like the Wharton study, which found 74% of companies getting positive returns from GenAI. Our benchmarking study is only a little above that at 82%, meaning that directionally they're telling the same story.

Hopefully it was helpful for you, though, to have a little bit more of an impact-level breakdown. Certainly, this is something that has been extremely useful in helping us understand where things are currently for a huge array of business AI users. Now, if you are interested in this sort of original research and the type of benchmarking that it could turn into, go to aidbintel.com.

You can sign up to get updates about future research. And also, if you want to be even more engaged, a thing that I really wanted to do following this very haphazard and totally random study was to try to get a little bit more specific about creating a tracking panel that has more concentrated and representative samples across different org sizes, titles, and industries, so that we can go from a random one-off study to something that's a little bit more persistent and lets us better understand how things are changing over time and gives us more tools to help you and your organizations figure out where you stand relative to others.

You can find more about that again at aidbintel.com.


But otherwise, that's going to do it for today's readout of the AI ROI benchmarking study. Appreciate you listening or watching as always. And until next time, peace.
