Disclaimer: This article is an independent analysis and commentary on the 2025 McKinsey AI Report. McKinsey & Company does not endorse, sponsor, or have any affiliation with AInDotNet or the viewpoints expressed here.

McKinsey: 64% See Innovation Gains — But Only 39% See EBIT Gains
Artificial intelligence is generating excitement, demos, prototypes, internal showcases — and almost no enterprise-level financial returns.
According to the 2025 McKinsey AI Report:
- 64% of companies say AI improved innovation
- Only 39% say AI improved EBIT
That gap isn’t caused by the technology.
The gap exists because most companies have no ROI discipline for AI.
Innovation is optional.
EBIT is mandatory.
And enterprises are confusing the two.
AI is not missing — the business discipline around AI is missing.
The Real Reason AI Doesn’t Improve EBIT
Most organizations apply AI upside-down:
- They start with “cool ideas.”
- They chase novelty and hype.
- They stand up pilots that don’t connect to actual business problems.
- They treat AI as a research project instead of an operational tool.
This creates “AI theater” — demos that look impressive but never generate real business value.
To fix this, enterprises need a framework grounded in engineering discipline, not “innovation goals.”
The “Faster, Better, Cheaper” Framework
If AI isn’t improving at least one of these three outcomes, you’re not doing AI — you’re doing experiments:
1. Faster
Cycle time
Throughput
Task duration
Customer response speed
2. Better
Accuracy
Quality
Consistency
Decision reliability
3. Cheaper
Reduced labor hours
Reduced waste
Reduced error correction
Reduced licensing or tooling cost
Every AI project should map measurably to at least one of these — and ideally two or three.
The companies that consistently deliver EBIT lift?
They use this framework to decide what to automate and what to leave alone.
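That screening step can be made concrete before any build work starts. The following is an illustrative C# sketch, not anything prescribed by the McKinsey report; the `UseCase` and `RoiScreen` names are hypothetical:

```csharp
// A candidate AI project expressed as the three levers it claims to move.
record UseCase(
    string Name,
    double CycleTimeReductionPct,  // faster
    double AccuracyGainPct,        // better
    double AnnualCostSavings);     // cheaper

static class RoiScreen
{
    // A use case passes the screen only if it moves at least one lever.
    // Everything else is an experiment, not an AI project.
    public static bool Qualifies(UseCase u) =>
        u.CycleTimeReductionPct > 0
        || u.AccuracyGainPct > 0
        || u.AnnualCostSavings > 0;
}
```

A demo that scores zero on all three levers gets rejected at intake, before it can become "AI theater."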
Rule #1: Use Automation for What Automation Can Do
Most companies overuse AI for tasks that are deterministic and rule-based.
If something can be automated using:
- workflows
- rules engines
- SQL queries
- C# logic
- Power Automate
- stored procedures
- integration scripts
- or basic RPA
then automation is cheaper, more scalable, and more trustworthy than AI.
AI should never be your first option.
AI should be your last option — when automation fails.
Rule #2: Only Use AI Where Automation Fails
AI is probabilistic.
Automation is deterministic.
AI belongs in the gaps automation can’t cover:
- interpreting unstructured data
- pattern recognition
- summarizing knowledge
- generating options
- natural language tasks
- subjective classification
- insight extraction
- providing recommendations
When enterprises reverse this — using AI for deterministic tasks and automation for complex ones — they end up with:
- inflated cost
- slower systems
- harder troubleshooting
- unpredictable accuracy
- no measurable ROI
AI must be used strategically, not universally.
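Rules #1 and #2 can be operationalized as a single routing check at project intake. This is a minimal C# sketch under an assumed simplification — that a task can be characterized by whether its input is structured and its logic rule-based; `WorkItem`, `Route`, and `TaskRouter` are hypothetical names:

```csharp
enum Route { Automation, Ai }

// A task characterized by the two properties that decide the route.
record WorkItem(bool HasStructuredInput, bool HasFixedRules);

static class TaskRouter
{
    // If both the input and the logic are deterministic, classic automation
    // (workflows, rules engines, SQL, RPA) wins. AI only fills the gap
    // automation cannot cover: unstructured data and judgment calls.
    public static Route Decide(WorkItem t) =>
        t.HasStructuredInput && t.HasFixedRules ? Route.Automation : Route.Ai;
}
```

The point of the sketch is the ordering: AI is the fallback branch, never the default.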
The Prototype → MVP → Production Process (The Discipline That’s Missing)
This is where most enterprises completely fail.
They run AI projects like this:
Pilot → Innovation Award → Never Actually Used
What they should be doing is:
1. Prototype (1–2 weeks)
Test feasibility.
Test basic accuracy.
No polish.
No integration.
Just: “Does this idea even work?”
2. MVP (2–3 months)
Add minimal workflow integration.
Test with one team.
Measure actual performance.
Start ROI tracking.
This is where most projects should die.
MVPs that don’t generate ROI should be killed quickly.
3. Production (6–12 months)
Logging
Governance
Security
Identity integration
Data permissions
Human-in-the-loop
Monitoring
Versioning
Enterprise-grade reliability
Skipping this maturity process is why enterprises get stuck in the “pilot graveyard.”
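Two of those production requirements — logging and human-in-the-loop — can be sketched in a few lines of C#. This is an illustrative wrapper under assumed names (`GuardedAiService`, `AiResult`, a hypothetical confidence threshold), not a prescribed implementation:

```csharp
using System;

// The raw output of any model call, plus a confidence score.
record AiResult(string Output, double Confidence);

class GuardedAiService
{
    private readonly Func<string, AiResult> _model; // any underlying model call
    private readonly Action<string> _auditLog;      // audit sink (file, table, App Insights, ...)
    private readonly double _threshold;

    public GuardedAiService(Func<string, AiResult> model, Action<string> auditLog, double threshold = 0.8)
        => (_model, _auditLog, _threshold) = (model, auditLog, threshold);

    public (string Output, bool NeedsHumanReview) Handle(string input)
    {
        var result = _model(input);

        // Log every request/response pair — the audit trail is non-negotiable.
        _auditLog($"in={input} out={result.Output} conf={result.Confidence:F2}");

        // Human-in-the-loop gate: low-confidence outputs are escalated, not shipped.
        return (result.Output, result.Confidence < _threshold);
    }
}
```

Pilots skip this wrapper; production systems cannot.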
A Microsoft-native, .NET-integrated, iterative approach solves this problem directly.
Companies don’t need new stacks.
They need discipline.
Why Executives Treat AI Incorrectly
Executives keep treating AI like:
- a strategy
- a transformation
- an innovation initiative
- a competitive differentiator
- a magic accelerant
But AI is not any of these things.
AI is simply a tool — and should be evaluated exactly like automation.
Executives approve AI projects for the wrong reasons:
- “AI seems important.”
- “Our competitors are doing it.”
- “We need something to show the board.”
- “This will help our innovation optics.”
- “Let’s experiment and see what happens.”
None of these lead to ROI.
EBIT gains come from AI projects that:
- replace cost
- reduce cycle time
- eliminate labor hours
- increase throughput
- reduce error-related rework
- augment high-value decision making
AI only creates business value when tied directly to operational metrics.
The Real Message: AI ROI Is a Discipline — Not a Feature
McKinsey isn’t telling us that AI is failing.
McKinsey is telling us that companies are failing to use AI in a disciplined, operational manner.
The organizations generating real EBIT lift are the ones that:
- Map every AI idea to the “faster, better, cheaper” framework
- Start with automation, not AI
- Use AI only for what automation can’t do
- Follow prototype → MVP → production
- Score AI use cases based on real business value
- Build with the tools they already own (Microsoft/.NET)
- Log everything
- Keep humans in the loop
These companies don’t just see innovation spikes.
They see bottom-line performance.
Where Microsoft and .NET Fit Into the ROI Equation
This is where AInDotNet stands alone.
1. Most enterprises already own 80% of the required AI stack
Azure
Microsoft 365
Teams
SQL Server
SharePoint
Power Platform
.NET
Active Directory
No new tools.
No rip-and-replace.
No multi-million-dollar platforms.
2. .NET teams already understand enterprise architecture
Identity
Security
Logging
Governance
Performance
Scalability
Distributed systems
This is exactly where AI projects normally fail.
3. AI becomes a plug-in, not a science project
You embed AI inside existing systems, instead of creating a separate AI tool stack that no one knows how to maintain.
4. ROI becomes measurable
C#
SQL
Power Automate
Azure Functions
Copilot integration
Everything plugs naturally into the Windows + Microsoft ecosystem — and connects to the data companies already trust.
This makes ROI predictable instead of mysterious.
Conclusion: AI Isn’t Failing — AI Management Is Failing
Innovation is good.
But innovation without ROI discipline is a distraction.
The companies that win with AI aren’t the ones doing the most experiments.
They’re the ones treating AI like automation:
Measured
Scoped
Tested
Audited
Integrated
Governed
Evaluated
Prioritized
With this mindset, enterprises stop doing “AI innovation.”
They start generating EBIT.
And that’s what separates AI dabblers from AI performers.
Formal Disclaimer:
This article contains independent analysis and commentary on the publicly available 2025 McKinsey AI Report. AInDotNet, its authors, and its associated brands are not affiliated with, sponsored by, or endorsed by McKinsey & Company. All references to McKinsey’s findings are for discussion and educational purposes only. Any interpretations, opinions, or conclusions expressed are solely those of the author.
Frequently Asked Questions
Why does AI improve innovation but not EBIT in most companies?
Because most organizations use AI for experimentation instead of operations. They run pilots, demos, and prototypes but rarely redesign workflows or tie AI to measurable cost savings, cycle time reductions, or throughput gains. Innovation increases — but profitability does not.
What is the biggest reason AI pilots fail to deliver ROI?
Lack of discipline. Companies jump straight into building “cool” AI projects without using a structured process: Prototype → MVP → Production. Most pilots never reach production because they were never tied to real business value.
How can enterprises measure real AI ROI?
By using the “Faster, Better, Cheaper” framework:
- Faster: reduced cycle time or increased throughput
- Better: increased accuracy or decision consistency
- Cheaper: reduced labor hours or error-related costs
If an AI use case doesn’t improve one of these, it shouldn’t be built.
Should companies use AI or automation first?
Always automation first. Automation is cheaper, more predictable, more scalable, and easier to maintain. AI should only be used when automation cannot solve the problem — usually when dealing with unstructured information, subjective tasks, or complex pattern recognition.
Why do executives treat AI incorrectly?
Because AI is often sold as a “transformation strategy” instead of a tool. Executives view it as innovation, optics, or competitive advantage instead of a practical way to automate work. This mindset prevents operational ROI.
What is the Prototype → MVP → Production methodology?
A disciplined AI delivery framework:
- Prototype: quick feasibility test
- MVP: minimal integration + small audience
- Production: fully secured, logged, governed, and monitored system
Companies that follow this path generate consistent ROI. Those that skip it get stuck in endless pilot mode.
What kinds of AI use cases deliver the fastest ROI?
Use cases that:
- Automate repetitive tasks
- Reduce manual review, summarization, or data entry
- Improve decision quality
- Remove friction in a workflow
- Integrate directly into existing line-of-business tools
Examples: email triage, reporting assistants, decision support, document processing, knowledge retrieval.
Why do AI projects collapse when scaling?
Because they lack:
- Logging
- Monitoring
- Identity/security integration
- Workflow integration
- Data quality controls
- Governance
- Performance engineering
Most AI pilots are built by teams who don’t understand enterprise architecture.
How does the Microsoft/.NET ecosystem improve AI ROI?
Because most enterprises already own:
- Azure
- Microsoft 365
- Teams
- SQL Server
- SharePoint
- Power Platform
- .NET developers
This reduces cost, training, approvals, security complications, and architecture sprawl — giving AI a native place to fit in the business.
Why is automation often better than AI for ROI?
Automation is:
- Deterministic
- Cheaper
- More accurate
- Easier to test
- Easier to maintain
- Easier to scale
AI should be used only when automation fails — not before.
How can companies build AI that employees actually use?
By embedding AI inside tools employees already use:
- Outlook
- Teams
- Microsoft Copilot
- SharePoint
- Existing .NET applications
AI should not live in a separate dashboard, portal, or standalone app.
What causes unexpected AI costs?
Three major sources:
- GPU usage
- Vendor lock-in to expensive proprietary tools
- Data consolidation projects that weren’t necessary
A Microsoft-native approach avoids all three by relying on services the enterprise already owns and consumption-based pricing.
Why do AI tools often fail when connected to real enterprise data?
Because many popular AI tools don’t understand:
- Data governance
- Security groups
- Permissions
- Identity
- API throttling
- Complex business logic
- Legacy systems
But .NET developers understand all of these — which is why they should own enterprise AI delivery.
How can companies ensure trustworthy AI outputs?
Through required enterprise guardrails:
- Logging every input/output
- Request/response auditing
- Human-in-the-loop approvals
- Confidence scoring
- Escalation workflows
- Versioning
- Quality review cycles
Trust is created through engineering, not through “more AI.”
What is the fastest way to get EBIT impact from AI?
Start with:
- Ten high-value use cases, scored for real business value
- Automation first
- Use AI only where automation fails
- Build quick prototypes
- Deploy real MVPs inside existing tools
- Track “faster, better, cheaper” metrics
This process consistently delivers measurable financial return.
Want More?
- Check out all of our free blog articles
- Check out all of our free infographics
- We currently have two books published
- Check out our hub for social media links to stay updated on what we publish
