Sunday, January 18, 2026

Agentic AI – Why ROI breaks at the workflow level.

 


Most leadership teams are no longer debating whether to invest in agentic AI – the real question is why returns remain so uneven.

A recent McKinsey study (Sept ’25), drawing on 50+ real agentic AI builds, captures many of the now-familiar challenges – difficulty attributing value, unclear user pain points, fragile agent–human collaboration, and poor performance in low-variance, highly standardized workflows. From my own experience building and scaling AI systems, three lessons from the study stand out that I think deserve a deeper look from a strategic and operating standpoint.

1.  “Workflows (not agents) are the main source of value”

Most failed or underwhelming agentic implementations I’ve seen share the same pattern: agents are layered onto the tail end of broken workflows in the hope of quick wins. The result is surface-level optimization that doesn’t address deeper issues around data quality, usability, model training, or deployment discipline.

The true advantage comes from treating workflows as economic units – segmenting them across service delivery, product experiences, and process controls creates clarity on implementation and measurement. More importantly, it allows agentic AI investments to be tied directly to performance KPIs and, ultimately, EBITDA – which is increasingly the lens through which the C-suite prioritizes AI spend, and rightly so.
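For readers who like to see the mechanics, here is a minimal sketch of what a workflow treated as an economic unit can look like once it is instrumented. Everything in it – the workflow name, volumes, percentages, and run costs – is hypothetical, purely to illustrate how one workflow’s baseline rolls up to an EBITDA estimate, not a model from the study.

from dataclasses import dataclass

@dataclass
class Workflow:
    """A workflow treated as an economic unit, with its own baseline economics."""
    name: str
    segment: str                   # e.g. service delivery, product experience, process control
    baseline_cost_per_case: float  # fully loaded cost per case before the agentic build
    cases_per_year: int

def ebitda_impact(wf: Workflow, cost_reduction_pct: float, run_cost_per_case: float) -> float:
    """Rough annual EBITDA contribution of an agentic build on one workflow (hypothetical model)."""
    gross_saving = wf.baseline_cost_per_case * cost_reduction_pct * wf.cases_per_year
    run_cost = run_cost_per_case * wf.cases_per_year
    return gross_saving - run_cost

# Hypothetical example: a claims-intake workflow, 30% cost-to-serve reduction, $1.20 per case to run agents.
claims_intake = Workflow("claims intake triage", "service delivery", 18.00, 250_000)
print(f"Estimated annual EBITDA impact: ${ebitda_impact(claims_intake, 0.30, 1.20):,.0f}")

The point is not the arithmetic; it is that once each workflow carries its own baseline and run cost, the conversation about agentic AI spend can be held in EBITDA terms rather than in demo counts.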

2.  “Mapping processes and defining user pain points are critical to designing effective agentic systems”

Strong models don’t compensate for weak process design. Once workflows are clear, the differentiator is how well organizations map user pain points and instrument real touchpoints. Users need seamless experiences and painless troubleshooting whenever things go wrong.

The strategic innovations are agentic systems that combine real-time inputs, journey-level measurement, and dynamic decisioning – these stand to make the most impact for end users. This is where customer experience (CX) stops being a “soft” concept and becomes the most direct indicator of customer success, translating into economic value and serving as a planning currency across industries – another C-suite lens for measuring AI program success.
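A simplified sketch of that idea follows – the touchpoints, the 120-second wait threshold, and the four-step limit are hypothetical placeholders, not recommendations. What it illustrates is that the decision to keep an agent engaged or to escalate is made on the journey so far, not on the last message alone.

from dataclasses import dataclass, field

@dataclass
class Touchpoint:
    name: str            # e.g. "chat_open", "agent_reply", "resolution_confirmed"
    latency_s: float     # how long the user waited at this step
    resolved: bool = False

@dataclass
class Journey:
    """Journey-level measurement: roll individual touchpoints up into one CX view."""
    customer_id: str
    touchpoints: list[Touchpoint] = field(default_factory=list)

    def record(self, tp: Touchpoint) -> str:
        self.touchpoints.append(tp)
        return self.decide()

    def decide(self) -> str:
        """Dynamic decisioning on the accumulated journey, not a single interaction."""
        total_wait = sum(tp.latency_s for tp in self.touchpoints)
        if any(tp.resolved for tp in self.touchpoints):
            return "close"
        if total_wait > 120 or len(self.touchpoints) > 4:
            return "escalate_to_human"   # the journey, not the last message, triggers escalation
        return "continue_with_agent"

journey = Journey("cust-001")
print(journey.record(Touchpoint("chat_open", 5)))        # continue_with_agent
print(journey.record(Touchpoint("agent_reply", 140)))    # escalate_to_human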

3.  “Humans remain essential for oversight, compliance, and judgement”

Agentic AI systems introduce real complexity: data dependencies, security exposure, regulatory risk, and privacy concerns sit right at the core of service delivery. Bias, drift, and edge-case failures are still very much part of today’s reality, and human judgment remains essential for oversight, tagging, and de-risking. The cost of getting this wrong isn’t theoretical – it shows up quickly as customer harm, compliance exposure, or brand damage. Maintaining the right skills and capacity alongside agentic systems isn’t optional; it’s foundational to any sustainable AI roadmap.
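As a minimal illustration of that human layer – where the confidence floor, the PII flag, and the review queue below are hypothetical stand-ins for a real oversight and compliance design – the principle is simply that agents auto-execute only when both risk and uncertainty are low.

from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    confidence: float    # model-reported confidence, 0.0 to 1.0
    touches_pii: bool    # does the action expose regulated or personal data?

REVIEW_QUEUE: list[AgentAction] = []

def gate(action: AgentAction, confidence_floor: float = 0.85) -> str:
    """Route agent actions: auto-execute only when risk and uncertainty are both low."""
    if action.touches_pii or action.confidence < confidence_floor:
        REVIEW_QUEUE.append(action)   # a human reviews, tags, and decides
        return "held_for_human_review"
    return "auto_executed"

print(gate(AgentAction("issue routine refund under $50", 0.95, touches_pii=False)))   # auto_executed
print(gate(AgentAction("update customer address on file", 0.97, touches_pii=True)))   # held_for_human_review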

As agentic AI moves from experimentation to scale, it becomes an operating-model decision, not a technology problem. That is exactly why experienced AI practitioners need to be engaged alongside strategy and planning leaders – helping them think through where agents make sense, where humans must stay in the loop, and how the two come together to create competitive differentiation.

For most leadership teams, the real opportunity lies in aligning strategy, workflows, and human judgment – the experiential lessons that deliver the value available from AI investments.

Monday, January 12, 2026

AI’s EBITDA Promise – and why most companies fail to realize it.


A recent Bain & Company study (Sept ’25) cited 20–25% EBITDA gains from AI adoption, with agentic AI visions starting to take shape. While promising, these numbers largely reflect outcomes at tech-forward companies that already hold an advantage in mature foundational (data) and processing (talent) capabilities.

For most non-tech-forward enterprises, the challenge lies in translating that belief into reality – defining the right AI strategy, setting realistic ROI expectations, aligning investments to business outcomes, and procuring services that support their overarching goals. In my view, this is where it becomes imperative to work with AI practitioners who can collaboratively help “translate” the AI promise into an organization’s specific context. Experienced practitioners help outline an AI roadmap for strategic advantage, define the right KPIs for ROI measurement, prioritize a staged approach to a pragmatic AI investment philosophy, and act as effective “interlocutors” between technology providers and functional teams.

The same Bain study also highlights familiar barriers in data silos, IP and security risks, vendor lock-in, and missing standards. I would argue the reality is often worse – many organizations struggle with a fragmented understanding of the benefits AI can truly deliver (and how to measure them), difficulty prioritizing use cases with credible ROI, limited budget flexibility for experimentation, and executive hesitation to place big bets.

This is where AI practitioners can act as the “human in the loop” who helps shape decisions: defining target use cases, outlining budget and talent requirements, creating flexible architectural views, and avoiding common pitfalls – all while navigating fragmented data, modeling efficacy, application relevance, and the absence of industry-defining standards.

AI transformation isn’t a technology problem – it’s a business translation problem.

Tuesday, January 6, 2026

Program Management in AI Transformations: What We Are Really Managing.

 As I looked for a topic to close out 2025, I found myself returning to program management, particularly in the context of AI and digital transformation programs, where its importance is often recognized only after momentum starts to slip.

Few initiatives expose weaknesses in program leadership as quickly as AI. These programs cut across data, technology, operations, and business teams. Expectations are high, timelines are compressed, and assumptions often go untested. Yet the fundamentals of program management in these efforts remain underfunded and undervalued.

What I have repeatedly observed is that AI programs rarely stall because of algorithms or models – they stall because of misalignment around expectations, focus, and timing. Here are some more insights.

>  Managing expectations: where most AI programs quietly go off track

Expectation management in AI initiatives begins well before the first model is built. Many programs start with ambitious goals, but imprecise definitions of success.

A request for an AI feasibility assessment or pilot often carries unspoken assumptions – production-ready solutions, measurable ROI, or a clear path to enterprise scale. In reality, those outcomes depend heavily on data readiness, integration complexity, governance, and change adoption.

Data, in particular, is frequently overestimated. Teams often discover late that critical data is fragmented, inconsistently governed, or simply not usable in its current form. What was positioned as a quick pilot then struggles because foundational assumptions were never surfaced early.

What you are really managing is alignment – between AI ambition, data reality, scope, and achievable timelines.

>  Managing focus: discipline matters more than sophistication

AI teams are naturally inclined to explore. Models can be refined, features extended, and architectures improved almost indefinitely. Without discipline, this curiosity can quietly undermine outcomes.

I have seen AI pilots expand into sophisticated technical builds while losing sight of the original business question they were meant to answer. The result is often an impressive solution in search of a decision.

In transformation programs, focus matters more than technical elegance. Clear, decision-oriented outputs, impact estimates, risks, and operational implications create momentum. Deeper technical work can and should follow once value is established.

What you are really managing is focus – ensuring effort remains anchored to the business question, with technical possibilities explored in service of it.

>  Managing timelines: protecting sponsor credibility

AI programs are especially sensitive to delays. Dependencies on data access, privacy approvals, business inputs, and validation cycles introduce uncertainty that must be actively managed.

Delays become issues when their impact is not communicated clearly and early. Sponsors are then left explaining slow progress without the context or options needed to manage expectations internally.

In AI programs, transparency around timing does more than manage delivery – it protects credibility, both for the team and for the leaders championing the initiative.

What you are really managing is trust – and the sponsor’s ability to lead the transformation with confidence.

A closing reflection

More often than not, AI initiatives lose momentum quietly – through misaligned expectations, drifting focus, and unspoken delays. The programs that succeed are designed to surface reality early, force disciplined choices, and protect trust as complexity increases.

That, more than any model or platform, determines whether a program delivers value.