Sunday, January 18, 2026

Agentic AI – Why ROI breaks at the workflow level.

Most leadership teams are no longer debating whether to invest in agentic AI – the real question is why returns remain so uneven.

A recent McKinsey study (Sept ’25), drawing on 50+ real agentic AI builds, captures many of the now-familiar challenges – difficulty attributing value, unclear user pain points, fragile agent–human collaboration, and poor performance in low-variance, highly standardized workflows. From my own experience building and scaling AI systems, three lessons from the study stand out as deserving a closer look from a strategic and operating standpoint.

1.  “Workflows (not agents) are the main source of value”

Most failed or underwhelming agentic implementations I’ve seen share the same pattern: agents are layered onto the tail end of broken workflows in the hope of quick wins. The result is surface-level optimization that doesn’t address deeper issues around data quality, usability, model training, or deployment discipline.

The true advantage comes from treating workflows as economic units – segmenting them across service delivery, product experiences, and process controls creates clarity on implementation and measurement. More importantly, it allows agentic AI investments to be tied directly to performance KPIs and, ultimately, to EBITDA – which is increasingly the lens through which the C-suite prioritizes AI spend, and rightly so.
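
To make “workflows as economic units” slightly more concrete, here is a minimal sketch – with entirely hypothetical names and figures, not drawn from the study – of valuing agent deployments against a workflow’s fully loaded cost rather than against model-level metrics:

```python
from dataclasses import dataclass


@dataclass
class Workflow:
    """A workflow treated as an economic unit (all figures are hypothetical)."""
    name: str
    annual_volume: int            # transactions handled per year
    cost_per_transaction: float   # fully loaded cost per transaction today, in dollars
    agent_coverage: float         # share of volume the agent can handle (0..1)
    cost_reduction: float         # cost reduction on the covered volume (0..1)

    def annual_savings(self) -> float:
        """Gross annual run-rate savings attributable to the agent."""
        baseline_cost = self.annual_volume * self.cost_per_transaction
        return baseline_cost * self.agent_coverage * self.cost_reduction


def ebitda_impact(workflows: list[Workflow], annual_agent_cost: float) -> float:
    """Net annual EBITDA impact: workflow-level savings minus the cost of running the agents."""
    return sum(w.annual_savings() for w in workflows) - annual_agent_cost


if __name__ == "__main__":
    portfolio = [
        Workflow("claims triage", 400_000, 12.0, 0.6, 0.35),
        Workflow("order status inquiries", 1_200_000, 3.5, 0.8, 0.5),
    ]
    print(f"Net EBITDA impact: ${ebitda_impact(portfolio, annual_agent_cost=900_000):,.0f}")
```

The arithmetic is trivial by design; the point is the unit of account – returns are claimed at the workflow level, where they can be traced to an operational KPI and rolled up into EBITDA.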

2.  “Mapping processes and defining user pain points are critical to designing effective agentic systems”

Strong models don’t compensate for weak process design. Once workflows are clear, the differentiator is how well organizations map user pain points and instrument the real touchpoints. Users need seamless experiences and, when something does go wrong, painless troubleshooting.

Agentic systems that combine real-time inputs, journey-level measurement, and dynamic decisioning stand to make the biggest impact for end users. This is where customer experience (CX) stops being a “soft” concept and becomes the most direct indicator of customer success – one that translates into economic value and serves as a planning currency across industries, and another C-suite lens for measuring AI program success.

3.  “Humans remain essential for oversight, compliance, and judgment”

Agentic AI systems introduce real complexity: data dependencies, security exposure, regulatory risk, and privacy concerns sit right at the core of service delivery. Bias, drift, and edge-case failures are still very much part of today’s reality, and human judgment remains essential for oversight, tagging, and de-risking. The cost of getting this wrong isn’t theoretical – it shows up quickly as customer harm, compliance exposure, or brand damage. Maintaining the right skills and capacity alongside agentic systems isn’t optional; it’s foundational to any sustainable AI roadmap.
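
To make the oversight point concrete, here is a minimal sketch of a human-in-the-loop gate – the thresholds, tiers, and function names are illustrative assumptions, not tied to any specific framework – that routes low-confidence or high-risk agent actions to a reviewer instead of letting them execute automatically:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentAction:
    """A proposed action from an agent, with its self-reported confidence."""
    description: str
    confidence: float   # 0..1, as estimated by the agent
    risk_tier: str      # "low" | "medium" | "high", e.g. from a policy classifier


def execute_with_oversight(
    action: AgentAction,
    execute: Callable[[AgentAction], None],
    request_review: Callable[[AgentAction], bool],
    min_confidence: float = 0.85,
) -> str:
    """Execute only when the action clears confidence and risk checks;
    otherwise route it to a human reviewer first (hypothetical policy)."""
    needs_human = action.risk_tier == "high" or action.confidence < min_confidence
    if needs_human:
        if not request_review(action):   # blocking human review step
            return "rejected by reviewer"
    execute(action)
    return "executed after review" if needs_human else "executed automatically"


# Example: a large refund is always escalated, regardless of model confidence.
result = execute_with_oversight(
    AgentAction("issue $2,400 refund", confidence=0.92, risk_tier="high"),
    execute=lambda a: print(f"executing: {a.description}"),
    request_review=lambda a: True,   # stand-in for a real review queue
)
```

Thresholds like these are operating-model choices as much as technical ones: where they are set determines how much human review capacity the roadmap has to fund.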

As agentic AI moves from experimentation to scale, it becomes an operating-model decision, not a technology problem. That is exactly why experienced AI practitioners need to be working alongside strategy and planning leaders – helping them think through where agents make sense, where humans must stay in the loop, and how both come together to create competitive differentiation.

For most leadership teams, the real opportunity lies in aligning strategy, workflows, and human judgment – the experiential lessons that turn AI investments into delivered value.
