
Exec X AI Magazine

Your AI assistant is about to get a promotion

A summary of the Financial Times' Trusted Data for Agentic AI, hosted in partnership with Informatica, February 25th, 2026


Khaled Shivji

CEO and Co-Founder at S.AI.L · February 26, 2026


The views expressed in this article are the author's own. Not all observations and conclusions reflect statements made by participants during the FT Live seminar

The FT Live seminar is available to view via replay until 24 March 2026. Watch the replay here

FT Live would welcome your feedback (Ed - don't shoot the messenger). Complete the evaluation form here and be entered into a prize draw to win a GBP 200 e-voucher. Terms and conditions apply

FT Live panelists: Steve Holyer (Informatica), John Foley (Financial Times), Andrew Reiskind (Mastercard), Vida Ahmadi Mehri (Electrolux Group), Marc Beierschoder (Deloitte), Mercedes Pantoja (Siemens Healthineers)
FT Live · Trusted Data for Agentic AI · February 25, 2026

TL;DR

Three priorities emerged for any executive refreshing their AI strategy:

  • Build your data foundation. You need a single source of truth your agents can rely on. This is a non-negotiable prerequisite. It means investing in data quality and in the systems that provide rich contextual information (Ed - I cannot stress this enough!)
  • Establish a decision governance framework. Your Office of Responsible AI must define what decisions can be automated, which require a human in the loop, and who owns the outcome when something goes wrong
  • Focus on core process transformation. Isolated pilots are not a strategy. (Ed - Innovate internally and cross-functionally using several ideation workshops). Competitive advantage comes from redesigning how you run your products, services, and processes end-to-end

The $50 million case study

A consumer products enterprise used agentic AI and core process transformation to cut its product launch cycle from eight weeks to three days. The projected P&L impact: over $50 million. I have no doubt this project will succeed; this is exactly how enterprise AI moves the needle beyond individual productivity towards an 'AI-plus' future

The digital full-time employee

An AI agent is a digital full-time employee, or “Digital FTE.” It is a tireless analyst that works 24/7 and can execute tens of thousands of decisions an hour. Its critical limitation is a lack of common sense. It does not understand context, ambiguity, or the unwritten rules of your business

Laying off employees to fund AI initiatives is a value-destroying move. Those employees possess the tacit knowledge required to fill the gaps and navigate internal politics and unwritten processes

Even digital FTEs suffer from performance degradation, or “AI drift.” Told to optimise for speed, an agent may deprioritise complex but critical tasks. Nothing breaks. Your dashboards remain green. But performance degrades. By the time you notice, the damage is done
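The failure mode described above, where dashboards stay green while complex work silently slips, can be caught by monitoring the mix of tasks an agent actually completes rather than its raw throughput. A minimal sketch of such a check (the task categories, field names, and the 50% tolerance threshold are illustrative assumptions, not from the seminar):

```python
from collections import Counter

# Illustrative drift check: total throughput can look healthy while the
# share of complex-but-critical tasks quietly collapses.
def detect_drift(baseline_tasks, recent_tasks, category="complex", tolerance=0.5):
    """Flag drift when the recent share of a task category falls below
    `tolerance` times its baseline share. Thresholds are illustrative."""
    def share(tasks):
        counts = Counter(t["category"] for t in tasks)
        total = sum(counts.values())
        return counts[category] / total if total else 0.0

    base, recent = share(baseline_tasks), share(recent_tasks)
    return {
        "baseline_share": base,
        "recent_share": recent,
        "drift": recent < tolerance * base,
    }
```

With this check, a drop from a 40% to a 10% complex-task share trips the alarm even if the total task count on the dashboard keeps rising, because the comparison is on the mix of work, not the volume.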

Manage your digital workforce with the same rigour as your human one. This requires clear roles, defined responsibilities, performance reviews, and a governance framework that specifies which decisions an agent is permitted to make

Vida Ahmadi Mehri, the Data and AI Governance Officer at Electrolux Group, argued for a shift from data governance to decision governance. You must define the boundaries of an agent's autonomy. The board must decide who is accountable when the agent gets it wrong
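Decision governance of this kind can be made concrete as a policy table that maps each decision type to an autonomy level and an accountable owner. A minimal sketch (the decision types, autonomy levels, and owners below are hypothetical examples, not from the seminar):

```python
# Illustrative decision-governance gate. Decision types, autonomy levels,
# and owners are hypothetical examples.
POLICY = {
    "draft_email":         {"autonomy": "automate",      "owner": "comms_lead"},
    "adjust_supply_chain": {"autonomy": "human_in_loop", "owner": "coo"},
    "execute_trade":       {"autonomy": "forbidden",     "owner": "cfo"},
}

def authorise(decision_type):
    """Return (may_act_autonomously, accountable_owner) for a decision.
    Unknown decision types default to forbidden: no policy, no autonomy."""
    rule = POLICY.get(decision_type, {"autonomy": "forbidden", "owner": "board"})
    return rule["autonomy"] == "automate", rule["owner"]
```

The deliberate design choice here is the default: any decision the board has not explicitly classified is denied and escalated, so an agent can never act in a gap in the policy.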

Context is critical. When an agent is drafting an email, a data error is an inconvenience. When it is executing a trade, launching a product, or adjusting a supply chain, a data error can wipe out productivity gains

The Mandela Effect on AI stocks

A false narrative is shaping the market. The statistic that “99% of AI proofs-of-concept fail to launch” is a fiction, often misattributed to McKinsey & Co. It has been repeated so often it has become folklore. This is the Mandela Effect in action: a collective false memory

The reality is not that POCs fail to launch, but that they fail to scale. The reasons are misaligned expectations and a failure to integrate AI into the core processes that drive business value. Companies are not eager to report how many of their AI pilots have stalled, for fear of the impact on their share price

Let's unpack this: MIT NANDA's July 2025 report, 'State of AI in Business 2025', found that despite $30–40 billion in enterprise investment in generative AI, 95% of organisations are getting zero return

MIT NANDA's report was released quietly on GitHub. In August, as bankers began returning to the office, the report went viral. MIT NANDA quickly placed it behind a soft gate (Ed - I don't blame them!)

But, for a week, tech stock volatility swept the market. CoreWeave (NASDAQ: CRWV) and Palantir (NASDAQ: PLTR) took the biggest hits, falling 21.2% and 14.4% respectively

AI sell-off as MIT research cast doubt on returns: Nvidia, Palantir, Oracle and CoreWeave daily closing prices fell sharply in mid-August 2025 (CRWV -21.2%, PLTR -14.4%, ORCL -4.8%, NVDA -2.7%). Source: Yahoo Finance

Cloud computing watch site Futuriom released a scathing report describing MIT NANDA's report as 'weird', writing:

We aren't AI cheerleader purists—there are certainly many problematic areas of AI as well as investment patterns that warrant bubble fears—but the MIT NANDA report paints an irresponsible and unfounded picture of what's happening in Enterprise AI.

My takeaway from this: the real risk to AI-exposed stocks is not the failure rate of POCs, but the perception that widespread adoption will fail to generate a return on investment

That's why your strategy must be built on delivering value, not on avoiding a misremembered failure statistic

Who is accountable when the agent gets it wrong?

The board needs to make a call on this:

  • confine the digital FTE to a box, where each side of the box represents a limit on the agent's autonomy, or
  • enable the agent to think, learn and act within a set of parameters, akin to Anthropic's Responsible Scaling Policy, where the consequences of increased computing power and computational complexity can (potentially) be predicted (Ed - Anthropic's RSP (v3) dropped two days ago)

The Final Word: Should I walk to the car wash?

Ask an AI to draft a new email. Remove the canned platitudes and press send. (Ed - always check AI-generated emails. Always!)

The ChatGPT ‘car wash’ test is doing the rounds as a benchmark on why providing Gen AI with context is so important

AI needs context, i.e. the "who, what, when, where, why" behind the data. Otherwise, if you ask an AI whether to walk to a car wash, it will insist that walking is healthier and beneficial, even if your prized motor stays dirty
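The practical fix for the car wash test is to attach that "who, what, when, where, why" context to the question before it reaches the model. A minimal sketch of a prompt builder (the field names and example values are assumptions for illustration):

```python
# Illustrative sketch: the same question with and without the
# "who, what, when, where, why" context attached.
def build_prompt(question, context=None):
    """Prepend structured context so the model can weigh what actually
    matters (e.g. you own a car that needs cleaning), not just the
    literal question. Without context, the bare question is returned."""
    if not context:
        return question
    lines = [f"{key}: {value}" for key, value in context.items()]
    return "Context:\n" + "\n".join(lines) + "\n\nQuestion: " + question

prompt = build_prompt(
    "Should I walk to the car wash?",
    {"who": "driver of a dirty car", "why": "the car needs washing",
     "where": "car wash is two miles away"},
)
```

With the context block attached, the model can reason about the actual goal (a clean car) instead of defaulting to generic advice about the health benefits of walking.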

And this is one of the many reasons why AI SaaS products built for the enterprise are failing to provide clients with the returns they promise. The SaaS is often rushed out of the door and poorly evaluated pre-production (AI orchestrators are put in charge of evaluating other AIs (Ed - seriously!)). Post-deployment, AI SaaS suffers from a lack of context: layoffs strip valuable tacit knowledge out of the enterprise and cripple Reinforcement Learning from Human Feedback (RLHF)

Sad to say, but some enterprise generative AI apps look cheap compared to slick, multi-billion-dollar counterparts like Claude and ChatGPT. As a result, users give up and go back to shadow AI

The FT Live seminar proved that there's no wiggle room for ignorance. As the hype around generative AI begins to settle, the message is clear: fail to plan, plan to fail


Khaled Shivji

CEO and Co-Founder of S.AI.L (Strategic AI Leadership), an enterprise AI consultancy. Principal-led, compliance-first, and cloud-agnostic, S.AI.L provides trusted advice. khaled@execxai.com