Stop Evaluating AI Tools. Start Auditing Your Data.

Written by Simon Spyer | Apr 10, 2026 2:28:36 PM

The ROI of your next AI investment has already been decided, determined by the state of your data architecture long before the first vendor reached out.

Marketing Week's recent piece on AI and martech makes the polite observation that tool effectiveness depends on data hygiene. Accurate enough, but it stops short of the uncomfortable reality: most brands are running AI vendor evaluations as a form of expensive theatre. The tools can't deliver what they promise because the data foundations don't exist to support them.

The failure is one of sequencing: businesses evaluate capabilities before assessing readiness. The result is procurement cycles that burn budget and internal credibility on decisions that were structurally made before the first demo deck was opened.

Vendor Demos Are Answering the Wrong Question

When a CMO sits through an AI tool demonstration, the implicit question is whether the platform can do what the business needs. The vendor answers enthusiastically, the demo is impressive, the case studies are compelling.

The question that never gets asked is whether the data environment can actually let the tool perform as demonstrated.

AI tools are pattern recognition engines requiring specific inputs to produce useful outputs:

  • Personalisation engines need unified customer profiles

  • Predictive models need historical data with consistent taxonomies

  • Real-time decisioning needs integration pathways that most enterprise architectures do not have

The vendor knows this. The vendor's job, however, is to sell the platform rather than audit your readiness. They will assume your data is in better shape than it is. They will nod when you describe your customer data platform, even when the underlying reality is four disconnected systems with inconsistent identifiers.

By the time marketers discover the gap, the contract is signed and the implementation team is troubleshooting data quality issues that should have disqualified the project at the outset.

The Data Readiness Audit Is the Actual Evaluation

A data readiness audit answers the questions that determine whether AI tools can deliver value in your specific environment:

  • whether a persistent customer identifier exists across channels;

  • how clean the historical transaction data is;

  • what latency exists between data capture and availability;

  • how much manual reconciliation happens before reporting.

These are commercial questions with direct budget implications. A personalisation engine can't lift conversion if the customer profile it references is 72 hours stale. A predictive model can't forecast demand if the training data contains three different product taxonomies from a merger four years ago.

The audit reveals what is actually possible here, now, with the data you have.
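The audit questions above are answerable with simple checks against real exports. The sketch below, using toy records and illustrative field names (`customer_id`, `updated_at` are assumptions, not a prescribed schema), shows two of them: profile freshness against a 24-hour target, and how many transaction identifiers resolve to a profile as-is.

```python
from datetime import datetime, timedelta

# Toy records standing in for exports from two hypothetical systems;
# the field names are illustrative, not a prescribed schema.
now = datetime.now()
crm_profiles = [
    {"customer_id": "C001", "updated_at": now - timedelta(hours=6)},
    {"customer_id": "C002", "updated_at": now - timedelta(hours=80)},
    {"customer_id": None,   "updated_at": now - timedelta(hours=2)},
]
ecommerce_orders = [
    {"customer_id": "C001"},
    {"customer_id": "c-002"},   # same customer, inconsistent identifier format
    {"customer_id": "C003"},
]

# Check 1: what latency exists between capture and availability?
# Here: how many profiles are older than a 24-hour freshness target.
stale = [p for p in crm_profiles
         if now - p["updated_at"] > timedelta(hours=24)]
print(f"{len(stale)}/{len(crm_profiles)} profiles older than 24h")

# Check 2: does a persistent identifier exist across channels?
# Here: what share of order identifiers match a CRM profile without
# manual reconciliation.
crm_ids = {p["customer_id"] for p in crm_profiles if p["customer_id"]}
matched = [o for o in ecommerce_orders if o["customer_id"] in crm_ids]
print(f"{len(matched)}/{len(ecommerce_orders)} orders match a CRM identifier")
```

Even this toy version surfaces the commercial findings: one profile in three is too stale for real-time personalisation, and two orders in three cannot be joined to a customer without reconciliation work.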

Gartner's research from early 2025 makes the stakes plain: 63% of organisations lack the data management practices needed for AI, and the firm predicts that 60% of AI projects not supported by AI-ready data will be abandoned before 2027.

Most enterprises skip the audit because it feels like delay — adding process before progress. But the delay is already embedded in the project. It simply arrives later, during implementation, when it costs more and damages more credibility.

The Architecture Decides Before the Demo Ends

Every AI marketing promise sits on a data foundation most enterprises have not yet built.

  • Personalisation requires identity resolution.

  • Real-time decisioning requires integration velocity.

  • Predictive analytics requires clean, consistent historical data at sufficient volume.

These are preconditions that must exist before a contract is worth signing.

A CMO under pressure to demonstrate AI value from existing martech investments faces a difficult reality. The vendors are pitching capability. The board is expecting results. The architecture may disqualify most of those tools before the demo ends.

According to Salesforce's State of Data and Analytics report, 84% of data and analytics leaders believe their data strategies require a complete overhaul before their AI ambitions can succeed. That figure reflects years of deferred data work that can't be compressed once an AI contract is live.

The enterprises seeing real returns from AI martech share a common trait. They unified customer identifiers before buying personalisation platforms. They cleaned historical data before training predictive models. They built integration pipelines before purchasing real-time decisioning tools. Data readiness was the investment. Tool selection was the implementation.

What a Readiness-First Approach Actually Looks Like

The shift requires changing where evaluation begins.

Start with a data inventory rather than a vendor shortlist. Document what customer data exists, where it lives, how it flows. Identify the gaps and inconsistencies. A focused team can produce a meaningful readiness assessment in weeks.

Match the inventory against capability requirements. If AI-powered personalisation is the goal, identify exactly what data inputs the use case requires. If predictive modelling is the objective, specify what historical data would need to be clean and available. The gap between what you have and what you need becomes the actual project scope.
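That matching step can be made mechanical. The sketch below is a minimal gap analysis, assuming a hypothetical mapping of use cases to required data inputs and an equally hypothetical inventory; none of the names here are a standard taxonomy, only placeholders for what your own audit would produce.

```python
# Hypothetical capability requirements mapped to the data inputs each
# use case needs; the labels are illustrative placeholders.
requirements = {
    "personalisation": {"unified_customer_id", "realtime_events", "consent_flags"},
    "predictive_demand": {"3y_transaction_history", "consistent_product_taxonomy"},
}

# What the (hypothetical) data inventory actually found.
inventory = {"unified_customer_id", "consent_flags", "3y_transaction_history"}

# The difference between required and available inputs is the real
# project scope, known before any vendor conversation starts.
gaps = {use_case: needed - inventory for use_case, needed in requirements.items()}
for use_case, missing in gaps.items():
    status = "ready" if not missing else "blocked on: " + ", ".join(sorted(missing))
    print(f"{use_case}: {status}")
```

The output is a scoping document rather than a shortlist: each blocked use case names the data work that must precede any contract for that capability.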

Only then begin vendor conversations. With a known architecture in hand, you can ask specific questions about data requirements and integration assumptions. You can identify precisely where a vendor's promises depend on conditions you do not meet.

This approach is slower at the start. It's faster in total because data limitations surface during planning rather than during implementation — before the contract is signed, not after the go-live is delayed.

The Cost of Evaluating Backwards

The cost of skipping readiness extends well beyond wasted licence fees. It includes procurement cycles that consume months of internal resource, implementation teams troubleshooting problems that could have been identified in week one, and internal credibility spent on initiatives that underdeliver.

The evidence of a widening gap is accumulating. The share of marketers who say they can prove AI ROI dropped from 49% to 41% in a single year, according to MarTech's 2026 industry analysis. In retail, the decline was steeper still, from 54% to 38%, despite steady adoption. Early wins on content speed and automated segmentation were real but shallow. The harder requirement — connecting AI activity to revenue — demands data infrastructure most teams have not built.

AI tool proliferation is accelerating. The martech landscape now comprises over 15,000 tools. Every week brings another platform with another impressive demo and case studies drawn from companies whose data environments look nothing like yours.

The enterprises that will see returns are the ones that stop asking what AI tools can do and start asking what their data allows. ROI is decided before the contract is signed. The question is whether your team discovers that during procurement or during implementation.