Brief quality beats AI intelligence with Judit Petho

Written by Simon Spyer | Apr 28, 2026 3:45:11 PM

The intelligence behind your AI matters less than the brief you write for it. Most organisations are rushing to deploy AI solutions without understanding how to communicate what they actually need. The result is expensive technology solving the wrong problems at impressive speed.

Judit Petho has spent years helping senior leaders diagnose problems before they reach for solutions. Her argument is uncomfortable: the quality of your brief determines whether AI amplifies your intelligence or your confusion. We unpack this in our latest video and in this post. 

Why CEOs approach AI replacement with the wrong mindset

Half the C-suite believes AI agents can replace half their people. This is dangerous thinking driven by FOMO rather than strategic clarity. Leaders see competitor announcements and internal pressure to "do something with AI" without understanding what specific problems they need to solve.

The rush to replace people treats AI agents like employees rather than tools that need conversation. LLMs require precise instruction, context and ongoing refinement. They can't read between the lines or fill gaps in strategic thinking. When you brief an AI agent poorly, you get confident answers to the wrong questions.

CEOs who approach AI as a headcount replacement miss the real opportunity: using AI to amplify their best people's decision-making capability. The most successful AI implementations enhance human judgment rather than replace it.

How senior leaders lose problem-solving clarity

The higher you climb in an organisation, the more you lose the ability to break problems apart. Senior leaders become accustomed to delegating detail work and thinking in broad strategic themes. When it comes to briefing AI, this becomes a liability.

AI requires specificity. It needs to understand not just what you want but why you want it, what success looks like and how the output will be used. Most executives can't articulate these details because they have not needed to for years. They know something is wrong but can't diagnose whether it is a data problem, a process problem or an alignment problem.

This diagnostic gap becomes expensive when you deploy AI to solve symptoms rather than root causes. You waste months building sophisticated solutions for problems you have not properly defined. The AI performs perfectly within its brief while missing the actual business need entirely.

Why measurement problems are actually alignment problems

When organisations struggle to measure AI impact, the problem is rarely the measurement methodology. It's that senior leaders have never aligned on what success actually looks like. They assume shared understanding where none exists.

We use a three-level measurement framework to stop teams getting lost in data:

  • Level one: business outcomes that matter to the board.

  • Level two: operational metrics that drive those outcomes.

  • Level three: activity measures that feed the operations.

Most AI projects start at level three and then wonder why executives are not convinced.

Fear drives the lack of clear goal setting. Leaders worry that specific, measurable commitments create accountability they can't deliver. It's safer to talk about "transformation" and "innovation" than to commit to increasing conversion rates by 15% in six months.

But AI can't optimise for transformation. It optimises for the specific objectives you give it.

How performative communication wastes organisational energy

The biggest energy drain in AI adoption is performative communication. Teams spend more time talking about AI strategy than implementing it. Meetings become showcases for AI literacy rather than problem-solving sessions.

When everyone needs to sound intelligent about AI, conversations lose precision. Instead of saying "our customer segmentation is inconsistent and costing us revenue", teams say "we need to leverage AI to optimise our customer journey". The first statement can be briefed to an AI agent. The second cannot.

The solution is ruthless specificity about problems before solutions. What exactly is broken? How do you know? What would fixed look like? These questions feel basic but most organisations cannot answer them clearly. Until you can, AI agents will solve problems you do not actually have.

Watch the full conversation on The Precision Brief to hear Judit's complete framework for briefing AI effectively.

Subscribe to the newsletter for more insights on what actually moves the needle in data-driven decision making.