Let me start with an apology for the recent radio silence: it has been a busy period at work, and I also took some time off in the Scottish Highlands. Between distillery visits and dodging hellish clouds of midges, I found myself thinking about one of the fundamental principles of good data strategy and what it means in a world where AI is consuming so much of our collective brain space.
Business before tools
There are a handful of principles so fundamental that every business leader should have them tattooed somewhere visible. Somewhere they'll see them every morning before making decisions that could cost their company millions.
"Business before tools" should be at the top of that list.
It sounds insultingly simple. Yet in our rush toward complexity, this principle is repeatedly overlooked. The shinier the technology, the more we seem to forget that tools should serve strategy, not drive it.
Over the years I've had many conversations with executives that go something like this:
"We spent (insert big number) million on a new platform. The demos were incredible. The vendor showed us exactly what we wanted to see. Twelve months later, we're spending more on keeping the damn thing running than we've saved or made, and we're still not making better decisions. It's a waste of time."
When I probe further, I find that the clamour for technological salvation has drowned out important considerations: people, processes, and business reality. It's a common problem and an expensive lesson.
Before asking "what can tools do for us," we should ask, "what business problems do we need to solve?"
AI presents unprecedented opportunities, but amid such rapid change, we risk neglecting this fundamental principle. It's early days, but I'm already seeing a familiar pattern emerge.
'AI regret' is creeping into the discourse, with GenAI the main culprit. McKinsey recently reported that over 80% of companies haven't seen a tangible impact on EBIT from GenAI. I believe GenAI will start creating real impact when embedded strategically, but such high levels of regret suggest many organisations are asking the wrong questions.
New tools, an old problem
Many AI tools may be new to organisations, but we've seen this problem before. It happened with enterprise software in the 90s, cloud computing in the 2000s, and mobile apps in the 2010s. The technology changes, but the fundamental mistake remains: leading with tools instead of value.
When organisations approach data problems tool-first, they follow a predictable path. They identify impressive capabilities, imagine use cases that might leverage those capabilities, then retrofit solutions into existing processes. The result is often expensive, over-engineered solutions that solve problems no one actually had.
Klarna has already given us a cautionary tale. In February 2024, the Swedish fintech made headlines claiming its AI assistant was "doing the equivalent work of 700 full-time agents" and handling two-thirds of customer service conversations. The company positioned this as a massive success, touting faster resolution times and cost savings.
By May 2025, CEO Sebastian Siemiatkowski reversed course, telling Bloomberg the company was recruiting humans again after the AI approach led to "lower quality" service. Critics noted that while AI might have been doing the work of 700 agents, "it's really doing the work of 700 really bad agents."
Instead of starting with "how can we improve customer satisfaction and reduce service costs?" I suspect some genius asked "how can we replace humans with AI?" They led with AI's impressive capabilities rather than the specific value they needed to deliver to customers.
The Klarna case shows how tool-first thinking can create impressive-sounding metrics that don't translate to real business value. Had they started with customer value questions - What do our customers need from support? How do we measure service quality? What drives satisfaction in our specific context? - they might have designed a hybrid approach from the beginning.
What makes AI different
While value-first thinking remains unchanged, AI presents unique considerations that business leaders must navigate carefully.
📊 The data dependency challenge: Unlike previous technologies, AI's effectiveness is fundamentally tied to data quality and availability. You can't simply install AI like software. A CRM system works with whatever data you input, but an AI model trained on poor data will produce poor results regardless of algorithmic sophistication. Your value assessment must include a realistic audit of your data landscape.
👩🎓 The "learning" factor: Traditional tools are deterministic - they produce predictable outputs for given inputs. AI systems learn and evolve, meaning performance can improve over time, but they can also degrade or behave unexpectedly. This introduces ongoing management overhead that must be factored into value calculations.
🥷 The skills gap: Implementing a new CRM might require training your sales team on a new interface. Implementing AI often requires entirely new skill sets. The human capital investment is typically much higher and harder to source.
☁️ The explainability problem: When your accounting software calculates a total, you can trace exactly how it arrived at that number. When an AI model makes a recommendation, the reasoning may be opaque. This has practical implications for regulated industries, customer service, and strategic decision-making.
The hidden cost that could sink AI projects
The McKinsey research and the Klarna example don't reveal the biggest hidden cost (in my opinion, at least): ongoing data management.
Companies budget for AI development, infrastructure, and talent. But they risk underestimating the operational overhead of keeping their AI systems fed with suitable data.
This goes beyond data quality. The real challenge is ensuring data remains suitable for changing circumstances. Your AI model was trained on historical data reflecting your business environment at a specific point in time. But business environments are never static.
Market conditions shift. Customer behaviour evolves. Regulatory requirements change. Competitive landscapes transform. Each change can gradually, or suddenly, make your data less relevant to current reality.
For example: you might implement AI for demand forecasting based on three years of historical sales data. It works beautifully for six months. Then a new competitor enters your market, consumer preferences shift, or supply chain disruption changes buying patterns. Suddenly, your AI's predictions are off. Your historical data, perfect for yesterday's circumstances, is increasingly unsuitable for today's reality.
The operational overhead is likely to be substantial and ongoing. You need people constantly monitoring model performance, identifying when data patterns shift, sourcing new data reflecting current conditions, and retraining models accordingly. Most critically, you need constant vigilance about your data's continuing relevance to your business context.
This is a perpetual process that can easily require more ongoing investment than the original AI development. The longer you delay addressing data suitability issues, the more expensive the fix becomes. Small changes can be managed with incremental updates. Ignore the drift too long, and you might need to completely rebuild your dataset or restructure foundational models - essentially starting over.
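To make the monitoring overhead concrete, here is a minimal sketch of one common drift check: the population stability index (PSI), which compares the distribution a model was trained on with what it sees in production. This is an illustrative example, not Klarna's or anyone's actual pipeline; the data, thresholds, and function names are all hypothetical, and real deployments track many features and metrics, not one.

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """Compare the training-data distribution ('expected') with recent
    production data ('actual'). A common rule of thumb: PSI below 0.1
    means little drift; above 0.2 warrants investigation."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            # Clamp out-of-range values into the outermost bins.
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Floor each share to avoid log(0) in empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    exp = bucket_shares(expected)
    act = bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for a, e in zip(act, exp))

# Hypothetical demand-forecasting scenario from the text above.
random.seed(42)
training_sales = [random.gauss(100, 15) for _ in range(5000)]  # history
stable_sales = [random.gauss(100, 15) for _ in range(1000)]    # same market
shifted_sales = [random.gauss(130, 25) for _ in range(1000)]   # new competitor

print(population_stability_index(training_sales, stable_sales))   # small
print(population_stability_index(training_sales, shifted_sales))  # large
```

The check itself is trivial; the ongoing cost is everything around it: deciding which features to watch, who gets alerted, and when drift is bad enough to trigger retraining.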
Starting with business value and readiness
Companies that succeed with AI start with business value, but they also conduct honest assessments of their readiness to capture that value. This means asking traditional value questions alongside AI-specific considerations:
👴 Traditional value questions:
What processes in our business are inefficient or error-prone?
Where do our customers experience friction or dissatisfaction?
What decisions do we make with incomplete or delayed information?
Which tasks consume disproportionate resources relative to their value?
🤖 AI-specific reality checks:
Do we have the data quality and volume needed to train effective models?
Can we source or develop the technical expertise required for implementation and maintenance?
Are we prepared for the ongoing cost of model monitoring, retraining, and updating?
Do our processes and culture support working with probabilistic rather than deterministic outputs?
What are the consequences if the AI system makes mistakes, and can we design appropriate safeguards?
Do we have the operational capability to continuously ensure our data remains suitable for changing business circumstances?
Build, buy, or partner?
Unlike previous technology waves where the choice was typically between building custom solutions or buying off-the-shelf software, AI presents a more complex decision problem. You might build custom models, use pre-trained models via APIs, partner with AI-native companies, or combine multiple strategies.
Each path has different value propositions and risk profiles. Building custom models offers maximum control and potential competitive advantage but requires significant investment in talent and infrastructure. Using AI APIs provides faster implementation but may limit differentiation and create vendor dependencies. Partnering with AI companies can provide expertise but may reduce control over solution direction.
The value-first approach helps navigate these choices by focusing on which path best serves your specific business objectives rather than which is most technologically impressive.
Managing the experimentation imperative
AI introduces tension between the need for strategic focus and the value of experimentation. Unlike installing a CRM where you can specify requirements upfront, AI projects often require iterative experimentation to discover what's possible and valuable.
This doesn't invalidate the value-first principle, but it requires a more nuanced approach. Successful AI adoption often involves a portfolio strategy: focused investments in high-value use cases combined with smaller experimental projects exploring emerging opportunities.
The key is maintaining discipline about resource allocation while remaining open to unexpected discoveries. Set clear boundaries for experimental spending and timeline constraints for proving value.
The ongoing investment reality
Traditional software implementations follow a familiar pattern: upfront investment, training period, then ongoing maintenance costs. AI systems require continuous investment in ways previous technologies didn't.
Models need retraining as data patterns change. Performance monitoring is ongoing and resource-intensive. Regulatory requirements around AI use continue evolving. The talent required to maintain AI systems commands premium compensation and may be difficult to retain.
But the biggest ongoing cost - the one I believe may catch many organisations off guard - is the data management operation required to keep AI systems relevant as circumstances change.
Budget for this from day one, or risk the dreaded AI regret.
These ongoing costs must be factored into value calculations from the beginning. An AI solution showing positive ROI in year one might become cost-prohibitive by year three if these hidden costs aren't properly anticipated.
Moving forward
The fundamental wisdom remains unchanged: tools should serve strategy, not drive it. Technology investments should solve problems, not create them. Companies that remember this will build sustainable competitive advantages with AI.
But AI demands adapted thinking. The stakes of poor implementation may be higher due to complexity and ongoing costs. The path to value may be less linear due to AI's experimental nature. The skills and organisational capabilities required may be more substantial than previous technology adoptions.
The most successful companies will apply timeless business discipline while adapting to AI's unique characteristics. They’ll start with business value but realistically assess their ability to capture that value given AI's specific requirements and constraints.
They will treat AI as a powerful tool that can transform their business - but only if implemented with the same strategic rigour they would apply to any major business investment, plus the additional considerations that AI's unique properties demand.
I’m a great believer that the best technology remains the technology you don't notice - it just makes your business work better. AI happens to be a particularly powerful and complex way to achieve that goal, with a particularly expensive ongoing appetite for fresh, suitable data.
As ever, really enjoyed this! Other than having a stiff neck from nodding throughout, I came to a slightly different perspective at the end.
Describing AI as a tool slightly undersells its impact. Driven by rich data, it can fundamentally change how work is done - and offer new strategies.
AI is both a resource (means) and a new approach (ways).
What if the game is to be more adaptive - to stay closer to market changes, or lead them? Here the data feedback from the real world to the business model offers a competitive edge.
It might not be a tool - closing the prediction error in your business model could be the ultimate advantage. Shiny objects could be the strategy!