I’ll admit it: I’m one of those men who frequently thinks about the Roman Empire. Living on Akeman Street, an old Roman road, might have something to do with it. But lately, my thoughts have wandered further back in time to the ancient Greeks. One figure in particular: Socrates.
Socrates’ method of questioning assumptions, encouraging reflection, and reasoning towards conclusions feels more relevant than ever. As we become increasingly dependent on AI, the kind of critical thinking he championed is essential.
Walking on the human side
People often talk about ‘the human side of data.’ It’s an odd phrase, considering everything we do with data is inherently human. The reason, of course, is that tech tends to get most of the airtime.
Data has always been framed as a technology problem with a technology solution. Tools matter, of course, but the obsession with silver bullets persists. Despite billions spent on tech, many organisations still struggle.
Why? Because the people and culture aspects, the true business side of data, are consistently underappreciated.
I try not to get too frustrated these days. But when data professionals say they ‘work in tech,’ this old guy shakes his head. No, you don’t. You work in business, using tech. Big difference.
(Enough said).
Don’t get me wrong, I like a shiny object as much as the best of them. But I’ve never cared about the tech in isolation. If my job is to dig a hole, I care about the hole, not the shovel, unless it’s broken or inefficient. Or unless I need a digger (if so, give me a John Deere, please).
My interest in human behaviour led me here in the first place. With a degree in Human Geography, I was drawn to work that would let me explore how people and systems interact. That’s why I started out in retail and marketing analytics.
(Marketing is a good place for understanding human behaviour, as long as you’re okay with it being mostly about selling people more stuff. There are more noble domains, of course.)
What’s kept me in data is precisely that: people. Success has always come down to human factors, never just the tools or processes. I’ve seen what scrappy, skilled, business-minded data teams can achieve, even with clunky tech.
People and AI
AI is changing jobs, and no doubt eliminating some. But we’re not yet living in a billionaire’s fever dream, where people are removed from the equation.
In fact, we need to consider the human dimension more than ever.
I don’t deny AI’s potential. Marketers were early adopters of machine learning and customer-facing AI. But I try to avoid bandwagons. Hype is fuelled by grifters and self-proclaimed strategists (just browse LinkedIn). Yet beneath that noise are pragmatic professionals who have been quietly doing the work since long before the hype merchants moved on from Web 3.0.
If you’re in data, it’s fine to be excited about AI. But you also have a responsibility to temper that excitement. Your job is to evaluate applicability, validate the data it depends on, assess ethical implications, and understand limitations.
More than that, we’re duty-bound to consider what AI means for people: its developers, users, beneficiaries, and, depending on your perspective, even its victims.
The term ‘human-in-the-loop’ may be crude, but the idea matters: people should remain actively involved in AI systems, training them, validating them, and scrutinising their outputs.
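(For the more hands-on reader, here’s a minimal sketch of that gating idea in Python. It’s purely illustrative: the predict function, the labels, and the confidence threshold are all invented for the example. The point is simply that low-confidence outputs route to a person rather than being auto-accepted.)

```python
# Minimal human-in-the-loop sketch (illustrative only): the model proposes,
# a person approves or corrects before anything is acted on.

def predict(record: dict) -> tuple[str, float]:
    # Hypothetical placeholder model: returns a label and a confidence score.
    return ("churn", 0.62)

def human_review(record: dict, label: str) -> str:
    # In a real system this would route to a review queue; here we just ask.
    answer = input(f"Model says {label!r} for {record}. Accept? [y/n] ")
    return label if answer.strip().lower() == "y" else input("Correct label: ")

def decide(record: dict, threshold: float = 0.9) -> str:
    label, confidence = predict(record)
    # Low-confidence outputs are never auto-accepted: a person stays in the loop.
    if confidence < threshold:
        label = human_review(record, label)
    return label

print(decide({"customer_id": 42}))
```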
We can’t isolate AI from humanity. We built it. We use it. We respond to it. We are responsible for it.
Enter Socrates
What would Socrates make of all this?
The barefoot philosopher who left no writings and annoyed powerful people into sentencing him to death. Without him, we’d have no Plato’s Republic, no Aristotelian logic, no Enlightenment.
(Try being a data professional in a world where the Enlightenment didn’t happen).
Socrates didn’t teach by telling. He taught by asking. That method of questioning, probing, and challenging assumptions is foundational to critical thinking.
And it’s exactly what we need when engaging with AI.
These tools produce oceans of information, and now reasoning (or a version of it). Without critical thinking, we risk accepting their output without understanding its flaws or biases.
Socratic questioning helps us ask the right things of AI:
What assumptions is this model making?
Whose perspective is missing?
Who benefits from this outcome, and who doesn’t?
This mindset is crucial in data work full stop, whether or not you care about ancient philosophy.
The ultimate head-to-head
In this corner…
Weighing in at billions of parameters, trained on the internet, capable of poetry, passing the bar, diagnosing cancer, and maybe stealing your job: artificial intelligence.
And in the other corner…
Sporting a threadbare robe, barefoot, armed only with questions, and a talent for irritating the elite: Socrates.
One promises answers to everything. The other demands we never stop questioning.
(I asked AI to create me a Socratic data leader. Looks like a lot of data leaders I know, to be fair.)
As machines grow more capable, some human skills will inevitably atrophy. That’s the story of industrialisation. But the one skill we can’t afford to lose is critical thinking.
AI is replacing manual and mental tasks alike. But rather than deskilling, we need to get wiser. That’s how we stay in the loop.
We must ask deeper, harder questions, the kind Socrates relished.
Socrates never touched a keyboard or dealt with a dodgy dashboard. But his way of thinking is more relevant than ever.
He reminds us that how we think matters just as much as what we think.
Why philosophy belongs in data & AI
Many dislike the phrase ‘data-driven.’ It suggests machines are in charge, and people, metaphorically lobotomised, are merely passengers.
The same goes for ‘AI-driven,’ because blindly following its outputs is dangerous:
It creates false certainty: measurable doesn’t always mean meaningful.
It neglects what can’t be quantified: empathy, dignity, justice.
It promotes passive thinking: when decisions feel automatic, we stop questioning.
Socrates would have encouraged us to resist that drift.
Instead, I suspect he would have championed the following:
Intellectual humility: wisdom starts with admitting what we don’t know, helping us avoid treating AI as omniscient.
Ethical reasoning: AI reflects flawed human choices. We must ask not just ‘Does it work?’ but ‘Is it right?’
Curiosity and dialogue: AI outputs should be the start of a conversation, not the be-all and end-all.
Critical thinking: automating decisions is fine, but it shouldn’t mean we abandon all thought.
A survival tool for the future
If you’ve made it this far, then let me tell you something unfashionable, something those who have already checked out probably don’t care to hear:
Data leaders would do well to embrace a little philosophy.
It can sound abstract or high-minded. But without it, we’re flying blind.
Philosophy helps us navigate ambiguity, weigh competing values, and resist easy answers. It slows us down just enough to ask: What kind of world are we building? And why?
(Questions rarely asked by those pushing the next shiny thing.)
Plato, who was famously Socrates’ student, warned against societies led by those with technical skill but no moral insight. That warning echoes profoundly these days.
We can build extraordinary tools. But deciding how we use them is a fundamentally philosophical task.
Lead with questions
That’s how we keep ethics at the centre of AI strategy. We must invest in the right AI, but always remember the questions Socrates asked 2,400 years ago:
“What is the good life?”
“How should we live?”
I don’t really believe Socrates would have hated AI. I like to think it would have intrigued him. After all, it’s a mirror of humanity, our beliefs, flaws, and values.
And it would have given him endless opportunities to do what he did best: ask more questions.
Bonus: light relief
The closest we can get to seeing Socrates in action is this classic Monty Python sketch where dead Greek philosophers take on their German counterparts. Still makes me laugh.