The Pilot’s Compass: AI May Be The (Surprisingly) Human-Friendly Solution To Decades Of Technology Complexity

But Simplicity Of Use Brings Its Own Risks

Get clear direction with The Pilot's Compass: freely available, opinion-based research notes from Pilot Research, offered via our website and LinkedIn

A Simile Too Far? AI Is Like Ducks: Masters Of Calm On The Surface, Supported By Furious Effort Beneath

I live in an area of the United Kingdom famous for, amongst other things, its namesake, the Aylesbury duck. The Aylesbury duck is found on the town’s coat of arms, locals are known to call themselves ducklings, and (approve or not) you’ll find it on the menus of high-end restaurants around the world. In what might be the most ambitious of my technology-focused similes to date, AI technologies are - to the user - like the gliding magnificence of an Aylesbury duck swimming across the surface of a twinkling village pond. Supporting this graceful simplicity, beneath the waterline, are large webbed feet moving at pace, propelling the duck through a murky, low-visibility environment where weeds and debris obstruct its otherwise smooth progress.

Hopefully, at this point you have an idea of where this line of reasoning is going. Either that, or you think I’ve entirely lost my grip on reality. Humans are “designed” to interact with other humans, not tabs, columns, cells, keyboards, mice, and so on. The rise of Large Language Model (LLM)-powered AI assistants and, yes, fine, AI agents is the triumph of human-native interaction over the long legacy of computing and its various forms of interaction, input, and commands. But it’s more than just the user interface (UI).

History Repeats Itself, Creating An Ever Bigger Problem

For those with longer memories, concepts in business IT such as the Enterprise Service Bus (ESB), Service-Oriented Architecture (SOA), and, more recently, the ubiquity of Application Programming Interfaces (APIs) are reminders of the challenge of tackling IT’s legacy. That legacy comes in many forms, from outdated systems, critical financial and resource management systems, and banking platforms originally conceived decades ago, to a host of cloud-based software-as-a-service (SaaS) solutions that should - but don’t always - talk to each other. For good measure, add to this the “big data” created by all of the above and the Internet of Things, alongside a generous helping of the Internet itself, and you have the ultimate construct of disparate data-generating systems: a massive, growing, interdependent, and heterogeneous digital monstrosity.

Humans created the technology; the technology does what humans can do, but far faster, at greater scale, and without alignment to any golden international standard. The result is a technology-created problem that only technology, in this case AI, can overcome.

“Earl Grey, hot.” It’s Called Natural Language For A Reason

Returning to ducks, let’s say that above the water is the AI interface: plain text or speech (natural language, for those with an IT hat to wear), which is easily understood, approachable, and does what it’s expected to do. Beneath the waterline lie the murky depths of data and systems that must be navigated to produce the effortless appearance above.

At this point the simile fails, spectacularly, given that pond water is crystal clear compared to the ever-changing spaghetti junction of software and data that is the fog of war enshrouding modern technology. But the idea, at least in my view, stands - AI, in the form of LLM-powered assistants / agents / interfaces, is the first part of the final step in the evolution of how we interact with technology and data.

Why? Because in 99.9%+ of cases, people do not need or want to know how the web of software, data, and computing hardware that sits behind the application they’re using actually works. Or, put another way, behind the interface there could be an army of guinea pigs running on a treadmill-powered mechanical doohickey, as long as it comes up with the solution.

Simplicity Hides Dangers Lurking Beneath The Surface

If you’ve listened to me speak or read my work, you’ll know I regularly say, “What could possibly go wrong?” The answer to this question, in this case, is: quite a lot. With the rapid adoption of a technology that hides the mass of data it’s trained on and accesses to augment its answers, while also acting as digital filler to connect disparate applications and services, a number of immediate concerns arise.

Primary amongst these, I suggest, is the risk that we are creating a solution built from systems so complex and obscured that no one can understand how it arrives at the answers it does, how it works, and therefore how to maintain it effectively. AI may well be the new UI that helps us navigate myriad systems and data, but how much confidence and trust do we have in its outputs? How do we check its work? When an underlying system is updated and changes its process or data output, perhaps fails or is compromised, or when the data itself is corrupt, wrong, or insufficient, how do we know with certainty? Our ability to create technology looks to be surpassing our ability to manage it effectively.

Of course, this raises further questions about the degree to which we rely on the output of AI and take responsibility for actions taken upon it, whether by humans or machines.

Ducks, Dinosaurs And The Importance Of Explicability

Currently I’m working on a framework for evaluating trust in AI solutions, and I keep coming back to a core principle. That is, can you explain the inputs, workings, and outputs of the system in a way that anyone, not just someone with a PhD in computer science, could understand?

I don’t know how closely related ducks are to dinosaurs, but to borrow from the film Jurassic Park, I don’t care whether the system has two million lines of code or a trillion plus. Even if it does what it should, effectively and efficiently (measured against the value of what it produces), it has already scaled well beyond most people’s ability to understand it. Like a duck, or a dinosaur for that matter, though, can I explain it without necessarily understanding the minutiae of each individual process and mechanism that makes it work?

If yes, there’s hope. If not, I strongly suggest we’re in territory that starts to align with another favourite saying of mine: just because we can, should we? We don’t all have advanced degrees and doctorates in computer science, mathematics, and data science. But, somehow, we are already, and increasingly, subject to the output of the systems these disciplines create.

Bottom line? I’m forced to borrow the quote, “If you can’t explain it simply, you don’t understand it well enough.” Whether that was Einstein or not, the question that hangs over the apparent simplicity of AI as an interface to our complex, growing collective of systems and data is clear:

“If you can’t explain it, should you really depend on its output?”

Whatever Direction Your Compass Points…

Thank you for reading this Pilot’s Compass note. It will be available on both Pilot Research’s website and LinkedIn. Is it possible to tackle this subject in around a thousand words? Of course not. My hope is that it’s brief enough that you’ll take the time to read it and - agree or disagree - find it somewhat thought-provoking. I welcome your comments, feedback, and ideas at tom@pilotresearch.co.uk
