When Can I Say It’s AI?
At the moment it seems as though every new project or tool uses some form of artificial intelligence. Machine learning and deep learning techniques are no longer enough; it has to be "cutting edge AI MLops…" In fact, these buzzwords are used so frequently that they have lost much of their meaning. As technology advances, and as the UK prepares for the AI summit, the first question we need to ask is: "when can we accurately say a solution involves true AI?"
What is AI Anyway?
Let's start by thinking about what AI meant just a few years ago: it conjured images of I, Robot, with thinking machines that were becoming cognisant. True AI would be the leap from conventional programming to an algorithm that can not only undertake causal reasoning (think of the ladder of causation described by Judea Pearl) but also learn and adapt through experience; in short, one that can mimic human thinking, reasoning and learning. It's important to remember that AI is not the same as machine learning: machine learning algorithms are trained on data to undertake complex mathematical tasks, but they can only operate within the bounds of the data they have been trained on.
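As a toy illustration of that last point (the data, model and numbers here are invented purely for the example), a simple fitted model can look convincing inside its training range and fail badly outside it:

```python
import numpy as np

# Fit a straight line to a sine wave sampled on [0, 1], where the curve
# happens to look roughly linear. This is plain least squares, not magic.
x_train = np.linspace(0.0, 1.0, 50)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=1)

# Inside the training range the prediction is reasonable...
print(np.polyval(coeffs, 0.5), np.sin(0.5))   # the two values are close

# ...but extrapolating to x = 4 fails: the model only "knows" the
# bounds of the data it was trained on.
print(np.polyval(coeffs, 4.0), np.sin(4.0))   # the two values diverge
```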
So what is the reality behind the buzzwords?
There is no denying that simply branding a solution "AI" or "ML" can build excitement and potentially help get projects funded. Indeed, it can lead to interesting conversations about what is genuinely AI (in truth, very little), what is deep learning or machine learning, and what is just really good statistics and mathematics. All too often these latter techniques get overlooked or ignored simply because they are not the latest buzzwords. In reality, the fundamental concepts of statistics, linear algebra, probability and calculus are at the heart of machine learning.
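As a minimal sketch of that point (the data are randomly generated just for illustration), the ordinary least squares regression that sits underneath many "ML" pipelines is nothing more than linear algebra and statistics:

```python
import numpy as np

# Generate toy data: 100 observations, 3 features, a known linear signal
# plus noise. All numbers here are invented for the illustration.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 3))
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + rng.normal(scale=0.1, size=100)

# Add an intercept column and solve the least squares problem, i.e. the
# normal equations (X^T X) w = X^T y, in closed form. No training loop,
# no "AI": just linear algebra.
X_design = np.column_stack([np.ones(len(X)), X])
weights, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print(weights)  # intercept near 0, then estimates close to true_weights
```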
It's easy to get caught up in the wonder, hype and marketing of AI, but misusing these terms can lead to misunderstanding and mistrust. Many so-called "AI" applications rely on simple rules rather than advanced algorithms that mimic human learning. Whilst there is nothing wrong with using complex algorithms when they are needed, there is also nothing wrong with using those fundamental techniques, which often give more explainable and robust results.
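To make that concrete, here is a hypothetical sketch (the rules, thresholds and function name are all invented for illustration) of the kind of hand-written logic that sometimes sits behind an "AI-powered" label:

```python
# Hypothetical rule-based screening that could be marketed as "AI".
# There is no learning here at all: the rules and thresholds are
# hand-written and invented for this example. The upside is that every
# decision is fully explainable.
def screen_application(income: float, debt: float, years_employed: int) -> str:
    debt_to_income = debt / income if income > 0 else float("inf")
    if debt_to_income > 0.5:
        return "refer to manual review"
    if years_employed < 1:
        return "refer to manual review"
    return "approve for next stage"

print(screen_application(income=40_000, debt=10_000, years_employed=3))
# -> approve for next stage
```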
Demystifying AI for Decision Makers
Demystifying "AI", or for now let's say demystifying the statistics and mathematics behind the algorithms, isn't always easy. Yes, the approaches can be complex, but it is up to the mathematicians and statisticians to develop the communication skills needed to explain what is going on at a level that stakeholders can understand. Alongside that, those who commission and use the outputs of these models and tools need to learn what types of questions to ask. For example: how have the models been developed? What is the impact of the training data? Where is it legitimate to use the results? What has been learnt from developing the model? How transparent and explainable are the results? By coming together, the teams developing and using AI and ML can support each other in learning more about how the algorithms are built and used.
Emphasise the Human Element
The reason for bringing together everyone involved in AI, ML and deep learning is that all these approaches use maths, not magic. It is people who design the models and interpret the results, and crucially it is real people who are affected by those results and by any decisions based on them. Like any technology, AI and ML have limitations, errors, uncertainties and biases, none of which should be ignored. Rather, everyone in society should be involved in discussions about the data that is used, where these techniques can have impact and, more importantly, where for now their use should be limited. This approach leaves no-one behind and helps stakeholders understand the work being done to harness the power of these approaches to drive innovation.