
Explainable AI, not just the Latest Fad but a Necessary Component of Value Creation



It’s been a little while since I last blogged, so what better topic to come back with than something close to the hearts of everyone at Massive Analytic – explainable AI and how it’s a necessity for creating value in an enterprise setting.

This blog is an expanded version of a talk I gave at The European AI Conference 2022 last week, a superb event organised jointly by Startup Network Europe and Transatlantic AI eXchange – it brought together some of the great thinkers in our space, and I feel very privileged to have been invited to speak there in front of 1100 attendees. I highly recommend following both of those organisations and keeping an eye out for future events!

Now on with the blog…

 

Over the past decade we’ve seen AI move from a buzzword into a reality for businesses. In fact, enterprises now have several different AI approaches to choose from – deep learning, convolutional and recurrent neural networks, and transfer learning, to name a few – not to mention the more traditional machine learning methods. However, despite the sheer number of companies claiming to be creating AI, and the demand from customers, the statistics show that AI as a decision-making tool for enterprises is far from mature, and adoption is still lacking in many respects. A 2021 Gartner® Application Innovation Implementation Survey found that, although 84% of respondents are using AI, 41% are only making limited use and 16% are making no use at all,* and a Forbes survey found that only 20% of businesses were fully utilising AI. The reasons are varied – there is no single barrier to adoption. Some are technical: data governance, lack of skills and the like. The one I’m going to focus on today, though, is culture, or, put simply, trust. More than that, I’ll be discussing how to engender trust in AI for long-term value creation – with explainable AI.


[Chart: Use of AI and ML in Application Development by Software Engineering Teams]

I think it goes without saying that to begin creating value from AI, we must first implement it – but, as we’ve seen, most businesses are reluctant to hand over decision making to a machine, with the Forbes survey highlighting lack of trust in AI as the prevailing reason. So why is trust such a sticking point for businesses? Well, it’s two-fold. The first issue is accuracy: do I trust that the AI has got it right, and will it get it right repeatedly – not just to create value but to retain current value? The second is understanding: how can I have confidence the AI has got it right when the processes it uses to make predictions and decisions aren’t easily understood? With so many different AI techniques, how can humans be expected to understand how a machine makes its decisions? It’s because of this that we’re seeing a new buzzword emerging – explainable AI – and like its cousin AI before it, I expect explainable AI to bombard your inboxes, hit those top ad spots and stalk your digital persona. It will be some time before most of those claims are reality, but I do believe explainable AI is here to stay. Why? Because it is becoming essential.


To answer why, we need to ask a different question: what is the promise of AI in the first place? For me the answer is simple – AI should be about de-risking decision making, helping you make the right decision at the right time. But a “black box”, an unexplainable AI, does the opposite; it introduces risk into the business and into the minds of decision-makers. The risk is being held accountable for decisions that weren’t adequately understood – and, in the worst case, for decisions that turned out to be wrong. Yet businesses can’t afford to sit on the fence about AI in today’s digital world, or competitors who use AI to make data-driven decisions will overtake them. This leaves us with a catch-22: we can’t afford not to use AI, but using AI in its present state brings risks of its own – evidently too much risk for many. So how do we combat this? Well, you’ve guessed it – with explainable AI.


But what’s meant by explainable AI? Well, another word for it might be interpretable: putting the processes behind the AI into simple business language, explaining the connections being made, and having the traceability to follow the AI’s journey from data to decision. Then there’s transparency – how many times has AI made the news for the wrong reasons? With transparency, you can detect bias, and catch models that have misinterpreted the data, before they enter circulation. Remember, the AI must be accurate to create value. And suppose you can characterise an AI model – its accuracy, its fairness, its parameters – alongside its outcomes. In that case, you can train that model better, get more accurate insights, avoid missteps, and therefore create even more value.
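To make interpretability concrete, here’s a minimal sketch in Python using scikit-learn – the dataset and model choice are illustrative assumptions for this blog, not a description of any particular product. A small decision tree’s logic can be exported as plain if/then rules, and its feature importances characterise which inputs drive its decisions:

# Illustrative sketch only: a small interpretable model whose logic can be
# read as plain if/then rules. Dataset and model choice are assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Translate the model's internal logic into simple, business-readable rules.
print(export_text(model, feature_names=list(data.feature_names)))

# Characterise the model: which inputs drive its decisions?
for name, weight in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {weight:.2f}")

That characterisation step is also where bias checks naturally belong: comparing a model’s behaviour across groups before it ever enters circulation.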


I’ve spoken a lot about trust, but beyond trust, for the first time we’re moving towards regulations being put in place to govern the development of AI – the EU’s AI Act looms large here. Explainability will soon become fundamental if you want to use AI at all, whether you’re a vendor or a consumer. From NATO, to the World Health Organisation, to Microsoft – organisations, businesses and government bodies are all talking about how to build responsible AI, and something common to all these definitions is explainability. Responsible AI is a much broader topic than explainable AI alone, but we have written some blogs on the subject here and here, so do check those out after you’re done here.


Making your AI traceable, transparent and therefore explainable opens the door to broader adoption of AI, more buy-in from stakeholders and, ultimately, more value creation. The actual value of AI isn’t just in the insights it reveals but in how those insights are automated. But unless that AI complies with regulations, is free of bias and is trusted enough to embed into our businesses, we’ll never see the full reward – and full value creation will be left unrealised. Explainable AI is the missing piece of the puzzle for wider acceptance of AI, both in business and in society.

At Massive Analytic we have a roadmap of explainable AI product features to help our customers understand their data better and de-risk decision making. Our patented technology, Artificial Precognition, also uses innately explainable possibilist decision trees to get its answers – providing accuracy and explainability in one.
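Possibilist decision trees are our patented approach, so the sketch below instead uses a conventional scikit-learn decision tree, purely to illustrate why tree models are innately explainable: any single prediction can be traced, node by node, from data to decision. All names and data here are illustrative assumptions, not our product’s internals:

# Hedged illustration: tracing one prediction's path through a standard
# decision tree (not the patented possibilist trees themselves).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

sample = data.data[:1]               # one record to explain
path = model.decision_path(sample)   # sparse indicator of visited nodes
tree = model.tree_

# Walk the visited nodes, printing the test applied at each split:
# a step-by-step journey from data to decision.
for node in path.indices:
    if tree.children_left[node] == tree.children_right[node]:  # leaf node
        predicted = data.target_names[np.argmax(tree.value[node])]
        print(f"leaf {node}: predict '{predicted}'")
    else:
        feature = data.feature_names[tree.feature[node]]
        went_left = sample[0, tree.feature[node]] <= tree.threshold[node]
        print(f"node {node}: is {feature} <= {tree.threshold[node]:.2f}? "
              f"{'yes' if went_left else 'no'}")

A trace like this is exactly the “journey from data to decision” described above: every step is a human-readable question about the input data.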


Learn more about how we’re engineering explainable AI in our products by checking out our paper, or contact us for more information: customer.success@massiveanalytic.com


References


Gartner, “Survey Analysis: AI Adoption Spans Software Engineering and Organizational Boundaries”, Van Baker, Benoit Lheureux, November 25, 2021.


*Total number of respondents: 109. Question: To what degree are software engineering teams in your organization incorporating AI and ML into application and software development?


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.












