
Explainable AI: Not Just the Latest Fad but a Necessary Component of Value Creation



It’s been a little while since I last blogged, so what better topic to come back with than something close to the hearts of everyone at Massive Analytic – explainable AI and how it’s a necessity for creating value in an enterprise setting.

This blog will be an expanded version of a talk I gave at The European AI Conference 2022 last week, a superb event organised jointly by Startup Network Europe and Transatlantic AI eXchange. It brought together some of the great thinkers in our space, and I feel very privileged to have been invited to speak there in front of 1100 attendees. I highly recommend following both of those organisations and keeping an eye out for future events!

Now on with the blog…

 

Over the past decade we’ve seen AI move from a buzzword into a reality for businesses. In fact, enterprises now have several different AI approaches to choose from – deep learning, neural networks (both convolutional and recurrent) and transfer learning, to name a few, not to mention the more traditional machine learning methods. However, despite the sheer number of companies claiming to be creating AI, and the demand from customers, the statistics show that AI as a decision-making tool for enterprises is far from mature; adoption is still lacking in many respects. A 2021 Gartner® Application Innovation Implementation Survey found that, although 84% of respondents are using AI, 41% are only making limited use and 16% are making no use at all,* and this Forbes survey found that only 20% of businesses were fully utilising AI. The reasons are varied; there is no single barrier to adoption. Some are technical – data governance, lack of skills and the like – but the one I’m going to focus on today is culture, or put simply, trust. More than that, I’ll be discussing how to engender trust in AI for long-term value creation – with explainable AI.


Figure: Use of AI and ML in Application Development by Software Engineering Teams

It goes without saying that to begin creating value from AI, we must first implement it. But as we’ve seen, most businesses are reticent to hand decision making over to a machine, with the Forbes survey highlighting lack of trust in AI as the prevailing reason. So why is trust such a sticking point? It’s two-fold. One issue is accuracy: do I trust that the AI has got it right, and will it get it right repeatedly – not just to create value but to retain current value? The second is comprehension: how can I have confidence the AI has got it right when the processes it uses to make predictions and decisions aren’t easily understood? With so many different AI techniques, how can humans be expected to understand how a machine makes its decisions? It’s because of this that we’re seeing a new buzzword emerging: explainable AI. Like its cousin AI before it, I expect explainable AI to bombard your inboxes, hit those top ad spots and stalk your digital persona. It will be some time before most of those claims are reality, but I do believe explainable AI is here to stay. Why? Because it is becoming essential.


To answer why, we need to ask a different question: what is the promise of AI in the first place? For me the answer is simple – AI should be about de-risking decision making, helping you make the right decision at the right time. But a "black box", an unexplainable AI, does the opposite; it introduces risk into the business and into the minds of decision-makers: the risk of being held accountable for decisions that weren’t adequately understood – or, in the worst case, for a wrong decision. Yet businesses can’t afford to sit on the fence about AI in today’s digital world, or competitors who use AI to make data-driven decisions will overtake them. This leaves us with a catch-22: we can’t afford not to use AI, but using AI in its present state brings risks of its own – evidently too much risk for many. So how do we combat this? Well, you’ve guessed it – with explainable AI.


But what's meant by explainable AI? Another word for it might be interpretable: putting the processes behind the AI into simple business language, explaining the connections being made, and having the traceability to follow the AI's journey from data to decision. Then there's transparency – how many times has AI made the news for the wrong reasons? With transparency you can detect bias, and catch models that have misinterpreted the data, before they enter circulation. Remember, the AI must be accurate to create value. If you can characterise an AI model – its accuracy, its fairness, its parameters – alongside its outcomes, you can train that model better, get more accurate insights, avoid missteps, and therefore create even more value.
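To make "interpretable" concrete, here's a minimal sketch – my own illustration using scikit-learn, not a depiction of any particular vendor's technology – of a model that is explainable by design: a shallow decision tree whose learned rules print as plain if/else statements, so every prediction can be traced from data to decision, and whose feature importances show which inputs actually drive its decisions.

```python
# A shallow decision tree as an innately interpretable model:
# its decision logic can be exported as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Keeping the tree shallow keeps the rule set small enough
# for a business user to read end to end.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Export the learned decision path as plain if/else rules:
# the traceable "journey from data to decision".
rules = export_text(model, feature_names=list(data.feature_names))
print(rules)

# Feature importances reveal which inputs drive predictions –
# a first, crude check for unwanted bias in the model.
for name, score in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {score:.2f}")
```

Contrast this with a deep neural network, where the mapping from input to output is spread across millions of weights and no comparable rule listing exists; that gap is exactly what post-hoc explanation techniques try to bridge.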


I’ve spoken a lot about trust, but beyond trust, for the first time we’re moving towards regulations governing the development of AI. The EU’s AI Act looms large here. Explainability will soon become fundamental if you want to use AI at all, whether you’re a vendor or a consumer. From NATO, to the World Health Organisation, to Microsoft – organisations, businesses and government bodies are all talking about how to build responsible AI, and something common to all these definitions is explainability. Responsible AI is a much broader topic than explainable AI alone, but we have written some blogs on the subject here and here, so do check those out after you’re done here.


Making your AI traceable, transparent and therefore explainable opens the door to broader adoption of AI, to more buy-in from stakeholders and ultimately to more value creation. The real value of AI isn’t just in the insights it reveals but in how those insights are automated. Yet unless that AI complies with regulations, is free of bias, and is trusted enough to embed into our businesses, the full reward – full value creation – will be left unrealised. Explainable AI is the missing piece of the puzzle for wider acceptance of AI, both in business and in society.

At Massive Analytic we have a roadmap of explainable AI product features to help our customers understand their data better and de-risk decision making. Our patented technology, Artificial Precognition, also uses innately explainable possibilist decision trees to get its answers – providing accuracy and explainability in one.


Learn more about how we’re engineering explainable AI in our products by checking out our paper, or contact us for more information: customer.success@massiveanalytic.com


References


Gartner, “Survey Analysis: AI Adoption Spans Software Engineering and Organizational Boundaries”, Van Baker, Benoit Lheureux, November 25, 2021.


*Total number of respondents: 109. Question: To what degree are software engineering teams in your organization incorporating AI and ML into application and software development?


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.












