Show your work! Why explainable AI is crucial for sustainable adoption

As the regional vice president for the Middle East & Turkey at Dataiku, Sid Bhatia works strategically with C-suite executives at some of the best-known data-driven organizations in the region. He brings close to a decade and a half of sales leadership and management experience in the information management and analytics space, having worked with companies such as IBM, Cloudera and Sybase (SAP) before joining Dataiku. With experience at organizations of different maturity levels (start-up, growth, established) and a record of exceeding performance and budget goals by aligning field efforts with organizational objectives, Sid specializes in building high-performing teams that help organizations uncover their true potential through data, as machine learning, artificial intelligence and data science become core pillars of strategy, providing stakeholders with actionable insights to guide their decision-making.

So-called "black-box AI" can lead to bad decisions, with little post-mortem capability that would allow stakeholders to determine points of failure. To generate trust in such systems, we must expose the path between data and actionable information, or indeed action itself, as is the case in fully automated architectures.


Much has been made of the Middle East’s accelerated move to artificial intelligence (AI) through mass migration to the cloud in recent years, hastened by remote work and the need for agility across organizations amid COVID-19 lockdowns. A future has been sketched if not carved in stone by commentators who are rightly certain there is no going back.

We were always headed here. As far back as 2017, analyst firms like Deloitte and PwC’s Strategy& were chronicling GCC governments’ digital transformation programs. The region is pinning its economic hopes on AI and associated technologies. But trust is important. Remember math class, when a single figure offered up as the answer to a complex problem was never sufficient? Images of teachers peering sternly over their glasses come to mind. “You must show your work,” they would say.

And as adults, we would never issue, or accept, an invoice with a single-figure total. Breakdowns are indispensable; they are an essential part of life. Whether you are declaring the area of a triangle or the amount due on a multi-million-dollar project, trust requires that you show how the sandwiches were made.

Democratization of AI

In the world of artificial intelligence, this conversation is referred to as “the democratization of AI”, and we need to keep it front of mind. If AI is to be our future partner in innovation, we must trust it. And to trust it, we must be candid about its inner workings.

Yet, enterprises from the Levant to North Africa are willing to let advanced algorithms make decisions on their behalf. So-called “black-box AI” can lead to bad decisions, with little post-mortem capability that would allow stakeholders to determine points of failure. To generate trust in such systems, we must expose the path between data and actionable information or indeed action itself, as is the case in fully automated architectures.

Transparency, accountability

Middle East enterprises are subject to growing regulatory burdens. They cannot afford gaffes at scale, such as Apple suffered with its credit card, whose algorithm was reported to offer women lower credit limits than men with similar finances. A company at Apple's level may be able to recover from such errors, but a growing start-up, such as those that make up significant portions of Middle East economies, is unlikely to brush it off as easily.

The regional FSI sector, hungry for growth opportunities, could face serious problems if regulators cannot question decisions made by black-box AI. Denied loans, varying credit limits and even fees must be explainable.

Another industry in growth mode, and similarly subject to scrutiny by Gulf regulators, is healthcare. Medical providers across the region have already begun to weave AI into their strategies, but it is not hard to imagine why transparency will be important in establishing smart tech as a mainstay in medical care. Human analysis of findings is essential for accuracy. Indeed, many machine-learning models require human-expert feedback to fine-tune accuracy and become viable in production environments.

Peeling the onion

Think of the black box’s opposite as “explainable AI,” or white-box AI. If we can answer questions as to how an AI system reached its conclusions, we can drive vital debate on the direction some technologies are taking and how those paths can be redirected towards more positive, trusted outcomes.

Also consider the governance angle. The responsible use of AI leads to good business for private enterprise. It leads to more desirable social impact for governments. The ability to deliver noticeable value, untainted by error, prejudice, or other negative elements is surely the goal of AI. Given the suspicion that automation faces globally for its potential to supplant human workforces, it is hard to imagine a surge in AI adoption that will not be accompanied by an intensification in regulatory requirements. Under such circumstances, black-box systems will wither on the vine.

Sustainably prosperous

Exposing the algorithm as part of the results dashboard is a natural next step for AI solutions. Metrics such as weights, the numeric values applied to data to denote the relative importance of one observation over another, should be on full display. Mathematical models are improving all the time, and user interfaces are continually evolving to give end-users a broader view of how data is processed.
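As a minimal sketch of what "weights on full display" can mean in practice, consider a simple white-box model such as logistic regression, whose learned coefficients map directly onto the features that drove a decision. The feature names and synthetic data below are invented for illustration only; a real lending model would be far more involved.

```python
# Illustrative sketch only: surfacing a white-box model's weights as the
# kind of explainability panel a dashboard could show. Feature names and
# data are hypothetical, not drawn from any real lending system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic loan-approval data with three invented features.
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
# The outcome is driven by income (positively) and debt_ratio (negatively);
# years_employed is noise, so its learned weight should stay near zero.
y = (X[:, 0] - 1.5 * X[:, 1] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Because the model is linear, each coefficient can be shown directly to a
# stakeholder or regulator as the relative pull of that feature.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>15}: {weight:+.2f}")
```

The point of the sketch is not the model itself but the last loop: a transparent system can print exactly which inputs pushed a decision in which direction, which is precisely what a black-box system cannot.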

Also vital will be collaboration between enterprises. Open data platforms that allow the honing of models based on the experience and information-gathering of different contributors will lead to greater democratization and more accurate results. In the end, we will be left with a richer information ecosystem, more informed decision makers, greater trust in AI, and more sustainably prosperous societies.

© Opinion 2021

Any opinions expressed in this article are the author’s own
