Opening the AI Black Box with Explainable AI

People are calling upon AI to make life-or-death decisions. In such situations, explainability is absolutely critical.

It’s no surprise, therefore, that the US Department of Defense (DoD) is investing in Explainable AI (XAI). “Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners,” explains David Gunning, program manager at the Defense Advanced Research Projects Agency (DARPA), an agency of the DoD. “New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”

Healthcare, meanwhile, remains one of AI’s most promising areas. “Explainable AI is going to be extremely important for us in healthcare in actually bridging this gap from understanding what might be possible and what might be going on with your health, and actually giving clinicians tools so that they can really be comfortable and understand how to use [them],” says Sanji Fernando, vice president and head of the OptumLabs Center for Applied Data Science at UnitedHealth Group. “That’s why we think there’s some amazing work happening in academia, in academic institutions, in large companies, and within the federal government, to safely approve this kind of decision making.”

The Explainability Tradeoff

While organizations like DARPA are actively investing in XAI, there is an open question as to whether such efforts detract from the central priority of making AI algorithms better.

Even more concerning: will we need to ‘dumb down’ AI algorithms to make them explainable? “The more accurate the algorithm, the harder it is to interpret, especially with deep learning,” points out Sameer Singh, assistant professor of computer science at the University of California Irvine. “Computers are increasingly a more important part of our lives, and automation is just going to improve over time, so it’s increasingly important to know why these complicated AI and ML systems are making the decisions that they are.”

Some AI research efforts may thus be at cross-purposes with others. “The more complex a system is, the less explainable it will be,” says John Zerilli, a postdoctoral fellow at the University of Otago who researches explainable AI. “If you want your system to be explainable, you’re going to have to make do with a simpler system that isn’t as powerful or accurate.”
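To make the tradeoff concrete, here is a minimal sketch in Python using scikit-learn. The dataset and model choices are illustrative assumptions, not from the article: a shallow decision tree can print its entire reasoning as a handful of rules, while a larger random forest is typically more accurate but offers no comparably compact explanation.

    # Sketch: accuracy vs. interpretability (illustrative; assumes scikit-learn
    # and its built-in breast-cancer dataset, neither mentioned in the article).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )

    # Interpretable model: every prediction can be traced through a few rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Higher-capacity model: usually more accurate, but a black box of 200 trees.
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("shallow tree accuracy:", tree.score(X_test, y_test))
    print("random forest accuracy:", forest.score(X_test, y_test))

    # The shallow tree's full decision logic fits on one screen; the forest's does not.
    print(export_text(tree, feature_names=list(data.feature_names)))

Typically the forest edges out the shallow tree on accuracy, which is exactly the tension Singh and Zerilli describe: the model that is easiest to explain is rarely the one that performs best.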

The $2 billion that DARPA is investing in what it calls ‘third-wave AI systems,’ however, may very well be sufficient to resolve this tradeoff. “XAI is one of a handful of current DARPA programs expected to enable ‘third-wave AI systems,’ where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” DARPA’s Gunning adds.

Skynet, Explain Thyself

Resolving today’s trust issues is important, but tomorrow will bring trust issues of its own. If we place the growing field of XAI into the dystopian future that some AI skeptics envision, we may have a path to preventing such nightmares.

Whether it be HAL 9000, Skynet, or any other Hollywood AI villain, there is always a point in the development of the evil program where it breaks free from the control of its human creators and runs amok.

If those creators built explainability into such systems, however, then running amok would be much less likely – or at least, much easier to see coming.

Thanks to XAI, therefore, we can all breathe a sigh of relief.

Read the source article in Forbes.
