Explainable AI


Machine learning (ML) and artificial intelligence (AI) are most useful when they complement humans, since human experts already have deep understanding built on a combination of intuition and logic. Humans routinely make decisions in high-impact domains, where human safety and large amounts of money are at stake. In a fast-moving context like the smart grid, technology and processes change faster than human intuition can keep up, so we want to augment human capabilities with AI. For AI to have a role in such decisions, it must earn the confidence of the human experts.

Conventional AI is notorious for giving answers without explanations. That is fine for a movie recommendation or a draft of a letter, but a high-impact decision demands more.

Explainable AI (XAI) is a new capability that can be delivered by various means. Sanity checking the results of a large language model (LLM) will often increase confidence; this can be done with symbolic logic or some other proof mechanism, and the details of the proof will often serve as a rationale that explains the essence of the result. Sanity checking for science and engineering problems can also be done mathematically: for example, if my AI tells me that a certain molecular conformation is best for a given purpose, perturbing that conformation slightly should not yield a better answer. Similarly, if my AI tells me that a set of results is roughly equivalent, advanced mathematical techniques should confirm that conclusion. Any of these sanity checks gives a human expert concrete reasons to trust, or not to trust, the AI's results.
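To make the perturbation idea concrete, here is a minimal sketch in Python of such a sanity check. It assumes only that the quantity the AI is optimizing can be evaluated directly as a scoring function; the function local_optimality_check and the toy objective below are hypothetical illustrations, not part of any particular system.

    import numpy as np

    def local_optimality_check(objective, candidate, n_perturbations=200,
                               scale=1e-2, seed=0):
        # Sanity-check a purported optimum by random perturbation.
        # If `candidate` really is (locally) the best configuration, then
        # small random perturbations of it should not score better on
        # `objective`. Returns the best perturbed score and a pass/fail flag.
        rng = np.random.default_rng(seed)
        baseline = objective(candidate)
        best_perturbed = -np.inf
        for _ in range(n_perturbations):
            noise = rng.normal(0.0, scale, size=candidate.shape)
            best_perturbed = max(best_perturbed, objective(candidate + noise))
        return best_perturbed, best_perturbed <= baseline

    # Toy usage: the "AI" claims x = [1, 2] maximizes -(x - [1, 2])^2.
    objective = lambda x: -np.sum((x - np.array([1.0, 2.0])) ** 2)
    claimed_best = np.array([1.0, 2.0])
    best_nearby, passed = local_optimality_check(objective, claimed_best)
    print("best nearby score:", best_nearby, "| check passed:", passed)

If the check fails, the perturbation that beat the claimed answer is itself a piece of the explanation: it shows the expert exactly where the AI's result breaks down.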