While recent advances in natural language processing are widely publicized, deployment in critical applications such as healthcare and defense lags behind.
Why? Because people don’t trust AI agents.
Why? Because the agents can’t explain their decisions in normal human terms.
Why? Because the agents manipulate language input without understanding what it means. Language understanding is not only about language: it requires commonsense knowledge about the world and the agents in it.
Why? Because much of what people express and understand through language is not overtly present in text or dialog.
But isn’t it unrealistic to teach agents to understand language? Not at all, as long as agents can learn about language and the world as part of their normal operation.
In this talk, I will briefly describe a proven computational methodology for developing agents of this kind – agents that can explain their decision-making to people in a way that engenders trust.